Structural functionalism
Structural functionalism, or simply functionalism, is "a framework for building theory that sees society as a complex system whose parts work together to promote solidarity and stability". This approach looks at society through a macro-level orientation, which is a broad focus on the social structures that shape society as a whole, and holds that society has evolved like organisms. This approach looks at both social structure and social functions. Functionalism addresses society as a whole in terms of the function of its constituent elements; namely norms, customs, traditions, and institutions. A common analogy called the organic or biological analogy, popularized by Herbert Spencer, presents these parts of society as human body "organs" that work toward the proper functioning of the "body" as a whole. In the most basic terms, it simply emphasizes "the effort to impute, as rigorously as possible, to each feature, custom, or practice, its effect on the functioning of a supposedly stable, cohesive system". For Talcott Parsons, "structural-functionalism" came to describe a particular stage in the methodological development of social science, rather than a specific school of thought. Theory In sociology, classical theories are defined by a tendency towards biological analogy and notions of social evolutionism: While one may regard functionalism as a logical extension of the organic analogies for societies presented by political philosophers such as Rousseau, sociology draws firmer attention to those institutions unique to industrialized capitalist society (or modernity). Auguste Comte believed that society constitutes a separate "level" of reality, distinct from both biological and inorganic matter. Explanations of social phenomena had therefore to be constructed within this level, individuals being merely transient occupants of comparatively stable social roles. In this view, Comte was followed by Émile Durkheim. A central concern for Durkheim was the question of how certain societies maintain internal stability and survive over time. He proposed that such societies tend to be segmented, with equivalent parts held together by shared values, common symbols or (as his nephew Marcel Mauss held) systems of exchanges. Durkheim used the term "mechanical solidarity" to refer to these types of "social bonds, based on common sentiments and shared moral values, that are strong among members of pre-industrial societies". In modern, complex societies, members perform very different tasks, resulting in a strong interdependence. Based on the metaphor above of an organism in which many parts function together to sustain the whole, Durkheim argued that complex societies are held together by "organic solidarity", i.e. "social bonds, based on specialization and interdependence, that are strong among members of industrial societies". The central concern of structural functionalism may be regarded as a continuation of the Durkheimian task of explaining the apparent stability and internal cohesion needed by societies to endure over time. Societies are seen as coherent, bounded and fundamentally relational constructs that function like organisms, with their various parts (social institutions) working together in an unconscious, quasi-automatic fashion toward achieving an overall social equilibrium. All social and cultural phenomena are therefore seen as functional in the sense of working together, and are effectively deemed to have "lives" of their own. They are primarily analyzed in terms of this function. 
The individual is significant not in and of themselves, but rather in terms of their status, their position in patterns of social relations, and the behaviours associated with their status. Therefore, the social structure is the network of statuses connected by associated roles. Functionalism also has an anthropological basis in the work of theorists such as Marcel Mauss, Bronisław Malinowski and Radcliffe-Brown. The prefix 'structural' emerged in Radcliffe-Brown's specific usage. Radcliffe-Brown proposed that most stateless, "primitive" societies, lacking strong centralized institutions, are based on an association of corporate-descent groups, i.e. the respective society's recognised kinship groups. Structural functionalism also took on Malinowski's argument that the basic building block of society is the nuclear family, and that the clan is an outgrowth, not vice versa. It is simplistic to equate the perspective directly with political conservatism. The tendency to emphasize "cohesive systems", however, leads functionalist theories to be contrasted with "conflict theories" which instead emphasize social problems and inequalities. Prominent theorists Auguste Comte Auguste Comte, the "Father of Positivism", pointed out the need to keep society unified as many traditions were diminishing. He coined the term sociology. Comte suggests that sociology is the product of a three-stage development: Theological stage: From the beginning of human history until the end of the European Middle Ages, people took a religious view that society expressed God's will. In the theological state, the human mind, seeking the essential nature of beings, the first and final causes (the origin and purpose) of all effects—in short, absolute knowledge—supposes all phenomena to be produced by the immediate action of supernatural beings. Metaphysical stage: People began seeing society as a natural system as opposed to the supernatural. This began with the Enlightenment and the ideas of Hobbes, Locke, and Rousseau. Perceptions of society reflected the failings of a selfish human nature rather than the perfection of God. Positive or scientific stage: Describing society through the application of the scientific approach, which draws on the work of scientists. Herbert Spencer Herbert Spencer (1820–1903) was a British philosopher famous for applying the theory of natural selection to society. He was in many ways the first true sociological functionalist. In fact, while Durkheim is widely considered the most important functionalist among positivist theorists, it is known that much of his analysis was culled from reading Spencer's work, especially his Principles of Sociology (1874–96). In describing society, Spencer alludes to the analogy of a human body. Just as the structural parts of the human body—the skeleton, muscles, and various internal organs—function independently to help the entire organism survive, social structures work together to preserve society. While reading Spencer's massive volumes can be tedious (long passages explicating the organic analogy, with reference to cells, simple organisms, animals, humans and society), there are some important insights that have quietly influenced many contemporary theorists, including Talcott Parsons, in his early work The Structure of Social Action (1937). Cultural anthropology also consistently uses functionalism. 
This evolutionary model, unlike most 19th-century evolutionary theories, is cyclical, beginning with the differentiation and increasing complication of an organic or "super-organic" (Spencer's term for a social system) body, followed by a fluctuating state of equilibrium and disequilibrium (or a state of adjustment and adaptation), and, finally, the stage of disintegration or dissolution. Following Thomas Malthus' population principles, Spencer concluded that society is constantly facing selection pressures (internal and external) that force it to adapt its internal structure through differentiation. Every solution, however, causes a new set of selection pressures that threaten society's viability. Spencer was not a determinist in the sense that he never said that selection pressures will be felt in time to change them, that they will be felt and reacted to, or that the solutions will always work. In fact, he was in many ways a political sociologist, and recognized that the degree of centralized and consolidated authority in a given polity could make or break its ability to adapt. In other words, he saw a general trend towards the centralization of power as leading to stagnation and ultimately, pressures to decentralize. More specifically, Spencer recognized three functional needs or prerequisites that produce selection pressures: they are regulatory, operative (production) and distributive. He argued that all societies need to solve problems of control and coordination, production of goods, services and ideas, and, finally, to find ways of distributing these resources. Initially, in tribal societies, these three needs are inseparable, and the kinship system is the dominant structure that satisfies them. As many scholars have noted, all institutions are subsumed under kinship organization, but, with increasing population (both in terms of sheer numbers and density), problems emerge with regard to feeding individuals, creating new forms of organization—consider the emergent division of labour—coordinating and controlling various differentiated social units, and developing systems of resource distribution. The solution, as Spencer sees it, is to differentiate structures to fulfill more specialized functions; thus, a chief or "big man" emerges, soon followed by a group of lieutenants, and later kings and administrators. The structural parts of society (e.g. families, work) function interdependently to help society function. Therefore, social structures work together to preserve society. Talcott Parsons Talcott Parsons began writing in the 1930s and contributed to sociology, political science, anthropology, and psychology. Structural functionalism and Parsons have received much criticism. Numerous critics have pointed out Parsons' underemphasis of political and monetary struggle, the basics of social change, and the by and large "manipulative" conduct unregulated by values and norms. Structural functionalism, and a large portion of Parsons' works, appear to be insufficient in their definitions concerning the connections amongst institutionalized and non-institutionalized conduct, and the procedures by which institutionalization happens. Parsons was heavily influenced by Durkheim and Max Weber, synthesizing much of their work into his action theory, which he based on the system-theoretical concept and the methodological principle of voluntary action. He held that "the social system is made up of the actions of individuals". 
His starting point, accordingly, is the interaction between two individuals faced with a variety of choices about how they might act, choices that are influenced and constrained by a number of physical and social factors. Parsons determined that each individual has expectations of the other's action and reaction to their own behavior, and that these expectations would (if successful) be "derived" from the accepted norms and values of the society they inhabit. As Parsons himself emphasized, in a general context there would never exist any perfect "fit" between behaviors and norms, so such a relation is never complete or "perfect". Social norms were always problematic for Parsons, who never claimed (as has often been alleged) that social norms were generally accepted and agreed upon, as if this amounted to some kind of universal law. Whether social norms were accepted or not was for Parsons simply a historical question. As behaviors are repeated in more interactions, and these expectations are entrenched or institutionalized, a role is created. Parsons defines a "role" as the normatively-regulated participation "of a person in a concrete process of social interaction with specific, concrete role-partners". Although any individual, theoretically, can fulfill any role, the individual is expected to conform to the norms governing the nature of the role they fulfill. Furthermore, one person can and does fulfill many different roles at the same time. In one sense, an individual can be seen to be a "composition" of the roles he inhabits. Certainly, today, when asked to describe themselves, most people would answer with reference to their societal roles. Parsons later developed the idea of roles into collectivities of roles that complement each other in fulfilling functions for society. Some roles are bound up in institutions and social structures (economic, educational, legal and even gender-based). These are functional in the sense that they assist society in operating and fulfilling its functional needs so that society runs smoothly. Contrary to prevailing myth, Parsons never spoke about a society where there was no conflict or some kind of "perfect" equilibrium. A society's cultural value-system was in the typical case never completely integrated, never static and, most of the time, as in the case of American society, in a complex state of transformation relative to its historical point of departure. To reach a "perfect" equilibrium was not a serious theoretical question in Parsons' analysis of social systems; indeed, the most dynamic societies generally had cultural systems with important inner tensions, like the US and India. These tensions were a source of their strength according to Parsons rather than the opposite. Parsons never thought about system-institutionalization and the level of strains (tensions, conflict) in the system as opposite forces per se. The key processes for Parsons for system reproduction are socialization and social control. Socialization is important because it is the mechanism for transferring the accepted norms and values of society to the individuals within the system. Parsons never spoke about "perfect socialization"; in any society, socialization was only partial and "incomplete" from an integral point of view. Parsons states that "this point ... 
is independent of the sense in which [the] individual is concretely autonomous or creative rather than 'passive' or 'conforming', for individuality and creativity, are to a considerable extent, phenomena of the institutionalization of expectations"; they are culturally constructed. Socialization is supported by the positive and negative sanctioning of role behaviours that do or do not meet these expectations. A punishment could be informal, like a snigger or gossip, or more formalized, through institutions such as prisons and mental homes. If these two processes were perfect, society would become static and unchanging, but in reality, this is unlikely to occur for long. Parsons recognizes this, stating that he treats "the structure of the system as problematic and subject to change", and that his concept of the tendency towards equilibrium "does not imply the empirical dominance of stability over change". He does, however, believe that these changes occur in a relatively smooth way. Individuals in interaction with changing situations adapt through a process of "role bargaining". Once the roles are established, they create norms that guide further action and are thus institutionalized, creating stability across social interactions. Where the adaptation process cannot adjust, due to sharp shocks or immediate radical change, structural dissolution occurs and either new structures (or therefore a new system) are formed, or society dies. This model of social change has been described as a "moving equilibrium", and emphasizes a desire for social order. Davis and Moore Kingsley Davis and Wilbert E. Moore (1945) gave an argument for social stratification based on the idea of "functional necessity" (also known as the Davis-Moore hypothesis). They argue that the most difficult jobs in any society have the highest incomes in order to motivate individuals to fill the roles needed by the division of labour. Thus, inequality serves social stability. This argument has been criticized as fallacious from a number of different angles: the argument is both that the individuals who are the most deserving are the highest rewarded, and that a system of unequal rewards is necessary, otherwise no individuals would perform as needed for the society to function. The problem is that these rewards are supposed to be based upon objective merit, rather than subjective "motivations." The argument also does not clearly establish why some positions are worth more than others, even when they benefit more people in society, e.g., teachers compared to athletes and movie stars. Critics have suggested that structural inequality (inherited wealth, family power, etc.) is itself a cause of individual success or failure, not a consequence of it. Robert Merton Robert K. Merton made important refinements to functionalist thought. He fundamentally agreed with Parsons' theory but acknowledged that Parsons' theory could be questioned, believing that it was over generalized. Merton tended to emphasize middle range theory rather than a grand theory, meaning that he was able to deal specifically with some of the limitations in Parsons' thinking. Merton believed that any social structure probably has many functions, some more obvious than others. He identified three main limitations: functional unity, universal functionalism and indispensability. He also developed the concept of deviance and made the distinction between manifest and latent functions. 
Manifest functions referred to the recognized and intended consequences of any social pattern. Latent functions referred to unrecognized and unintended consequences of any social pattern. Merton criticized functional unity, saying that not all parts of a modern complex society work for the functional unity of society. Consequently, there is a social dysfunction referred to as any social pattern that may disrupt the operation of society. Some institutions and structures may have other functions, and some may even be generally dysfunctional, or be functional for some while being dysfunctional for others. This is because not all structures are functional for society as a whole. Some practices are only functional for a dominant individual or a group. There are two types of functions that Merton discusses: the "manifest functions", in which a social pattern triggers a recognized and intended consequence. The manifest function of education includes preparing for a career by getting good grades, graduation and finding a good job. The second type of function is "latent functions", where a social pattern results in an unrecognized or unintended consequence. The latent functions of education include meeting new people, extra-curricular activities, and school trips. Another type of social function is "social dysfunction", which is any undesirable consequence that disrupts the operation of society. The social dysfunction of education includes not getting good grades or a job. Merton states that by recognizing and examining the dysfunctional aspects of society we can explain the development and persistence of alternatives. Thus, as Holmwood states, "Merton explicitly made power and conflict central issues for research within a functionalist paradigm." Merton also noted that there may be functional alternatives to the institutions and structures currently fulfilling the functions of society. This means that the institutions that currently exist are not indispensable to society. Merton states "just as the same item may have multiple functions, so may the same function be diversely fulfilled by alternative items." This notion of functional alternatives is important because it reduces the tendency of functionalism to imply approval of the status quo. Merton's theory of deviance is derived from Durkheim's idea of anomie. It is central in explaining how internal changes can occur in a system. For Merton, anomie means a discontinuity between cultural goals and the accepted methods available for reaching them. Merton believes that there are five situations facing an actor. Conformity occurs when an individual has the means and desire to achieve the cultural goals socialized into them. Innovation occurs when an individual strives to attain the accepted cultural goals but chooses to do so by novel or unaccepted methods. Ritualism occurs when an individual continues to do things as prescribed by society but forfeits the achievement of the goals. Retreatism is the rejection of both the means and the goals of society. Rebellion is a combination of the rejection of societal goals and means and a substitution of other goals and means. Thus it can be seen that change can occur internally in society through either innovation or rebellion. It is true that society will attempt to control these individuals and negate the changes, but as the innovation or rebellion builds momentum, society will eventually adapt or face dissolution. 
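Merton's typology turns on two dimensions, acceptance or rejection of the cultural goals and of the institutionalized means, with rebellion additionally substituting alternatives for both. The sketch below encodes that reading in Python; it is only an illustrative rendering of the typology as summarized above, and the class, field, and function names are invented for the example rather than drawn from any cited source.

```python
from dataclasses import dataclass

@dataclass
class Actor:
    """Simplified reading of Merton's strain typology: an actor accepts or
    rejects the cultural goals and the institutionalized means, and may
    substitute alternatives for what is rejected (rebellion)."""
    accepts_goals: bool
    accepts_means: bool
    substitutes_alternatives: bool = False

def adaptation_mode(actor: Actor) -> str:
    """Classify an actor into one of Merton's five modes of adaptation."""
    if actor.accepts_goals and actor.accepts_means:
        return "conformity"   # has both the desire and the accepted means
    if actor.accepts_goals and not actor.accepts_means:
        return "innovation"   # pursues the goals through novel or unaccepted means
    if not actor.accepts_goals and actor.accepts_means:
        return "ritualism"    # keeps to the prescribed means but forfeits the goals
    if actor.substitutes_alternatives:
        return "rebellion"    # rejects both and substitutes new goals and means
    return "retreatism"       # rejects both without substituting anything

print(adaptation_mode(Actor(accepts_goals=True, accepts_means=False)))  # "innovation"
```

Only innovation and rebellion introduce something new, which matches the observation above that internal change enters the system through those two modes.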
Almond and Powell In the 1970s, political scientists Gabriel Almond and Bingham Powell introduced a structural-functionalist approach to comparing political systems. They argued that, in order to understand a political system, it is necessary to understand not only its institutions (or structures) but also their respective functions. They also insisted that these institutions, to be properly understood, must be placed in a meaningful and dynamic historical context. This idea stood in marked contrast to prevalent approaches in the field of comparative politics—the state-society theory and the dependency theory. These were the descendants of David Easton's system theory in international relations, a mechanistic view that saw all political systems as essentially the same, subject to the same laws of "stimulus and response"—or inputs and outputs—while paying little attention to unique characteristics. The structural-functional approach is based on the view that a political system is made up of several key components, including interest groups, political parties and branches of government. In addition to structures, Almond and Powell showed that a political system consists of various functions, chief among them political socialization, recruitment and communication: socialization refers to the way in which societies pass along their values and beliefs to succeeding generations, and in political terms describe the process by which a society inculcates civic virtues, or the habits of effective citizenship; recruitment denotes the process by which a political system generates interest, engagement and participation from citizens; and communication refers to the way that a system promulgates its values and information. Unilineal descent In their attempt to explain the social stability of African "primitive" stateless societies where they undertook their fieldwork, Evans-Pritchard (1940) and Meyer Fortes (1945) argued that the Tallensi and the Nuer were primarily organized around unilineal descent groups. Such groups are characterized by common purposes, such as administering property or defending against attacks; they form a permanent social structure that persists well beyond the lifespan of their members. In the case of the Tallensi and the Nuer, these corporate groups were based on kinship which in turn fitted into the larger structures of unilineal descent; consequently Evans-Pritchard's and Fortes' model is called "descent theory". Moreover, in this African context territorial divisions were aligned with lineages; descent theory therefore synthesized both blood and soil as the same. Affinal ties with the parent through whom descent is not reckoned, however, are considered to be merely complementary or secondary (Fortes created the concept of "complementary filiation"), with the reckoning of kinship through descent being considered the primary organizing force of social systems. Because of its strong emphasis on unilineal descent, this new kinship theory came to be called "descent theory". With no delay, descent theory had found its critics. Many African tribal societies seemed to fit this neat model rather well, although Africanists, such as Paul Richards, also argued that Fortes and Evans-Pritchard had deliberately downplayed internal contradictions and overemphasized the stability of the local lineage systems and their significance for the organization of society. However, in many Asian settings the problems were even more obvious. 
In Papua New Guinea, the local patrilineal descent groups were fragmented and contained large numbers of non-agnates. Status distinctions did not depend on descent, and genealogies were too short to account for social solidarity through identification with a common ancestor. In particular, the phenomenon of cognatic (or bilateral) kinship posed a serious problem to the proposition that descent groups are the primary element behind the social structures of "primitive" societies. Leach's (1966) critique came in the form of the classical Malinowskian argument, pointing out that "in Evans-Pritchard's studies of the Nuer and also in Fortes's studies of the Tallensi unilineal descent turns out to be largely an ideal concept to which the empirical facts are only adapted by means of fictions". People's self-interest, manoeuvring, manipulation and competition had been ignored. Moreover, descent theory neglected the significance of marriage and affinal ties, which were emphasized by Lévi-Strauss's structural anthropology, at the expense of overemphasizing the role of descent. To quote Leach: "The evident importance attached to matrilateral and affinal kinship connections is not so much explained as explained away." Biological Biological functionalism is an anthropological paradigm, asserting that all social institutions, beliefs, values and practices serve to address pragmatic concerns. In many ways, the theory derives from the longer-established structural functionalism, yet the two diverge from one another significantly. While both maintain the fundamental belief that a social structure is composed of many interdependent frames of reference, biological functionalists criticise the structural view that social solidarity and a collective conscience are required in a functioning system. Accordingly, biological functionalism maintains that individual survival and health are the driving motivation of actions, and that the importance of social rigidity is negligible. Everyday application Although human actions undoubtedly do not always produce positive results for the individual, a biological functionalist would argue that the intention was still self-preservation, albeit unsuccessful. An example of this is the belief in luck as an entity; while a disproportionately strong belief in good luck may lead to undesirable results, such as a huge loss of money from gambling, biological functionalism maintains that the newly created ability of the gambler to condemn luck will allow them to be free of individual blame, thus serving a practical and individual purpose. In this sense, biological functionalism maintains that while bad results often occur in life and do not serve any pragmatic concerns, an entrenched cognitive psychological motivation was attempting to create a positive result, in spite of its eventual failure. Decline Structural functionalism reached the peak of its influence in the 1940s and 1950s, and by the 1960s was in rapid decline. By the 1980s, its place was taken in Europe by more conflict-oriented approaches, and more recently by structuralism. While some of the critical approaches also gained popularity in the United States, the mainstream of the discipline has instead shifted to a myriad of empirically oriented middle-range theories with no overarching theoretical orientation. To most sociologists, functionalism is now "as dead as a dodo". 
As the influence of functionalism in the 1960s began to wane, the linguistic and cultural turns led to a myriad of new movements in the social sciences: "According to Giddens, the orthodox consensus terminated in the late 1960s and 1970s as the middle ground shared by otherwise competing perspectives gave way and was replaced by a baffling variety of competing perspectives. This third generation of social theory includes phenomenologically inspired approaches, critical theory, ethnomethodology, symbolic interactionism, structuralism, post-structuralism, and theories written in the tradition of hermeneutics and ordinary language philosophy." While absent from empirical sociology, functionalist themes remained detectable in sociological theory, most notably in the works of Luhmann and Giddens. There are, however, signs of an incipient revival, as functionalist claims have recently been bolstered by developments in multilevel selection theory and in empirical research on how groups solve social dilemmas. Recent developments in evolutionary theory—especially by biologist David Sloan Wilson and anthropologists Robert Boyd and Peter Richerson—have provided strong support for structural functionalism in the form of multilevel selection theory. In this theory, culture and social structure are seen as a Darwinian (biological or cultural) adaptation at the group level. Criticisms In the 1960s, functionalism was criticized for being unable to account for social change, or for structural contradictions and conflict (and thus was often called "consensus theory"). Also, it ignores inequalities including race, gender, class, which cause tension and conflict. The refutation of the second criticism of functionalism, that it is static and has no concept of change, has already been articulated above, concluding that while Parsons' theory allows for change, it is an orderly process of change [Parsons, 1961:38], a moving equilibrium. Therefore, referring to Parsons' theory of society as static is inaccurate. It is true that it does place emphasis on equilibrium and the maintenance or quick return to social order, but this is a product of the time in which Parsons was writing (post-World War II, and the start of the cold war). Society was in upheaval and fear abounded. At the time social order was crucial, and this is reflected in Parsons' tendency to promote equilibrium and social order rather than social change. Furthermore, Durkheim favoured a radical form of guild socialism along with functionalist explanations. Also, Marxism, while acknowledging social contradictions, still uses functionalist explanations. Parsons' evolutionary theory describes the differentiation and reintegration systems and subsystems and thus at least temporary conflict before reintegration (ibid). "The fact that functional analysis can be seen by some as inherently conservative and by others as inherently radical suggests that it may be inherently neither one nor the other." Stronger criticisms include the epistemological argument that functionalism is tautologous, that is, it attempts to account for the development of social institutions solely through recourse to the effects that are attributed to them, and thereby explains the two circularly. However, Parsons drew directly on many of Durkheim's concepts in creating his theory. Certainly Durkheim was one of the first theorists to explain a phenomenon with reference to the function it served for society. 
He said, "the determination of function is…necessary for the complete explanation of the phenomena." However Durkheim made a clear distinction between historical and functional analysis, saying, "When ... the explanation of a social phenomenon is undertaken, we must seek separately the efficient cause which produces it and the function it fulfills." If Durkheim made this distinction, then it is unlikely that Parsons did not. However Merton does explicitly state that functional analysis does not seek to explain why the action happened in the first instance, but why it continues or is reproduced. By this particular logic, it can be argued that functionalists do not necessarily explain the original cause of a phenomenon with reference to its effect. Yet the logic stated in reverse, that social phenomena are (re)produced because they serve ends, is unoriginal to functionalist thought. Thus functionalism is either undefinable or it can be defined by the teleological arguments which functionalist theorists normatively produced before Merton. Another criticism describes the ontological argument that society cannot have "needs" as a human being does, and even if society does have needs they need not be met. Anthony Giddens argues that functionalist explanations may all be rewritten as historical accounts of individual human actions and consequences (see Structuration). A further criticism directed at functionalism is that it contains no sense of agency, that individuals are seen as puppets, acting as their role requires. Yet Holmwood states that the most sophisticated forms of functionalism are based on "a highly developed concept of action," and as was explained above, Parsons took as his starting point the individual and their actions. His theory did not however articulate how these actors exercise their agency in opposition to the socialization and inculcation of accepted norms. As has been shown above, Merton addressed this limitation through his concept of deviance, and so it can be seen that functionalism allows for agency. It cannot, however, explain why individuals choose to accept or reject the accepted norms, why and in what circumstances they choose to exercise their agency, and this does remain a considerable limitation of the theory. Further criticisms have been levelled at functionalism by proponents of other social theories, particularly conflict theorists, Marxists, feminists and postmodernists. Conflict theorists criticized functionalism's concept of systems as giving far too much weight to integration and consensus, and neglecting independence and conflict. Lockwood, in line with conflict theory, suggested that Parsons' theory missed the concept of system contradiction. He did not account for those parts of the system that might have tendencies to mal-integration. According to Lockwood, it was these tendencies that come to the surface as opposition and conflict among actors. However Parsons thought that the issues of conflict and cooperation were very much intertwined and sought to account for both in his model. In this however he was limited by his analysis of an ‘ideal type' of society which was characterized by consensus. Merton, through his critique of functional unity, introduced into functionalism an explicit analysis of tension and conflict. Yet Merton's functionalist explanations of social phenomena continued to rest on the idea that society is primarily co-operative rather than conflicted, which differentiates Merton from conflict theorists. 
Marxism, which was revived soon after the emergence of conflict theory, criticized professional sociology (functionalism and conflict theory alike) for being partisan to advanced welfare capitalism. Gouldner thought that Parsons' theory specifically was an expression of the dominant interests of welfare capitalism, that it justified institutions with reference to the function they fulfill for society. It may be that Parsons' work implied or articulated that certain institutions were necessary to fulfill the functional prerequisites of society, but whether or not this is the case, Merton explicitly states that institutions are not indispensable and that there are functional alternatives. That he does not identify any alternatives to the current institutions does reflect a conservative bias, which as has been stated before is a product of the specific time that he was writing in. As functionalism's prominence was ending, feminism was on the rise, and it attempted a radical criticism of functionalism. It believed that functionalism neglected the suppression of women within the family structure. Holmwood shows, however, that Parsons did in fact describe the situations where tensions and conflict existed or were about to take place, even if he did not articulate those conflicts. Some feminists agree, suggesting that Parsons provided accurate descriptions of these situations. On the other hand, Parsons recognized that he had oversimplified his functional analysis of women in relation to work and the family, and focused on the positive functions of the family for society and not on its dysfunctions for women. Merton, too, although addressing situations where function and dysfunction occurred simultaneously, lacked a "feminist sensibility". Postmodernism, as a theory, is critical of claims of objectivity. Therefore, the idea of grand theory and grand narrative that can explain society in all its forms is treated with skepticism. This critique focuses on exposing the danger that grand theory can pose when not seen as a limited perspective, as one way of understanding society. Jeffrey Alexander (1985) sees functionalism as a broad school rather than a specific method or system, such as Parsons, who is capable of taking equilibrium (stability) as a reference-point rather than assumption and treats structural differentiation as a major form of social change. The name 'functionalism' implies a difference of method or interpretation that does not exist. This removes the determinism criticized above. Cohen argues that rather than needs a society has dispositional facts: features of the social environment that support the existence of particular social institutions but do not cause them. Influential theorists Kingsley Davis Michael Denton Émile Durkheim David Keen Niklas Luhmann Bronisław Malinowski Robert K. Merton Wilbert E. Moore George Murdock Talcott Parsons Alfred Reginald Radcliffe-Brown Herbert Spencer Fei Xiaotong See also Causation (sociology) Functional structuralism Historicism Neofunctionalism (sociology) New institutional economics Pure sociology Sociotechnical system Systems theory Vacancy chain Dennis Wrong (critic of structural functionalism) Notes References Barnard, A. 2000. History and Theory in Anthropology. Cambridge: CUP. Barnard, A., and Good, A. 1984. Research Practices in the Study of Kinship. London: Academic Press. Barnes, J. 1971. Three Styles in the Study of Kinship. London: Butler & Tanner. 
Elster, J. (1990). "Merton's Functionalism and the Unintended Consequences of Action", in Clark, J., Modgil, C. & Modgil, S. (eds), Robert Merton: Consensus and Controversy. London: Falmer Press, pp. 129–35. Gingrich, P. (1999). "Functionalism and Parsons", in Sociology 250 Subject Notes, University of Regina, accessed 24/5/06, uregina.ca. Holy, L. 1996. Anthropological Perspectives on Kinship. London: Pluto Press. Homans, George Casper (1962). Sentiments and Activities. New York: The Free Press of Glencoe. Hoult, Thomas Ford (1969). Dictionary of Modern Sociology. Kuper, A. 1996. Anthropology and Anthropologists. London: Routledge. Layton, R. 1997. An Introduction to Theory in Anthropology. Cambridge: CUP. Leach, E. 1954. Political Systems of Highland Burma. London: Bell. Leach, E. 1966. Rethinking Anthropology. Northampton: Dickens. Lenski, Gerhard (1966). Power and Privilege: A Theory of Social Stratification. New York: McGraw-Hill. Lenski, Gerhard (2005). Evolutionary-Ecological Theory. Boulder, CO: Paradigm. Lévi-Strauss, C. 1969. The Elementary Structures of Kinship. London: Eyre and Spottiswoode. Maryanski, Alexandra (1998). "Evolutionary Sociology." Advances in Human Ecology 7:1–56. Maryanski, Alexandra and Jonathan Turner (1992). The Social Cage: Human Nature and the Evolution of Society. Stanford: Stanford University Press. Marshall, Gordon (1994). The Concise Oxford Dictionary of Sociology. Parsons, T. (1961). Theories of Society: Foundations of Modern Sociological Theory. New York: Free Press. Perey, Arnold (2005). "Malinowski, His Diary, and Men Today (with a note on the nature of Malinowskian functionalism)". Ritzer, George and Douglas J. Goodman (2004). Sociological Theory, 6th ed. New York: McGraw-Hill. Sanderson, Stephen K. (1999). Social Transformations: A General Theory of Historical Development. Lanham, MD: Rowman & Littlefield. Turner, Jonathan (1995). Macrodynamics: Toward a Theory on the Organization of Human Populations. New Brunswick: Rutgers University Press. Turner, Jonathan and Jan Stets (2005). The Sociology of Emotions. Cambridge: Cambridge University Press.
Systems theory
Systems theory is the transdisciplinary study of systems, i.e. cohesive groups of interrelated, interdependent components that can be natural or artificial. Every system has causal boundaries, is influenced by its context, defined by its structure, function and role, and expressed through its relations with other systems. A system is "more than the sum of its parts" when it expresses synergy or emergent behavior. Changing one component of a system may affect other components or the whole system. It may be possible to predict these changes in patterns of behavior. For systems that learn and adapt, the growth and the degree of adaptation depend upon how well the system is engaged with its environment and other contexts influencing its organization. Some systems support other systems, maintaining the other system to prevent failure. The goals of systems theory are to model a system's dynamics, constraints, conditions, and relations; and to elucidate principles (such as purpose, measure, methods, tools) that can be discerned and applied to other systems at every level of nesting, and in a wide range of fields for achieving optimized equifinality. General systems theory is about developing broadly applicable concepts and principles, as opposed to concepts and principles specific to one domain of knowledge. It distinguishes dynamic or active systems from static or passive systems. Active systems are activity structures or components that interact in behaviours and processes or interrelate through formal contextual boundary conditions (attractors). Passive systems are structures and components that are being processed. For example, a computer program is passive when it is a file stored on the hard drive and active when it runs in memory. The field is related to systems thinking, machine logic, and systems engineering. Overview Systems theory is manifest in the work of practitioners in many disciplines, for example the works of physician Alexander Bogdanov, biologist Ludwig von Bertalanffy, linguist Béla H. Bánáthy, and sociologist Talcott Parsons; in the study of ecological systems by Howard T. Odum and Eugene Odum; in Fritjof Capra's study of organizational theory; in the study of management by Peter Senge; in interdisciplinary areas such as human resource development in the works of Richard A. Swanson; and in the works of educators Debora Hammond and Alfonso Montuori. As a transdisciplinary, interdisciplinary, and multiperspectival endeavor, systems theory brings together principles and concepts from ontology, the philosophy of science, physics, computer science, biology, and engineering, as well as geography, sociology, political science, psychotherapy (especially family systems therapy), and economics. Systems theory promotes dialogue between autonomous areas of study as well as within systems science itself. In this respect, with the possibility of misinterpretations, von Bertalanffy believed a general theory of systems "should be an important regulative device in science," to guard against superficial analogies that "are useless in science and harmful in their practical consequences." Others remain closer to the direct systems concepts developed by the original systems theorists. For example, Ilya Prigogine, of the Center for Complex Quantum Systems at the University of Texas, has studied emergent properties, suggesting that they offer analogues for living systems. The distinction of autopoiesis as made by Humberto Maturana and Francisco Varela represents a further development in this field. 
Important names in contemporary systems science include Russell Ackoff, Ruzena Bajcsy, Béla H. Bánáthy, Gregory Bateson, Anthony Stafford Beer, Peter Checkland, Barbara Grosz, Brian Wilson, Robert L. Flood, Allenna Leonard, Radhika Nagpal, Fritjof Capra, Warren McCulloch, Kathleen Carley, Michael C. Jackson, Katia Sycara, and Edgar Morin among others. With the modern foundations for a general theory of systems following World War I, Ervin László, in the preface for Bertalanffy's book, Perspectives on General System Theory, points out that the translation of "general system theory" from German into English has "wrought a certain amount of havoc": Theorie (or Lehre) "has a much broader meaning in German than the closest English words 'theory' and 'science'," just as Wissenschaft (or 'Science'). These ideas refer to an organized body of knowledge and "any systematically presented set of concepts, whether empirically, axiomatically, or philosophically" represented, while many associate Lehre with theory and science in the etymology of general systems, though it also does not translate from the German very well; its "closest equivalent" translates to 'teaching', but "sounds dogmatic and off the mark." An adequate overlap in meaning is found within the word "nomothetic", which can mean "having the capability to posit long-lasting sense." While the idea of a "general systems theory" might have lost many of its root meanings in the translation, by defining a new way of thinking about science and scientific paradigms, systems theory became a widespread term used for instance to describe the interdependence of relationships created in organizations. A system in this frame of reference can contain regularly interacting or interrelating groups of activities. For example, in noting the influence in the evolution of "an individually oriented industrial psychology [into] a systems and developmentally oriented organizational psychology," some theorists recognize that organizations have complex social systems; separating the parts from the whole reduces the overall effectiveness of organizations. This differs from conventional models that center on individuals, structures, departments and units, which separate the part from the whole instead of recognizing the interdependence between groups of individuals, structures and processes that enable an organization to function. László explains that the new systems view of organized complexity went "one step beyond the Newtonian view of organized simplicity" which reduced the parts from the whole, or understood the whole without relation to the parts. The relationship between organisations and their environments can be seen as the foremost source of complexity and interdependence. In most cases, the whole has properties that cannot be known from analysis of the constituent elements in isolation. Béla H. Bánáthy, who argued—along with the founders of the systems society—that "the benefit of humankind" is the purpose of science, has made significant and far-reaching contributions to the area of systems theory. For the Primer Group at the International Society for the System Sciences, Bánáthy defines a perspective that iterates this view: Applications Art Biology Systems biology is a movement that draws on several trends in bioscience research. Proponents describe systems biology as a biology-based interdisciplinary study field that focuses on complex interactions in biological systems, claiming that it uses a new perspective (holism instead of reduction). 
Particularly from the year 2000 onwards, the biosciences use the term widely and in a variety of contexts. An often stated ambition of systems biology is the modelling and discovery of emergent properties: properties of a system whose theoretical description is only possible using techniques that fall under the remit of systems biology. It is thought that Ludwig von Bertalanffy may have created the term systems biology in 1928. Subdisciplines of systems biology include: Systems neuroscience Systems pharmacology Ecology Systems ecology is an interdisciplinary field of ecology that takes a holistic approach to the study of ecological systems, especially ecosystems; it can be seen as an application of general systems theory to ecology. Central to the systems ecology approach is the idea that an ecosystem is a complex system exhibiting emergent properties. Systems ecology focuses on interactions and transactions within and between biological and ecological systems, and is especially concerned with the way the functioning of ecosystems can be influenced by human interventions. It uses and extends concepts from thermodynamics and develops other macroscopic descriptions of complex systems. Chemistry Systems chemistry is the science of studying networks of interacting molecules, to create new functions from a set (or library) of molecules with different hierarchical levels and emergent properties. Systems chemistry is also related to the origin of life (abiogenesis). Engineering Systems engineering is an interdisciplinary approach and means for enabling the realisation and deployment of successful systems. It can be viewed as the application of engineering techniques to the engineering of systems, as well as the application of a systems approach to engineering efforts. Systems engineering integrates other disciplines and specialty groups into a team effort, forming a structured development process that proceeds from concept to production to operation and disposal. Systems engineering considers both the business and the technical needs of all customers, with the goal of providing a quality product that meets the user's needs. User-centered design process Systems thinking is a crucial part of user-centered design processes and is necessary to understand the whole impact of a new human computer interaction (HCI) information system. Overlooking this and developing software without input from the future users (mediated by user experience designers) is a serious design flaw that can lead to complete failure of information systems, increased stress and mental illness for users of information systems, increased costs and a huge waste of resources. It is currently surprisingly uncommon for organizations and governments to investigate the project management decisions leading to serious design flaws and lack of usability. The Institute of Electrical and Electronics Engineers estimates that roughly 15% of the estimated $1 trillion used to develop information systems every year is completely wasted, and the produced systems are discarded before implementation because of entirely preventable mistakes. According to the CHAOS report published in 2018 by the Standish Group, a vast majority of information systems fail or partly fail according to their survey. Mathematics System dynamics is an approach to understanding the nonlinear behaviour of complex systems over time using stocks, flows, internal feedback loops, and time delays. 
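As a concrete illustration of the stock-and-flow vocabulary just introduced, the sketch below simulates a single stock driven toward a goal by a balancing feedback loop with a time delay, integrated with a simple Euler step. It is a generic, hedged example: the names (stock, goal, adjustment_time) and the parameter values are invented for illustration and do not come from any particular system dynamics model cited here.

```python
# Minimal stock-and-flow sketch: one stock, one balancing feedback loop.
# The flow closes the gap between the stock and its goal over an
# adjustment time, so the stock approaches the goal without overshoot.

def simulate(stock=0.0, goal=100.0, adjustment_time=5.0, dt=0.25, steps=80):
    """Return the trajectory of a stock fed by inflow = (goal - stock) / adjustment_time."""
    history = [stock]
    for _ in range(steps):
        inflow = (goal - stock) / adjustment_time  # feedback: the larger the gap, the larger the flow
        stock += inflow * dt                       # accumulate the flow into the stock
        history.append(stock)
    return history

trajectory = simulate()
print(f"stock after {len(trajectory) - 1} steps: {trajectory[-1]:.2f}")  # approaches 100
```

Time delays, nonlinearities and additional loops are introduced in the same way, by making each flow a function of the stocks it connects.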
Social sciences and humanities Systems theory in anthropology Systems theory in archaeology Systems theory in political science Psychology Systems psychology is a branch of psychology that studies human behaviour and experience in complex systems. It received inspiration from systems theory and systems thinking, as well as the basics of theoretical work from Roger Barker, Gregory Bateson, Humberto Maturana and others. It makes an approach in psychology in which groups and individuals receive consideration as systems in homeostasis. Systems psychology "includes the domain of engineering psychology, but in addition seems more concerned with societal systems and with the study of motivational, affective, cognitive and group behavior that holds the name engineering psychology." In systems psychology, characteristics of organizational behaviour (such as individual needs, rewards, expectations, and attributes of the people interacting with the systems) "considers this process in order to create an effective system." Informatics System theory has been applied in the field of neuroinformatics and connectionist cognitive science. Attempts are being made in neurocognition to merge connectionist cognitive neuroarchitectures with the approach of system theory and dynamical systems theory. History Precursors Systems thinking can date back to antiquity, whether considering the first systems of written communication with Sumerian cuneiform to Maya numerals, or the feats of engineering with the Egyptian pyramids. Differentiated from Western rationalist traditions of philosophy, C. West Churchman often identified with the I Ching as a systems approach sharing a frame of reference similar to pre-Socratic philosophy and Heraclitus. Ludwig von Bertalanffy traced systems concepts to the philosophy of Gottfried Leibniz and Nicholas of Cusa's coincidentia oppositorum. While modern systems can seem considerably more complicated, they may embed themselves in history. Figures like James Joule and Sadi Carnot represent an important step to introduce the systems approach into the (rationalist) hard sciences of the 19th century, also known as the energy transformation. Then, the thermodynamics of this century, by Rudolf Clausius, Josiah Gibbs and others, established the system reference model as a formal scientific object. Similar ideas are found in learning theories that developed from the same fundamental concepts, emphasising how understanding results from knowing concepts both in part and as a whole. In fact, Bertalanffy's organismic psychology paralleled the learning theory of Jean Piaget. Some consider interdisciplinary perspectives critical in breaking away from industrial age models and thinking, wherein history represents history and math represents math, while the arts and sciences specialization remain separate and many treat teaching as behaviorist conditioning. The contemporary work of Peter Senge provides detailed discussion of the commonplace critique of educational systems grounded in conventional assumptions about learning, including the problems with fragmented knowledge and lack of holistic learning from the "machine-age thinking" that became a "model of school separated from daily life." In this way, some systems theorists attempt to provide alternatives to, and evolved ideation from orthodox theories which have grounds in classical assumptions, including individuals such as Max Weber and Émile Durkheim in sociology and Frederick Winslow Taylor in scientific management. 
The theorists sought holistic methods by developing systems concepts that could integrate with different areas. Some may view the contradiction of reductionism in conventional theory (which has as its subject a single part) as simply an example of changing assumptions. The emphasis with systems theory shifts from parts to the organization of parts, recognizing interactions of the parts as not static and constant but dynamic processes. Some questioned the conventional closed systems with the development of open systems perspectives. The shift originated from absolute and universal authoritative principles and knowledge to relative and general conceptual and perceptual knowledge and still remains in the tradition of theorists that sought to provide means to organize human life. In other words, theorists rethought the preceding history of ideas; they did not lose them. Mechanistic thinking was particularly critiqued, especially the industrial-age mechanistic metaphor for the mind from interpretations of Newtonian mechanics by Enlightenment philosophers and later psychologists that laid the foundations of modern organizational theory and management by the late 19th century. Founding and early development Where assumptions in Western science from Plato and Aristotle to Isaac Newton's Principia (1687) have historically influenced all areas from the hard to social sciences (see, David Easton's seminal development of the "political system" as an analytical construct), the original systems theorists explored the implications of 20th-century advances in terms of systems. Between 1929 and 1951, Robert Maynard Hutchins at the University of Chicago had undertaken efforts to encourage innovation and interdisciplinary research in the social sciences, aided by the Ford Foundation with the university's interdisciplinary Division of the Social Sciences established in 1931. Many early systems theorists aimed at finding a general systems theory that could explain all systems in all fields of science. "General systems theory" (GST; German: allgemeine Systemlehre) was coined in the 1940s by Ludwig von Bertalanffy, who sought a new approach to the study of living systems. Bertalanffy developed the theory via lectures beginning in 1937 and then via publications beginning in 1946. According to Mike C. Jackson (2000), Bertalanffy promoted an embryonic form of GST as early as the 1920s and 1930s, but it was not until the early 1950s that it became more widely known in scientific circles. Jackson also claimed that Bertalanffy's work was informed by Alexander Bogdanov's three-volume Tectology (1912–1917), providing the conceptual base for GST. A similar position is held by Richard Mattessich (1978) and Fritjof Capra (1996). Despite this, Bertalanffy never even mentioned Bogdanov in his works. The systems view was based on several fundamental ideas. First, all phenomena can be viewed as a web of relationships among elements, or a system. Second, all systems, whether electrical, biological, or social, have common patterns, behaviors, and properties that the observer can analyze and use to develop greater insight into the behavior of complex phenomena and to move closer toward a unity of the sciences. System philosophy, methodology and application are complementary to this science. 
Cognizant of advances in science that questioned classical assumptions in the organizational sciences, Bertalanffy's idea to develop a theory of systems began as early as the interwar period, publishing "An Outline for General Systems Theory" in the British Journal for the Philosophy of Science by 1950. In 1954, von Bertalanffy, along with Anatol Rapoport, Ralph W. Gerard, and Kenneth Boulding, came together at the Center for Advanced Study in the Behavioral Sciences in Palo Alto to discuss the creation of a "society for the advancement of General Systems Theory." In December that year, a meeting of around 70 people was held in Berkeley to form a society for the exploration and development of GST. The Society for General Systems Research (renamed the International Society for Systems Science in 1988) was established in 1956 thereafter as an affiliate of the American Association for the Advancement of Science (AAAS), specifically catalyzing systems theory as an area of study. The field developed from the work of Bertalanffy, Rapoport, Gerard, and Boulding, as well as other theorists in the 1950s like William Ross Ashby, Margaret Mead, Gregory Bateson, and C. West Churchman, among others. Bertalanffy's ideas were adopted by others, working in mathematics, psychology, biology, game theory, and social network analysis. Subjects that were studied included those of complexity, self-organization, connectionism and adaptive systems. In fields like cybernetics, researchers such as Ashby, Norbert Wiener, John von Neumann, and Heinz von Foerster examined complex systems mathematically; Von Neumann discovered cellular automata and self-reproducing systems, again with only pencil and paper. Aleksandr Lyapunov and Jules Henri Poincaré worked on the foundations of chaos theory without any computer at all. At the same time, Howard T. Odum, known as a radiation ecologist, recognized that the study of general systems required a language that could depict energetics, thermodynamics and kinetics at any system scale. To fulfill this role, Odum developed a general system, or universal language, based on the circuit language of electronics, known as the Energy Systems Language. The Cold War affected the research project for systems theory in ways that sorely disappointed many of the seminal theorists. Some began to recognize that theories defined in association with systems theory had deviated from the initial general systems theory view. Economist Kenneth Boulding, an early researcher in systems theory, had concerns over the manipulation of systems concepts. Boulding concluded from the effects of the Cold War that abuses of power always prove consequential and that systems theory might address such issues. Since the end of the Cold War, a renewed interest in systems theory emerged, combined with efforts to strengthen an ethical view on the subject. In sociology, systems thinking also began in the 20th century, including Talcott Parsons' action theory and Niklas Luhmann's social systems theory. According to Rudolf Stichweh (2011):Since its beginnings the social sciences were an important part of the establishment of systems theory... [T]he two most influential suggestions were the comprehensive sociological versions of systems theory which were proposed by Talcott Parsons since the 1950s and by Niklas Luhmann since the 1970s.Elements of systems thinking can also be seen in the work of James Clerk Maxwell, particularly control theory. 
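Maxwell's 1868 analysis of governors is an early formal treatment of the feedback loops that cybernetics later generalized. A minimal Python sketch of a negative feedback controller, a hypothetical thermostat with made-up gains rather than any historical model, shows the basic pattern: measure the state, compare it with a goal, and feed the error back into the system.

def simulate_thermostat(setpoint=21.0, outside=5.0, steps=50):
    """Proportional negative feedback: heating power is proportional to the error."""
    temp = outside          # the room starts at the outside temperature
    gain = 0.5              # controller gain (arbitrary illustrative value)
    leak = 0.1              # fraction of the temperature gap lost to outside per step
    history = []
    for _ in range(steps):
        error = setpoint - temp            # compare the measurement with the goal
        heating = max(0.0, gain * error)   # negative feedback: act against the error
        temp += heating - leak * (temp - outside)
        history.append(round(temp, 2))
    return history

print(simulate_thermostat()[-5:])  # settles near (not exactly at) the set point

The steady offset from the set point is a known limitation of purely proportional control; the point here is only the loop structure of measurement, comparison and corrective action.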
General systems research and systems inquiry Many early systems theorists aimed at finding a general systems theory that could explain all systems in all fields of science. Ludwig von Bertalanffy began developing his 'general systems theory' via lectures in 1937 and then via publications from 1946. The concept received extensive focus in his 1968 book, General System Theory: Foundations, Development, Applications. There are many definitions of a general system; properties that such definitions commonly include are: an overall goal of the system, parts of the system and relationships between these parts, and emergent properties of the interaction between the parts of the system that are not performed by any part on its own. Derek Hitchins defines a system in terms of entropy as a collection of parts and relationships between the parts where the parts and their interrelationships decrease entropy. Bertalanffy aimed to bring together under one heading the organismic science that he had observed in his work as a biologist. He wanted to use the word system for those principles that are common to systems in general. In General System Theory (1968), he wrote: In the preface to von Bertalanffy's Perspectives on General System Theory, Ervin László stated: Bertalanffy outlines systems inquiry into three major domains: philosophy, science, and technology. In his work with the Primer Group, Béla H. Bánáthy generalized the domains into four integratable domains of systemic inquiry: philosophy: the ontology, epistemology, and axiology of systems; theory: a set of interrelated concepts and principles applying to all systems; methodology: the set of models, strategies, methods and tools that instrumentalize systems theory and philosophy; application: the application and interaction of the domains. These operate in a recursive relationship, he explained: integrating 'philosophy' and 'theory' as knowledge, and 'method' and 'application' as action, systems inquiry is thus knowledgeable action. Properties of general systems General systems may be split into a hierarchy of systems, where there are fewer interactions between the different subsystems than there are among the components within each subsystem. The alternative is heterarchy, where all components within the system interact with one another. Sometimes an entire system is represented inside another system as a part, in which case it is referred to as a holon. Such hierarchies of systems are studied in hierarchy theory. The amount of interaction between parts of systems higher in the hierarchy and parts of the system lower in the hierarchy is reduced. If all the parts of a system are tightly coupled (interact with one another a lot), then the system cannot be decomposed into different systems. The amount of coupling between parts of a system may differ temporally, with some parts interacting more often than others, or for different processes in a system. Herbert A. Simon distinguished between decomposable, nearly decomposable and nondecomposable systems. Russell L. Ackoff distinguished general systems by how their goals and subgoals could change over time. He distinguished between goal-maintaining, goal-seeking, multi-goal and reflective (or goal-changing) systems. 
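Simon's notion of near decomposability can be illustrated with a toy interaction matrix: if parts interact much more strongly inside their candidate subsystems than across them, the system can usefully be treated as a hierarchy of subsystems. The sketch below uses hypothetical part names and coupling strengths and simply compares within-group and between-group coupling.

# Toy interaction strengths between six parts grouped into two candidate subsystems.
coupling = {
    ("a1", "a2"): 0.9, ("a2", "a3"): 0.8, ("a1", "a3"): 0.7,   # inside subsystem A
    ("b1", "b2"): 0.9, ("b2", "b3"): 0.8, ("b1", "b3"): 0.6,   # inside subsystem B
    ("a3", "b1"): 0.05,                                        # weak bridge between A and B
}
groups = {"a1": "A", "a2": "A", "a3": "A", "b1": "B", "b2": "B", "b3": "B"}

def mean_coupling(pairs):
    values = [coupling[p] for p in pairs]
    return sum(values) / len(values) if values else 0.0

within = [p for p in coupling if groups[p[0]] == groups[p[1]]]
between = [p for p in coupling if groups[p[0]] != groups[p[1]]]

ratio = mean_coupling(within) / max(mean_coupling(between), 1e-9)
print(f"within/between coupling ratio: {ratio:.1f}")
# A large ratio suggests the system is nearly decomposable into A and B;
# a ratio near 1 means the parts are too tightly coupled to split cleanly.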
System types and fields Theoretical fields Chaos theory Complex system Control theory Dynamical systems theory Earth system science Ecological systems theory Living systems theory Sociotechnical system Systemics Urban metabolism World-systems theory Cybernetics Cybernetics is the study of the communication and control of regulatory feedback both in living and lifeless systems (organisms, organizations, machines), and in combinations of those. Its focus is how anything (digital, mechanical or biological) controls its behavior, processes information, reacts to information, and changes or can be changed to better accomplish those three primary tasks. The terms systems theory and cybernetics have been widely used as synonyms. Some authors use the term cybernetic systems to denote a proper subset of the class of general systems, namely those systems that include feedback loops. However, Gordon Pask's conception of eternal interacting actor loops (that produce finite products) makes general systems a proper subset of cybernetics. In cybernetics, complex systems have been examined mathematically by such researchers as W. Ross Ashby, Norbert Wiener, John von Neumann, and Heinz von Foerster. Threads of cybernetics began in the late 1800s and led toward the publication of seminal works (such as Wiener's Cybernetics in 1948 and Bertalanffy's General System Theory in 1968). Cybernetics arose more from engineering fields, and GST from biology. If anything, it appears that although the two probably mutually influenced each other, cybernetics had the greater influence. Bertalanffy specifically made the point of distinguishing between the areas in noting the influence of cybernetics: Systems theory is frequently identified with cybernetics and control theory. This again is incorrect. Cybernetics as the theory of control mechanisms in technology and nature is founded on the concepts of information and feedback, but as part of a general theory of systems.... [T]he model is of wide application but should not be identified with 'systems theory' in general ... [and] warning is necessary against its incautious expansion to fields for which its concepts are not made. Cybernetics, catastrophe theory, chaos theory and complexity theory have the common goal of explaining complex systems that consist of a large number of mutually interacting and interrelated parts in terms of those interactions. Cellular automata, neural networks, artificial intelligence, and artificial life are related fields, but do not try to describe general (universal) complex (singular) systems. The best context in which to compare the different "C"-theories about complex systems is historical, emphasizing how their tools and methodologies have shifted from pure mathematics in the beginning to pure computer science today. Since the beginning of chaos theory, when Edward Lorenz accidentally discovered a strange attractor with his computer, computers have become an indispensable source of information. One could not imagine the study of complex systems without the use of computers today. 
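Lorenz's strange attractor can be reproduced with a few lines of numerical integration. The sketch below uses the standard Lorenz equations and parameter values (sigma = 10, rho = 28, beta = 8/3) with a crude Euler step, which is enough to watch two nearly identical starting points diverge, the sensitivity that defeats long-term prediction.

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One Euler step of the Lorenz equations for dx/dt, dy/dt, dz/dt."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dt * dx, y + dt * dy, z + dt * dz)

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0000001)   # almost identical initial condition
for step in range(5000):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 1000 == 0:
        gap = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        print(f"step {step:5d}: separation {gap:.6f}")
# The separation grows by orders of magnitude: short-term prediction works,
# long-term prediction does not.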
System types Biological Anatomical systems Nervous Sensory Ecological systems Living systems Complex Complex adaptive system Conceptual Coordinate Deterministic (philosophy) Digital ecosystem Experimental Writing Coupled human–environment Database Deterministic (science) Mathematical Dynamical system Formal system Economic Energy Holarchical Information Legal Measurement Imperial Metric Multi-agent Nonlinear Operating Planetary Political Social Star Complex adaptive systems Complex adaptive systems (CAS), coined by John H. Holland, Murray Gell-Mann, and others at the interdisciplinary Santa Fe Institute, are special cases of complex systems: they are complex in that they are diverse and composed of multiple, interconnected elements; they are adaptive in that they have the capacity to change and learn from experience. In contrast to control systems, in which negative feedback dampens and reverses disequilibria, CAS are often subject to positive feedback, which magnifies and perpetuates changes, converting local irregularities into global features. See also List of types of systems theory Glossary of systems theory Autonomous agency theory Bibliography of sociology Cellular automata Chaos theory Complexity Emergence Engaged theory Fractal Grey box model Irreducible complexity Meta-systems Multidimensional systems Open and closed systems in social science Pattern language Recursion (computer science) Reductionism Redundancy (engineering) Reversal theory Social rule system theory Sociotechnical system Sociology and complexity science Structure–organization–process Systemantics System identification Systematics – study of multi-term systems Systemics Systemography Systems science Theoretical ecology Tektology User-in-the-loop Viable system theory Viable systems approach World-systems theory Structuralist economics Dependency theory Hierarchy theory Organizations List of systems sciences organizations References Further reading Ashby, W. Ross. 1956. An Introduction to Cybernetics. Chapman & Hall. —— 1960. Design for a Brain: The Origin of Adaptive Behavior (2nd ed.). Chapman & Hall. Bateson, Gregory. 1972. Steps to an Ecology of Mind: Collected essays in Anthropology, Psychiatry, Evolution, and Epistemology. University of Chicago Press. von Bertalanffy, Ludwig. 1968. General System Theory: Foundations, Development, Applications New York: George Braziller Burks, Arthur. 1970. Essays on Cellular Automata. University of Illinois Press. Cherry, Colin. 1957. On Human Communication: A Review, a Survey, and a Criticism. Cambridge: The MIT Press. Churchman, C. West. 1971. The Design of Inquiring Systems: Basic Concepts of Systems and Organizations. New York: Basic Books. Checkland, Peter. 1999. Systems Thinking, Systems Practice: Includes a 30-Year Retrospective. Wiley. Gleick, James. 1997. Chaos: Making a New Science, Random House. Haken, Hermann. 1983. Synergetics: An Introduction – 3rd Edition, Springer. Holland, John H. 1992. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence. Cambridge: The MIT Press. Luhmann, Niklas. 2013. Introduction to Systems Theory, Polity. Macy, Joanna. 1991. Mutual Causality in Buddhism and General Systems Theory: The Dharma of Natural Systems. SUNY Press. Maturana, Humberto, and Francisco Varela. 1980. Autopoiesis and Cognition: The Realization of the Living. Springer Science & Business Media. Miller, James Grier. 1978. Living Systems. Mcgraw-Hill. von Neumann, John. 
1951 "The General and Logical Theory of Automata." pp. 1–41 in Cerebral Mechanisms in Behavior. —— 1956. "Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components." Automata Studies 34: 43–98. von Neumann, John, and Arthur Burks, eds. 1966. Theory of Self-Reproducing Automata. Illinois University Press. Parsons, Talcott. 1951. The Social System. The Free Press. Prigogine, Ilya. 1980. From Being to Becoming: Time and Complexity in the Physical Sciences. W H Freeman & Co. Simon, Herbert A. 1962. "The Architecture of Complexity." Proceedings of the American Philosophical Society, 106. —— 1996. The Sciences of the Artificial (3rd ed.), vol. 136. The MIT Press. Shannon, Claude, and Warren Weaver. 1949. The Mathematical Theory of Communication. . Adapted from Shannon, Claude. 1948. "A Mathematical Theory of Communication." Bell System Technical Journal 27(3): 379–423. . Thom, René. 1972. Structural Stability and Morphogenesis: An Outline of a General Theory of Models. Reading, Massachusetts Volk, Tyler. 1995. Metapatterns: Across Space, Time, and Mind. New York: Columbia University Press. Weaver, Warren. 1948. "Science and Complexity." The American Scientist, pp. 536–544. Wiener, Norbert. 1965. Cybernetics: Or the Control and Communication in the Animal and the Machine (2nd ed.). Cambridge: The MIT Press. Wolfram, Stephen. 2002. A New Kind of Science. Wolfram Media. Zadeh, Lofti. 1962. "From Circuit Theory to System Theory." Proceedings of the IRE 50(5): 856–865. External links Systems Thinking at Wikiversity Systems theory at Principia Cybernetica Web Introduction to systems thinking – 55 slides Organizations International Society for the System Sciences New England Complex Systems Institute System Dynamics Society Emergence Interdisciplinary subfields of sociology Complex systems theory Systems science
Complex system
A complex system is a system composed of many components which may interact with each other. Examples of complex systems are Earth's global climate, organisms, the human brain, infrastructure such as power grids, transportation or communication systems, complex software and electronic systems, social and economic organizations (like cities), an ecosystem, a living cell, and, ultimately, for some authors, the entire universe. Complex systems are systems whose behavior is intrinsically difficult to model due to the dependencies, competitions, relationships, or other types of interactions between their parts or between a given system and its environment. Systems that are "complex" have distinct properties that arise from these relationships, such as nonlinearity, emergence, spontaneous order, adaptation, and feedback loops, among others. Because such systems appear in a wide variety of fields, the commonalities among them have become the topic of an independent area of research. In many cases, it is useful to represent such a system as a network where the nodes represent the components and the links represent their interactions. The term complex systems often refers to the study of complex systems, which is an approach to science that investigates how relationships between a system's parts give rise to its collective behaviors and how the system interacts and forms relationships with its environment. The study of complex systems regards collective, or system-wide, behaviors as the fundamental object of study; for this reason, complex systems can be understood as an alternative paradigm to reductionism, which attempts to explain systems in terms of their constituent parts and the individual interactions between them. As an interdisciplinary domain, complex systems draws contributions from many different fields, such as the study of self-organization and critical phenomena from physics, of spontaneous order from the social sciences, chaos from mathematics, adaptation from biology, and many others. Complex systems is therefore often used as a broad term encompassing a research approach to problems in many diverse disciplines, including statistical physics, information theory, nonlinear dynamics, anthropology, computer science, meteorology, sociology, economics, psychology, and biology. Key concepts Adaptation Complex adaptive systems are special cases of complex systems that are adaptive in that they have the capacity to change and learn from experience. Examples of complex adaptive systems include the stock market, social insect and ant colonies, the biosphere and the ecosystem, the brain and the immune system, the cell and the developing embryo, cities, manufacturing businesses and any human social group-based endeavor in a cultural and social system such as political parties or communities. Features Complex systems may have the following features: Complex systems may be open Complex systems are usually open systems — that is, they exist in a thermodynamic gradient and dissipate energy. In other words, complex systems are frequently far from energetic equilibrium; but despite this flux, there may be pattern stability (see synergetics). Complex systems may exhibit critical transitions Critical transitions are abrupt shifts in the state of ecosystems, the climate, financial systems or other complex systems that may occur when changing conditions pass a critical or bifurcation point. 
The 'direction of critical slowing down' in a system's state space may be indicative of a system's future state after such transitions when delayed negative feedbacks leading to oscillatory or other complex dynamics are weak. Complex systems may be nested The components of a complex system may themselves be complex systems. For example, an economy is made up of organisations, which are made up of people, which are made up of cells – all of which are complex systems. The arrangement of interactions within complex bipartite networks may be nested as well. More specifically, bipartite ecological and organisational networks of mutually beneficial interactions were found to have a nested structure. This structure promotes indirect facilitation and a system's capacity to persist under increasingly harsh circumstances as well as the potential for large-scale systemic regime shifts. Dynamic network of multiplicity As well as coupling rules, the dynamic network of a complex system is important. Small-world or scale-free networks which have many local interactions and a smaller number of inter-area connections are often employed. Natural complex systems often exhibit such topologies. In the human cortex for example, we see dense local connectivity and a few very long axon projections between regions inside the cortex and to other brain regions. May produce emergent phenomena Complex systems may exhibit behaviors that are emergent, which is to say that while the results may be sufficiently determined by the activity of the systems' basic constituents, they may have properties that can only be studied at a higher level. For example, empirical food webs display regular, scale-invariant features across aquatic and terrestrial ecosystems when studied at the level of clustered 'trophic' species. Another example is offered by the termites in a mound which have physiology, biochemistry and biological development at one level of analysis, whereas their social behavior and mound building is a property that emerges from the collection of termites and needs to be analyzed at a different level. Relationships are non-linear In practical terms, this means a small perturbation may cause a large effect (see butterfly effect), a proportional effect, or even no effect at all. In linear systems, the effect is always directly proportional to cause. See nonlinearity. Relationships contain feedback loops Both negative (damping) and positive (amplifying) feedback are always found in complex systems. The effects of an element's behavior are fed back in such a way that the element itself is altered. History In 1948, Dr. Warren Weaver published an essay on "Science and Complexity", exploring the diversity of problem types by contrasting problems of simplicity, disorganized complexity, and organized complexity. Weaver described these as "problems which involve dealing simultaneously with a sizable number of factors which are interrelated into an organic whole." While the explicit study of complex systems dates at least to the 1970s, the first research institute focused on complex systems, the Santa Fe Institute, was founded in 1984. Early Santa Fe Institute participants included physics Nobel laureates Murray Gell-Mann and Philip Anderson, economics Nobel laureate Kenneth Arrow, and Manhattan Project scientists George Cowan and Herb Anderson. Today, there are over 50 institutes and research centers focusing on complex systems. 
Since the late 1990s, the interest of mathematical physicists in researching economic phenomena has been on the rise. The proliferation of cross-disciplinary research applying solutions that originated in the epistemology of physics has entailed a gradual paradigm shift in the theoretical articulations and methodological approaches in economics, primarily in financial economics. This development has resulted in the emergence of a new branch of the discipline, namely "econophysics", broadly defined as a cross-discipline that applies statistical physics methodologies, mostly based on complex systems theory and chaos theory, to economic analysis. The 2021 Nobel Prize in Physics was awarded to Syukuro Manabe, Klaus Hasselmann, and Giorgio Parisi for their work to understand complex systems. Their work was used to create more accurate computer models of the effect of global warming on the Earth's climate. Applications Complexity in practice The traditional approach to dealing with complexity is to reduce or constrain it. Typically, this involves compartmentalization: dividing a large system into separate parts. Organizations, for instance, divide their work into departments that each deal with separate issues. Engineering systems are often designed using modular components. However, modular designs become susceptible to failure when issues arise that bridge the divisions. Complexity of cities Jane Jacobs described cities as being a problem in organized complexity in 1961, citing Weaver's 1948 essay. As an example, she explains how an abundance of factors interplay into how various urban spaces lead to a diversity of interactions, and how changing those factors can change how the space is used, and how well the space supports the functions of the city. She further illustrates how cities have been severely damaged when approached as a problem in simplicity by replacing organized complexity with simple and predictable spaces, such as Le Corbusier's "Radiant City" and Ebenezer Howard's "Garden City". Since then, others have written at length on the complexity of cities. Complexity economics Over the last decades, within the emerging field of complexity economics, new predictive tools have been developed to explain economic growth. Such is the case with the models built by the Santa Fe Institute in 1989 and the more recent economic complexity index (ECI), introduced by the MIT physicist Cesar A. Hidalgo and the Harvard economist Ricardo Hausmann. Recurrence quantification analysis has been employed to detect the characteristics of business cycles and economic development. To this end, Orlando et al. developed the so-called recurrence quantification correlation index (RQCI) to test correlations of RQA on a sample signal and then investigated the application to business time series. The index has been shown to detect hidden changes in time series. Further, Orlando et al., over an extensive dataset, showed that recurrence quantification analysis may help in anticipating transitions from laminar (i.e. regular) to turbulent (i.e. chaotic) phases, such as in US GDP in 1949, 1953, and other years. Last but not least, it has been demonstrated that recurrence quantification analysis can detect differences between macroeconomic variables and highlight hidden features of economic dynamics. 
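The recurrence quantification correlation index of Orlando et al. is not reproduced here, but the recurrence matrix on which recurrence quantification analysis rests is easy to sketch: two moments in time count as "recurrent" when the system's states are closer than a chosen threshold, and the recurrence rate summarizes how often that happens. The series and threshold below are invented purely for illustration.

def recurrence_matrix(series, threshold):
    """R[i][j] = 1 when |x_i - x_j| < threshold (a scalar-valued toy version)."""
    n = len(series)
    return [[1 if abs(series[i] - series[j]) < threshold else 0 for j in range(n)]
            for i in range(n)]

def recurrence_rate(matrix):
    n = len(matrix)
    return sum(sum(row) for row in matrix) / (n * n)

# A made-up series in the style of a business cycle: slow drift, then a regime change.
series = [1.0, 1.1, 1.2, 1.15, 1.25, 1.3, 2.4, 2.5, 2.45, 2.6]
R = recurrence_matrix(series, threshold=0.2)
print(f"recurrence rate: {recurrence_rate(R):.2f}")
# Blocks of 1s along the diagonal mark laminar (regular) phases; the absence of
# recurrences between the two halves of the series marks the hidden change.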
Complexity and education Focusing on issues of student persistence with their studies, Forsman, Moll and Linder explore the "viability of using complexity science as a frame to extend methodological applications for physics education research", finding that "framing a social network analysis within a complexity science perspective offers a new and powerful applicability across a broad range of PER topics". Complexity in healthcare research and practice Healthcare systems are prime examples of complex systems, characterized by interactions among diverse stakeholders, such as patients, providers, policymakers, and researchers, across various sectors like health, government, community, and education. These systems demonstrate properties like non-linearity, emergence, adaptation, and feedback loops. Complexity science in healthcare frames knowledge translation as a dynamic and interconnected network of processes—problem identification, knowledge creation, synthesis, implementation, and evaluation—rather than a linear or cyclical sequence. Such approaches emphasize the importance of understanding and leveraging the interactions within and between these processes and stakeholders to optimize the creation and movement of knowledge. By acknowledging the complex, adaptive nature of healthcare systems, complexity science advocates for continuous stakeholder engagement, transdisciplinary collaboration, and flexible strategies to effectively translate research into practice. Complexity and biology Complexity science has been applied to living organisms, and in particular to biological systems. Within the emerging field of fractal physiology, bodily signals, such as heart rate or brain activity, are characterized using entropy or fractal indices. The goal is often to assess the state and the health of the underlying system, and diagnose potential disorders and illnesses. Complexity and chaos theory Complex systems theory is related to chaos theory, which in turn has its origins more than a century ago in the work of the French mathematician Henri Poincaré. Chaos is sometimes viewed as extremely complicated information, rather than as an absence of order. Chaotic systems remain deterministic, though their long-term behavior can be difficult to predict with any accuracy. With perfect knowledge of the initial conditions and the relevant equations describing the chaotic system's behavior, one can theoretically make perfectly accurate predictions of the system, though in practice this is impossible to do with arbitrary accuracy. The emergence of complex systems theory shows a domain between deterministic order and randomness which is complex. This is referred to as the "edge of chaos". When one analyzes complex systems, sensitivity to initial conditions, for example, is not an issue as important as it is within chaos theory, in which it prevails. As stated by Colander, the study of complexity is the opposite of the study of chaos. Complexity is about how a huge number of extremely complicated and dynamic sets of relationships can generate some simple behavioral patterns, whereas chaotic behavior, in the sense of deterministic chaos, is the result of a relatively small number of non-linear interactions. For recent examples in economics and business see Stoop et al. who discussed Android's market position, Orlando who explained the corporate dynamics in terms of mutual synchronization and chaos regularization of bursts in a group of chaotically bursting cells and Orlando et al. 
who modelled financial data (Financial Stress Index, swap and equity, emerging and developed, corporate and government, short and long maturity) with a low-dimensional deterministic model. Therefore, the main difference between chaotic systems and complex systems is their history. Chaotic systems do not rely on their history as complex ones do. Chaotic behavior pushes a system in equilibrium into chaotic order, which means, in other words, out of what we traditionally define as 'order'. On the other hand, complex systems evolve far from equilibrium at the edge of chaos. They evolve at a critical state built up by a history of irreversible and unexpected events, which physicist Murray Gell-Mann called "an accumulation of frozen accidents". In a sense chaotic systems can be regarded as a subset of complex systems distinguished precisely by this absence of historical dependence. Many real complex systems are, in practice and over long but finite periods, robust. However, they do possess the potential for radical qualitative change of kind whilst retaining systemic integrity. Metamorphosis serves as perhaps more than a metaphor for such transformations. Complexity and network science A complex system is usually composed of many components and their interactions. Such a system can be represented by a network where nodes represent the components and links represent their interactions. For example, the Internet can be represented as a network composed of nodes (computers) and links (direct connections between computers). Other examples of complex networks include social networks, financial institution interdependencies, airline networks, and biological networks. Notable scholars See also References Further reading Complexity Explained. L.A.N. Amaral and J.M. Ottino, Complex networks — augmenting the framework for the study of complex system, 2004. Walter Clemens, Jr., Complexity Science and World Affairs, SUNY Press, 2013. A. Gogolin, A. Nersesyan and A. Tsvelik, Theory of strongly correlated systems , Cambridge University Press, 1999. Nigel Goldenfeld and Leo P. Kadanoff, Simple Lessons from Complexity , 1999 Kelly, K. (1995). Out of Control, Perseus Books Group. Syed M. Mehmud (2011), A Healthcare Exchange Complexity Model Preiser-Kapeller, Johannes, "Calculating Byzantium. Social Network Analysis and Complexity Sciences as tools for the exploration of medieval social dynamics". August 2010 Stefan Thurner, Peter Klimek, Rudolf Hanel: Introduction to the Theory of Complex Systems, Oxford University Press, 2018, SFI @30, Foundations & Frontiers (2014). External links (Interdisciplinary Description of Complex Systems) Complex systems in scholarpedia. Complex Systems Society (Australian) Complex systems research network. Complex Systems Modeling based on Luis M. Rocha, 1999. CRM Complex systems research group The Center for Complex Systems Research, Univ. of Illinois at Urbana-Champaign Complex dynamics Mathematical modeling
Agronomy
Agronomy is the science and technology of producing and using plants by agriculture for food, fuel, fiber, chemicals, recreation, or land conservation. Agronomy has come to include research of plant genetics, plant physiology, meteorology, and soil science. It is the application of a combination of sciences such as biology, chemistry, economics, ecology, earth science, and genetics. Professionals of agronomy are termed agronomists. History Agronomy has a long and rich history dating back to the Neolithic Revolution. Some of the earliest practices of agronomy are found in ancient civilizations, including Ancient Egypt, Mesopotamia, China and India. They developed various techniques for the management of soil fertility, irrigation and crop rotation. During the 18th and 19th centuries, advances in science led to the development of modern agronomy. German chemist Justus von Liebig and John Bennett Lawes, an English entrepreneur, contributed to the understanding of plant nutrition and soil chemistry. Their work laid the foundation for modern fertilizers and agricultural practices. Agronomy continued to evolve with the development of new technology and practices in the 20th century. From the 1960s, the Green Revolution saw the introduction of high-yield crop varieties, modern fertilizers and improved agricultural practices. It increased global food production and helped reduce hunger and poverty in many parts of the world. Plant breeding This topic of agronomy involves selective breeding of plants to produce the best crops for various conditions. Plant breeding has increased crop yields and has improved the nutritional value of numerous crops, including corn, soybeans, and wheat. It has also resulted in the development of new types of plants. For example, a hybrid grain named triticale was produced by crossbreeding rye and wheat. Triticale contains more usable protein than does either rye or wheat. Agronomy has also been instrumental in fruit and vegetable production research. Furthermore, the application of plant breeding for turfgrass development has resulted in a reduction in the demand for fertilizer and water inputs (requirements), as well as turf-types with higher disease resistance. Biotechnology Agronomists use biotechnology to extend and expedite the development of desired characteristics. Biotechnology is often a laboratory activity requiring field testing of new crop varieties that are developed. In addition to increasing crop yields, agronomic biotechnology is increasingly being applied to novel uses other than food. For example, oilseed is at present used mainly for margarine and other food oils, but it can be modified to produce fatty acids for detergents, substitute fuels and petrochemicals. Soil science Agronomists study sustainable ways to make soils more productive and profitable. They classify soils and analyze them to determine whether they contain nutrients vital for plant growth. Common macronutrients analyzed include compounds of nitrogen, phosphorus, potassium, calcium, magnesium, and sulfur. Soil is also assessed for several micronutrients, like zinc and boron. The percentage of organic matter, soil pH, and nutrient holding capacity (cation exchange capacity) are tested in a regional laboratory. Agronomists will interpret these laboratory reports and make recommendations to modify soil nutrients for optimal plant growth. Soil conservation Additionally, agronomists develop methods to preserve soil and decrease the effects of erosion by wind and water. 
For example, a technique known as contour plowing may be used to prevent soil erosion and conserve rainfall. Researchers of agronomy also seek ways to use the soil more effectively for solving other problems. Such problems include the disposal of human and animal manure, water pollution, and pesticide accumulation in the soil, as well as preserving the soil for future generations such as the burning of paddocks after crop production. Pasture management techniques include no-till farming, planting of soil-binding grasses along contours on steep slopes, and using contour drains of depths as much as 1 metre. Agroecology Agroecology is the management of agricultural systems with an emphasis on ecological and environmental applications. This topic is associated closely with work for sustainable agriculture, organic farming, and alternative food systems and the development of alternative cropping systems. Theoretical modeling Theoretical production ecology is the quantitative study of the growth of crops. The plant is treated as a kind of biological factory, which processes light, carbon dioxide, water, and nutrients into harvestable products. The main parameters considered are temperature, sunlight, standing crop biomass, plant production distribution, and nutrient and water supply. See also Agricultural engineering Agricultural policy Agroecology Agrology Agrophysics Crop farming Food systems Horticulture Green Revolution Vegetable farming References Bibliography Wendy B. Murphy, The Future World of Agriculture, Watts, 1984. Antonio Saltini, Storia delle scienze agrarie, 4 vols, Bologna 1984–89, , , , External links The American Society of Agronomy (ASA) Crop Science Society of America (CSSA) Soil Science Society of America (SSSA) European Society for Agronomy The National Agricultural Library (NAL) – Comprehensive agricultural library. Information System for Agriculture and Food Research . Applied sciences Plant agriculture
Balance of nature
The balance of nature, also known as ecological balance, is a theory that proposes that ecological systems are usually in a stable equilibrium or homeostasis, which is to say that a small change (the size of a particular population, for example) will be corrected by some negative feedback that will bring the parameter back to its original "point of balance" with the rest of the system. The balance is sometimes depicted as easily disturbed and delicate, while other times it is inversely portrayed as powerful enough to correct any imbalances by itself. The concept has been described as "normative", as well as teleological, as it makes a claim about how nature should be: nature is balanced because "it is supposed to be balanced". The theory has been employed to describe how populations depend on each other, for example in predator-prey systems, or relationships between herbivores and their food source. It is also sometimes applied to the relationship between the Earth's ecosystem, the composition of the atmosphere, and weather. The theory has been discredited by scientists working in ecology, as it has been found that constant disturbances leading to chaotic and dynamic changes are the norm in nature. During the later half of the 20th century, it was superseded by catastrophe theory, chaos theory, and thermodynamics. Nevertheless, the idea maintains popularity amongst conservationists, environmentalists and the general public. History of the theory The concept that nature maintains its condition is of ancient provenance. Herodotus asserted that predators never excessively consume prey populations and described this balance as "wonderful". Two of Plato's dialogues, the Timaeus and Protagoras myths, support the balance of nature concept. Cicero advanced the theory of "a balance of nature generated by different reproductive rates and traits among species, as well as interactions among species". The balance of nature concept once ruled ecological research and governed the management of natural resources. This led to a doctrine popular among some conservationists that nature was best left to its own devices, and that human intervention into it was by definition unacceptable. The theory was a central theme in the 1962 book Silent Spring by Rachel Carson, widely-considered to be the most important environmental book of the 20th century. The controversial Gaia hypothesis was developed in the 1970s by James Lovelock and Lynn Margulis. It asserts that living beings interact with Earth to form a complex system which self-regulates to maintain the balance of nature. The validity of a balance of nature was already questioned in the early 1900s, but the general abandonment of the theory by scientists working in ecology only happened in the last quarter of that century, when studies showed that it did not match what could be observed among plant and animal populations. Predator-prey interactions Predator-prey populations tend to show chaotic behavior within limits, where the sizes of populations change in a way that may appear random but is, in fact, obeying deterministic laws based only on the relationship between a population and its food source illustrated by the Lotka–Volterra equation. An experimental example of this was shown in an eight-year study on small Baltic Sea creatures such as plankton, which were isolated from the rest of the ocean. Each member of the food web was shown to take turns multiplying and declining, even though the scientists kept the outside conditions constant. 
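The Lotka–Volterra equations mentioned above can be simulated directly; the sketch below uses arbitrary illustrative parameters rather than values fitted to any real population. Even this two-species model never settles into a fixed "balance": prey and predator numbers keep cycling around an equilibrium they never reach, and coupling more species together, as in the Baltic Sea food web just described, is what opens the door to the chaotic dynamics reported next.

def lotka_volterra(prey, pred, steps=2000, dt=0.01,
                   a=1.0, b=0.1, c=1.5, d=0.075):
    """Euler integration of dprey/dt = a*prey - b*prey*pred and
    dpred/dt = -c*pred + d*prey*pred (illustrative parameters only)."""
    history = []
    for _ in range(steps):
        dprey = a * prey - b * prey * pred
        dpred = -c * pred + d * prey * pred
        prey += dt * dprey
        pred += dt * dpred
        history.append((round(prey, 1), round(pred, 1)))
    return history

trajectory = lotka_volterra(prey=40.0, pred=9.0)
print(trajectory[::400])  # sampled points: the populations oscillate rather than settle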
An article in the journal Nature stated: "Advanced mathematical techniques proved the indisputable presence of chaos in this food web ... short-term prediction is possible, but long-term prediction is not." Human intervention Although some conservationist organizations argue that human activity is incompatible with a balanced ecosystem, there are numerous examples in history showing that several modern-day habitats originate from human activity: some of Latin America's rain forests owe their existence to humans planting and transplanting them, while the abundance of grazing animals in the Serengeti plain of Africa is thought by some ecologists to be partly due to human-set fires that created savanna habitats. One of the best-known and often misunderstood examples of ecosystem balance being enhanced by human activity is the Australian Aboriginal practice of fire-stick farming. This uses low-intensity fire when there is sufficient humidity to limit its action, to reduce the quantity of ground-level combustible material, to lessen the intensity and devastation of forest fires caused by lightning at the end of the dry season. Several plant species are adapted to fire, some even requiring its extreme heat to germinate their seeds. Continued popularity of the theory Despite being discredited among ecologists, the theory is widely held to be true by the general public, conservationists and environmentalists, with one author calling it an "enduring myth". Environmental and conservation organizations such as the WWF, Sierra Club and Canadian Wildlife Federation continue to promote the theory, as do animal rights organizations such as PETA. Kim Cuddington considers the balance of nature to be a "foundational metaphor in ecology", which is still in active use by ecologists. She argues that many ecologists see nature as a "beneficent force" and that they also view the universe as being innately predictable; Cuddington asserts that the balance of nature acts as a "shorthand for the paradigm expressing this worldview". Douglas Allchin and Alexander J. Werth assert that although "ecologists formally eschew the concept of balance of nature, it remains a widely adopted preconception and a feature of language that seems not to disappear entirely." At least in Midwestern America, the balance of nature idea was shown to be widely held by both science majors and the general student population. In a study at the University of Patras, educational sciences students were asked to reason about the future of ecosystems which suffered human-driven disturbances. Subjects agreed that it was very likely for the ecosystems to fully recover their initial state, referring to either a 'recovery process' which restores the initial 'balance', or specific 'recovery mechanisms' as an ecosystem's inherent characteristic. In a 2017 study, Ampatzidis and Ergazaki discuss the learning objectives and design criteria that a learning environment for non-biology major students should meet to support them challenge the balance of nature concept. In a 2018 study, the same authors report on the theoretical output of a design research study, which concerns the design of a learning environment for helping students challenge their beliefs regarding the balance of nature and reach an up-to-date understanding about ecosystems' contingency. In popular culture In Ursula K. Le Guin's Earthsea fantasy series, using magic means to "respect and preserve the immanent metaphysical balance of nature." 
The balance of nature (referred to as "the circle of life") is a major theme of the 1994 film, The Lion King. In one scene, the character Mufasa describes to his son Simba how everything exists in a state of delicate balance. The character Agent Smith, in the 1999 film The Matrix, describes humanity as a virus, claiming that humans fail to reach an equilibrium with their surrounding environment; unlike other mammals. The disruption of the balance of nature is a common theme in Hayao Miyazaki's films: Nausicaä of the Valley of the Wind, released in 1984, is set in a post-apocalyptic world where humans have upset the balance of nature through war; the 1997 film Princess Mononoke, depicts irresponsible activities by humans as having damaged the balance of nature; in the 2008 film Ponyo, the titular character disturbs the balance of nature when she seeks to become human. The titular character of the 2014 film Godzilla fights other sea monsters known as "MUTOs" in a bid to restore the balance of nature. In the 2018 film Avengers: Infinity War, the villain Thanos seeks to restore the balance of nature by eliminating half of the beings in the universe. See also Ecological footprint Social metabolism References Nature Ecology Obsolete scientific theories Teleology
Pedology
Pedology (from Greek: πέδον, pedon, "soil"; and λόγος, logos, "study") is a discipline within soil science which focuses on understanding and characterizing soil formation, evolution, and the theoretical frameworks for modeling soil bodies, often in the context of the natural environment. Pedology is often seen as one of two main branches of soil inquiry, the other being edaphology, which is traditionally more agronomically oriented and focuses on how soil properties influence plant communities (natural or cultivated). In studying the fundamental phenomenology of soils, e.g. soil formation (also known as pedogenesis), pedologists pay particular attention to observing soil morphology and the geographic distributions of soils, and the placement of soil bodies into larger temporal and spatial contexts. In so doing, pedologists develop systems of soil classification, soil maps, and theories for characterizing temporal and spatial interrelations among soils. There are a few noteworthy sub-disciplines of pedology, namely pedometrics and soil geomorphology. Pedometrics focuses on the development of techniques for quantitative characterization of soils, especially for the purposes of mapping soil properties, whereas soil geomorphology studies the interrelationships between geomorphic processes and soil formation. Overview Soil is not only a support for vegetation, but it is also the pedosphere, the locus of numerous interactions between climate (water, air, temperature), soil life (micro-organisms, plants, animals) and its residues, the mineral material of the original and added rock, and its position in the landscape. During its formation and genesis, the soil profile slowly deepens and develops characteristic layers, called 'horizons', while a steady state balance is approached. Soil users (such as agronomists) initially showed little concern for the dynamics of soil. They saw it as a medium whose chemical, physical and biological properties were useful in the service of agronomic productivity. On the other hand, pedologists and geologists did not initially focus on the agronomic applications of soil characteristics (edaphic properties) but on the soil's relation to the nature and history of landscapes. Today, there is an integration of the two disciplinary approaches as part of landscape and environmental sciences. Pedologists are now also interested in the practical applications of a good understanding of pedogenesis processes (the evolution and functioning of soils), such as interpreting its environmental history and predicting consequences of changes in land use, while agronomists understand that the cultivated soil is a complex medium, often resulting from several thousand years of evolution. They understand that the current balance is fragile and that only a thorough knowledge of its history makes it possible to ensure its sustainable use. Concepts Important pedological concepts include: Complexity in soil genesis is more common than simplicity. Soils lie at the interface of Earth's atmosphere, biosphere, hydrosphere and lithosphere. Therefore, a thorough understanding of soils requires some knowledge of meteorology, climatology, ecology, biology, hydrology, geomorphology, geology and many other earth sciences and natural sciences. Contemporary soils carry imprints of pedogenic processes that were active in the past, although in many cases these imprints are difficult to observe or quantify. 
Thus, knowledge of paleoecology, palaeogeography, glacial geology and paleoclimatology is important for the recognition and understanding of soil genesis and constitute a basis for predicting future soil changes. Five major, external factors of formation (climate, organisms, relief, parent material and time), and several smaller, less identifiable ones, drive pedogenic processes and create soil patterns. Characteristics of soils and soil landscapes, e.g., the number, sizes, shapes and arrangements of soil bodies, each of which is characterized on the basis of soil horizons, degree of internal homogeneity, slope, aspect, landscape position, age and other properties and relationships, can be observed and measured. Distinctive bioclimatic regimes or combinations of pedogenic processes produce distinctive soils. Thus, distinctive, observable morphological features, e.g., illuvial clay accumulation in B horizons, are produced by certain combinations of pedogenic processes operative over varying periods of time. Pedogenic (soil-forming) processes act to both create and destroy order (anisotropy) within soils; these processes can proceed simultaneously. The resulting soil profile reflects the balance of these processes, present and past. The geological Principle of Uniformitarianism applies to soils, i.e., pedogenic processes active in soils today have been operating for long periods of time, back to the time of appearance of organisms on the land surface. These processes do, however, have varying degrees of expression and intensity over space and time. A succession of different soils may have developed, eroded and/or regressed at any particular site, as soil genetic factors and site factors, e.g., vegetation, sedimentation, geomorphology, change. There are very few old soils (in a geological sense) because they can be destroyed or buried by geological events, or modified by shifts in climate by virtue of their vulnerable position at the surface of the earth. Little of the soil continuum dates back beyond the Tertiary period and most soils and land surfaces are no older than the Pleistocene Epoch. However, preserved/lithified soils (paleosols) are an almost ubiquitous feature in terrestrial (land-based) environments throughout most of geologic time. Since they record evidence of ancient climate change, they present immense utility in understanding climate evolution throughout geologic history. Knowledge and understanding of the genesis of a soil is important in its classification and mapping. Soil classification systems cannot be based entirely on perceptions of genesis, however, because genetic processes are seldom observed and because pedogenic processes change over time. Knowledge of soil genesis is imperative and basic to soil use and management. Human influence on, or adjustment to, the factors and processes of soil formation can be best controlled and planned using knowledge about soil genesis. Soils are natural clay factories (clay includes both clay mineral structures and particles less than 2 μm in diameter). Shales worldwide are, to a considerable extent, simply soil clays that have been formed in the pedosphere and eroded and deposited in the ocean basins, to become lithified at a later date. Notable pedologists Olivier de Serres Vasily V. Dokuchaev Friedrich Albert Fallou Konstantin D. Glinka Eugene W. Hilgard Francis D. Hole Hans Jenny Curtis F. 
Marbut Bernard Palissy See also Agricultural sciences basic topics List of soil topics Pedogenesis References External links Physical geography Soil science
Convergent evolution
Convergent evolution is the independent evolution of similar features in species of different periods or epochs in time. Convergent evolution creates analogous structures that have similar form or function but were not present in the last common ancestor of those groups. The cladistic term for the same phenomenon is homoplasy. The recurrent evolution of flight is a classic example, as flying insects, birds, pterosaurs, and bats have independently evolved the useful capacity of flight. Functionally similar features that have arisen through convergent evolution are analogous, whereas homologous structures or traits have a common origin but can have dissimilar functions. Bird, bat, and pterosaur wings are analogous structures, but their forelimbs are homologous, sharing an ancestral state despite serving different functions. The opposite of convergence is divergent evolution, where related species evolve different traits. Convergent evolution is similar to parallel evolution, which occurs when two independent species evolve in the same direction and thus independently acquire similar characteristics; for instance, gliding frogs have evolved in parallel from multiple types of tree frog. Many instances of convergent evolution are known in plants, including the repeated development of C4 photosynthesis, seed dispersal by fleshy fruits adapted to be eaten by animals, and carnivory. Overview In morphology, analogous traits arise when different species live in similar ways and/or a similar environment, and so face the same environmental factors. When occupying similar ecological niches (that is, a distinctive way of life) similar problems can lead to similar solutions. The British anatomist Richard Owen was the first to identify the fundamental difference between analogies and homologies. In biochemistry, physical and chemical constraints on mechanisms have caused some active site arrangements such as the catalytic triad to evolve independently in separate enzyme superfamilies. In his 1989 book Wonderful Life, Stephen Jay Gould argued that if one could "rewind the tape of life [and] the same conditions were encountered again, evolution could take a very different course." Simon Conway Morris disputes this conclusion, arguing that convergence is a dominant force in evolution, and given that the same environmental and physical constraints are at work, life will inevitably evolve toward an "optimum" body plan, and at some point, evolution is bound to stumble upon intelligence, a trait presently identified with at least primates, corvids, and cetaceans. Distinctions Cladistics In cladistics, a homoplasy is a trait shared by two or more taxa for any reason other than that they share a common ancestry. Taxa which do share ancestry are part of the same clade; cladistics seeks to arrange them according to their degree of relatedness to describe their phylogeny. Homoplastic traits caused by convergence are therefore, from the point of view of cladistics, confounding factors which could lead to an incorrect analysis. Atavism In some cases, it is difficult to tell whether a trait has been lost and then re-evolved convergently, or whether a gene has simply been switched off and then re-enabled later. Such a re-emerged trait is called an atavism. From a mathematical standpoint, an unused gene (selectively neutral) has a steadily decreasing probability of retaining potential functionality over time. 
The time scale of this process varies greatly in different phylogenies; in mammals and birds, there is a reasonable probability of remaining in the genome in a potentially functional state for around 6 million years. Parallel vs. convergent evolution When two species are similar in a particular character, evolution is defined as parallel if the ancestors were also similar, and convergent if they were not. Some scientists have argued that there is a continuum between parallel and convergent evolution, while others maintain that despite some overlap, there are still important distinctions between the two. When the ancestral forms are unspecified or unknown, or the range of traits considered is not clearly specified, the distinction between parallel and convergent evolution becomes more subjective. For instance, the striking example of similar placental and marsupial forms is described by Richard Dawkins in The Blind Watchmaker as a case of convergent evolution, because mammals on each continent had a long evolutionary history prior to the extinction of the dinosaurs under which to accumulate relevant differences. At molecular level Proteins Protease active sites The enzymology of proteases provides some of the clearest examples of convergent evolution. These examples reflect the intrinsic chemical constraints on enzymes, leading evolution to converge on equivalent solutions independently and repeatedly. Serine and cysteine proteases use different amino acid functional groups (alcohol or thiol) as a nucleophile. In order to activate that nucleophile, they orient an acidic and a basic residue in a catalytic triad. The chemical and physical constraints on enzyme catalysis have caused identical triad arrangements to evolve independently more than 20 times in different enzyme superfamilies. Threonine proteases use the amino acid threonine as their catalytic nucleophile. Unlike cysteine and serine, threonine is a secondary alcohol (i.e. has a methyl group). The methyl group of threonine greatly restricts the possible orientations of triad and substrate, as the methyl clashes with either the enzyme backbone or the histidine base. Consequently, most threonine proteases use an N-terminal threonine in order to avoid such steric clashes. Several evolutionarily independent enzyme superfamilies with different protein folds use the N-terminal residue as a nucleophile. This commonality of active site but difference of protein fold indicates that the active site evolved convergently in those families. Cone snail and fish insulin Conus geographus produces a distinct form of insulin that is more similar to fish insulin protein sequences than to insulin from more closely related molluscs, suggesting convergent evolution, though with the possibility of horizontal gene transfer. Ferrous iron uptake via protein transporters in land plants and chlorophytes Distant homologues of the metal ion transporters ZIP in land plants and chlorophytes have converged in structure, likely to take up Fe2+ efficiently. The IRT1 proteins from Arabidopsis thaliana and rice have extremely different amino acid sequences from Chlamydomonass IRT1, but their three-dimensional structures are similar, suggesting convergent evolution. Na+,K+-ATPase and Insect resistance to cardiotonic steroids Many examples of convergent evolution exist in insects in terms of developing resistance at a molecular level to toxins. 
One well-characterized example is the evolution of resistance to cardiotonic steroids (CTSs) via amino acid substitutions at well-defined positions of the α-subunit of Na+,K+-ATPase (ATPalpha). Variation in ATPalpha has been surveyed in various CTS-adapted species spanning six insect orders. Among 21 CTS-adapted species, 58 (76%) of 76 amino acid substitutions at sites implicated in CTS resistance occur in parallel in at least two lineages. 30 of these substitutions (40%) occur at just two sites in the protein (positions 111 and 122). CTS-adapted species have also recurrently evolved neo-functionalized duplications of ATPalpha, with convergent tissue-specific expression patterns. Nucleic acids Convergence occurs at the level of DNA and the amino acid sequences produced by translating structural genes into proteins. Studies have found convergence in amino acid sequences in echolocating bats and the dolphin; among marine mammals; between giant and red pandas; and between the thylacine and canids. Convergence has also been detected in a type of non-coding DNA, cis-regulatory elements, such as in their rates of evolution; this could indicate either positive selection or relaxed purifying selection. In animal morphology Bodyplans Swimming animals including fish such as herrings, marine mammals such as dolphins, and ichthyosaurs (of the Mesozoic) all converged on the same streamlined shape. A similar shape and swimming adaptations are even present in molluscs, such as Phylliroe. The fusiform bodyshape (a tube tapered at both ends) adopted by many aquatic animals is an adaptation to enable them to travel at high speed in a high drag environment. Similar body shapes are found in the earless seals and the eared seals: they still have four legs, but these are strongly modified for swimming. The marsupial fauna of Australia and the placental mammals of the Old World have several strikingly similar forms, developed in two clades, isolated from each other. The body, and especially the skull shape, of the thylacine (Tasmanian tiger or Tasmanian wolf) converged with those of Canidae such as the red fox, Vulpes vulpes. Echolocation As a sensory adaptation, echolocation has evolved separately in cetaceans (dolphins and whales) and bats, but from the same genetic mutations. Electric fishes The Gymnotiformes of South America and the Mormyridae of Africa independently evolved passive electroreception (around 119 and 110 million years ago, respectively). Around 20 million years after acquiring that ability, both groups evolved active electrogenesis, producing weak electric fields to help them detect prey. Eyes One of the best-known examples of convergent evolution is the camera eye of cephalopods (such as squid and octopus), vertebrates (including mammals) and cnidaria (such as jellyfish). Their last common ancestor had at most a simple photoreceptive spot, but a range of processes led to the progressive refinement of camera eyes—with one sharp difference: the cephalopod eye is "wired" in the opposite direction, with blood and nerve vessels entering from the back of the retina, rather than the front as in vertebrates. As a result, vertebrates have a blind spot. Flight Birds and bats have homologous limbs because they are both ultimately derived from terrestrial tetrapods, but their flight mechanisms are only analogous, so their wings are examples of functional convergence. The two groups have independently evolved their own means of powered flight. Their wings differ substantially in construction. 
The bat wing is a membrane stretched across four extremely elongated fingers and the legs. The airfoil of the bird wing is made of feathers, strongly attached to the forearm (the ulna) and the highly fused bones of the wrist and hand (the carpometacarpus), with only tiny remnants of two fingers remaining, each anchoring a single feather. So, while the wings of bats and birds are functionally convergent, they are not anatomically convergent. Birds and bats also share a high concentration of cerebrosides in the skin of their wings. This improves skin flexibility, a trait useful for flying animals; other mammals have a far lower concentration. The extinct pterosaurs independently evolved wings from their fore- and hindlimbs, while insects have wings that evolved separately from different organs. Flying squirrels and sugar gliders are much alike in their body plans, with gliding wings stretched between their limbs, but flying squirrels are placental mammals while sugar gliders are marsupials, widely separated within the mammal lineage from the placentals. Hummingbird hawk-moths and hummingbirds have evolved similar flight and feeding patterns. Insect mouthparts Insect mouthparts show many examples of convergent evolution. The mouthparts of different insect groups consist of a set of homologous organs, specialised for the dietary intake of that insect group. Convergent evolution of many groups of insects led from original biting-chewing mouthparts to different, more specialised, derived function types. These include, for example, the proboscis of flower-visiting insects such as bees and flower beetles, or the biting-sucking mouthparts of blood-sucking insects such as fleas and mosquitos. Opposable thumbs Opposable thumbs allowing the grasping of objects are most often associated with primates, like humans and other apes, monkeys, and lemurs. Opposable thumbs also evolved in giant pandas, but these are completely different in structure, having six fingers including the thumb, which develops from a wrist bone entirely separately from other fingers. Primates Convergent evolution in humans includes blue eye colour and light skin colour. When humans migrated out of Africa, they moved to more northern latitudes with less intense sunlight. It was beneficial to them to reduce their skin pigmentation. It appears certain that there was some lightening of skin colour before European and East Asian lineages diverged, as there are some skin-lightening genetic differences that are common to both groups. However, after the lineages diverged and became genetically isolated, the skin of both groups lightened more, and that additional lightening was due to different genetic changes. Lemurs and humans are both primates. Ancestral primates had brown eyes, as most primates do today. The genetic basis of blue eyes in humans has been studied in detail and much is known about it. It is not the case that one gene locus is responsible, say with brown dominant to blue eye colour. However, a single locus is responsible for about 80% of the variation. In lemurs, the differences between blue and brown eyes are not completely known, but the same gene locus is not involved. In plants The annual life-cycle While most plant species are perennial, about 6% follow an annual life cycle, living for only one growing season. The annual life cycle independently emerged in over 120 plant families of angiosperms. 
The prevalence of annual species increases under hot-dry summer conditions in the four species-rich families of annuals (Asteraceae, Brassicaceae, Fabaceae, and Poaceae), indicating that the annual life cycle is adaptive. Carbon fixation C4 photosynthesis, one of the three major carbon-fixing biochemical processes, has arisen independently up to 40 times. About 7,600 plant species of angiosperms use C4 carbon fixation, with many monocots including 46% of grasses such as maize and sugar cane, and dicots including several species in the Chenopodiaceae and the Amaranthaceae. Fruits Fruits with a wide variety of structural origins have converged to become edible. Apples are pomes with five carpels; the core formed by the carpels is the botanical fruit, and it is surrounded by accessory tissues derived from outside the fruit proper, the receptacle or hypanthium. Other edible fruits include other plant tissues; the fleshy part of a tomato is the walls of the pericarp. This implies convergent evolution under selective pressure, in this case the competition for seed dispersal by animals through consumption of fleshy fruits. Seed dispersal by ants (myrmecochory) has evolved independently more than 100 times, and is present in more than 11,000 plant species. It is one of the most dramatic examples of convergent evolution in biology. Carnivory Carnivory has evolved multiple times independently in plants in widely separated groups. In three species studied, Cephalotus follicularis, Nepenthes alata and Sarracenia purpurea, there has been convergence at the molecular level. Carnivorous plants secrete enzymes into the digestive fluid they produce. By studying phosphatase, glycoside hydrolase, glucanase, RNAse and chitinase enzymes as well as a pathogenesis-related protein and a thaumatin-related protein, the authors found many convergent amino acid substitutions. These changes were not at the enzymes' catalytic sites, but rather on the exposed surfaces of the proteins, where they might interact with other components of the cell or the digestive fluid. The authors also found that homologous genes in the non-carnivorous plant Arabidopsis thaliana tend to have their expression increased when the plant is stressed, leading the authors to suggest that stress-responsive proteins have often been co-opted in the repeated evolution of carnivory. Methods of inference Phylogenetic reconstruction and ancestral state reconstruction proceed by assuming that evolution has occurred without convergence. Convergent patterns may, however, appear at higher levels in a phylogenetic reconstruction, and are sometimes explicitly sought by investigators. The methods applied to infer convergent evolution depend on whether pattern-based or process-based convergence is expected. Pattern-based convergence is the broader term, for when two or more lineages independently evolve patterns of similar traits. Process-based convergence is when the convergence is due to similar forces of natural selection. Pattern-based measures Earlier methods for measuring convergence incorporate ratios of phenotypic and phylogenetic distance by simulating evolution with a Brownian motion model of trait evolution along a phylogeny. More recent methods also quantify the strength of convergence. One drawback to keep in mind is that these methods can confuse long-term stasis with convergence due to phenotypic similarities. Stasis occurs when there is little evolutionary change among taxa. Distance-based measures assess the degree of similarity between lineages over time.
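To make the pattern-based, distance-ratio idea concrete, the sketch below simulates a continuous trait evolving by Brownian motion along two lineages descended from a common ancestor, and then scores convergence as one minus the ratio of the tip distance to the maximum distance reached anywhere in their history. This is a simplified caricature in the spirit of the distance-based measures just described, not an implementation of any particular published method; the step size, number of steps, and the crude "forced together" contrast are invented for illustration.

```python
# Sketch: Brownian-motion trait evolution on two lineages descending from a
# common ancestor, and a simple pattern-based convergence score:
# 1 - (tip distance / maximum historical distance). Values near 1 suggest
# convergence; values near 0 suggest none. A simplified caricature only.
import random

random.seed(1)

def brownian_path(start, steps, step_sd):
    """One lineage's trait value through time under Brownian motion."""
    values = [start]
    for _ in range(steps):
        values.append(values[-1] + random.gauss(0.0, step_sd))
    return values

def convergence_score(trait_a, trait_b):
    distances = [abs(x - y) for x, y in zip(trait_a, trait_b)]
    d_tip, d_max = distances[-1], max(distances)
    return 1.0 - d_tip / d_max if d_max > 0 else 0.0

ancestor_value, steps = 0.0, 500
lineage_a = brownian_path(ancestor_value, steps, step_sd=0.1)
lineage_b = brownian_path(ancestor_value, steps, step_sd=0.1)
print(f"neutral Brownian lineages: score = {convergence_score(lineage_a, lineage_b):.2f}")

# For contrast, lineages pulled toward the same value late in their history
# (crudely mimicking similar selection) should score much closer to 1.
pulled_a = list(lineage_a)
pulled_b = list(lineage_b)
for i in range(steps // 2, steps + 1):
    shrink = 1.0 - (i - steps // 2) / (steps // 2)
    pulled_a[i] *= shrink
    pulled_b[i] *= shrink
print(f"lineages forced together:  score = {convergence_score(pulled_a, pulled_b):.2f}")
```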
Frequency-based measures assess the number of lineages that have evolved in a particular trait space. Process-based measures Methods to infer process-based convergence fit models of selection to a phylogeny and continuous trait data to determine whether the same selective forces have acted upon lineages. This uses the Ornstein–Uhlenbeck process to test different scenarios of selection. Other methods rely on an a priori specification of where shifts in selection have occurred. See also Incomplete lineage sorting – the presence of multiple alleles in ancestral populations, which might lead to the impression that convergent evolution has occurred. Iterative evolution – the repeated evolution of a specific trait or body plan from the same ancestral lineage at different points in time. Breeding back – a form of selective breeding that aims to recreate the traits of an extinct species, although the resulting genome will differ from that of the original species. Orthogenesis – contrastable with convergent evolution; involves teleology. Contingency (evolutionary biology) – the effect of evolutionary history on outcomes.
Limnology
Limnology is the study of inland aquatic ecosystems. The study of limnology includes aspects of the biological, chemical, physical, and geological characteristics of fresh and saline, natural and man-made bodies of water. This includes the study of lakes, reservoirs, ponds, rivers, springs, streams, wetlands, and groundwater. Water systems are often categorized as either running (lotic) or standing (lentic). Limnology includes the study of the drainage basin, movement of water through the basin and biogeochemical changes that occur en route. A more recent sub-discipline of limnology, termed landscape limnology, studies, manages, and seeks to conserve these ecosystems using a landscape perspective, by explicitly examining connections between an aquatic ecosystem and its drainage basin. Recently, the need to understand global inland waters as part of the Earth system created a sub-discipline called global limnology. This approach considers processes in inland waters on a global scale, like the role of inland aquatic ecosystems in global biogeochemical cycles. Limnology is closely related to aquatic ecology and hydrobiology, which study aquatic organisms and their interactions with the abiotic (non-living) environment. While limnology has substantial overlap with freshwater-focused disciplines (e.g., freshwater biology), it also includes the study of inland salt lakes. History The term limnology was coined by François-Alphonse Forel (1841–1912), who established the field with his studies of Lake Geneva. Interest in the discipline rapidly expanded, and in 1922 August Thienemann (a German zoologist) and Einar Naumann (a Swedish botanist) co-founded the International Society of Limnology (SIL, from Societas Internationalis Limnologiae). Forel's original definition of limnology, "the oceanography of lakes", was expanded to encompass the study of all inland waters, and influenced Benedykt Dybowski's work on Lake Baikal. Prominent early American limnologists included G. Evelyn Hutchinson and Ed Deevey. At the University of Wisconsin-Madison, Edward A. Birge, Chancey Juday, Charles R. Goldman, and Arthur D. Hasler contributed to the development of the Center for Limnology. General limnology Physical properties Physical properties of aquatic ecosystems are determined by a combination of heat, currents, waves and other seasonal distributions of environmental conditions. The morphometry of a body of water depends on the type of feature (such as a lake, river, stream, wetland, estuary etc.) and the structure of the earth surrounding the body of water. Lakes, for instance, are classified by their formation, and zones of lakes are defined by water depth. River and stream system morphometry is driven by the underlying geology of the area as well as the general velocity of the water. Stream morphometry is also influenced by topography (especially slope) as well as precipitation patterns and other factors such as vegetation and land development. Connectivity between streams and lakes relates to the landscape drainage density, lake surface area and lake shape. Other types of aquatic systems which fall within the study of limnology are estuaries. Estuaries are bodies of water classified by the interaction of a river and the ocean or sea. Wetlands vary in size, shape, and pattern; however, the most common types (marshes, bogs and swamps) often fluctuate between containing shallow freshwater and being dry, depending on the time of year.
The volume and quality of water in underground aquifers rely on the vegetation cover, which fosters recharge and aids in maintaining water quality. Light interactions Light zonation is the concept of how the amount of sunlight penetration into water influences the structure of a body of water. These zones define various levels of productivity within an aquatic ecosystem such as a lake. For instance, the depth of the water column to which sunlight is able to penetrate, and where most plant life is able to grow, is known as the photic or euphotic zone. The rest of the water column, which is deeper and does not receive sufficient sunlight for plant growth, is known as the aphotic zone. The amount of solar energy present underwater and the spectral quality of the light at various depths have a significant impact on the behavior of many aquatic organisms. For example, zooplankton's vertical migration is influenced by solar energy levels. Thermal stratification Similar to light zonation, thermal stratification or thermal zonation is a way of grouping parts of the water body within an aquatic system based on the temperature of different lake layers. The less turbid the water, the more light is able to penetrate, and thus heat is conveyed deeper in the water. Heating declines exponentially with depth in the water column, so the water is warmest near the surface and becomes progressively cooler with depth. There are three main sections that define thermal stratification in a lake. The epilimnion is closest to the water surface and absorbs long- and shortwave radiation to warm the water surface. During cooler months, wind shear can contribute to cooling of the water surface. The thermocline is an area within the water column where water temperatures rapidly decrease. The bottom layer is the hypolimnion, which tends to have the coldest water because its depth restricts sunlight from reaching it. In temperate lakes, fall-season cooling of surface water results in turnover of the water column, where the thermocline is disrupted and the lake temperature profile becomes more uniform. In cold climates, when water cools below 4 °C (the temperature of maximum density), many lakes can experience an inverse thermal stratification in winter. These lakes are often dimictic, with a brief spring overturn in addition to a longer fall overturn. The relative thermal resistance is the energy needed to mix these strata of different temperatures. Lake Heat Budget An annual heat budget, also written θa, is the total amount of heat needed to raise the water from its minimum winter temperature to its maximum summer temperature. It can be calculated by integrating, over the depth of the lake, the area of the lake at each depth interval (Az) multiplied by the difference between the summer (θsz) and winter (θwz) temperatures at that depth, i.e. the integral of Az(θsz − θwz) over depth z.
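As a concrete illustration of this calculation, the sketch below numerically integrates Az(θsz − θwz) over depth using the trapezoidal rule on a small table of hypothetical area-depth (hypsographic) data and temperature profiles. All numbers are invented, and the routine is a minimal sketch of the integral above rather than a standard limnological software tool.

```python
# Minimal sketch: annual heat budget of a lake from hypothetical area-depth
# (hypsographic) data and summer/winter temperature profiles.
# theta_a = integral over depth of A(z) * (theta_summer(z) - theta_winter(z)),
# approximated here with the trapezoidal rule. All numbers are invented.

depths_m = [0, 5, 10, 15, 20]                     # depth of each layer boundary (m)
area_m2 = [1.0e6, 8.0e5, 5.0e5, 2.0e5, 0.0]       # lake area at each depth (m^2)
theta_summer_c = [22.0, 18.0, 10.0, 6.0, 5.0]     # summer temperature profile (deg C)
theta_winter_c = [2.0, 3.0, 4.0, 4.0, 4.0]        # winter temperature profile (deg C)

RHO = 1000.0   # density of water, kg/m^3
CP = 4186.0    # specific heat of water, J/(kg K)

def annual_heat_budget(depths, areas, t_summer, t_winter):
    """Trapezoidal integration of A(z) * (theta_s(z) - theta_w(z)) over depth.

    Returns the heat budget in joules for the whole lake.
    """
    total = 0.0
    for i in range(len(depths) - 1):
        dz = depths[i + 1] - depths[i]
        f_top = areas[i] * (t_summer[i] - t_winter[i])
        f_bot = areas[i + 1] * (t_summer[i + 1] - t_winter[i + 1])
        total += 0.5 * (f_top + f_bot) * dz       # m^2 * K * m = m^3 K
    return total * RHO * CP                       # convert to joules

if __name__ == "__main__":
    q = annual_heat_budget(depths_m, area_m2, theta_summer_c, theta_winter_c)
    print(f"Annual heat budget: {q:.3e} J for the whole lake")
    print(f"Per unit surface area: {q / area_m2[0]:.3e} J/m^2")
```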
Chemical properties The chemical composition of water in aquatic ecosystems is influenced by natural characteristics and processes including precipitation, underlying soil and bedrock in the drainage basin, erosion, evaporation, and sedimentation. All bodies of water have a certain composition of both organic and inorganic elements and compounds. Biological reactions also affect the chemical properties of water. In addition to natural processes, human activities strongly influence the chemical composition of aquatic systems and their water quality. Allochthonous sources of carbon or nutrients come from outside the aquatic system (such as plant and soil material). Carbon sources from within the system, such as algae and the microbial breakdown of aquatic particulate organic carbon, are autochthonous. In aquatic food webs, the portion of biomass derived from allochthonous material is then named "allochthony". In streams and small lakes, allochthonous sources of carbon are dominant, while in large lakes and the ocean, autochthonous sources dominate. Oxygen and carbon dioxide Dissolved oxygen and dissolved carbon dioxide are often discussed together due to their coupled role in respiration and photosynthesis. Dissolved oxygen concentrations can be altered by physical, chemical, and biological processes and reactions. Physical processes including wind mixing can increase dissolved oxygen concentrations, particularly in surface waters of aquatic ecosystems. Because dissolved oxygen solubility is linked to water temperature, changes in temperature affect dissolved oxygen concentrations, as warmer water has a lower capacity to "hold" oxygen than colder water. Biologically, both photosynthesis and aerobic respiration affect dissolved oxygen concentrations. Photosynthesis by autotrophic organisms, such as phytoplankton and aquatic algae, increases dissolved oxygen concentrations while simultaneously reducing carbon dioxide concentrations, since carbon dioxide is taken up during photosynthesis. All aerobic organisms in the aquatic environment take up dissolved oxygen during aerobic respiration, while carbon dioxide is released as a byproduct of this reaction. Because photosynthesis is light-limited, both photosynthesis and respiration occur during the daylight hours, while only respiration occurs during dark hours or in dark portions of an ecosystem. The balance between dissolved oxygen production and consumption is calculated as the aquatic metabolism rate. Vertical changes in the concentrations of dissolved oxygen are affected by both wind mixing of surface waters and the balance between photosynthesis and respiration of organic matter. These vertical changes, known as profiles, are based on principles similar to those of thermal stratification and light penetration. As light availability decreases deeper in the water column, photosynthesis rates also decrease, and less dissolved oxygen is produced. This means that dissolved oxygen concentrations generally decrease with depth, because photosynthesis is not replenishing the dissolved oxygen that is being taken up through respiration. During periods of thermal stratification, water density gradients prevent oxygen-rich surface waters from mixing with deeper waters. Prolonged periods of stratification can result in the depletion of bottom-water dissolved oxygen; when dissolved oxygen concentrations are below 2 milligrams per liter, waters are considered hypoxic. When dissolved oxygen concentrations are approximately 0 milligrams per liter, conditions are anoxic. Both hypoxic and anoxic waters reduce available habitat for organisms that respire oxygen, and contribute to changes in other chemical reactions in the water. Nitrogen and phosphorus Nitrogen and phosphorus are ecologically significant nutrients in aquatic systems. Nitrogen is generally present as a gas in aquatic ecosystems; however, most water quality studies tend to focus on nitrate, nitrite and ammonia levels. Most of these dissolved nitrogen compounds follow a seasonal pattern, with greater concentrations in the fall and winter months compared to the spring and summer.
Phosphorus has a different role in aquatic ecosystems, as it is a limiting factor in the growth of phytoplankton because of its generally low concentrations in the water. Dissolved phosphorus is also crucial to all living things, is often very limiting to primary productivity in freshwater, and has its own distinctive ecosystem cycling. Biological properties Role in ecology Lakes "are relatively easy to sample, because they have clear-cut boundaries (compared to terrestrial ecosystems) and because field experiments are relatively easy to perform", which makes them especially useful for ecologists who try to understand ecological dynamics. Lake trophic classification One way to classify lakes (or other bodies of water) is with the trophic state index. An oligotrophic lake is characterized by relatively low levels of primary production and low levels of nutrients. A eutrophic lake has high levels of primary productivity due to very high nutrient levels. Eutrophication of a lake can lead to algal blooms. Dystrophic lakes have high levels of humic matter and typically have yellow-brown, tea-coloured waters. These categories do not have rigid specifications; the classification system can be seen as more of a spectrum encompassing the various levels of aquatic productivity. Tropical limnology Tropical limnology is a unique and important subfield of limnology that focuses on the distinct physical, chemical, biological, and cultural aspects of freshwater systems in tropical regions. The physical and chemical properties of tropical aquatic environments are different from those in temperate regions, with warmer and more stable temperatures, higher nutrient levels, and more complex ecological interactions. Moreover, the biodiversity of tropical freshwater systems is typically higher, human impacts are often more severe, and there are important cultural and socioeconomic factors that influence the use and management of these systems. Professional organizations People who study limnology are called limnologists. These scientists largely study the characteristics of inland fresh-water systems such as lakes, rivers, streams, ponds and wetlands. They may also study non-oceanic bodies of salt water, such as the Great Salt Lake. There are many professional organizations related to limnology and other aspects of aquatic science, including the Association for the Sciences of Limnology and Oceanography, the Asociación Ibérica de Limnología, the International Society of Limnology, the Polish Limnological Society, the Society of Canadian Limnologists, and the Freshwater Biological Association.
Bottom–up and top–down design
Bottom–up and top–down are both strategies of information processing and ordering knowledge, used in a variety of fields including software, humanistic and scientific theories (see systemics), and management and organization. In practice they can be seen as a style of thinking, teaching, or leadership. A top–down approach (also known as stepwise design and stepwise refinement and in some cases used as a synonym of decomposition) is essentially the breaking down of a system to gain insight into its compositional subsystems in a reverse engineering fashion. In a top–down approach an overview of the system is formulated, specifying, but not detailing, any first-level subsystems. Each subsystem is then refined in yet greater detail, sometimes in many additional subsystem levels, until the entire specification is reduced to base elements. A top–down model is often specified with the assistance of black boxes, which makes it easier to manipulate. However black boxes may fail to clarify elementary mechanisms or be detailed enough to realistically validate the model. A top–down approach starts with the big picture, then breaks down into smaller segments. A bottom–up approach is the piecing together of systems to give rise to more complex systems, thus making the original systems subsystems of the emergent system. Bottom–up processing is a type of information processing based on incoming data from the environment to form a perception. From a cognitive psychology perspective, information enters the eyes in one direction (sensory input, or the "bottom"), and is then turned into an image by the brain that can be interpreted and recognized as a perception (output that is "built up" from processing to final cognition). In a bottom–up approach the individual base elements of the system are first specified in great detail. These elements are then linked together to form larger subsystems, which then in turn are linked, sometimes in many levels, until a complete top-level system is formed. This strategy often resembles a "seed" model, by which the beginnings are small but eventually grow in complexity and completeness. But "organic strategies" may result in a tangle of elements and subsystems, developed in isolation and subject to local optimization as opposed to meeting a global purpose. Product design and development During the development of new products, designers and engineers rely on both bottom–up and top–down approaches. The bottom–up approach is being used when off-the-shelf or existing components are selected and integrated into the product. An example includes selecting a particular fastener, such as a bolt, and designing the receiving components such that the fastener will fit properly. In a top–down approach, a custom fastener would be designed such that it would fit properly in the receiving components. For perspective, for a product with more restrictive requirements (such as weight, geometry, safety, environment), such as a spacesuit, a more top–down approach is taken and almost everything is custom designed. Computer science Software development Part of this section is from the Perl Design Patterns Book. In the software development process, the top–down and bottom–up approaches play a key role. Top–down approaches emphasize planning and a complete understanding of the system. It is inherent that no coding can begin until a sufficient level of detail has been reached in the design of at least some part of the system. 
Top–down approaches are often implemented by writing stubs that stand in for lower-level modules which have not yet been coded. But these delay testing of the ultimate functional units of a system until significant design is complete. Bottom–up emphasizes coding and early testing, which can begin as soon as the first module has been specified. But this approach runs the risk that modules may be coded without having a clear idea of how they link to other parts of the system, and that such linking may not be as easy as first thought. Re-usability of code is one of the main benefits of a bottom–up approach. Top–down design was promoted in the 1970s by the IBM researcher Harlan Mills and by Niklaus Wirth. Mills developed structured programming concepts for practical use and tested them in a 1969 project to automate the New York Times morgue index. The engineering and management success of this project led to the spread of the top–down approach through IBM and the rest of the computer industry. Among other achievements, Niklaus Wirth, the developer of the Pascal programming language, wrote the influential paper Program Development by Stepwise Refinement. Since Niklaus Wirth went on to develop languages such as Modula and Oberon (where one could define a module before knowing about the entire program specification), one can infer that top–down programming was not strictly what he promoted. Top–down methods were favored in software engineering until the late 1980s, and object-oriented programming helped demonstrate that both top–down and bottom–up programming could be used together. Modern software design approaches usually combine top–down and bottom–up approaches. Although an understanding of the complete system is usually considered necessary for good design (leading theoretically to a top–down approach), most software projects attempt to make use of existing code to some degree. Pre-existing modules give designs a bottom–up flavor. Programming Top–down is a programming style, the mainstay of traditional procedural languages, in which design begins by specifying complex pieces and then dividing them into successively smaller pieces. The technique for writing a program using top–down methods is to write a main procedure that names all the major functions it will need. Later, the programming team looks at the requirements of each of those functions and the process is repeated. These compartmentalized subroutines eventually will perform actions so simple they can be easily and concisely coded. When all the various subroutines have been coded, the program is ready for testing. By defining how the application comes together at a high level, lower-level work can be self-contained. In a bottom–up approach the individual base elements of the system are first specified in great detail. These elements are then linked together to form larger subsystems, which in turn are linked, sometimes at many levels, until a complete top-level system is formed. This strategy often resembles a "seed" model, by which the beginnings are small, but eventually grow in complexity and completeness. Object-oriented programming (OOP) is a paradigm that uses "objects" to design applications and computer programs. In mechanical engineering, with software programs such as Pro/ENGINEER, SolidWorks, and Autodesk Inventor, users can design products as individual pieces rather than as part of the whole, and can later add those pieces together to form assemblies, much like building with Lego. Engineers call this "piece part design".
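As a minimal, invented illustration of the two programming styles just described (it is not drawn from Mills, Wirth, or any other cited work), the Python sketch below first lays out a small report-generating program top–down, with a main procedure that names the major functions it will need and leaves them as stubs to be refined later, and then shows a bottom–up alternative in which small, easily tested pieces are composed into the same kind of program. The function names and the report task are assumptions made for the example.

```python
# Top-down style: write the main procedure first, naming the major functions
# it will need; each function starts life as a stub to be refined later.

def load_records(path):
    # Stub: a later refinement would parse a real file here.
    raise NotImplementedError("to be refined in a later step")

def summarise(records):
    # Stub: refined once the shape of `records` has been decided.
    raise NotImplementedError("to be refined in a later step")

def format_report(summary):
    # Stub: refined last, when the summary structure is known.
    raise NotImplementedError("to be refined in a later step")

def main(path):
    """High-level plan of the whole program, written before any details."""
    records = load_records(path)
    summary = summarise(records)
    return format_report(summary)


# Bottom-up style: start from small, self-contained, easily tested pieces,
# then compose them into progressively larger units.

def parse_line(line):
    name, value = line.split(",")
    return name.strip(), float(value)

def total(pairs):
    return sum(value for _, value in pairs)

def report(lines):
    pairs = [parse_line(line) for line in lines]
    return f"{len(pairs)} records, total = {total(pairs):.2f}"

if __name__ == "__main__":
    print(report(["widgets, 3.5", "gadgets, 4.25"]))  # 2 records, total = 7.75
```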
Parsing Parsing is the process of analyzing an input sequence (such as that read from a file or a keyboard) in order to determine its grammatical structure. This method is used in the analysis of both natural languages and computer languages, as in a compiler. Nanotechnology Top–down and bottom–up are two approaches for the manufacture of products. These terms were first applied to the field of nanotechnology by the Foresight Institute in 1989 to distinguish between molecular manufacturing (to mass-produce large atomically precise objects) and conventional manufacturing (which can mass-produce large objects that are not atomically precise). Bottom–up approaches seek to have smaller (usually molecular) components built up into more complex assemblies, while top–down approaches seek to create nanoscale devices by using larger, externally controlled ones to direct their assembly. Certain valuable nanostructures, such as silicon nanowires, can be fabricated using either approach, with processing methods selected on the basis of targeted applications. A top–down approach often uses the traditional workshop or microfabrication methods in which externally controlled tools are used to cut, mill, and shape materials into the desired shape and order. Micropatterning techniques, such as photolithography and inkjet printing, belong to this category. Vapor treatment can be regarded as a newer, secondary top–down approach to engineering nanostructures. Bottom–up approaches, in contrast, use the chemical properties of single molecules to cause single-molecule components to (a) self-organize or self-assemble into some useful conformation, or (b) rely on positional assembly. These approaches use the concepts of molecular self-assembly and/or molecular recognition (see also supramolecular chemistry). Such bottom–up approaches should, broadly speaking, be able to produce devices in parallel and much more cheaply than top–down methods, but could potentially be overwhelmed as the size and complexity of the desired assembly increases. Neuroscience and psychology These terms are also employed in cognitive sciences including neuroscience, cognitive neuroscience and cognitive psychology to discuss the flow of information in processing. Typically, sensory input is considered bottom–up, and higher cognitive processes, which have more information from other sources, are considered top–down. A bottom–up process is characterized by an absence of higher-level direction in sensory processing, whereas a top–down process is characterized by a high level of direction of sensory processing by more cognition, such as goals or targets (Biederman, 19). According to college teaching notes written by Charles Ramskov, Irvin Rock, Neisser, and Richard Gregory claim that the top–down approach involves perception as an active and constructive process. Additionally, perception in this view is not given directly by the stimulus input, but is the result of interactions among the stimulus, internal hypotheses, and expectations. According to the theoretical synthesis, when a stimulus is presented briefly and its clarity is uncertain, giving a vague stimulus, perception becomes a top–down process. Conversely, psychology defines bottom–up processing as an approach in which there is a progression from the individual elements to the whole. According to Ramskov, one proponent of the bottom–up approach, Gibson, claims that visual perception is a process that relies on information available in the proximal stimulus, which is produced by the distal stimulus.
Theoretical synthesis also claims that bottom–up processing occurs "when a stimulus is presented long and clearly enough." Certain cognitive processes, such as fast reactions or quick visual identification, are considered bottom–up processes because they rely primarily on sensory information, whereas processes such as motor control and directed attention are considered top–down because they are goal directed. Neurologically speaking, some areas of the brain, such as area V1, mostly have bottom–up connections. Other areas, such as the fusiform gyrus, have inputs from higher brain areas and are considered to have top–down influence. The study of visual attention is an example. If your attention is drawn to a flower in a field, it may be because the color or shape of the flower is visually salient. The information that caused you to attend to the flower came to you in a bottom–up fashion: your attention was not contingent on knowledge of the flower; the outside stimulus was sufficient on its own. Contrast this situation with one in which you are looking for a flower. You have a representation of what you are looking for. When you see the object you are looking for, it is salient. This is an example of the use of top–down information. In cognition, two thinking approaches are distinguished. "Top–down" (or "big chunk") is stereotypically the visionary, or the person who sees the larger picture and overview. Such people focus on the big picture and from that derive the details to support it. "Bottom–up" (or "small chunk") cognition is akin to focusing on the detail primarily, rather than the landscape. The expression "seeing the wood for the trees" references the two styles of cognition. Studies in task switching and response selection show that there are differences between the two types of processing. Top–down processing primarily focuses on the attention side, such as task repetition (Schneider, 2015). Bottom–up processing focuses on item-based learning, such as finding the same object over and over again (Schneider, 2015). Implications for understanding attentional control of response selection in conflict situations are discussed (Schneider, 2015). This also applies to how such processing is structured neurologically, for example in how information interfaces are structured around the neurological processes involved in procedural learning. Top–down principles have proven effective in guiding interface design, but they are not sufficient on their own; they can be combined with iterative bottom–up methods to produce usable interfaces (Zacks & Tversky, 2003). Schooling Undergraduate (or bachelor) students are typically taught the basics of top–down and bottom–up processing around their third year of the program, working through four main parts of the processing when viewing it from a learning perspective. Of the two contrasting definitions, bottom–up processing is determined directly by environmental stimuli rather than by the individual's knowledge and expectations (Koch, 2022). Management and organization In the fields of management and organization, the terms "top–down" and "bottom–up" are used to describe how decisions are made and/or how change is implemented. A "top–down" approach is one in which an executive decision maker or other top person decides how something should be done. These decisions are disseminated under their authority to lower levels in the hierarchy, who are, to a greater or lesser extent, bound by them.
For example, when wanting to make an improvement in a hospital, a hospital administrator might decide that a major change (such as implementing a new program) is needed, and then use a planned approach to drive the changes down to the frontline staff. A bottom–up approach to changes is one that works from the grassroots, and originates in a flat structure with people working together, causing a decision to arise from their joint involvement. A decision by a number of activists, students, or victims of some incident to take action is a "bottom–up" decision. A bottom–up approach can be thought of as "an incremental change approach that represents an emergent process cultivated and upheld primarily by frontline workers". Positive aspects of top–down approaches include their efficiency and the superb overview they give the higher levels; in addition, external effects can be internalized. On the negative side, if reforms are perceived to be imposed "from above", it can be difficult for lower levels to accept them (e.g., Bresser-Pereira, Maravall, and Przeworski 1993). Evidence suggests this to be true regardless of the content of reforms (e.g., Dubois 2002). A bottom–up approach allows for more experimentation and a better feeling for what is needed at the bottom. Other evidence suggests that there is a third, combined approach to change. Public health Both top–down and bottom–up approaches are used in public health. There are many examples of top–down programs, often run by governments or large inter-governmental organizations; many of these are disease- or issue-specific, such as HIV control or smallpox eradication. Examples of bottom–up programs include many small NGOs set up to improve local access to healthcare. But many programs seek to combine both approaches; for instance, guinea worm eradication, a single-disease international program currently run by the Carter Center, has involved the training of many local volunteers, boosting bottom–up capacity, as have international programs for hygiene, sanitation, and access to primary healthcare. Architecture Often the École des Beaux-Arts school of design is said to have primarily promoted top–down design because it taught that an architectural design should begin with a parti, a basic plan drawing of the overall project. By contrast, the Bauhaus focused on bottom–up design. This method manifested itself in the study of translating small-scale organizational systems to a larger, more architectural scale (as with wood panel carving and furniture design). Ecology In ecology, top–down control refers to situations in which a top predator controls the structure or population dynamics of the ecosystem. The interactions between these top predators and their prey are what influence lower trophic levels. Changes at the top trophic level have an inverse effect on the lower trophic levels. Top–down control can have negative effects on the surrounding ecosystem if there is a drastic change in the number of predators. The classic example is that of kelp forest ecosystems. In such ecosystems, sea otters are a keystone predator. They prey on urchins, which in turn eat kelp. When otters are removed, urchin populations grow and reduce the kelp forest, creating urchin barrens. This reduces the diversity of the ecosystem as a whole and can have detrimental effects on all of the other organisms. In other words, such ecosystems are not controlled by the productivity of the kelp but rather by a top predator. One can see the inverse effect that top–down control has in this example: when the population of otters decreased, the population of urchins increased.
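The kelp forest case can be caricatured with a deliberately simplified, hypothetical simulation of a three-level kelp, urchin, and otter food chain. The sketch below is not calibrated to any real ecosystem and all coefficients are invented, but it reproduces the qualitative pattern of top–down control: with the predator present the kelp persists, and with the predator removed the urchins increase and the kelp declines.

```python
# Toy discrete-time model of a kelp -> urchin -> otter food chain. All
# coefficients are invented; the point is the qualitative pattern of
# top-down control, not a calibrated ecological model.

def simulate(otters_present, steps=200):
    kelp, urchins = 100.0, 10.0
    otters = 5.0 if otters_present else 0.0       # otter numbers held fixed
    for _ in range(steps):
        growth = 0.20 * kelp * (1.0 - kelp / 200.0)   # logistic kelp growth
        grazing = 0.02 * urchins * kelp               # urchins eat kelp
        predation = 0.05 * otters * urchins           # otters eat urchins
        kelp = max(kelp + growth - grazing, 0.0)
        urchins = max(urchins + 0.05 * grazing - predation - 0.05 * urchins, 0.0)
    return kelp, urchins

if __name__ == "__main__":
    for label, present in [("with otters", True), ("otters removed", False)]:
        kelp, urchins = simulate(present)
        print(f"{label:>15}: kelp = {kelp:6.1f}, urchins = {urchins:5.1f}")
```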
Bottom–up control in ecosystems refers to ecosystems in which the nutrient supply, productivity, and type of primary producers (plants and phytoplankton) control the ecosystem structure. If there are not enough resources or producers in the ecosystem, there is not enough energy left for the rest of the animals in the food chain, because of biomagnification and ecological efficiency. An example would be how plankton populations are controlled by the availability of nutrients. Plankton populations tend to be higher and more complex in areas where upwelling brings nutrients to the surface. There are many different examples of these concepts. It is common for populations to be influenced by both types of control, and there are still debates going on as to which type of control affects food webs in certain ecosystems. Philosophy and ethics Top–down reasoning in ethics is when the reasoner starts from abstract universalizable principles and then reasons down from them to particular situations. Bottom–up reasoning occurs when the reasoner starts from intuitive particular situational judgements and then reasons up to principles. Reflective equilibrium occurs when there is interaction between top–down and bottom–up reasoning until both are in harmony; that is to say, when universalizable abstract principles are reflectively found to be in equilibrium with particular intuitive judgements. The process works through the cognitive dissonance that arises when reasoners try to reconcile top–down with bottom–up reasoning; they adjust one or the other until they are satisfied that they have found the best combination of principles and situational judgements. See also The Cathedral and the Bazaar; Pseudocode. References cited https://philpapers.org/rec/COHTNO Citations and notes Further reading Corpeño, E. (2021). The Top-Down Approach to Problem Solving: How to Stop Struggling in Class and Start Learning. Goldstein, E.B. (2010). Sensation and Perception. USA: Wadsworth. Galotti, K. (2008). Cognitive Psychology: In and Out of the Laboratory. USA: Wadsworth. Dubois, Hans F.W. (2002). Harmonization of the European vaccination policy and the role TQM and reengineering could play. Quality Management in Health Care 10(2): 47–57. Estes, J. A., Tinker, M. T., Williams, T. M., and Doak, D. F. (1998). "Killer Whale Predation on Sea Otters Linking Oceanic and Nearshore Ecosystems". Science, Vol. 282, No. 5388 (October 16, 1998), pp. 473–476. Bresser-Pereira, Luiz Carlos, Maravall, José María, and Przeworski, Adam (1993). Economic Reforms in New Democracies. Cambridge: Cambridge University Press. External links "Program Development by Stepwise Refinement", Communications of the ACM, Vol. 14, No. 4, April 1971. Integrated Parallel Bottom-up and Top-down Approach, in Proceedings of the International Emergency Management Society's Fifth Annual Conference (TIEMS 98), May 19–22, 1998, Washington DC, USA. Changing Your Mind: On the Contributions of Top-Down and Bottom-Up Guidance in Visual Search for Feature Singletons, Journal of Experimental Psychology: Human Perception and Performance, Vol. 29, No. 2, pp. 483–502, 2003. K. Eric Drexler and Christine Peterson, Nanotechnology and Enabling Technologies, Foresight Briefing No. 2, 1989. Empowering sustained patient safety: the benefits of combining top-down and bottom-up approaches.
Evolutionary developmental biology
Evolutionary developmental biology (informally, evo-devo) is a field of biological research that compares the developmental processes of different organisms to infer how developmental processes evolved. The field grew from 19th-century beginnings, where embryology faced a mystery: zoologists did not know how embryonic development was controlled at the molecular level. Charles Darwin noted that having similar embryos implied common ancestry, but little progress was made until the 1970s. Then, recombinant DNA technology at last brought embryology together with molecular genetics. A key early discovery was of homeotic genes that regulate development in a wide range of eukaryotes. The field is composed of multiple core evolutionary concepts. One is deep homology, the finding that dissimilar organs such as the eyes of insects, vertebrates and cephalopod molluscs, long thought to have evolved separately, are controlled by similar genes such as pax-6, from the evo-devo gene toolkit. These genes are ancient, being highly conserved among phyla; they generate the patterns in time and space which shape the embryo, and ultimately form the body plan of the organism. Another is that species do not differ much in their structural genes, such as those coding for enzymes; what does differ is the way that gene expression is regulated by the toolkit genes. These genes are reused, unchanged, many times in different parts of the embryo and at different stages of development, forming a complex cascade of control, switching other regulatory genes as well as structural genes on and off in a precise pattern. This multiple pleiotropic reuse explains why these genes are highly conserved, as any change would have many adverse consequences which natural selection would oppose. New morphological features and ultimately new species are produced by variations in the toolkit, either when genes are expressed in a new pattern, or when toolkit genes acquire additional functions. Another possibility is the neo-Lamarckian theory that epigenetic changes are later consolidated at gene level, something that may have been important early in the history of multicellular life. History Early theories Philosophers began to think about how animals acquired form in the womb in classical antiquity. Aristotle asserts in his Physics treatise that according to Empedocles, order "spontaneously" appears in the developing embryo. In his The Parts of Animals treatise, he argues that Empedocles' theory was wrong. In Aristotle's account, Empedocles stated that the vertebral column is divided into vertebrae because, as it happens, the embryo twists about and snaps the column into pieces. Aristotle argues instead that the process has a predefined goal: that the "seed" that develops into the embryo began with an inbuilt "potential" to become specific body parts, such as vertebrae. Further, each sort of animal gives rise to animals of its own kind: humans only have human babies. Recapitulation A recapitulation theory of evolutionary development was proposed by Étienne Serres in 1824–26, echoing the 1808 ideas of Johann Friedrich Meckel. They argued that the embryos of 'higher' animals went through or recapitulated a series of stages, each of which resembled an animal lower down the great chain of being. For example, the brain of a human embryo looked first like that of a fish, then in turn like that of a reptile, bird, and mammal before becoming clearly human. 
The embryologist Karl Ernst von Baer opposed this, arguing in 1828 that there was no linear sequence as in the great chain of being, based on a single body plan, but a process of epigenesis in which structures differentiate. Von Baer instead recognized four distinct animal body plans: radiate, like starfish; molluscan, like clams; articulate, like lobsters; and vertebrate, like fish. Zoologists then largely abandoned recapitulation, though Ernst Haeckel revived it in 1866. Evolutionary morphology From the early 19th century through most of the 20th century, embryology faced a mystery. Animals were seen to develop into adults of widely differing body plan, often through similar stages, from the egg, but zoologists knew almost nothing about how embryonic development was controlled at the molecular level, and therefore equally little about how developmental processes had evolved. Charles Darwin argued that a shared embryonic structure implied a common ancestor. For example, Darwin cited in his 1859 book On the Origin of Species the shrimp-like larva of the barnacle, whose sessile adults looked nothing like other arthropods; Linnaeus and Cuvier had classified them as molluscs. Darwin also noted Alexander Kowalevsky's finding that the tunicate, too, was not a mollusc, but in its larval stage had a notochord and pharyngeal slits which developed from the same germ layers as the equivalent structures in vertebrates, and should therefore be grouped with them as chordates. 19th century zoology thus converted embryology into an evolutionary science, connecting phylogeny with homologies between the germ layers of embryos. Zoologists including Fritz Müller proposed the use of embryology to discover phylogenetic relationships between taxa. Müller demonstrated that crustaceans shared the Nauplius larva, identifying several parasitic species that had not been recognized as crustaceans. Müller also recognized that natural selection must act on larvae, just as it does on adults, giving the lie to recapitulation, which would require larval forms to be shielded from natural selection. Two of Haeckel's other ideas about the evolution of development have fared better than recapitulation: he argued in the 1870s that changes in the timing (heterochrony) and changes in the positioning within the body (heterotopy) of aspects of embryonic development would drive evolution by changing the shape of a descendant's body compared to an ancestor's. It took a century before these ideas were shown to be correct. In 1917, D'Arcy Thompson wrote a book on the shapes of animals, showing with simple mathematics how small changes to parameters, such as the angles of a gastropod's spiral shell, can radically alter an animal's form, though he preferred a mechanical to evolutionary explanation. But without molecular evidence, progress stalled. In 1952, Alan Turing published his paper "The Chemical Basis of Morphogenesis", on the development of patterns in animals' bodies. He suggested that morphogenesis could be explained by a reaction–diffusion system, a system of reacting chemicals able to diffuse through the body. He modelled catalysed chemical reactions using partial differential equations, showing that patterns emerged when the chemical reaction produced both a catalyst (A) and an inhibitor (B) that slowed down production of A. If A and B then diffused at different rates, A dominated in some places, and B in others. 
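The mechanism Turing described can be illustrated with a small numerical experiment. The sketch below integrates a one-dimensional activator–inhibitor system of the Gierer–Meinhardt type (a later formulation in the same spirit, not Turing's original equations) on a ring of cells, with the inhibitor diffusing much faster than the activator; the parameter values are illustrative assumptions, but with them an almost uniform initial state breaks up into a regular pattern of peaks.

```python
# Minimal 1D activator-inhibitor simulation (Gierer-Meinhardt type) on a ring
# of cells. The activator a catalyses its own production; the inhibitor h,
# which slows production of a, diffuses much faster. Starting from a nearly
# uniform state, small random fluctuations grow into a regular pattern of
# peaks. All parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_cells, dt, steps = 100, 0.02, 10_000
D_a, D_h, mu = 0.5, 10.0, 2.0    # slow activator diffusion, fast inhibitor diffusion

# Perturb the homogeneous steady state a = h = mu with small random noise.
a = mu + 0.01 * rng.standard_normal(n_cells)
h = mu + 0.01 * rng.standard_normal(n_cells)

def laplacian(u):
    """Discrete Laplacian on a periodic ring (grid spacing 1)."""
    return np.roll(u, 1) + np.roll(u, -1) - 2.0 * u

for _ in range(steps):
    production = a * a / (h + 1e-9)      # inhibitor in the denominator slows production of a
    a_new = a + dt * (production - a + D_a * laplacian(a))
    h_new = h + dt * (a * a - mu * h + D_h * laplacian(h))
    a, h = a_new, h_new

peaks = int(np.sum((a > np.roll(a, 1)) & (a > np.roll(a, -1)) & (a > a.mean())))
print(f"activator range {a.min():.2f} to {a.max():.2f}; peaks formed: {peaks}")
```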
The Russian biochemist Boris Belousov had run experiments with similar results, but was unable to publish them because scientists thought at that time that creating visible order violated the second law of thermodynamics. The modern synthesis of the early 20th century In the so-called modern synthesis of the early 20th century, between 1918 and 1930 Ronald Fisher brought together Darwin's theory of evolution, with its insistence on natural selection, heredity, and variation, and Gregor Mendel's laws of genetics into a coherent structure for evolutionary biology. Biologists assumed that an organism was a straightforward reflection of its component genes: the genes coded for proteins, which built the organism's body. Biochemical pathways (and, they supposed, new species) evolved through mutations in these genes. It was a simple, clear and nearly comprehensive picture: but it did not explain embryology. Sean B. Carroll has commented that had evo-devo's insights been available, embryology would certainly have played a central role in the synthesis. The evolutionary embryologist Gavin de Beer anticipated evolutionary developmental biology in his 1930 book Embryos and Ancestors, by showing that evolution could occur by heterochrony, such as in the retention of juvenile features in the adult. This, de Beer argued, could cause apparently sudden changes in the fossil record, since embryos fossilise poorly. As the gaps in the fossil record had been used as an argument against Darwin's gradualist evolution, de Beer's explanation supported the Darwinian position. However, despite de Beer, the modern synthesis largely ignored embryonic development to explain the form of organisms, since population genetics appeared to be an adequate explanation of how forms evolved. The lac operon In 1961, Jacques Monod, Jean-Pierre Changeux and François Jacob discovered the lac operon in the bacterium Escherichia coli. It was a cluster of genes, arranged in a feedback control loop so that its products would only be made when "switched on" by an environmental stimulus. One of these products was an enzyme that splits a sugar, lactose; and lactose itself was the stimulus that switched the genes on. This was a revelation, as it showed for the first time that genes, even in organisms as small as a bacterium, are subject to precise control. The implication was that many other genes were also elaborately regulated. The birth of evo-devo and a second synthesis In 1977, a revolution in thinking about evolution and developmental biology began, with the arrival of recombinant DNA technology in genetics, the book Ontogeny and Phylogeny by Stephen J. Gould and the paper "Evolution and Tinkering" by François Jacob. Gould laid to rest Haeckel's interpretation of evolutionary embryology, while Jacob set out an alternative theory. This led to a second synthesis, at last including embryology as well as molecular genetics, phylogeny, and evolutionary biology to form evo-devo. In 1978, Edward B. Lewis discovered homeotic genes that regulate embryonic development in Drosophila fruit flies, which like all insects are arthropods, one of the major phyla of invertebrate animals. Bill McGinnis quickly discovered homeotic gene sequences, homeoboxes, in animals in other phyla, in vertebrates such as frogs, birds, and mammals; they were later also found in fungi such as yeasts, and in plants. There were evidently strong similarities in the genes that controlled development across all the eukaryotes. 
In 1980, Christiane Nüsslein-Volhard and Eric Wieschaus described gap genes which help to create the segmentation pattern in fruit fly embryos; they and Lewis won a Nobel Prize for their work in 1995. Later, more specific similarities were discovered: for example, the Distal-less gene was found in 1989 to be involved in the development of appendages or limbs in fruit flies, the fins of fish, the wings of chickens, the parapodia of marine annelid worms, the ampullae and siphons of tunicates, and the tube feet of sea urchins. It was evident that the gene must be ancient, dating back to the last common ancestor of bilateral animals (before the Ediacaran Period, which began some 635 million years ago). Evo-devo had started to uncover the ways that all animal bodies were built during development. The control of body structure Deep homology Roughly spherical eggs of different animals give rise to unique morphologies, from jellyfish to lobsters, butterflies to elephants. Many of these organisms share the same structural genes for bodybuilding proteins like collagen and enzymes, but biologists had expected that each group of animals would have its own rules of development. The surprise of evo-devo is that the shaping of bodies is controlled by a rather small percentage of genes, and that these regulatory genes are ancient, shared by all animals. The giraffe does not have a gene for a long neck, any more than the elephant has a gene for a big body. Their bodies are patterned by a system of switching which causes development of different features to begin earlier or later, to occur in this or that part of the embryo, and to continue for more or less time. The puzzle of how embryonic development was controlled began to be solved using the fruit fly Drosophila melanogaster as a model organism. The step-by-step control of its embryogenesis was visualized by attaching fluorescent dyes of different colours to specific types of protein made by genes expressed in the embryo. A dye such as green fluorescent protein, originally from a jellyfish, was typically attached to an antibody specific to a fruit fly protein, forming a precise indicator of where and when that protein appeared in the living embryo. Using such a technique, in 1994 Walter Gehring found that the pax-6 gene, vital for forming the eyes of fruit flies, exactly matches an eye-forming gene in mice and humans. The same gene was quickly found in many other groups of animals, such as squid, a cephalopod mollusc. Biologists including Ernst Mayr had believed that eyes had arisen in the animal kingdom at least 40 times, as the anatomy of different types of eye varies widely. For example, the fruit fly's compound eye is made of hundreds of small lensed structures (ommatidia); the human eye has a blind spot where the optic nerve enters the eye, and the nerve fibres run over the surface of the retina, so light has to pass through a layer of nerve fibres before reaching the detector cells in the retina, so the structure is effectively "upside-down"; in contrast, the cephalopod eye has the retina, then a layer of nerve fibres, then the wall of the eye "the right way around". The evidence of pax-6, however, was that the same genes controlled the development of the eyes of all these animals, suggesting that they all evolved from a common ancestor. Ancient genes had been conserved through millions of years of evolution to create dissimilar structures for similar functions, demonstrating deep homology between structures once thought to be purely analogous. 
This notion was later extended to the evolution of embryogenesis and has caused a radical revision of the meaning of homology in evolutionary biology. Gene toolkit A small fraction of the genes in an organism's genome control the organism's development. These genes are called the developmental-genetic toolkit. They are highly conserved among phyla, meaning that they are ancient and very similar in widely separated groups of animals. Differences in deployment of toolkit genes affect the body plan and the number, identity, and pattern of body parts. Most toolkit genes are parts of signalling pathways: they encode transcription factors, cell adhesion proteins, cell surface receptor proteins and signalling ligands that bind to them, and secreted morphogens that diffuse through the embryo. All of these help to define the fate of undifferentiated cells in the embryo. Together, they generate the patterns in time and space which shape the embryo, and ultimately form the body plan of the organism. Among the most important toolkit genes are the Hox genes. These transcription factors contain the homeobox protein-binding DNA motif, also found in other toolkit genes, and create the basic pattern of the body along its front-to-back axis. Hox genes determine where repeating parts, such as the many vertebrae of snakes, will grow in a developing embryo or larva. Pax-6, already mentioned, is a classic toolkit gene. Although other toolkit genes are involved in establishing the plant bodyplan, homeobox genes are also found in plants, implying they are common to all eukaryotes. The embryo's regulatory networks The protein products of the regulatory toolkit are reused not by duplication and modification, but by a complex mosaic of pleiotropy, being applied unchanged in many independent developmental processes, giving pattern to many dissimilar body structures. The loci of these pleiotropic toolkit genes have large, complicated and modular cis-regulatory elements. For example, while a non-pleiotropic rhodopsin gene in the fruit fly has a cis-regulatory element just a few hundred base pairs long, the pleiotropic eyeless cis-regulatory region contains 6 cis-regulatory elements in over 7000 base pairs. The regulatory networks involved are often very large. Each regulatory protein controls "scores to hundreds" of cis-regulatory elements. For instance, 67 fruit fly transcription factors controlled on average 124 target genes each. All this complexity enables genes involved in the development of the embryo to be switched on and off at exactly the right times and in exactly the right places. Some of these genes are structural, directly forming enzymes, tissues and organs of the embryo. But many others are themselves regulatory genes, so what is switched on is often a precisely-timed cascade of switching, involving turning on one developmental process after another in the developing embryo. Such a cascading regulatory network has been studied in detail in the development of the fruit fly embryo. The young embryo is oval in shape, like a rugby ball. A small number of genes produce messenger RNAs that set up concentration gradients along the long axis of the embryo. In the early embryo, the bicoid and hunchback genes are at high concentration near the anterior end, and give pattern to the future head and thorax; the caudal and nanos genes are at high concentration near the posterior end, and give pattern to the hindmost abdominal segments. 
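The gradient-and-threshold logic of these maternal gene products can be sketched numerically. In the toy model below, two products form mirror-image exponential gradients along the egg and downstream genes are treated as switching on wherever a gradient exceeds a threshold; the decay lengths and thresholds are arbitrary illustrative assumptions rather than measured Drosophila values.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 11)        # position along the egg: 0 = anterior (head), 1 = posterior (tail)
bicoid = np.exp(-x / 0.25)           # high at the anterior end, decaying toward the posterior
caudal = np.exp(-(1.0 - x) / 0.25)   # mirror-image gradient, high at the posterior end

# Downstream "gap-gene-like" readouts: switch on only where a gradient exceeds a threshold.
head_thorax = bicoid > 0.35
abdomen = caudal > 0.35

for xi, bc, cd, ht, ab in zip(x, bicoid, caudal, head_thorax, abdomen):
    print(f"x={xi:0.1f}  bicoid={bc:0.2f}  caudal={cd:0.2f}  head/thorax={ht}  abdomen={ab}")
```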
The effects of these genes interact; for instance, the Bicoid protein blocks the translation of caudal messenger RNA, so the Caudal protein concentration becomes low at the anterior end. Caudal later switches on genes which create the fly's hindmost segments, but only at the posterior end where it is most concentrated. The Bicoid, Hunchback and Caudal proteins in turn regulate the transcription of gap genes such as giant, knirps, Krüppel, and tailless in a striped pattern, creating the first level of structures that will become segments. The proteins from these in turn control the pair-rule genes, which in the next stage set up 7 bands across the embryo's long axis. Finally, the segment polarity genes such as engrailed split each of the 7 bands into two, creating 14 future segments. This process explains the accurate conservation of toolkit gene sequences, which has resulted in deep homology and functional equivalence of toolkit proteins in dissimilar animals (seen, for example, when a mouse protein controls fruit fly development). The interactions of transcription factors and cis-regulatory elements, or of signalling proteins and receptors, become locked in through multiple usages, making almost any mutation deleterious and hence eliminated by natural selection. The mechanism that sets up every animal's front-back axis is the same, implying a common ancestor. There is a similar mechanism for the back-belly axis for bilaterian animals, but it is reversed between arthropods and vertebrates. Another process, gastrulation of the embryo, is driven by Myosin II molecular motors, which are not conserved across species. The process may have been started by movements of sea water in the environment, later replaced by the evolution of tissue movements in the embryo. The origins of novelty Among the more surprising and, perhaps, counterintuitive (from a neo-Darwinian viewpoint) results of recent research in evolutionary developmental biology is that the diversity of body plans and morphology in organisms across many phyla are not necessarily reflected in diversity at the level of the sequences of genes, including those of the developmental genetic toolkit and other genes involved in development. Indeed, as John Gerhart and Marc Kirschner have noted, there is an apparent paradox: "where we most expect to find variation, we find conservation, a lack of change". So, if the observed morphological novelty between different clades does not come from changes in gene sequences (such as by mutation), where does it come from? Novelty may arise by mutation-driven changes in gene regulation. Variations in the toolkit Variations in the toolkit may have produced a large part of the morphological evolution of animals. The toolkit can drive evolution in two ways. A toolkit gene can be expressed in a different pattern, as when the beak of Darwin's large ground-finch was enlarged by the BMP gene, or when snakes lost their legs as distal-less became under-expressed or not expressed at all in the places where other reptiles continued to form their limbs. Or, a toolkit gene can acquire a new function, as seen in the many functions of that same gene, distal-less, which controls such diverse structures as the mandible in vertebrates, legs and antennae in the fruit fly, and eyespot pattern in butterfly wings. Given that small changes in toolbox genes can cause significant changes in body structures, they have often enabled the same function convergently or in parallel. 
distal-less generates wing patterns in the butterflies Heliconius erato and Heliconius melpomene, which are Müllerian mimics. In so-called facilitated variation, their wing patterns arose in different evolutionary events, but are controlled by the same genes. Developmental changes can contribute directly to speciation. Consolidation of epigenetic changes Evolutionary innovation may sometimes begin in Lamarckian style with epigenetic alterations of gene regulation or phenotype generation, subsequently consolidated by changes at the gene level. Epigenetic changes include modification of DNA by reversible methylation, as well as nonprogrammed remoulding of the organism by physical and other environmental effects due to the inherent plasticity of developmental mechanisms. The biologists Stuart A. Newman and Gerd B. Müller have suggested that organisms early in the history of multicellular life were more susceptible to this second category of epigenetic determination than are modern organisms, providing a basis for early macroevolutionary changes. Developmental bias Development in specific lineages can be biased either positively, towards a given trajectory or phenotype, or negatively, away from producing certain types of change; either may be absolute (the change is always or never produced) or relative. Evidence for any such direction in evolution is however hard to acquire and can also result from developmental constraints that limit diversification. For example, in the gastropods, the snail-type shell is always built as a tube that grows both in length and in diameter; selection has created a wide variety of shell shapes such as flat spirals, cowries and tall turret spirals within these constraints. Among the centipedes, the Lithobiomorpha always have 15 trunk segments as adults, probably the result of a developmental bias towards an odd number of trunk segments. In another centipede order, the Geophilomorpha, the number of segments varies in different species between 27 and 191, but the number is always odd, making this an absolute constraint; almost all the odd numbers in that range are occupied by one or another species. Ecological evolutionary developmental biology Ecological evolutionary developmental biology integrates research from developmental biology and ecology to examine their relationship with evolutionary theory. Researchers study concepts and mechanisms such as developmental plasticity, epigenetic inheritance, genetic assimilation, niche construction and symbiosis. See also Arthropod head problem Cell signaling Evolution & Development (journal) Human evolutionary developmental biology Just So Stories (as seen by evolutionary developmental biologists) Plant evolutionary developmental biology Recapitulation theory
Biosemiotics
Biosemiotics (from the Greek βίος bios, "life" and σημειωτικός sēmeiōtikos, "observant of signs") is a field of semiotics and biology that studies the prelinguistic meaning-making, biological interpretation processes, production of signs and codes and communication processes in the biological realm. Biosemiotics integrates the findings of biology and semiotics and proposes a paradigmatic shift in the scientific view of life, in which semiosis (sign process, including meaning and interpretation) is one of its immanent and intrinsic features. The term biosemiotic was first used by Friedrich S. Rothschild in 1962, but Thomas Sebeok, Thure von Uexküll, Jesper Hoffmeyer and many others have implemented the term and field. The field is generally divided between theoretical and applied biosemiotics. Insights from biosemiotics have also been adopted in the humanities and social sciences, including human-animal studies, human-plant studies and cybersemiotics. Definition Biosemiotics is the study of meaning making processes in the living realm, or, to elaborate, a study of signification, communication and habit formation of living processes semiosis (creating and changing sign relations) in living nature the biological basis of all signs and sign interpretation interpretative processes, codes and cognition in organisms Main branches According to the basic types of semiosis under study, biosemiotics can be divided into vegetative semiotics (also endosemiotics, or phytosemiotics), the study of semiosis at the cellular and molecular level (including the translation processes related to genome and the organic form or phenotype); vegetative semiosis occurs in all organisms at their cellular and tissue level; vegetative semiotics includes prokaryote semiotics, sign-mediated interactions in bacteria communities such as quorum sensing and quorum quenching. zoosemiotics or animal semiotics, or the study of animal forms of knowing; animal semiosis occurs in the organisms with neuromuscular system, also includes anthroposemiotics, the study of semiotic behavior in humans. According to the dominant aspect of semiosis under study, the following labels have been used: biopragmatics, biosemantics, and biosyntactics. History Apart from Charles Sanders Peirce (1839–1914) and Charles W. Morris (1903–1979), early pioneers of biosemiotics were Jakob von Uexküll (1864–1944), Heini Hediger (1908–1992), Giorgio Prodi (1928–1987), Marcel Florkin (1900–1979) and Friedrich S. Rothschild (1899–1995); the founding fathers of the contemporary interdiscipline were Thomas Sebeok (1920–2001) and Thure von Uexküll (1908–2004). In the 1980s a circle of mathematicians active in Theoretical Biology, René Thom (Institut des Hautes Etudes Scientifiques), Yannick Kergosien (Dalhousie University and Institut des Hautes Etudes Scientifiques), and Robert Rosen (Dalhousie University, also a former member of the Buffalo group with Howard H. Pattee), explored the relations between Semiotics and Biology using such headings as "Nature Semiotics", "Semiophysics", or "Anticipatory Systems" and taking a modeling approach. The contemporary period (as initiated by Copenhagen-Tartu school) include biologists Jesper Hoffmeyer, Kalevi Kull, Claus Emmeche, Terrence Deacon, semioticians Martin Krampen, Paul Cobley, philosophers Donald Favareau, John Deely, John Collier and complex systems scientists Howard H. Pattee, Michael Conrad, Luis M. Rocha, Cliff Joslyn and León Croizat. 
In 2001, an annual international conference for biosemiotic research known as the Gatherings in Biosemiotics was inaugurated, and has taken place every year since. In 2004, a group of biosemioticians – Marcello Barbieri, Claus Emmeche, Jesper Hoffmeyer, Kalevi Kull, and Anton Markoš – decided to establish an international journal of biosemiotics. Under their editorship, the Journal of Biosemiotics was launched by Nova Science Publishers in 2005 (two issues published), and with the same five co-editors Biosemiotics was launched by Springer in 2008. The book series Biosemiotics (Springer), edited by Claus Emmeche, Donald Favareau, Kalevi Kull, and Alexei Sharov, began in 2007 and 27 volumes have been published in the series by 2024. The International Society for Biosemiotic Studies was established in 2005 by Donald Favareau and the five editors listed above. A collective programmatic paper on the basic theses of biosemiotics appeared in 2009, and in 2010 an 800-page textbook and anthology, Essential Readings in Biosemiotics, was published, with bibliographies and commentary by Donald Favareau. One of the roots of biosemiotics has been medical semiotics. In 2016, Springer published Biosemiotic Medicine: Healing in the World of Meaning, edited by Farzad Goli as part of Studies in Neuroscience, Consciousness and Spirituality. In the humanities Since the work of Jakob von Uexküll and Martin Heidegger, several scholars in the humanities have engaged with or appropriated ideas from biosemiotics in their own projects; conversely, biosemioticians have critically engaged with or reformulated humanistic theories using ideas from biosemiotics and complexity theory. For instance, Andreas Weber has reformulated some of Hans Jonas's ideas using concepts from biosemiotics, and biosemiotics has been used to interpret the poetry of John Burnside. Since 2021, the American philosopher Jason Josephson Storm has drawn on biosemiotics and empirical research on animal communication to propose hylosemiotics, a theory of ontology and communication that Storm believes could allow the humanities to move beyond the linguistic turn. John Deely's work also represents an engagement between humanistic and biosemiotic approaches. Deely was trained as a historian and not a biologist but discussed biosemiotics and zoosemiotics extensively in his introductory works on semiotics and clarified terms that are relevant for biosemiotics. Although his idea of physiosemiotics was criticized by practicing biosemioticians, Paul Cobley, Donald Favareau, and Kalevi Kull wrote that "the debates on this conceptual point between Deely and the biosemiotics community were always civil and marked by a mutual admiration for the contributions of the other towards the advancement of our understanding of sign relations." See also Animal communication Biocommunication (science) Cognitive biology Ecosemiotics Mimicry Naturalization of intentionality Phytosemiotics Plant communication Zoosemiotics References Bibliography Alexander, V. N. (2011). The Biologist's Mistress: Rethinking Self-Organization in Art, Literature and Nature. Litchfield Park AZ: Emergent Publications. Barbieri, Marcello (ed.) (2008). The Codes of Life: The Rules of Macroevolution. Berlin: Springer. Emmeche, Claus; Kull, Kalevi (eds.) (2011). Towards a Semiotic Biology: Life is the Action of Signs. London: Imperial College Press. Emmeche, Claus; Kalevi Kull and Frederik Stjernfelt. (2002): Reading Hoffmeyer, Rethinking Biology. (Tartu Semiotics Library 3).
Tartu: Tartu University Press. Favareau, D. (ed.) (2010). Essential Readings in Biosemiotics: Anthology and Commentary. Berlin: Springer. Favareau, D. (2006). The evolutionary history of biosemiotics. In "Introduction to Biosemiotics: The New Biological Synthesis." Marcello Barbieri (Ed.) Berlin: Springer. pp 1–67. Hoffmeyer, Jesper. (1996): Signs of Meaning in the Universe. Bloomington: Indiana University Press. (special issue of Semiotica vol. 120 (no.3-4), 1998, includes 13 reviews of the book and a rejoinder by the author). Hoffmeyer, Jesper (2008). Biosemiotics: An Examination into the Signs of Life and the Life of Signs. Scranton: University of Scranton Press. Hoffmeyer, Jesper (ed.)(2008). A Legacy for Living Systems: Gregory Bateson as a Precursor to Biosemiotics. Berlin: Springer. Hoffmeyer Jesper; Kull, Kalevi (2003): Baldwin and Biosemiotics: What Intelligence Is For. In: Bruce H. Weber and David J. Depew (eds.), Evolution and Learning - The Baldwin Effect Reconsidered'. Cambridge: The MIT Press. Kull, Kalevi, eds. (2001). Jakob von Uexküll: A Paradigm for Biology and Semiotics. Berlin & New York: Mouton de Gruyter. [ = Semiotica vol. 134 (no.1-4)]. Rothschild, Friedrich S. (2000). Creation and Evolution: A Biosemiotic Approach. Edison, New Jersey: Transaction Publishers. Sebeok, Thomas A.; Umiker-Sebeok, Jean (eds.) (1992): Biosemiotics. The Semiotic Web 1991. Berlin and New York: Mouton de Gruyter. Sebeok, Thomas A.; Hoffmeyer, Jesper; Emmeche, Claus (eds.) (1999). Biosemiotica. Berlin & New York: Mouton de Gruyter. [ = Semiotica vol. 127 (no.1-4)]. External links International Society for Biosemiotics Studies, (older version) New Scientist article on Biosemiotics The Biosemiotics website by Alexei Sharov Biosemiotics in Spanish Biosemiotics, introduction (Archive.org archived version) Overview of Gatherings in Biosemiotics The S.E.E.D. Journal (Semiotics, Evolution, Energy, and Development) Jakob von Uexküll Centre Zoosemiotics Home Page Plant cognition Plant communication Semiotics Zoosemiotics
In vitro
In vitro (meaning in glass, or in the glass) studies are performed with microorganisms, cells, or biological molecules outside their normal biological context. Colloquially called "test-tube experiments", these studies in biology and its subdisciplines are traditionally done in labware such as test tubes, flasks, Petri dishes, and microtiter plates. Studies conducted using components of an organism that have been isolated from their usual biological surroundings permit a more detailed or more convenient analysis than can be done with whole organisms; however, results obtained from in vitro experiments may not fully or accurately predict the effects on a whole organism. In contrast to in vitro experiments, in vivo studies are those conducted in living organisms, including humans (where such studies are known as clinical trials) and whole plants. Definition In vitro (Latin for "in glass"; often not italicized in English usage) studies are conducted using components of an organism that have been isolated from their usual biological surroundings, such as microorganisms, cells, or biological molecules. For example, microorganisms or cells can be studied in artificial culture media, and proteins can be examined in solutions. Colloquially called "test-tube experiments", these studies in biology, medicine, and their subdisciplines are traditionally done in test tubes, flasks, Petri dishes, etc. They now involve the full range of techniques used in molecular biology, such as the omics. In contrast, studies conducted in living beings (microorganisms, animals, humans, or whole plants) are called in vivo. Examples Examples of in vitro studies include: the isolation, growth and identification of cells derived from multicellular organisms (in cell or tissue culture); subcellular components (e.g. mitochondria or ribosomes); cellular or subcellular extracts (e.g. wheat germ or reticulocyte extracts); purified molecules (such as proteins, DNA, or RNA); and the commercial production of antibiotics and other pharmaceutical products. Viruses, which only replicate in living cells, are studied in the laboratory in cell or tissue culture, and many animal virologists refer to such work as being in vitro to distinguish it from in vivo work in whole animals. Polymerase chain reaction is a method for selective replication of specific DNA and RNA sequences in the test tube. Protein purification involves the isolation of a specific protein of interest from a complex mixture of proteins, often obtained from homogenized cells or tissues. In vitro fertilization is used to allow spermatozoa to fertilize eggs in a culture dish before implanting the resulting embryo or embryos into the uterus of the prospective mother. In vitro diagnostics refers to a wide range of medical and veterinary laboratory tests that are used to diagnose diseases and monitor the clinical status of patients using samples of blood, cells, or other tissues obtained from a patient. In vitro testing has been used to characterize specific absorption, distribution, metabolism, and excretion processes of drugs or general chemicals inside a living organism; for example, Caco-2 cell experiments can be performed to estimate the absorption of compounds through the lining of the gastrointestinal tract; the partitioning of the compounds between organs can be determined to study distribution mechanisms; and suspension or plated cultures of primary hepatocytes or hepatocyte-like cell lines (Hep G2, HepaRG) can be used to study and quantify metabolism of chemicals.
These ADME process parameters can then be integrated into so called "physiologically based pharmacokinetic models" or PBPK. Advantages In vitro studies permit a species-specific, simpler, more convenient, and more detailed analysis than can be done with the whole organism. Just as studies in whole animals more and more replace human trials, so are in vitro studies replacing studies in whole animals. Simplicity Living organisms are extremely complex functional systems that are made up of, at a minimum, many tens of thousands of genes, protein molecules, RNA molecules, small organic compounds, inorganic ions, and complexes in an environment that is spatially organized by membranes, and in the case of multicellular organisms, organ systems. These myriad components interact with each other and with their environment in a way that processes food, removes waste, moves components to the correct location, and is responsive to signalling molecules, other organisms, light, sound, heat, taste, touch, and balance. This complexity makes it difficult to identify the interactions between individual components and to explore their basic biological functions. In vitro work simplifies the system under study, so the investigator can focus on a small number of components. For example, the identity of proteins of the immune system (e.g. antibodies), and the mechanism by which they recognize and bind to foreign antigens would remain very obscure if not for the extensive use of in vitro work to isolate the proteins, identify the cells and genes that produce them, study the physical properties of their interaction with antigens, and identify how those interactions lead to cellular signals that activate other components of the immune system. Species specificity Another advantage of in vitro methods is that human cells can be studied without "extrapolation" from an experimental animal's cellular response. Convenience, automation In vitro methods can be miniaturized and automated, yielding high-throughput screening methods for testing molecules in pharmacology or toxicology. Disadvantages The primary disadvantage of in vitro experimental studies is that it may be challenging to extrapolate from the results of in vitro work back to the biology of the intact organism. Investigators doing in vitro work must be careful to avoid over-interpretation of their results, which can lead to erroneous conclusions about organismal and systems biology. For example, scientists developing a new viral drug to treat an infection with a pathogenic virus (e.g., HIV-1) may find that a candidate drug functions to prevent viral replication in an in vitro setting (typically cell culture). However, before this drug is used in the clinic, it must progress through a series of in vivo trials to determine if it is safe and effective in intact organisms (typically small animals, primates, and humans in succession). Typically, most candidate drugs that are effective in vitro prove to be ineffective in vivo because of issues associated with delivery of the drug to the affected tissues, toxicity towards essential parts of the organism that were not represented in the initial in vitro studies, or other issues. In vitro test batteries A method which could help decrease animal testing is the use of in vitro batteries, where several in vitro assays are compiled to cover multiple endpoints. 
Within developmental neurotoxicity and reproductive toxicity there are hopes for test batteries to become easy screening methods for prioritizing which chemicals should be risk assessed and in which order. Within ecotoxicology, in vitro test batteries are already in use for regulatory purposes and for the toxicological evaluation of chemicals. In vitro tests can also be combined with in vivo testing to make an in vitro–in vivo test battery, for example for pharmaceutical testing. In vitro to in vivo extrapolation Results obtained from in vitro experiments cannot usually be transposed, as is, to predict the reaction of an entire organism in vivo. Building a consistent and reliable extrapolation procedure from in vitro results to in vivo is therefore extremely important. Solutions include: increasing the complexity of in vitro systems to reproduce tissues and the interactions between them (as in "human on chip" systems), and using mathematical modeling to numerically simulate the behavior of the complex system, where the in vitro data provide model parameter values. These two approaches are not incompatible; better in vitro systems provide better data to mathematical models. However, increasingly sophisticated in vitro experiments collect increasingly numerous, complex, and challenging data to integrate. Mathematical models, such as systems biology models, are much needed here. Extrapolating in pharmacology In pharmacology, IVIVE can be used to approximate pharmacokinetics (PK) or pharmacodynamics (PD). Since the timing and intensity of effects on a given target depend on the concentration time course of the candidate drug (parent molecule or metabolites) at that target site, in vivo tissue and organ sensitivities can be completely different from, or even the inverse of, those observed on cells cultured and exposed in vitro. That indicates that extrapolating effects observed in vitro needs a quantitative model of in vivo PK. Physiologically based PK (PBPK) models are generally accepted to be central to the extrapolations. In the case of early effects or those without intercellular communications, the same cellular exposure concentration is assumed to cause the same effects, both qualitatively and quantitatively, in vitro and in vivo. In these conditions, developing a simple PD model of the dose–response relationship observed in vitro and transposing it without changes to predict in vivo effects is enough. See also Animal testing Ex vivo In situ In utero In vivo In silico In papyro Animal in vitro cellular and developmental biology Plant in vitro cellular and developmental biology In vitro toxicology In vitro to in vivo extrapolation Slice preparation
Biosignature
A biosignature (sometimes called chemical fossil or molecular fossil) is any substance – such as an element, isotope, molecule, or phenomenon – that provides scientific evidence of past or present life on a planet. Measurable attributes of life include its physical or chemical structures, its use of free energy, and the production of biomass and wastes. The field of astrobiology uses biosignatures as evidence for the search for past or present extraterrestrial life. Types Biosignatures can be grouped into ten broad categories: Isotope patterns: Isotopic evidence or patterns that require biological processes. Chemistry: Chemical features that require biological activity. Organic matter: Organics formed by biological processes. Minerals: Minerals or biomineral-phases whose composition and/or morphology indicate biological activity (e.g., biomagnetite). Microscopic structures and textures: Biologically-formed cements, microtextures, microfossils, and films. Macroscopic physical structures and textures: Structures that indicate microbial ecosystems, biofilms (e.g., stromatolites), or fossils of larger organisms. Temporal variability: Variations in time of atmospheric gases, reflectivity, or macroscopic appearance that indicate life's presence. Surface reflectance features: Large-scale reflectance features due to biological pigments. Atmospheric gases: Gases formed by metabolic processes, which may be present on a planet-wide scale. Technosignatures: Signatures that indicate a technologically advanced civilization. Viability Determining whether an observed feature is a true biosignature is complex. There are three criteria that a potential biosignature must meet to be considered viable for further research: reliability, survivability, and detectability. Reliability A biosignature must be able to dominate over all other processes that may produce similar physical, spectral, and chemical features. When investigating a potential biosignature, scientists must carefully consider all other possible origins of the biosignature in question. Many forms of life are known to mimic geochemical reactions. One of the theories on the origin of life involves molecules developing the ability to catalyse geochemical reactions to exploit the energy being released by them. These are some of the earliest known metabolisms (see methanogenesis). In such a case, scientists might search for a disequilibrium in the geochemical cycle, which would point to a reaction happening more or less often than it should. A disequilibrium such as this could be interpreted as an indication of life. Survivability A biosignature must be able to last for long enough that a probe, telescope, or human is able to detect it. A consequence of a biological organism's use of metabolic reactions for energy is the production of metabolic waste. In addition, the structure of an organism can be preserved as a fossil, and we know that some fossils on Earth are as old as 3.5 billion years. These byproducts can make excellent biosignatures since they provide direct evidence for life. However, in order to be a viable biosignature, a byproduct must subsequently remain intact so that scientists may discover it. Detectability A biosignature must be detectable with the latest technology to be relevant in scientific investigation. This seems to be an obvious statement; however, there are many scenarios in which life may be present on a planet yet remain undetectable because of human-caused limitations.
False positives Every possible biosignature is associated with its own set of unique false positive mechanisms, or non-biological processes that can mimic the detectable feature of a biosignature. An important example is using oxygen as a biosignature. On Earth, the majority of life is centred around oxygen. It is a byproduct of photosynthesis and is subsequently used by other life forms to breathe. Oxygen is also readily detectable in spectra, with multiple bands across a relatively wide wavelength range; it therefore makes a very good biosignature. However, finding oxygen alone in a planet's atmosphere is not enough to confirm a biosignature because of the false-positive mechanisms associated with it. One possibility is that oxygen can build up abiotically via photolysis if there is a low inventory of non-condensable gases or if the planet loses a lot of water. Finding and distinguishing a biosignature from its potential false-positive mechanisms is one of the most complicated parts of testing for viability because it relies on human ingenuity to break an abiotic-biological degeneracy, if nature allows. False negatives In contrast to false positives, false negative biosignatures arise in a scenario where life may be present on another planet, but some processes on that planet make potential biosignatures undetectable. This is an ongoing problem and area of research in preparation for future telescopes that will be capable of observing exoplanetary atmospheres. Human limitations There are many ways in which humans may limit the viability of a potential biosignature. The resolution of a telescope becomes important when vetting certain false-positive mechanisms, and many current telescopes do not have the capabilities to observe at the resolution needed to investigate some of these. In addition, probes and telescopes are worked on by huge collaborations of scientists with varying interests. As a result, new probes and telescopes carry a variety of instruments that are a compromise among everyone's unique inputs. To allow other types of scientist to detect something unrelated to biosignatures, a sacrifice may have to be made in the capability of an instrument to search for biosignatures. General examples Geomicrobiology The ancient record on Earth provides an opportunity to see what geochemical signatures are produced by microbial life and how these signatures are preserved over geologic time. Some related disciplines such as geochemistry, geobiology, and geomicrobiology often use biosignatures to determine if living organisms are or were present in a sample. These possible biosignatures include: (a) microfossils and stromatolites; (b) molecular structures (biomarkers) and isotopic compositions of carbon, nitrogen and hydrogen in organic matter; (c) multiple sulfur and oxygen isotope ratios of minerals; and (d) abundance relationships and isotopic compositions of redox-sensitive metals (e.g., Fe, Mo, Cr, and rare earth elements). For example, the particular fatty acids measured in a sample can indicate which types of bacteria and archaea live in that environment. Another example is the long-chain fatty alcohols with more than 23 carbon atoms that are produced by planktonic bacteria. When used in this sense, geochemists often prefer the term biomarker. Another example is the presence of straight-chain lipids in the form of alkanes, alcohols, and fatty acids with 20–36 carbon atoms in soils or sediments. In peat deposits, these lipids are an indication of an origin in the epicuticular waxes of higher plants.
Life processes may produce a range of biosignatures such as nucleic acids, lipids, proteins, amino acids, kerogen-like material and various morphological features that are detectable in rocks and sediments. Microbes often interact with geochemical processes, leaving features in the rock record indicative of biosignatures. For example, bacterial micrometer-sized pores in carbonate rocks resemble inclusions under transmitted light, but have distinct sizes, shapes, and patterns (swirling or dendritic) and are distributed differently from common fluid inclusions. A potential biosignature is a phenomenon that may have been produced by life, but for which alternate abiotic origins may also be possible. Morphology Another possible biosignature might be morphology since the shape and size of certain objects may potentially indicate the presence of past or present life. For example, microscopic magnetite crystals in the Martian meteorite ALH84001 are one of the longest-debated of several potential biosignatures in that specimen. The possible biomineral studied in the Martian ALH84001 meteorite includes putative microbial fossils, tiny rock-like structures whose shape was a potential biosignature because it resembled known bacteria. Most scientists ultimately concluded that these were far too small to be fossilized cells. A consensus that has emerged from these discussions, and is now seen as a critical requirement, is the demand for further lines of evidence in addition to any morphological data that supports such extraordinary claims. Currently, the scientific consensus is that "morphology alone cannot be used unambiguously as a tool for primitive life detection". Interpretation of morphology is notoriously subjective, and its use alone has led to numerous errors of interpretation. Chemistry No single compound will prove life once existed. Rather, it will be distinctive patterns present in any organic compounds showing a process of selection. For example, membrane lipids left behind by degraded cells will be concentrated, have a limited size range, and comprise an even number of carbons. Similarly, life only uses left-handed amino acids in its proteins. Biosignatures need not be chemical, however, and can also be suggested by a distinctive magnetic biosignature. Chemical biosignatures include any suite of complex organic compounds composed of carbon, hydrogen, and other elements or heteroatoms such as oxygen, nitrogen, and sulfur, which are found in crude oils, bitumen, and petroleum source rock and eventually show simplification in molecular structure from the parent organic molecules found in all living organisms. They are complex carbon-based molecules derived from formerly living organisms. Each biomarker is quite distinctive when compared to its counterparts, as the time required for organic matter to convert to crude oil is characteristic. Most biomarkers also usually have high molecular mass. Some examples of biomarkers found in petroleum are pristane, triterpanes, steranes, phytane and porphyrin. Such petroleum biomarkers are produced via chemical synthesis using biochemical compounds as their main constituents. For instance, triterpenes are derived from biochemical compounds found in land (angiosperm) plants. The presence of petroleum biomarkers in only small amounts in reservoir or source rocks makes it necessary to use sensitive and differential approaches to analyze the presence of those compounds. The techniques typically used include gas chromatography and mass spectrometry.
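As an illustration of the chain-length patterns mentioned above, the sketch below computes a simple even-over-odd carbon-number ratio for a lipid distribution. The index and the example abundances are illustrative assumptions rather than a standard published metric; real analyses rely on the chromatographic and mass-spectrometric methods just described.

```python
def even_over_odd(abundances):
    # abundances maps carbon-chain length -> measured abundance (arbitrary units).
    even = sum(v for c, v in abundances.items() if c % 2 == 0)
    odd = sum(v for c, v in abundances.items() if c % 2 == 1)
    return even / odd if odd else float("inf")

biological_like = {14: 5, 15: 1, 16: 30, 17: 2, 18: 25, 19: 1, 20: 8}   # strong even-carbon preference
abiotic_like = {14: 6, 15: 6, 16: 7, 17: 6, 18: 7, 19: 6, 20: 5}        # smooth, no preference

print(round(even_over_odd(biological_like), 1))   # well above 1: consistent with a biological source
print(round(even_over_odd(abiotic_like), 1))      # close to 1: no carbon-number preference
```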
Petroleum biomarkers are highly important in petroleum inspection as they help indicate the depositional territories and determine the geological properties of oils. For instance, they provide more details concerning their maturity and the source material. In addition to that they can also be good parameters of age, hence they are technically referred to as "chemical fossils". The ratio of pristane to phytane (pr:ph) is the geochemical factor that allows petroleum biomarkers to be successful indicators of their depositional environments. Geologists and geochemists use biomarker traces found in crude oils and their related source rock to unravel the stratigraphic origin and migration patterns of presently existing petroleum deposits. The dispersion of biomarker molecules is also quite distinctive for each type of oil and its source; hence, they display unique fingerprints. Another factor that makes petroleum biomarkers more preferable than their counterparts is that they have a high tolerance to environmental weathering and corrosion. Such biomarkers are very advantageous and often used in the detection of oil spillage in the major waterways. The same biomarkers can also be used to identify contamination in lubricant oils. However, biomarker analysis of untreated rock cuttings can be expected to produce misleading results. This is due to potential hydrocarbon contamination and biodegradation in the rock samples. Atmospheric The atmospheric properties of exoplanets are of particular importance, as atmospheres provide the most likely observables for the near future, including habitability indicators and biosignatures. Over billions of years, the processes of life on a planet would result in a mixture of chemicals unlike anything that could form in an ordinary chemical equilibrium. For example, large amounts of oxygen and small amounts of methane are generated by life on Earth. An exoplanet's color—or reflectance spectrum—can also be used as a biosignature due to the effect of pigments that are uniquely biologic in origin such as the pigments of phototrophic and photosynthetic life forms. Scientists use the Earth as an example of this when looked at from far away (see Pale Blue Dot) as a comparison to worlds observed outside of our solar system. Ultraviolet radiation on life forms could also induce biofluorescence in visible wavelengths that may be detected by the new generation of space observatories under development. Some scientists have reported methods of detecting hydrogen and methane in extraterrestrial atmospheres. Habitability indicators and biosignatures must be interpreted within a planetary and environmental context. For example, the presence of oxygen and methane together could indicate the kind of extreme thermochemical disequilibrium generated by life. Two of the top 14,000 proposed atmospheric biosignatures are dimethyl sulfide and chloromethane. An alternative biosignature is the combination of methane and carbon dioxide. The detection of phosphine in the atmosphere of Venus is being investigated as a possible biosignature. Atmospheric disequilibrium A disequilibrium in the abundance of gas species in an atmosphere can be interpreted as a biosignature. Life has greatly altered the atmosphere on Earth in a way that would be unlikely for any other processes to replicate. Therefore, a departure from equilibrium is evidence for a biosignature. 
For example, the abundance of methane in the Earth's atmosphere is orders of magnitude above the equilibrium value due to the constant methane flux that life on the surface emits. Depending on the host star, a disequilibrium in the methane abundance on another planet may indicate a biosignature. Agnostic biosignatures Because the only form of known life is that on Earth, the search for biosignatures is heavily influenced by the products that life produces on Earth. However, life that is different from life on Earth may still produce biosignatures that are detectable by humans, even though nothing is known about their specific biology. This form of biosignature is called an "agnostic biosignature" because it is independent of the form of life that produces it. It is widely agreed that all life – no matter how different it is from life on Earth – needs a source of energy to thrive. This must involve some sort of chemical disequilibrium, which can be exploited for metabolism. Geological processes are independent of life, and if scientists can constrain the geology well enough on another planet, then they know what the particular geologic equilibrium for that planet should be. A deviation from geological equilibrium can be interpreted as an atmospheric disequilibrium and an agnostic biosignature. Antibiosignatures In the same way that detecting a biosignature would be a significant discovery about a planet, finding evidence that life is not present can also be an important discovery about a planet. Life relies on redox imbalances to metabolize the resources available into energy. Evidence that nothing on a planet is taking advantage of the "free lunch" available due to an observed redox imbalance is called an antibiosignature. Polyelectrolytes The polyelectrolyte theory of the gene is a proposed generic biosignature. In 2002, Steven A. Benner and Daniel Hutter proposed that for a linear genetic biopolymer dissolved in water, such as DNA, to undergo Darwinian evolution anywhere in the universe, it must be a polyelectrolyte, a polymer containing repeating ionic charges. Benner and others proposed methods for concentrating and analyzing these polyelectrolyte genetic biopolymers on Mars, Enceladus, and Europa. Specific examples Methane on Mars The presence of methane in the atmosphere of Mars is an area of ongoing research and a highly contentious subject. Because of its tendency to be destroyed in the atmosphere by photochemistry, the presence of excess methane on a planet can indicate that there must be an active source. With life being the strongest source of methane on Earth, observing a disequilibrium in the methane abundance on another planet could be a viable biosignature. Since 2004, there have been several detections of methane in the Mars atmosphere by a variety of instruments onboard orbiters and landers on the Martian surface, as well as Earth-based telescopes. These missions reported values ranging from a 'background level' of 0.24 to 0.65 parts per billion by volume (p.p.b.v.) up to spikes of as much as 45 ± 10 p.p.b.v.
This nondetection is a major contradiction to what was previously observed with less sensitive instruments and will remain a strong argument in the ongoing debate over the presence of methane in the Martian atmosphere. Furthermore, current photochemical models cannot explain the presence of methane in the atmosphere of Mars and its reported rapid variations in space and time. Neither its fast appearance nor its fast disappearance can be explained yet. To rule out a biogenic origin for the methane, a future probe or lander hosting a mass spectrometer will be needed, as the isotopic proportions of carbon-12 to carbon-13 in methane could distinguish between a biogenic and non-biogenic origin, similarly to the use of the δ13C standard for recognizing biogenic methane on Earth. Martian atmosphere The Martian atmosphere contains high abundances of photochemically produced CO and H2, which are reducing molecules. Mars' atmosphere is otherwise mostly oxidizing, leading to a source of untapped energy that life could exploit if it used a metabolism compatible with one or both of these reducing molecules. Because these molecules can be observed, scientists use this as evidence for an antibiosignature. Scientists have used this concept as an argument against life on Mars. Missions inside the Solar System Astrobiological exploration is founded upon the premise that biosignatures encountered in space will be recognizable as extraterrestrial life. The usefulness of a biosignature is determined not only by the probability of life creating it but also by the improbability of non-biological (abiotic) processes producing it. Concluding that evidence of an extraterrestrial life form (past or present) has been discovered requires proving that a possible biosignature was produced by the activities or remains of life. As with most scientific discoveries, discovery of a biosignature will require evidence building up until no other explanation exists. Possible examples of a biosignature include complex organic molecules or structures whose formation is virtually unachievable in the absence of life: cellular and extracellular morphologies, biomolecules in rocks, bio-organic molecular structures, chirality, biogenic minerals, biogenic isotope patterns in minerals and organic compounds, atmospheric gases, and photosynthetic pigments. The Viking missions to Mars The Viking missions to Mars in the 1970s conducted the first experiments which were explicitly designed to look for biosignatures on another planet. Each of the two Viking landers carried three life-detection experiments which looked for signs of metabolism; however, the results were declared inconclusive. Mars Science Laboratory The Mars Science Laboratory mission, with its Curiosity rover, is currently assessing the potential past and present habitability of the Martian environment and is attempting to detect biosignatures on the surface of Mars. Considering the MSL instrument payload package, the following classes of biosignatures are within the MSL detection window: organism morphologies (cells, body fossils, casts), biofabrics (including microbial mats), diagnostic organic molecules, isotopic signatures, evidence of biomineralization and bioalteration, spatial patterns in chemistry, and biogenic gases. The Curiosity rover targets outcrops to maximize the probability of detecting 'fossilized' organic matter preserved in sedimentary deposits.
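The source-and-sink reasoning that runs through the Mars methane discussion above can be made roughly quantitative with a one-box model, in which the steady-state abundance equals the source rate multiplied by the photochemical lifetime. The lifetime and abundance figures below are rough illustrative assumptions rather than mission results.

```python
import math

# One-box model: dC/dt = S - C / tau, so the steady-state abundance is C = S * tau.
tau_years = 300.0        # assumed photochemical lifetime of CH4 on Mars (order-of-magnitude figure)
observed_ppbv = 45.0     # upper end of the reported detections
floor_ppbv = 0.05        # upper bound from the Trace Gas Orbiter non-detections

source_needed = observed_ppbv / tau_years
print(f"Source needed to sustain {observed_ppbv} p.p.b.v.: about {source_needed:.2f} p.p.b.v. per year")

# Purely photochemical decay from 45 p.p.b.v. would take on the order of two thousand years
# to fall below 0.05 p.p.b.v., so reported rapid drops imply an additional, unidentified sink.
decay_years = tau_years * math.log(observed_ppbv / floor_ppbv)
print(f"Photochemical decay from {observed_ppbv} to {floor_ppbv} p.p.b.v.: about {decay_years:.0f} years")
```

This is the sense in which both a sustained methane abundance and its rapid disappearance are hard to explain without invoking an active source, a fast sink, or both.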
ExoMars Orbiter The 2016 ExoMars Trace Gas Orbiter (TGO) is a Mars telecommunications orbiter and atmospheric gas analyzer mission. It delivered the Schiaparelli EDM lander and then began to settle into its science orbit to map the sources of methane on Mars and other gases, and in doing so, will help select the landing site for the Rosalind Franklin rover to be launched in 2022. The primary objective of the Rosalind Franklin rover mission is the search for biosignatures on the surface and subsurface by using a drill able to collect samples down to a depth of , away from the destructive radiation that bathes the surface. Mars 2020 Rover The Mars 2020 rover, which launched in 2020, is intended to investigate an astrobiologically relevant ancient environment on Mars, investigate its surface geological processes and history, including the assessment of its past habitability, the possibility of past life on Mars, and potential for preservation of biosignatures within accessible geological materials. In addition, it will cache the most interesting samples for possible future transport to Earth. Titan Dragonfly NASA's Dragonfly lander/aircraft concept is proposed to launch in 2025 and would seek evidence of biosignatures on the organic-rich surface and atmosphere of Titan, as well as study its possible prebiotic primordial soup. Titan is the largest moon of Saturn and is widely believed to have a large subsurface ocean consisting of a salty brine. In addition, scientists believe that Titan may have the conditions necessary to promote prebiotic chemistry, making it a prime candidate for biosignature discovery. Europa Clipper NASA's Europa Clipper probe is designed as a flyby mission to Jupiter's smallest Galilean moon, Europa. The mission launched in October 2024 and is set to reach Europa in April 2030, where it will investigate the potential for habitability on Europa. Europa is one of the best candidates for biosignature discovery in the Solar System because of the scientific consensus that it retains a subsurface ocean, with two to three times the volume of water on Earth. Evidence for this subsurface ocean includes: Voyager 1 (1979): The first close-up photos of Europa are taken. Scientists propose that a subsurface ocean could cause the tectonic-like marks on the surface. Galileo (1997): The magnetometer aboard this probe detected a subtle change in the magnetic field near Europa. This was later interpreted as a disruption in the expected magnetic field due to the current induction in a conducting layer on Europa. The composition of this conducting layer is consistent with a salty subsurface ocean. Hubble Space Telescope (2012): An image was taken of Europa which showed evidence for a plume of water vapor coming off the surface. The Europa Clipper probe includes instruments to help confirm the existence and composition of a subsurface ocean and thick icy layer. In addition, the instruments will be used to map and study surface features that may indicate tectonic activity due to a subsurface ocean. Enceladus Although there are no set plans to search for biosignatures on Saturn's sixth-largest moon, Enceladus, the prospects of biosignature discovery there are exciting enough to warrant several mission concepts that may be funded in the future. Similar to Jupiter's moon Europa, there is much evidence for a subsurface ocean to also exist on Enceladus. Plumes of water vapor were first observed in 2005 by the Cassini mission and were later determined to contain salt as well as organic compounds. 
In 2014, more evidence was presented using gravimetric measurements on Enceladus to conclude that there is in fact a large reservoir of water underneath an icy surface. Mission design concepts include: Enceladus Life Finder (ELF) Enceladus Life Signatures and Habitability Enceladus Organic Analyzer Enceladus Explorer (En-Ex) Explorer of Enceladus and Titan (E2T) Journey to Enceladus and Titan (JET) Life Investigation For Enceladus (LIFE) Testing the Habitability of Enceladus's Ocean (THEO) All of these concept missions have similar science goals: To assess the habitability of Enceladus and search for biosignatures, in line with the strategic map for exploring the ocean-world Enceladus. Searching outside of the Solar System At 4.2 light-years (1.3 parsecs, 40 trillion km, or 25 trillion miles) away from Earth, the closest potentially habitable exoplanet is Proxima Centauri b, which was discovered in 2016. This means it would take more than 18,100 years to get there if a vessel could consistently travel as fast as the Juno spacecraft (250,000 kilometers per hour or 150,000 miles per hour). It is currently not feasible to send humans or even probes to search for biosignatures outside of the Solar System. The only way to search for biosignatures outside of the Solar System is by observing exoplanets with telescopes. There have been no plausible or confirmed biosignature detections outside of the Solar System. Despite this, it is a rapidly growing field of research due to the prospects of the next generation of telescopes. The James Webb Space Telescope, which launched in December 2021, will be a promising next step in the search for biosignatures. Although its wavelength range and resolution will not be compatible with some of the more important atmospheric biosignature gas bands like oxygen, it will still be able to detect some evidence for oxygen false positive mechanisms. The new generation of ground-based 30-meter class telescopes (Thirty Meter Telescope and Extremely Large Telescope) will have the ability to take high-resolution spectra of exoplanet atmospheres at a variety of wavelengths. These telescopes will be capable of distinguishing some of the more difficult false positive mechanisms such as the abiotic buildup of oxygen via photolysis. In addition, their large collecting area will enable high angular resolution, making direct imaging studies more feasible. See also Bioindicator MERMOZ (remote detection of lifeforms) Taphonomy Technosignature References Astrobiology Astrochemistry Bioindicators Biology terminology Search for extraterrestrial intelligence Petroleum geology
Oxidative phosphorylation
Oxidative phosphorylation, also known as electron transport-linked phosphorylation or terminal oxidation, is the metabolic pathway in which cells use enzymes to oxidize nutrients, thereby releasing chemical energy in order to produce adenosine triphosphate (ATP). In eukaryotes, this takes place inside mitochondria. Almost all aerobic organisms carry out oxidative phosphorylation. This pathway is so pervasive because it releases more energy than alternative fermentation processes such as anaerobic glycolysis. The energy stored in the chemical bonds of glucose is released by the cell in the citric acid cycle, producing carbon dioxide and the energetic electron donors NADH and FADH2. Oxidative phosphorylation uses these molecules and O2 to produce ATP, which is used throughout the cell whenever energy is needed. During oxidative phosphorylation, electrons are transferred from the electron donors to a series of electron acceptors in a series of redox reactions ending in oxygen, whose reaction releases half of the total energy. In eukaryotes, these redox reactions are catalyzed by a series of protein complexes within the inner membrane of the cell's mitochondria, whereas, in prokaryotes, these proteins are located in the cell's plasma membrane. These linked sets of proteins are called the electron transport chain. In eukaryotes, five main protein complexes are involved, whereas in prokaryotes many different enzymes are present, using a variety of electron donors and acceptors. The energy transferred by electrons flowing through this electron transport chain is used to transport protons across the inner mitochondrial membrane, in a process known as proton pumping. This generates potential energy in the form of a pH gradient and the resulting electrical potential across this membrane. This store of energy is tapped when protons flow back across the membrane and down the potential energy gradient, through a large enzyme called ATP synthase, in a process called chemiosmosis. The ATP synthase uses the energy to transform adenosine diphosphate (ADP) into adenosine triphosphate, in a phosphorylation reaction. The reaction is driven by the proton flow, which forces the rotation of a part of the enzyme; the ATP synthase is thus a rotary mechanical motor. Although oxidative phosphorylation is a vital part of metabolism, it produces reactive oxygen species such as superoxide and hydrogen peroxide, which lead to the propagation of free radicals, damaging cells and contributing to disease and, possibly, aging and senescence. The enzymes carrying out this metabolic pathway are also the target of many drugs and poisons that inhibit their activities.
Chemiosmosis
Oxidative phosphorylation works by using energy-releasing chemical reactions to drive energy-requiring reactions. The two sets of reactions are said to be coupled: one cannot occur without the other. The chain of redox reactions driving the flow of electrons through the electron transport chain, from electron donors such as NADH to electron acceptors such as oxygen and hydrogen (protons), is an exergonic process – it releases energy, whereas the synthesis of ATP is an endergonic process, which requires an input of energy. Both the electron transport chain and the ATP synthase are embedded in a membrane, and energy is transferred from the electron transport chain to the ATP synthase by movements of protons across this membrane, in a process called chemiosmosis.
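To make this coupling quantitative, a short numerical sketch follows. The standard redox potentials and the ATP phosphorylation energy used here are common textbook values, assumed for illustration rather than taken from the text above.

```python
# Minimal sketch of why NADH oxidation can drive ATP synthesis.
# E0' values and the ATP figure are standard textbook numbers,
# used here only for illustration.

F = 96.485            # Faraday constant, kJ per volt per mol of electrons
n = 2                 # electrons transferred from NADH to 1/2 O2

E0_O2 = +0.82         # V, 1/2 O2 / H2O couple at pH 7
E0_NADH = -0.32       # V, NAD+ / NADH couple at pH 7

delta_E = E0_O2 - E0_NADH          # +1.14 V overall
delta_G = -n * F * delta_E         # kJ/mol, negative = exergonic

atp_cost = 30.5                    # kJ/mol to phosphorylate ADP -> ATP (standard)

print(f"dG for NADH -> O2: {delta_G:.0f} kJ/mol")
print(f"Enough, in principle, for ~{abs(delta_G) / atp_cost:.0f} ATP per NADH "
      "under standard conditions (far fewer are made in practice).")
```

The gap between the energy released by the exergonic redox chain and the cost of the endergonic phosphorylation is what the proton circuit described below transmits across the membrane.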
A current of protons is driven from the negative N-side of the membrane to the positive P-side through the proton-pumping enzymes of the electron transport chain. The movement of protons creates an electrochemical gradient across the membrane, is called the proton-motive force. It has two components: a difference in proton concentration (a H+ gradient, ΔpH) and a difference in electric potential, with the N-side having a negative charge. ATP synthase releases this stored energy by completing the circuit and allowing protons to flow down the electrochemical gradient, back to the N-side of the membrane. The electrochemical gradient drives the rotation of part of the enzyme's structure and couples this motion to the synthesis of ATP. The two components of the proton-motive force are thermodynamically equivalent: In mitochondria, the largest part of energy is provided by the potential; in alkaliphile bacteria the electrical energy even has to compensate for a counteracting inverse pH difference. Inversely, chloroplasts operate mainly on ΔpH. However, they also require a small membrane potential for the kinetics of ATP synthesis. In the case of the fusobacterium Propionigenium modestum it drives the counter-rotation of subunits a and c of the FO motor of ATP synthase. The amount of energy released by oxidative phosphorylation is high, compared with the amount produced by anaerobic fermentation. Glycolysis produces only 2 ATP molecules, but somewhere between 30 and 36 ATPs are produced by the oxidative phosphorylation of the 10 NADH and 2 succinate molecules made by converting one molecule of glucose to carbon dioxide and water, while each cycle of beta oxidation of a fatty acid yields about 14 ATPs. These ATP yields are theoretical maximum values; in practice, some protons leak across the membrane, lowering the yield of ATP. Electron and proton transfer molecules The electron transport chain carries both protons and electrons, passing electrons from donors to acceptors, and transporting protons across a membrane. These processes use both soluble and protein-bound transfer molecules. In the mitochondria, electrons are transferred within the intermembrane space by the water-soluble electron transfer protein cytochrome c. This carries only electrons, and these are transferred by the reduction and oxidation of an iron atom that the protein holds within a heme group in its structure. Cytochrome c is also found in some bacteria, where it is located within the periplasmic space. Within the inner mitochondrial membrane, the lipid-soluble electron carrier coenzyme Q10 (Q) carries both electrons and protons by a redox cycle. This small benzoquinone molecule is very hydrophobic, so it diffuses freely within the membrane. When Q accepts two electrons and two protons, it becomes reduced to the ubiquinol form (QH2); when QH2 releases two electrons and two protons, it becomes oxidized back to the ubiquinone (Q) form. As a result, if two enzymes are arranged so that Q is reduced on one side of the membrane and QH2 oxidized on the other, ubiquinone will couple these reactions and shuttle protons across the membrane. Some bacterial electron transport chains use different quinones, such as menaquinone, in addition to ubiquinone. Within proteins, electrons are transferred between flavin cofactors, iron–sulfur clusters and cytochromes. There are several types of iron–sulfur cluster. 
The simplest kind found in the electron transfer chain consists of two iron atoms joined by two atoms of inorganic sulfur; these are called [2Fe–2S] clusters. The second kind, called [4Fe–4S], contains a cube of four iron atoms and four sulfur atoms. Each iron atom in these clusters is coordinated by an additional amino acid, usually by the sulfur atom of cysteine. Metal ion cofactors undergo redox reactions without binding or releasing protons, so in the electron transport chain they serve solely to transport electrons through proteins. Electrons move quite long distances through proteins by hopping along chains of these cofactors. This occurs by quantum tunnelling, which is rapid over distances of less than 1.4 nm.
Eukaryotic electron transport chains
Many catabolic biochemical processes, such as glycolysis, the citric acid cycle, and beta oxidation, produce the reduced coenzyme NADH. This coenzyme contains electrons that have a high transfer potential; in other words, they will release a large amount of energy upon oxidation. However, the cell does not release this energy all at once, as this would be an uncontrollable reaction. Instead, the electrons are removed from NADH and passed to oxygen through a series of enzymes that each release a small amount of the energy. This set of enzymes, consisting of complexes I through IV, is called the electron transport chain and is found in the inner membrane of the mitochondrion. Succinate is also oxidized by the electron transport chain, but feeds into the pathway at a different point. In eukaryotes, the enzymes in this electron transport system use the energy released by the oxidation of NADH with O2 to pump protons across the inner membrane of the mitochondrion. This causes protons to build up in the intermembrane space, and generates an electrochemical gradient across the membrane. The energy stored in this potential is then used by ATP synthase to produce ATP. Oxidative phosphorylation in the eukaryotic mitochondrion is the best-understood example of this process. The mitochondrion is present in almost all eukaryotes, with the exception of anaerobic protozoa such as Trichomonas vaginalis that instead reduce protons to hydrogen in a remnant mitochondrion called a hydrogenosome.
NADH-coenzyme Q oxidoreductase (complex I)
NADH-coenzyme Q oxidoreductase, also known as NADH dehydrogenase or complex I, is the first protein in the electron transport chain. Complex I is a giant enzyme; the mammalian complex has 46 subunits and a molecular mass of about 1,000 kilodaltons (kDa). The structure is known in detail only from a bacterium; in most organisms the complex resembles a boot with a large "ball" poking out from the membrane into the mitochondrion. The genes that encode the individual proteins are contained in both the cell nucleus and the mitochondrial genome, as is the case for many enzymes present in the mitochondrion. The reaction that is catalyzed by this enzyme is the two-electron oxidation of NADH by coenzyme Q10 or ubiquinone (represented as Q in the equation below), a lipid-soluble quinone that is found in the mitochondrion membrane:
NADH + Q + 5 H+ (matrix) → NAD+ + QH2 + 4 H+ (intermembrane space)
The start of the reaction, and indeed of the entire electron chain, is the binding of a NADH molecule to complex I and the donation of two electrons. The electrons enter complex I via a prosthetic group attached to the complex, flavin mononucleotide (FMN). The addition of electrons to FMN converts it to its reduced form, FMNH2.
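Electron transfer between cofactors such as FMN and the iron–sulfur clusters falls off steeply with distance. A rough feel for this can be had from the empirical Moser–Dutton estimate, log10 k ≈ 15 − 0.6·R (R in ångströms, for activationless transfer); the sketch below is illustrative only, and the distances chosen are assumptions rather than measured values from this article.

```python
# Illustrative sketch of how electron-tunnelling rate falls with distance,
# using the empirical Moser-Dutton estimate for activationless transfer:
#   log10(k) ~ 15 - 0.6 * R   (k in s^-1, R = edge-to-edge distance in angstroms)
# The distances below are arbitrary examples, not measured values.

def tunnelling_rate(distance_angstrom: float) -> float:
    """Approximate electron-transfer rate (s^-1) at a given cofactor spacing."""
    return 10 ** (15 - 0.6 * distance_angstrom)

for r in (7, 10, 14, 20):
    print(f"R = {r:2d} A  ->  k ~ {tunnelling_rate(r):.1e} s^-1")

# At ~14 A (1.4 nm) the estimated rate is still millions of events per second,
# which is why chains of cofactors spaced below this distance can keep pace
# with catalysis; beyond ~20 A transfer becomes orders of magnitude slower.
```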
The electrons are then transferred through a series of iron–sulfur clusters: the second kind of prosthetic group present in the complex. There are both [2Fe–2S] and [4Fe–4S] iron–sulfur clusters in complex I. As the electrons pass through this complex, four protons are pumped from the matrix into the intermembrane space. Exactly how this occurs is unclear, but it seems to involve conformational changes in complex I that cause the protein to bind protons on the N-side of the membrane and release them on the P-side of the membrane. Finally, the electrons are transferred from the chain of iron–sulfur clusters to a ubiquinone molecule in the membrane. Reduction of ubiquinone also contributes to the generation of a proton gradient, as two protons are taken up from the matrix as it is reduced to ubiquinol (QH2). Succinate-Q oxidoreductase (complex II) Succinate-Q oxidoreductase, also known as complex II or succinate dehydrogenase, is a second entry point to the electron transport chain. It is unusual because it is the only enzyme that is part of both the citric acid cycle and the electron transport chain. Complex II consists of four protein subunits and contains a bound flavin adenine dinucleotide (FAD) cofactor, iron–sulfur clusters, and a heme group that does not participate in electron transfer to coenzyme Q, but is believed to be important in decreasing production of reactive oxygen species. It oxidizes succinate to fumarate and reduces ubiquinone. As this reaction releases less energy than the oxidation of NADH, complex II does not transport protons across the membrane and does not contribute to the proton gradient. In some eukaryotes, such as the parasitic worm Ascaris suum, an enzyme similar to complex II, fumarate reductase (menaquinol:fumarate oxidoreductase, or QFR), operates in reverse to oxidize ubiquinol and reduce fumarate. This allows the worm to survive in the anaerobic environment of the large intestine, carrying out anaerobic oxidative phosphorylation with fumarate as the electron acceptor. Another unconventional function of complex II is seen in the malaria parasite Plasmodium falciparum. Here, the reversed action of complex II as an oxidase is important in regenerating ubiquinol, which the parasite uses in an unusual form of pyrimidine biosynthesis. Electron transfer flavoprotein-Q oxidoreductase Electron transfer flavoprotein-ubiquinone oxidoreductase (ETF-Q oxidoreductase), also known as electron transferring-flavoprotein dehydrogenase, is a third entry point to the electron transport chain. It is an enzyme that accepts electrons from electron-transferring flavoprotein in the mitochondrial matrix, and uses these electrons to reduce ubiquinone. This enzyme contains a flavin and a [4Fe–4S] cluster, but, unlike the other respiratory complexes, it attaches to the surface of the membrane and does not cross the lipid bilayer. In mammals, this metabolic pathway is important in beta oxidation of fatty acids and catabolism of amino acids and choline, as it accepts electrons from multiple acetyl-CoA dehydrogenases. In plants, ETF-Q oxidoreductase is also important in the metabolic responses that allow survival in extended periods of darkness. Q-cytochrome c oxidoreductase (complex III) Q-cytochrome c oxidoreductase is also known as cytochrome c reductase, cytochrome bc1 complex, or simply complex III. 
In mammals, this enzyme is a dimer, with each subunit complex containing 11 protein subunits, an [2Fe-2S] iron–sulfur cluster and three cytochromes: one cytochrome c1 and two b cytochromes. A cytochrome is a kind of electron-transferring protein that contains at least one heme group. The iron atoms inside complex III's heme groups alternate between a reduced ferrous (+2) and oxidized ferric (+3) state as the electrons are transferred through the protein. The reaction catalyzed by complex III is the oxidation of one molecule of ubiquinol and the reduction of two molecules of cytochrome c, a heme protein loosely associated with the mitochondrion. Unlike coenzyme Q, which carries two electrons, cytochrome c carries only one electron. As only one of the electrons can be transferred from the QH2 donor to a cytochrome c acceptor at a time, the reaction mechanism of complex III is more elaborate than those of the other respiratory complexes, and occurs in two steps called the Q cycle. In the first step, the enzyme binds three substrates, first, QH2, which is then oxidized, with one electron being passed to the second substrate, cytochrome c. The two protons released from QH2 pass into the intermembrane space. The third substrate is Q, which accepts the second electron from the QH2 and is reduced to Q.−, which is the ubisemiquinone free radical. The first two substrates are released, but this ubisemiquinone intermediate remains bound. In the second step, a second molecule of QH2 is bound and again passes its first electron to a cytochrome c acceptor. The second electron is passed to the bound ubisemiquinone, reducing it to QH2 as it gains two protons from the mitochondrial matrix. This QH2 is then released from the enzyme. As coenzyme Q is reduced to ubiquinol on the inner side of the membrane and oxidized to ubiquinone on the other, a net transfer of protons across the membrane occurs, adding to the proton gradient. The rather complex two-step mechanism by which this occurs is important, as it increases the efficiency of proton transfer. If, instead of the Q cycle, one molecule of QH2 were used to directly reduce two molecules of cytochrome c, the efficiency would be halved, with only one proton transferred per cytochrome c reduced. Cytochrome c oxidase (complex IV) Cytochrome c oxidase, also known as complex IV, is the final protein complex in the electron transport chain. The mammalian enzyme has an extremely complicated structure and contains 13 subunits, two heme groups, as well as multiple metal ion cofactors – in all, three atoms of copper, one of magnesium and one of zinc. This enzyme mediates the final reaction in the electron transport chain and transfers electrons to oxygen and hydrogen (protons), while pumping protons across the membrane. The final electron acceptor oxygen is reduced to water in this step. Both the direct pumping of protons and the consumption of matrix protons in the reduction of oxygen contribute to the proton gradient. The reaction catalyzed is the oxidation of cytochrome c and the reduction of oxygen: Alternative reductases and oxidases Many eukaryotic organisms have electron transport chains that differ from the much-studied mammalian enzymes described above. For example, plants have alternative NADH oxidases, which oxidize NADH in the cytosol rather than in the mitochondrial matrix, and pass these electrons to the ubiquinone pool. 
These enzymes do not transport protons, and, therefore, reduce ubiquinone without altering the electrochemical gradient across the inner membrane. Another example of a divergent electron transport chain is the alternative oxidase, which is found in plants, as well as some fungi, protists, and possibly some animals. This enzyme transfers electrons directly from ubiquinol to oxygen. The electron transport pathways produced by these alternative NADH and ubiquinone oxidases have lower ATP yields than the full pathway. The advantages produced by a shortened pathway are not entirely clear. However, the alternative oxidase is produced in response to stresses such as cold, reactive oxygen species, and infection by pathogens, as well as other factors that inhibit the full electron transport chain. Alternative pathways might, therefore, enhance an organism's resistance to injury, by reducing oxidative stress. Organization of complexes The original model for how the respiratory chain complexes are organized was that they diffuse freely and independently in the mitochondrial membrane. However, recent data suggest that the complexes might form higher-order structures called supercomplexes or "respirasomes". In this model, the various complexes exist as organized sets of interacting enzymes. These associations might allow channeling of substrates between the various enzyme complexes, increasing the rate and efficiency of electron transfer. Within such mammalian supercomplexes, some components would be present in higher amounts than others, with some data suggesting a ratio between complexes I/II/III/IV and the ATP synthase of approximately 1:1:3:7:4. However, the debate over this supercomplex hypothesis is not completely resolved, as some data do not appear to fit with this model. Prokaryotic electron transport chains In contrast to the general similarity in structure and function of the electron transport chains in eukaryotes, bacteria and archaea possess a large variety of electron-transfer enzymes. These use an equally wide set of chemicals as substrates. In common with eukaryotes, prokaryotic electron transport uses the energy released from the oxidation of a substrate to pump ions across a membrane and generate an electrochemical gradient. In the bacteria, oxidative phosphorylation in Escherichia coli is understood in most detail, while archaeal systems are at present poorly understood. The main difference between eukaryotic and prokaryotic oxidative phosphorylation is that bacteria and archaea use many different substances to donate or accept electrons. This allows prokaryotes to grow under a wide variety of environmental conditions. In E. coli, for example, oxidative phosphorylation can be driven by a large number of pairs of reducing agents and oxidizing agents, which are listed below. The midpoint potential of a chemical measures how much energy is released when it is oxidized or reduced, with reducing agents having negative potentials and oxidizing agents positive potentials. As shown above, E. coli can grow with reducing agents such as formate, hydrogen, or lactate as electron donors, and nitrate, DMSO, or oxygen as acceptors. The larger the difference in midpoint potential between an oxidizing and reducing agent, the more energy is released when they react. Out of these compounds, the succinate/fumarate pair is unusual, as its midpoint potential is close to zero. 
Succinate can therefore be oxidized to fumarate if a strong oxidizing agent such as oxygen is available, or fumarate can be reduced to succinate using a strong reducing agent such as formate. These alternative reactions are catalyzed by succinate dehydrogenase and fumarate reductase, respectively. Some prokaryotes use redox pairs that have only a small difference in midpoint potential. For example, nitrifying bacteria such as Nitrobacter oxidize nitrite to nitrate, donating the electrons to oxygen. The small amount of energy released in this reaction is enough to pump protons and generate ATP, but not enough to produce NADH or NADPH directly for use in anabolism. This problem is solved by using a nitrite oxidoreductase to produce enough proton-motive force to run part of the electron transport chain in reverse, causing complex I to generate NADH. Prokaryotes control their use of these electron donors and acceptors by varying which enzymes are produced, in response to environmental conditions. This flexibility is possible because different oxidases and reductases use the same ubiquinone pool. This allows many combinations of enzymes to function together, linked by the common ubiquinol intermediate. These respiratory chains therefore have a modular design, with easily interchangeable sets of enzyme systems. In addition to this metabolic diversity, prokaryotes also possess a range of isozymes – different enzymes that catalyze the same reaction. For example, in E. coli, there are two different types of ubiquinol oxidase using oxygen as an electron acceptor. Under highly aerobic conditions, the cell uses an oxidase with a low affinity for oxygen that can transport two protons per electron. However, if levels of oxygen fall, they switch to an oxidase that transfers only one proton per electron, but has a high affinity for oxygen. ATP synthase (complex V) ATP synthase, also called complex V, is the final enzyme in the oxidative phosphorylation pathway. This enzyme is found in all forms of life and functions in the same way in both prokaryotes and eukaryotes. The enzyme uses the energy stored in a proton gradient across a membrane to drive the synthesis of ATP from ADP and phosphate (Pi). Estimates of the number of protons required to synthesize one ATP have ranged from three to four, with some suggesting cells can vary this ratio, to suit different conditions. This phosphorylation reaction is an equilibrium, which can be shifted by altering the proton-motive force. In the absence of a proton-motive force, the ATP synthase reaction will run from right to left, hydrolyzing ATP and pumping protons out of the matrix across the membrane. However, when the proton-motive force is high, the reaction is forced to run in the opposite direction; it proceeds from left to right, allowing protons to flow down their concentration gradient and turning ADP into ATP. Indeed, in the closely related vacuolar type H+-ATPases, the hydrolysis reaction is used to acidify cellular compartments, by pumping protons and hydrolysing ATP. ATP synthase is a massive protein complex with a mushroom-like shape. The mammalian enzyme complex contains 16 subunits and has a mass of approximately 600 kilodaltons. The portion embedded within the membrane is called FO and contains a ring of c subunits and the proton channel. The stalk and the ball-shaped headpiece is called F1 and is the site of ATP synthesis. 
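The "three to four protons per ATP" estimate mentioned above follows from the geometry of the enzyme: one full rotation of the c-ring translocates one proton per c subunit and drives the synthesis of three ATP at F1. A minimal sketch follows, assuming c-ring sizes commonly reported in the literature for a few organisms; these particular numbers are assumptions for illustration, not figures from this article.

```python
# H+/ATP follows from ATP synthase geometry: one revolution of the c-ring
# moves one proton per c subunit and drives synthesis of 3 ATP in F1.
# The c-ring sizes below are commonly cited literature values, assumed here
# only for illustration.

c_ring_sizes = {
    "mammalian mitochondria": 8,
    "yeast mitochondria": 10,
    "chloroplasts / some bacteria": 14,
}

ATP_PER_TURN = 3  # three catalytic sites per F1 head

for system, c in c_ring_sizes.items():
    print(f"{system:30s}  c = {c:2d}  ->  ~{c / ATP_PER_TURN:.1f} H+ per ATP")
```

This is one reason the proton cost of ATP is not a single universal number: it varies with the c-ring stoichiometry of the organism in question.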
The ball-shaped complex at the end of the F1 portion contains six proteins of two different kinds (three α subunits and three β subunits), whereas the "stalk" consists of one protein: the γ subunit, with the tip of the stalk extending into the ball of α and β subunits. Both the α and β subunits bind nucleotides, but only the β subunits catalyze the ATP synthesis reaction. Reaching along the side of the F1 portion and back into the membrane is a long rod-like subunit that anchors the α and β subunits into the base of the enzyme. As protons cross the membrane through the channel in the base of ATP synthase, the FO proton-driven motor rotates. Rotation might be caused by changes in the ionization of amino acids in the ring of c subunits causing electrostatic interactions that propel the ring of c subunits past the proton channel. This rotating ring in turn drives the rotation of the central axle (the γ subunit stalk) within the α and β subunits. The α and β subunits are prevented from rotating themselves by the side-arm, which acts as a stator. This movement of the tip of the γ subunit within the ball of α and β subunits provides the energy for the active sites in the β subunits to undergo a cycle of movements that produces and then releases ATP. This ATP synthesis reaction is called the binding change mechanism and involves the active site of a β subunit cycling between three states. In the "open" state, ADP and phosphate enter the active site (shown in brown in the diagram). The protein then closes up around the molecules and binds them loosely – the "loose" state (shown in red). The enzyme then changes shape again and forces these molecules together, with the active site in the resulting "tight" state (shown in pink) binding the newly produced ATP molecule with very high affinity. Finally, the active site cycles back to the open state, releasing ATP and binding more ADP and phosphate, ready for the next cycle. In some bacteria and archaea, ATP synthesis is driven by the movement of sodium ions through the cell membrane, rather than the movement of protons. Archaea such as Methanococcus also contain the A1Ao synthase, a form of the enzyme that contains additional proteins with little similarity in sequence to other bacterial and eukaryotic ATP synthase subunits. It is possible that, in some species, the A1Ao form of the enzyme is a specialized sodium-driven ATP synthase, but this might not be true in all cases. Oxidative phosphorylation - energetics The transport of electrons from redox pair NAD+/ NADH to the final redox pair 1/2 O2/ H2O can be summarized as 1/2 O2 + NADH + H+ → H2O + NAD+ The potential difference between these two redox pairs is 1.14 volt, which is equivalent to -52 kcal/mol or -2600 kJ per 6 mol of O2. When one NADH is oxidized through the electron transfer chain, three ATPs are produced, which is equivalent to 7.3 kcal/mol x 3 = 21.9 kcal/mol. The conservation of the energy can be calculated by the following formula Efficiency = (21.9 x 100%) / 52 = 42% So we can conclude that when NADH is oxidized, about 42% of energy is conserved in the form of three ATPs and the remaining (58%) energy is lost as heat (unless the chemical energy of ATP under physiological conditions was underestimated). Reactive oxygen species Molecular oxygen is a good terminal electron acceptor because it is a strong oxidizing agent. The reduction of oxygen does involve potentially harmful intermediates. 
Although the transfer of four electrons and four protons reduces oxygen to water, which is harmless, transfer of one or two electrons produces superoxide or peroxide anions, which are dangerously reactive. These reactive oxygen species and their reaction products, such as the hydroxyl radical, are very harmful to cells, as they oxidize proteins and cause mutations in DNA. This cellular damage may contribute to disease and is proposed as one cause of aging. The cytochrome c oxidase complex is highly efficient at reducing oxygen to water, and it releases very few partly reduced intermediates; however, small amounts of superoxide anion and peroxide are produced by the electron transport chain. Particularly important is the reduction of coenzyme Q in complex III, as a highly reactive ubisemiquinone free radical is formed as an intermediate in the Q cycle. This unstable species can lead to electron "leakage", when electrons transfer directly to oxygen, forming superoxide. As the production of reactive oxygen species by these proton-pumping complexes is greatest at high membrane potentials, it has been proposed that mitochondria regulate their activity to maintain the membrane potential within a narrow range that balances ATP production against oxidant generation. For instance, oxidants can activate uncoupling proteins that reduce membrane potential. To counteract these reactive oxygen species, cells contain numerous antioxidant systems, including antioxidant vitamins such as vitamin C and vitamin E, and antioxidant enzymes such as superoxide dismutase, catalase, and peroxidases, which detoxify the reactive species, limiting damage to the cell.
Oxidative phosphorylation in hypoxic/anoxic conditions
As oxygen is fundamental for oxidative phosphorylation, a shortage in O2 level can alter ATP production rates. Under anoxic conditions, ATP synthase will commit "cellular treason" and run in reverse, forcing protons from the matrix back into the intermembrane space and using up ATP in the process. The proton-motive force and ATP production can be maintained by intracellular acidosis. Cytosolic protons that have accumulated with ATP hydrolysis and lactic acidosis can freely diffuse across the mitochondrial outer membrane and acidify the intermembrane space, hence directly contributing to the proton-motive force and ATP production.
Inhibitors
There are several well-known drugs and toxins that inhibit oxidative phosphorylation. Although any one of these toxins inhibits only one enzyme in the electron transport chain, inhibition of any step in this process will halt the rest of the process. For example, if oligomycin inhibits ATP synthase, protons cannot pass back into the mitochondrion. As a result, the proton pumps are unable to operate, as the gradient becomes too strong for them to overcome. NADH is then no longer oxidized and the citric acid cycle ceases to operate because the concentration of NAD+ falls below the concentration that these enzymes can use. Many site-specific inhibitors of the electron transport chain have contributed to the present knowledge of mitochondrial respiration. Synthesis of ATP is also dependent on the electron transport chain, so all site-specific inhibitors also inhibit ATP formation. The fish poison rotenone, the barbiturate drug amytal, and the antibiotic piericidin A inhibit complex I, blocking the transfer of electrons from NADH to coenzyme Q. Carbon monoxide, cyanide, hydrogen sulphide and azide effectively inhibit cytochrome oxidase.
Carbon monoxide reacts with the reduced form of the cytochrome while cyanide and azide react with the oxidised form. An antibiotic, antimycin A, and British anti-Lewisite, an antidote used against chemical weapons, are the two important inhibitors of the site between cytochrome B and C1. Not all inhibitors of oxidative phosphorylation are toxins. In brown adipose tissue, regulated proton channels called uncoupling proteins can uncouple respiration from ATP synthesis. This rapid respiration produces heat, and is particularly important as a way of maintaining body temperature for hibernating animals, although these proteins may also have a more general function in cells' responses to stress. History The field of oxidative phosphorylation began with the report in 1906 by Arthur Harden of a vital role for phosphate in cellular fermentation, but initially only sugar phosphates were known to be involved. However, in the early 1940s, the link between the oxidation of sugars and the generation of ATP was firmly established by Herman Kalckar, confirming the central role of ATP in energy transfer that had been proposed by Fritz Albert Lipmann in 1941. Later, in 1949, Morris Friedkin and Albert L. Lehninger proved that the coenzyme NADH linked metabolic pathways such as the citric acid cycle and the synthesis of ATP. The term oxidative phosphorylation was coined by in 1939. For another twenty years, the mechanism by which ATP is generated remained mysterious, with scientists searching for an elusive "high-energy intermediate" that would link oxidation and phosphorylation reactions. This puzzle was solved by Peter D. Mitchell with the publication of the chemiosmotic theory in 1961. At first, this proposal was highly controversial, but it was slowly accepted and Mitchell was awarded a Nobel prize in 1978. Subsequent research concentrated on purifying and characterizing the enzymes involved, with major contributions being made by David E. Green on the complexes of the electron-transport chain, as well as Efraim Racker on the ATP synthase. A critical step towards solving the mechanism of the ATP synthase was provided by Paul D. Boyer, by his development in 1973 of the "binding change" mechanism, followed by his radical proposal of rotational catalysis in 1982. More recent work has included structural studies on the enzymes involved in oxidative phosphorylation by John E. Walker, with Walker and Boyer being awarded a Nobel Prize in 1997. See also Respirometry TIM/TOM Complex Notes References Further reading Introductory Advanced General resources Animated diagrams illustrating oxidative phosphorylation Wiley and Co Concepts in Biochemistry On-line biophysics lectures Antony Crofts, University of Illinois at Urbana–Champaign ATP Synthase Graham Johnson Structural resources PDB molecule of the month: ATP synthase Cytochrome c Cytochrome c oxidase Interactive molecular models at Universidade Fernando Pessoa: NADH dehydrogenase succinate dehydrogenase Coenzyme Q - cytochrome c reductase cytochrome c oxidase Cellular respiration Integral membrane proteins Metabolism Redox
Thermoeconomics
Thermoeconomics, also referred to as biophysical economics, is a school of heterodox economics that applies the laws of statistical mechanics to economic theory. Thermoeconomics can be thought of as the statistical physics of economic value and is a subfield of econophysics. It is the study of the ways and means by which human societies procure and use energy and other biological and physical resources to produce, distribute, consume and exchange goods and services, while generating various types of waste and environmental impacts. Biophysical economics builds on both the social sciences and the natural sciences to overcome some of the most fundamental limitations and blind spots of conventional economics. It makes it possible to understand some key requirements and framework conditions for economic growth, as well as related constraints and boundaries.
Thermodynamics
"Rien ne se perd, rien ne se crée, tout se transforme" ("Nothing is lost, nothing is created, everything is transformed") – Antoine Lavoisier, one of the fathers of chemistry.
Thermoeconomists maintain that human economic systems can be modeled as thermodynamic systems. They argue that economic systems always involve matter, energy, entropy, and information. Based on this premise, theoretical economic analogs of the first and second laws of thermodynamics are developed. The global economy is viewed as an open system. Moreover, many economic activities result in the formation of structures. Thermoeconomics applies the statistical mechanics of non-equilibrium thermodynamics to model these activities. In thermodynamic terminology, human economic activity may be described as a dissipative system, which flourishes by consuming free energy in transformations and exchanges of resources, goods, and services.
Energy Return on Investment
Thermoeconomics is based on the proposition that the role of energy in biological evolution should be defined and understood not through the second law of thermodynamics but in terms of such economic criteria as productivity, efficiency, and especially the costs and benefits (or profitability) of the various mechanisms for capturing and utilizing available energy to build biomass and do work.
Peak oil
Political Implications
"[T]he escalation of social protest and political instability around the world is causally related to the unstoppable thermodynamics of global hydrocarbon energy decline and its interconnected environmental and economic consequences."
Energy Backed Credit
Under this analysis, a reduction of GDP in advanced economies is now likely, as consumption can no longer be expanded by adding credit and as energy and resources shift towards lower quality and higher cost. The 20th century experienced increasing energy quality and decreasing energy prices; the 21st century will be a story of decreasing energy quality and increasing energy cost.
See also
Econophysics
Ecodynamics
Kinetic exchange models of markets
Systems ecology
Ecological economics
Nicholas Georgescu-Roegen
Energy quality
Limits to growth
Myron Tribus
References
Further reading
Chen, Jing (2015). The Unity of Science and Economics: A New Foundation of Economic Theory. Springer.
Charles A.S. Hall, Kent Klitgaard (2018). Energy and the Wealth of Nations: An Introduction to Biophysical Economics. Springer.
Jean-Marc Jancovici, Christopher Blain (2020). World Without End. Europe Comics.
N.J. Hagens (2019). Economics for the future – Beyond the superorganism. Science Direct.
Nafeez Ahmed (2017). Failing States, Collapsing Systems: BioPhysical Triggers of Political Violence. Springer Briefs in Energy.
Smil, Vaclav (2018). Energy and Civilization: A History. MIT Press.
External links
Yuri Yegorov, Econo-physics: A Perspective of Matching Two Sciences, Evol. Inst. Econ. Rev. 4(1): 143–170 (2007)
Borisas Cimbleris (1998): Economy and Thermodynamics
Schwartzman, David (2007). "The Limits to Entropy: the Continuing Misuse of Thermodynamics in Environmental and Marxist theory", In Press, Science & Society.
Saslow, Wayne M. (1999). "An Economic Analogy to Thermodynamics", American Association of Physics Teachers.
Biophysical Economics Institute
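As a brief numerical illustration of the energy-return-on-investment (EROI) idea discussed above: the share of gross energy output left over for the rest of the economy falls off sharply as EROI declines. The EROI values in this sketch are arbitrary examples chosen to show the shape of the relationship, not figures from the text.

```python
# Minimal sketch of the "net energy cliff" implied by declining EROI.
# EROI = energy delivered / energy invested in obtaining it.
# The EROI values below are arbitrary illustrative examples.

def net_energy_fraction(eroi: float) -> float:
    """Share of gross energy output not consumed in obtaining the energy."""
    return 1.0 - 1.0 / eroi

for eroi in (50, 20, 10, 5, 3, 2):
    print(f"EROI {eroi:2d}:1  ->  {net_energy_fraction(eroi):.0%} of output is net energy")
```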
Marine biology
Marine biology is the scientific study of the biology of marine life, organisms that inhabit the sea. Given that in biology many phyla, families and genera have some species that live in the sea and others that live on land, marine biology classifies species based on the environment rather than on taxonomy. A large proportion of all life on Earth lives in the ocean. The exact size of this "large proportion" is unknown, since many ocean species are still to be discovered. The ocean is a complex three-dimensional world, covering approximately 71% of the Earth's surface. The habitats studied in marine biology include everything from the tiny layers of surface water in which organisms and abiotic items may be trapped in surface tension between the ocean and atmosphere, to the depths of the oceanic trenches, sometimes 10,000 meters or more beneath the surface of the ocean. Specific habitats include estuaries, coral reefs, kelp forests, seagrass meadows, the surrounds of seamounts and thermal vents, tidepools, muddy, sandy and rocky bottoms, and the open ocean (pelagic) zone, where solid objects are rare and the surface of the water is the only visible boundary. The organisms studied range from microscopic phytoplankton and zooplankton to huge cetaceans (whales). Marine ecology is the study of how marine organisms interact with each other and the environment. Marine life is a vast resource, providing food, medicine, and raw materials, in addition to helping to support recreation and tourism all over the world. At a fundamental level, marine life helps determine the very nature of our planet. Marine organisms contribute significantly to the oxygen cycle, and are involved in the regulation of the Earth's climate. Shorelines are in part shaped and protected by marine life, and some marine organisms even help create new land. Many species are economically important to humans, including both finfish and shellfish. It is also becoming understood that the well-being of marine organisms and that of other organisms are linked in fundamental ways. The human body of knowledge regarding the relationship between life in the sea and important cycles is rapidly growing, with new discoveries being made nearly every day. These cycles include those of matter (such as the carbon cycle) and of air (such as Earth's respiration, and the movement of energy through ecosystems including the ocean). Large areas beneath the ocean surface still remain effectively unexplored.
Biological oceanography
Marine biology can be contrasted with biological oceanography. Marine life is a field of study both in marine biology and in biological oceanography. Biological oceanography is the study of how organisms affect and are affected by the physics, chemistry, and geology of the oceanographic system. Biological oceanography mostly focuses on the microorganisms within the ocean, looking at how they are affected by their environment and how that affects larger marine creatures and their ecosystem. Biological oceanography is similar to marine biology, but it studies ocean life from a different perspective. Biological oceanography takes a bottom-up approach in terms of the food web, while marine biology studies the ocean from a top-down perspective.
Biological oceanography mainly focuses on the ecosystem of the ocean with an emphasis on plankton: their diversity (morphology, nutritional sources, motility, and metabolism); their productivity and how that plays a role in the global carbon cycle; and their distribution (predation and life cycle). Biological oceanography also investigates the role of microbes in food webs, and how humans impact the ecosystems in the oceans. Marine habitats Marine habitats can be divided into coastal and open ocean habitats. Coastal habitats are found in the area that extends from the shoreline to the edge of the continental shelf. Most marine life is found in coastal habitats, even though the shelf area occupies only seven percent of the total ocean area. Open ocean habitats are found in the deep ocean beyond the edge of the continental shelf. Alternatively, marine habitats can be divided into pelagic and demersal habitats. Pelagic habitats are found near the surface or in the open water column, away from the bottom of the ocean and affected by ocean currents, while demersal habitats are near or on the bottom. Marine habitats can be modified by their inhabitants. Some marine organisms, like corals, kelp and sea grasses, are ecosystem engineers which reshape the marine environment to the point where they create further habitat for other organisms. Intertidal and near shore Intertidal zones, the areas that are close to the shore, are constantly being exposed and covered by the ocean's tides. A huge array of life can be found within this zone. Shore habitats span from the upper intertidal zones to the area where land vegetation takes prominence. It can be underwater anywhere from daily to very infrequently. Many species here are scavengers, living off of sea life that is washed up on the shore. Many land animals also make much use of the shore and intertidal habitats. A subgroup of organisms in this habitat bores and grinds exposed rock through the process of bioerosion. Estuaries Estuaries are also near shore and influenced by the tides. An estuary is a partially enclosed coastal body of water with one or more rivers or streams flowing into it and with a free connection to the open sea. Estuaries form a transition zone between freshwater river environments and saltwater maritime environments. They are subject both to marine influences—such as tides, waves, and the influx of saline water—and to riverine influences—such as flows of fresh water and sediment. The shifting flows of both sea water and fresh water provide high levels of nutrients both in the water column and in sediment, making estuaries among the most productive natural habitats in the world. Reefs Reefs comprise some of the densest and most diverse habitats in the world. The best-known types of reefs are tropical coral reefs which exist in most tropical waters; however, reefs can also exist in cold water. Reefs are built up by corals and other calcium-depositing animals, usually on top of a rocky outcrop on the ocean floor. Reefs can also grow on other surfaces, which has made it possible to create artificial reefs. Coral reefs also support a huge community of life, including the corals themselves, their symbiotic zooxanthellae, tropical fish and many other organisms. Much attention in marine biology is focused on coral reefs and the El Niño weather phenomenon. 
In 1998, coral reefs experienced the most severe mass bleaching events on record, when vast expanses of reefs across the world died because sea surface temperatures rose well above normal. Some reefs are recovering, but scientists say that between 50% and 70% of the world's coral reefs are now endangered and predict that global warming could exacerbate this trend.
Open ocean
The open ocean is relatively unproductive because of a lack of nutrients, yet because it is so vast, in total it produces the most primary productivity. The open ocean is separated into different zones, and the different zones each have different ecologies. Zones which vary according to their depth include the epipelagic, mesopelagic, bathypelagic, abyssopelagic, and hadopelagic zones. Zones which vary by the amount of light they receive include the photic and aphotic zones. Much of the aphotic zone's energy is supplied by the open ocean in the form of detritus.
Deep sea and trenches
The deepest oceanic trench recorded to date is the Mariana Trench, near the Philippines, in the Pacific Ocean. At such depths, water pressure is extreme and there is no sunlight, but some life still exists. A white flatfish, a shrimp and a jellyfish were seen by the American crew of the bathyscaphe Trieste when it dove to the bottom in 1960. In general, the deep sea is considered to start at the aphotic zone, the point where sunlight loses its power of transference through the water. Many life forms that live at these depths have the ability to create their own light, known as bioluminescence. Marine life also flourishes around seamounts that rise from the depths, where fish and other sea life congregate to spawn and feed. Hydrothermal vents along the mid-ocean ridge spreading centers act as oases, as do their opposites, cold seeps. Such places support unique biomes, and many new microbes and other lifeforms have been discovered at these locations. There is still much more to learn about the deeper parts of the ocean.
Marine life
In biology, many phyla, families and genera have some species that live in the sea and others that live on land. Marine biology classifies species based on their environment rather than their taxonomy. For this reason, marine biology encompasses not only organisms that live only in a marine environment, but also other organisms whose lives revolve around the sea.
Microscopic life
As inhabitants of the largest environment on Earth, microbial marine systems drive changes in every global system. Microbes are responsible for virtually all photosynthesis that occurs in the ocean, as well as the cycling of carbon, nitrogen, phosphorus and other nutrients and trace elements. Microscopic life undersea is incredibly diverse and still poorly understood. For example, the role of viruses in marine ecosystems was barely being explored even at the beginning of the 21st century. The role of phytoplankton is better understood due to their critical position as the most numerous primary producers on Earth. Phytoplankton are categorized into cyanobacteria (also called blue-green algae/bacteria), various types of algae (red, green, brown, and yellow-green), diatoms, dinoflagellates, euglenoids, coccolithophorids, cryptomonads, chrysophytes, chlorophytes, prasinophytes, and silicoflagellates. Zooplankton tend to be somewhat larger, and not all are microscopic. Many Protozoa are zooplankton, including dinoflagellates, zooflagellates, foraminiferans, and radiolarians.
Some of these (such as dinoflagellates) are also phytoplankton; the distinction between plants and animals often breaks down in very small organisms. Other zooplankton include cnidarians, ctenophores, chaetognaths, molluscs, arthropods, urochordates, and annelids such as polychaetes. Many larger animals begin their life as zooplankton before they become large enough to take their familiar forms. Two examples are fish larvae and sea stars (also called starfish). Plants and algae Microscopic algae and plants provide important habitats for life, sometimes acting as hiding places for larval forms of larger fish and foraging places for invertebrates. Algal life is widespread and very diverse under the ocean. Microscopic photosynthetic algae contribute a larger proportion of the world's photosynthetic output than all the terrestrial forests combined. Most of the niche occupied by sub plants on land is actually occupied by macroscopic algae in the ocean, such as Sargassum and kelp, which are commonly known as seaweeds that create kelp forests. Plants that survive in the sea are often found in shallow waters, such as the seagrasses (examples of which are eelgrass, Zostera, and turtle grass, Thalassia). These plants have adapted to the high salinity of the ocean environment. The intertidal zone is also a good place to find plant life in the sea, where mangroves or cordgrass or beach grass might grow. Invertebrates As on land, invertebrates, or animals that lack a backbone, make up a huge portion of all life in the sea. Invertebrate sea life includes Cnidaria such as jellyfish and sea anemones; Ctenophora; sea worms including the phyla Platyhelminthes, Nemertea, Annelida, Sipuncula, Echiura, Chaetognatha, and Phoronida; Mollusca including shellfish, squid, octopus; Arthropoda including Chelicerata and Crustacea; Porifera; Bryozoa; Echinodermata including starfish; and Urochordata including sea squirts or tunicates. Fungi Over 10,000 species of fungi are known from marine environments. These are parasitic on marine algae or animals, or are saprobes on algae, corals, protozoan cysts, sea grasses, wood and other substrata, and can also be found in sea foam. Spores of many species have special appendages which facilitate attachment to the substratum. A very diverse range of unusual secondary metabolites is produced by marine fungi. Vertebrates Fish A reported 33,400 species of fish, including bony and cartilaginous fish, had been described by 2016, more than all other vertebrates combined. About 60% of fish species live in saltwater. Reptiles Reptiles which inhabit or frequent the sea include sea turtles, sea snakes, terrapins, the marine iguana, and the saltwater crocodile. Most extant marine reptiles, except for some sea snakes, are oviparous and need to return to land to lay their eggs. Thus most species, excluding sea turtles, spend most of their lives on or near land rather than in the ocean. Despite their marine adaptations, most sea snakes prefer shallow waters nearby land, around islands, especially waters that are somewhat sheltered, as well as near estuaries. Some extinct marine reptiles, such as ichthyosaurs, evolved to be viviparous and had no requirement to return to land. Birds Birds adapted to living in the marine environment are often called seabirds. Examples include albatross, penguins, gannets, and auks. Although they spend most of their lives in the ocean, species such as gulls can often be found thousands of miles inland. 
Mammals There are five main types of marine mammals: cetaceans (toothed whales and baleen whales); sirenians such as manatees; pinnipeds including seals and the walrus; sea otters; and the polar bear. All are air-breathing, meaning that while some such as the sperm whale can dive for prolonged periods, all must return to the surface to breathe. Subfields The marine ecosystem is large, and thus there are many sub-fields of marine biology. Most involve studying specializations of particular animal groups, such as phycology, invertebrate zoology and ichthyology. Other subfields study the physical effects of continual immersion in sea water and the ocean in general, adaptation to a salty environment, and the effects of changing various oceanic properties on marine life. A subfield of marine biology studies the relationships between oceans and ocean life, and global warming and environmental issues (such as carbon dioxide displacement). Recent marine biotechnology has focused largely on marine biomolecules, especially proteins, that may have uses in medicine or engineering. Marine environments are the home to many exotic biological materials that may inspire biomimetic materials. Through constant monitoring of the ocean, there have been discoveries of marine life which could be used to create remedies for certain diseases such as cancer and leukemia. In addition, Ziconotide, an approved drug used to treat pain, was created from a snail which resides in the ocean. Related fields Marine biology is a branch of biology. It is closely linked to oceanography, especially biological oceanography, and may be regarded as a sub-field of marine science. It also encompasses many ideas from ecology. Fisheries science and marine conservation can be considered partial offshoots of marine biology (as well as environmental studies). Marine chemistry, physical oceanography and atmospheric sciences are also closely related to this field. Distribution factors An active research topic in marine biology is to discover and map the life cycles of various species and where they spend their time. Technologies that aid in this discovery include pop-up satellite archival tags, acoustic tags, and a variety of other data loggers. Marine biologists study how the ocean currents, tides and many other oceanic factors affect ocean life forms, including their growth, distribution and well-being. This has only recently become technically feasible with advances in GPS and newer underwater visual devices. Most ocean life breeds in specific places, nests in others, spends time as juveniles in still others, and in maturity in yet others. Scientists know little about where many species spend different parts of their life cycles especially in the infant and juvenile years. For example, it is still largely unknown where juvenile sea turtles and some sharks in the first year of their life travel. Recent advances in underwater tracking devices are illuminating what we know about marine organisms that live at great ocean depths. The information that pop-up satellite archival tags gives aids in fishing closures for certain times of the year and the development of marine protected areas. This data is important to both scientists and fishermen because they are discovering that, by restricting commercial fishing in one small area, they can have a large impact in maintaining a healthy fish population in a much larger area. 
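To give a concrete sense of what tag and GPS fixes yield once recovered, the sketch below computes the great-circle displacement between two hypothetical tag positions. Both coordinates are made-up example values, not data from any real tag or study; the haversine formula itself is a standard geodesy approximation.

```python
import math

# Illustrative sketch: the kind of calculation applied to archival-tag or GPS
# fixes when reconstructing an animal's movements. The two positions below are
# made-up example coordinates, not data from any real tag.

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Two hypothetical pop-up tag fixes, assumed to be 30 days apart
start = (35.0, -75.0)
end = (41.0, -66.0)

d = haversine_km(*start, *end)
print(f"Displacement: {d:.0f} km (~{d / 30:.0f} km/day if covered in 30 days)")
```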
History
The study of marine biology dates to Aristotle (384–322 BC), who made many observations of life in the sea around Lesbos, laying the foundation for many future discoveries. In 1768, Samuel Gottlieb Gmelin (1744–1774) published the Historia Fucorum, the first work dedicated to marine algae and the first book on marine biology to use the new binomial nomenclature of Linnaeus. It included elaborate illustrations of seaweed and marine algae on folded leaves. The British naturalist Edward Forbes (1815–1854) is generally regarded as the founder of the science of marine biology. The pace of oceanographic and marine biology studies quickly accelerated during the course of the 19th century. The observations made in the first studies of marine biology fueled the age of exploration and discovery that followed. During this time, a vast amount of knowledge was gained about the life that exists in the oceans of the world. Many voyages contributed significantly to this pool of knowledge. Among the most significant were the voyages of HMS Beagle, on which Charles Darwin developed his theories of evolution and of the formation of coral reefs. Another important expedition was undertaken by HMS Challenger, where findings were made of unexpectedly high species diversity among fauna, stimulating much theorizing by population ecologists on how such varieties of life could be maintained in what was thought to be such a hostile environment. This era was important for the history of marine biology, but naturalists were still limited in their studies because they lacked technology that would allow them to adequately examine species that lived in deep parts of the oceans. The creation of marine laboratories was important because it allowed marine biologists to conduct research and process their specimens from expeditions. Among the oldest marine laboratories in the world is the marine station at Concarneau, France, founded by the Collège de France in 1859; the Station biologique de Roscoff followed in 1872. In the United States, the Scripps Institution of Oceanography dates back to 1903, while the prominent Woods Hole Oceanographic Institution was founded in 1930. The development of technology such as sound navigation and ranging (sonar), scuba diving gear, submersibles and remotely operated vehicles allowed marine biologists to discover and explore life in deep oceans that was once thought not to exist. Public interest in the subject continued to develop in the post-war years with the publication of Rachel Carson's sea trilogy (1941–1955).
See also
Acoustic ecology, Aquaculture, Bathymetry, Biological oceanography, Effects of climate change on oceans, Freshwater biology, Modular ocean model, Oceanic basin, Oceanic climate, Phycology
Lists
Glossary of ecology, Index of biology articles, Large marine ecosystem, List of ecologists, List of marine biologists, List of marine ecoregions (WWF), Outline of biology, Outline of ecology
References
Further references
Morrissey J and Sumich J (2011). Introduction to the Biology of Marine Life. Jones & Bartlett Publishers.
Mladenov, Philip V. (2020). Marine Biology: A Very Short Introduction, 2nd edn. Oxford. http://dx.doi.org/10.1093/actrade/9780198841715.001.0001, accessed 21 Jun. 2020.
External links
Smithsonian Ocean Portal
Marine Conservation Society
Marine Ecology – an evolutionary perspective
Free special issue: Marine Biology in Time and Space
Creatures of the deep ocean – National Geographic documentary, 2010.
Exploris Freshwater and Marine Image Bank – From the University of Washington Library Marine Training Portal – Portal grouping training initiatives in the field of Marine Biology
Environmental issues
Environmental issues are disruptions in the usual function of ecosystems. They can be caused by humans (human impact on the environment) or they can be natural. These issues are considered serious when the ecosystem cannot recover in the present situation, and catastrophic if the ecosystem is projected to collapse with certainty. Environmental protection is the practice of protecting the natural environment at the individual, organizational or governmental level, for the benefit of both the environment and humans. Environmentalism is a social and environmental movement that addresses environmental issues through advocacy, legislation, education, and activism. Environmental destruction caused by humans is a global, ongoing problem, and water pollution also causes problems for marine life. Most scholars think that the projected peak global population of between 9 and 10 billion people could live sustainably within the earth's ecosystems if human society worked to live within planetary boundaries. The bulk of environmental impacts are caused by excessive consumption of industrial goods by the world's wealthiest populations. The UN Environment Programme, in its 2021 report "Making Peace With Nature", found that addressing the key planetary crises of pollution, climate change and biodiversity loss is achievable if parties work towards the Sustainable Development Goals. Types Major current environmental issues include climate change, pollution, environmental degradation, and resource depletion. The conservation movement lobbies for the protection of endangered species and of ecologically valuable natural areas, and campaigns on issues such as genetically modified foods and global warming. The UN system has adopted international frameworks around three key issues, encoded as the "triple planetary crisis": climate change, pollution, and biodiversity loss. Justice The 2023 IPCC report highlighted the disproportionate effects of climate change on vulnerable populations. The report's findings make it clear that every increment of global warming exacerbates challenges such as extreme heatwaves, heavy rainfall, and other weather extremes, which in turn amplify risks for human health and ecosystems. With nearly half of the world's population residing in regions highly susceptible to climate change, the urgency of rapid and sustained global action is underscored. The report highlights the integration of diverse knowledge systems, including scientific, Indigenous, and local knowledge, into climate action as a means to foster inclusive solutions that address the complexities of climate impacts across different communities. In addition, the report points out a critical gap in adaptation finance, noting that developing countries require significantly more resources to adapt effectively to climate challenges than are currently available. This financial disparity raises questions about the global commitment to equitable climate action and underscores the need for a substantial increase in support and resources. The IPCC's analysis suggests that with adequate financial investment and international cooperation, it is possible to embark on a pathway towards resilience and sustainability that benefits all sections of society. Organizations Environmental issues are addressed at a regional, national or international level by government organizations. 
The largest international agency, set up in 1972, is the United Nations Environment Programme. The International Union for Conservation of Nature brings together 83 states, 108 government agencies, 766 Non-governmental organizations and 81 international organizations and about 10,000 experts, scientists from countries around the world. International non-governmental organizations include Greenpeace, Friends of the Earth and World Wide Fund for Nature. Governments enact environmental policy and enforce environmental law and this is done to differing degrees around the world. Film and television There are an increasing number of films being produced on environmental issues, especially on climate change and global warming. Al Gore's 2006 film An Inconvenient Truth gained commercial success and a high media profile. See also Citizen science Ecotax Environmental impact statement Index of environmental articles Triple planetary crisis Issues List of environmental issues (includes mitigation and conservation) Specific issues Environmental impact of agriculture Environmental impact of aviation Environmental impact of reservoirs Environmental impact of the energy industry Environmental impact of fishing Environmental impact of irrigation Environmental impact of mining Environmental impact of paint Environmental impact of paper Environmental impact of pesticides Environmental implications of nanotechnology Environmental impact of shipping Environmental impact of war References Works cited Further reading External links Human impact on the environment
Sustainable design
Environmentally sustainable design (also called environmentally conscious design, eco-design, etc.) is the philosophy of designing physical objects, the built environment, and services to comply with the principles of ecological sustainability and also aimed at improving the health and comfort of occupants in a building. Sustainable design seeks to reduce negative impacts on the environment, the health and well-being of building occupants, thereby improving building performance. The basic objectives of sustainability are to reduce the consumption of non-renewable resources, minimize waste, and create healthy, productive environments. Theory The sustainable design intends to "eliminate negative environmental impact through skillful sensitive design". Manifestations of sustainable design require renewable resources and innovation to impact the environment minimally, and connect people with the natural environment. "Human beings don't have a pollution problem; they have a design problem. If humans were to devise products, tools, furniture, homes, factories, and cities more intelligently from the start, they wouldn't even need to think in terms of waste, contamination, or scarcity. Good design would allow for abundance, endless reuse, and pleasure." - The Upcycle by authors Michael Braungart and William McDonough, 2013. Design-related decisions are happening everywhere daily, impacting "sustainable development" or provisioning for the needs of future generations of life on earth. Sustainability and design are intimately linked. Quite simply, our future is designed. The term "design" is here used to refer to practices applied to the making of products, services, as well as business and innovation strategies — all of which inform sustainability. Sustainability can be thought of as the property of continuance; that is, what is sustainable can be continued. Conceptual problems Diminishing returns The principle that all directions of progress run out, ending with diminishing returns, is evident in the typical 'S' curve of the technology life cycle and in the useful life of any system as discussed in industrial ecology and life cycle assessment. Diminishing returns are the result of reaching natural limits. Common business management practice is to read diminishing returns in any direction of effort as an indication of diminishing opportunity, the potential for accelerating decline, and a signal to seek new opportunities elsewhere. (see also: law of diminishing returns, marginal utility, and Jevons paradox.) Unsustainable investment A problem arises when the limits of a resource are hard to see, so increasing investment in response to diminishing returns may seem profitable as in the Tragedy of the Commons, but may lead to a collapse. This problem of increasing investment in diminishing resources has also been studied as a cause of civilization collapse by Joseph Tainter among others. This natural error in investment policy contributed to the collapse of both the Roman and Mayan, among others. Relieving over-stressed resources requires reducing pressure on them, not continually increasing it whether more efficiently or not. Negative Effects of Waste The designer is responsible for choices that place a demand on natural resources, produce waste, and potentially cause irreversible ecosystem damage. About 80 million tonnes of waste in total are generated in the U.K. alone, for example, each year. 
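To give the national figure just quoted a rough human scale, it can be divided by population. The sketch below assumes a UK population of about 67 million, which is an outside assumption used only for illustration, not a figure from the text.

# Back-of-the-envelope reading of the waste figure quoted above.
# The 80 million tonne total is taken from the text; the UK population
# figure (~67 million) is an assumed approximation for illustration.
total_waste_tonnes = 80e6      # all waste streams, per year (from the text)
uk_population = 67e6           # assumed, approximate

per_person_tonnes = total_waste_tonnes / uk_population
per_person_kg_per_day = per_person_tonnes * 1000 / 365

print(f"~{per_person_tonnes:.2f} tonnes per person per year")
print(f"~{per_person_kg_per_day:.1f} kg per person per day (all waste streams, not just household)")

The household-only figure quoted next is naturally much lower, since the 80-million-tonne total also includes commercial, industrial and construction waste streams.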
And concerning only household waste, between 1991–92 and 2007–08, each person in England generated an average of 1.35 pounds of waste per day. Experience has now shown that there is no completely safe method of waste disposal. All forms of disposal have negative effects on the environment, public innovation, and local economies. Landfills have contaminated drinking water. Garbage burned in incinerators has poisoned air, soil, and water. The majority of water treatment systems change the local ecology. Attempts to control or manage wastes after they are produced fail to eliminate environmental impacts. The toxic components of household products pose serious health risks and aggravate the trash problem. In the U.S., about seven pounds in every ton of household garbage contains toxic materials, such as heavy metals like nickel, lead, cadmium, and mercury from batteries, and organic compounds found in pesticides and consumer products, such as air freshener sprays, nail polish, cleaners, and other products. When burned or buried, toxic materials also pose a serious threat to public health and the environment. The only way to avoid environmental harm from waste is to prevent its generation. Pollution prevention means changing the way activities are conducted and eliminating the source of the problem. It does not mean doing without, but doing differently. For example, preventing waste pollution from litter caused by disposable beverage containers does not mean doing without beverages; it just means using refillable bottles. Industrial designer Victor Papanek has stated that when we design and plan things to be discarded, we exercise insufficient care in design. Waste prevention strategies In planning for facilities, a comprehensive design strategy is needed for preventing the generation of solid waste. A good garbage prevention strategy would require that everything brought into a facility is recycled for reuse or recycled back into the environment through biodegradation. This would mean a greater reliance on natural materials or products that are compatible with the environment. Any resource-related development is going to have two basic sources of solid waste — materials purchased and used by the facility and those brought into the facility by visitors. The following waste prevention strategies apply to both, although different approaches will be needed for implementation. use products that minimize waste and are nontoxic compost or anaerobically digest biodegradable wastes reuse materials onsite or collect suitable materials for offsite recycling consuming fewer resources means creating less waste, therefore it reduces the impact on the environment. Climate change Perhaps the most obvious and overshadowing driver of environmentally conscious sustainable design can be attributed to global warming and climate change. The sense of urgency that now prevails for humanity to take action against climate change has increased manifold in the past thirty years. Climate change can be attributed to several faults, and improper design that doesn't take into consideration the environment is one of them. While several steps in the field of sustainability have begun, most products, industries, and buildings still consume a lot of energy and create a lot of pollution. Loss of Biodiversity Unsustainable design, or simply design, also affects the biodiversity of a region. Improper design of transport highways forces thousands of animals to move further into forest boundaries. 
Poorly designed hydrothermal dams affect the mating cycle and indirectly, the numbers of local fish. Sustainable design principles While the practical application varies among disciplines, some common principles are as follows: Low-impact materials: choose non-toxic, sustainably produced, or recycled materials that require little energy to process Energy efficiency: use manufacturing processes and produce products that require less energy Emotionally durable design: reducing consumption and waste of resources by increasing the durability of relationships between people and products, through design Design for reuse and recycling: "Products, processes, and systems should be designed for performance in a commercial 'afterlife'." Targeted durability, not immortality, should be a design goal. Material diversity in multicomponent products should be minimized to promote disassembly and value retention. Design impact measures for total carbon footprint and life-cycle assessment for any resource used are increasingly required and available.^ Many are complex, but some give quick and accurate whole-earth estimates of impacts. One measure estimates any spending as consuming an average economic share of global energy use of per dollar and producing at the average rate of 0.57 kg of per dollar (1995 dollars US) from DOE figures. Sustainable design standards and project design guides are also increasingly available and are vigorously being developed by a wide array of private organizations and individuals. There is also a large body of new methods emerging from the rapid development of what has become known as 'sustainability science' promoted by a wide variety of educational and governmental institutions. Biomimicry: "redesigning industrial systems on biological lines ... enabling the constant reuse of materials in continuous closed cycles..." Service substitution: shifting the mode of consumption from personal ownership of products to provision of services that provide similar functions, e.g., from a private automobile to a carsharing service. Such a system promotes minimal resource use per unit of consumption (e.g., per trip driven). Renewable resource: materials should come from nearby (local or bioregional), sustainably managed renewable sources that can be composted when their usefulness has been exhausted. Bill of Rights for the Planet A model of the new design principles necessary for sustainability is exemplified by the "Bill of Rights for the Planet" or "Hannover Principles" - developed by William McDonough Architects for EXPO 2000 that was held in Hannover, Germany. The Bill of Rights: Insist on the right of humanity and nature to co-exist in healthy, supportive, diverse, and sustainable conditions. Recognize Interdependence. The elements of human design interact with and depend on the natural world, with broad and diverse implications at every scale. Expand design considerations to recognize even distant effects. Respect relationships between spirit and matter. Consider all aspects of human settlement including community, dwelling, industry, and trade in terms of existing and evolving connections between spiritual and material consciousness. Accept responsibility for the consequences of design decisions upon human well-being, the viability of natural systems, and their right to co-exist. Create safe objects of long-term value. 
Do not burden future generations with requirements for maintenance or vigilant administration of potential danger due to the careless creation of products, processes, or standards. Eliminate the concept of waste. Evaluate and optimize the full life-cycle of products and processes, to approach the state of natural systems in which there is no waste. Rely on natural energy flows. Human designs should, like the living world, derive their creative forces from perpetual solar income. Incorporating this energy efficiently and safely for responsible use. Understand the limitations of design. No human creation lasts forever and design does not solve all problems. Those who create and plan should practice humility in the face of nature. Treat nature as a model and mentor, not an inconvenience to be evaded or controlled. Seek constant improvement by the sharing of knowledge. Encourage direct and open communication between colleagues, patrons, manufacturers, and users to link long-term sustainable considerations with ethical responsibility, and re-establish the integral relationship between natural processes and human activity. These principles were adopted by the World Congress of the International Union of Architects (UIA) in June 1993 at the American Institute of Architects (AIA) Expo 93 in Chicago. Further, the AIA and UIA signed a "Declaration of Interdependence for a Sustainable Future." In summary, the declaration states that today's society is degrading its environment and that the AIA, UIA, and their members are committed to: Placing environmental and social sustainability at the core of practices and professional responsibilities Developing and continually improving practices, procedures, products, services, and standards for sustainable design Educating the building industry, clients, and the general public about the importance of sustainable design Working to change policies, regulations, and standards in government and business so that sustainable design will become the fully supported standard practice Bringing the existing built environment up to sustainable design standards. In addition, the Interprofessional Council on Environmental Design (ICED), a coalition of architectural, landscape architectural, and engineering organizations developed a vision statement in an attempt to foster a team approach to sustainable design. ICED states: The ethics, education, and practices of our professions will be directed to shape a sustainable future. . . . To achieve this vision we will join . . . as a multidisciplinary partnership." These activities are an indication that the concept of sustainable design is being supported on a global and interprofessional scale and that the ultimate goal is to become more environmentally responsive. The world needs facilities that are more energy-efficient and that promote conservation and recycling of natural and economic resources. Economically and socially sustainable design Environmentally sustainable design is most beneficial when it works hand-in-hand with the other two counterparts of sustainable design – the economic and socially sustainable designs. These three terms are often coined under the title "triple bottom line." In addition to financial terms, value can also be measured in relation to natural capital (the biosphere and earth's resources), social capital (the norms and networks that enable collective action), and human capital (the sum total of knowledge, experience, intellectual property, and labor available to society). 
In some countries the term sustainable design is known as ecodesign, green design or environmental design. Victor Papanek, embraced social design and social quality and ecological quality, but did not explicitly combine these areas of design concern in one term. Sustainable design and design for sustainability are more common terms, including the triple bottom line (people, planet and profit). Advocates like Ecothis.EU campaign urge all three considerations be taken into account when designing a circular economy. Aspects of environmentally sustainable design Emotionally durable design According to Jonathan Chapman of Carnegie Mellon University, emotionally durable design reduces the consumption and waste of natural resources by increasing the resilience of relationships established between consumers and products." Essentially, product replacement is delayed by strong emotional ties. In his book, Emotionally Durable Design: Objects, Experiences & Empathy, Chapman describes how "the process of consumption is, and has always been, motivated by complex emotional drivers, and is about far more than just the mindless purchasing of newer and shinier things; it is a journey towards the ideal or desired self, that through cyclical loops of desire and disappointment, becomes a seemingly endless process of serial destruction". Therefore, a product requires an attribute, or number of attributes, which extend beyond utilitarianism. According to Chapman, "emotional durability" can be achieved through consideration of the following five elements: Narrative: How users share a unique personal history with the product. Consciousness: How the product is perceived as autonomous and in possession of its own free will. Attachment: Can a user be made to feel a strong emotional connection to a product? Fiction: The product inspires interactions and connections beyond just the physical relationship. Surface: How the product ages and develops character through time and use. As a strategic approach, "emotionally durable design provides a useful language to describe the contemporary relevance of designing responsible, well made, tactile products which the user can get to know and assign value to in the long-term". According to Hazel Clark and David Brody of Parsons The New School for Design in New York, "emotionally durable design is a call for professionals and students alike to prioritise the relationships between design and its users, as a way of developing more sustainable attitudes to, and in, design things". Beauty and sustainable design Because standards of sustainable design appear to emphasize ethics over aesthetics, some designers and critics have complained that it lacks inspiration. Pritzker Architecture Prize winner Frank Gehry has called green building "bogus", and National Design Awards winner Peter Eisenman has dismissed it as "having nothing to do with architecture". In 2009, The American Prospect asked whether "well-designed green architecture" is an "oxymoron". Others claim that such criticism of sustainable design is misguided. A leading advocate for this alternative view is architect Lance Hosey, whose book The Shape of Green: Aesthetics, Ecology, and Design (2012) was the first dedicated to the relationships between sustainability and beauty. 
Hosey argues not just that sustainable design needs to be aesthetically appealing in order to be successful, but also that following the principles of sustainability to their logical conclusion requires reimagining the shape of everything designed, creating things of even greater beauty. Reviewers have suggested that the ideas in The Shape of Green could "revolutionize what it means to be sustainable". Small and large buildings are beginning to successfully incorporate principles of sustainability into award-winning designs. Examples include One Central Park and the Science Faculty building, UTS. The popular Living Building Challenge has incorporated beauty as one of its petals in building design. Sustainable products and processes are required to be beautiful because it allows for emotional durability, which increases the probability that they are going to be maintained and preserved, decreasing their carbon footprint. Many people also argue that biophilia is innately beautiful. Which is why building architecture is designed such that people feel close to nature and is often surrounded by well-kept lawns – a design that is both "beautiful" and encourages the inculcation of nature in our daily lives. Or utilizes daylight design into the system – reducing lighting loads while also fulfilling our need for being close to that which is outdoors. Economic aspects Discussed above, economics is another aspect of it environmental design that is crucial to most design decisions. It is obvious that most people consider the cost of any design before they consider the environmental impacts of it. Therefore, there is a growing nuance of pitching ideas and suggestions for environmentally sustainable design by highlighting the economical profits that they bring to us. "As the green design field matures, it becomes ever more clear that integration is the key to achieving energy and environmental goals especially if cost is a major driver." Building Green Inc. (1999) To achieve the more ambitious goals of the green design movement, architects, engineers and designers need to further embrace and communicate the profit and economic potential of sustainable design measures. Focus should be on honing skills in communicating the economic and profit potential of smart design, with the same rigor that have been applied to advancing technical building solutions. Standards of Evaluation There are several standards and rating systems developed as sustainability gains popularity. Most rating systems revolve around buildings and energy, and some cover products as well. Most rating systems certify on the basis of design as well as post construction or manufacturing. LEED - Leadership in energy and environmental design. Living building challenge HERS - Home energy rating WELS rating - water efficiency labeling standard BREEAM - Building Research Establishment's Environmental Assessment Method GBI - Green Building Initiative EPA WaterSense Energy Star FSC - Forest Stewardship Council CASBEE - Comprehensive Assessment System for Built Environment Efficiency Passive house. Net-Positive Design Net-Positive Design and Assessment computer app While designing for environmental sustainability, it is imperative that the appropriate units are paid attention to. Often, different standards weigh things in different units, and that can make a huge impact on the outcome of the project. Another important aspect of using standards and looking at data involves understanding the baseline. 
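The point developed in the next sentences — that the headline "percent improvement" depends heavily on the baseline chosen — can be made concrete with a small calculation. All of the energy intensities below are invented for illustration.

# Illustration (invented numbers): the same final design, scored against two baselines.
# A weak baseline makes the design look dramatically better; a strong baseline shows
# a modest improvement, even though the building delivered is identical.
design_energy = 80.0          # kWh/m2/yr for the design being rated (assumed)

baselines = {
    "code-minimum baseline": 160.0,   # a poor/lax reference building (assumed)
    "good-practice baseline": 95.0,   # an already-efficient reference (assumed)
}

for name, baseline in baselines.items():
    saving = 100 * (baseline - design_energy) / baseline
    print(f"vs {name} ({baseline:.0f} kWh/m2/yr): {saving:.0f}% 'improvement'")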
A poor design baseline with huge improvements often show a higher efficiency percentage, while an intelligent baseline from the start might only have a little improvement needed and show lesser change. Therefore, all data should ideally be compared on similar levels, and also be looked at from multiple unit values. Greenwashing Greenwashing is defined to be "the process of conveying a false impression or providing misleading information about how a company's products are more environmentally sound". This can be as simple as using green packaging which subconsciously leads a consumer to think that a product is more environmentally friendly than others. Another example are eco-labels. Companies can take advantage of these certifications for appearance and profit, but their exact meanings are unclear and not readily available. Some labels are more credible than others as they are verified by a credible third-party, while others are self-awarded. The labels are badly regulated and prone to deception. This can lead people to make different decisions on the basis of potentially false narratives. These labels are highly effective as a study in Sweden found that a 32.8% of purchase behavior on ecological food can be determined by the presence of an eco-label. Increased transparency of these labels and recycling labels can empower consumers to make better choices. The methods used by most assessment tools can also result in greenwashing, as explained in Net-Positive Design and Sustainable Urban Development. LCA and Product Life Life cycle assessment is the complete assessment of materials from their extraction, transport, processing, refining, manufacturing, maintenance, use, disposal, reuse and recycle stages. It helps put into perspective whether a design is actually environmentally sustainable in the long run. Products such as aluminum which can be reused multiple number of times but have a very energy intensive mining and refining which makes it unfavorable. Information such as this is done using LCA and then taken into consideration when designing. Applications Applications of this philosophy range from the microcosm — small objects for everyday use, through to the macrocosm — buildings, cities, and the Earth's physical surface. It is a philosophy that can be applied in the fields of architecture, landscape architecture, urban design, urban planning, engineering, graphic design, industrial design, interior design, fashion design and human-computer interaction. Sustainable design is mostly a general reaction to global environmental crises, the rapid growth of economic activity and human population, depletion of natural resources, damage to ecosystems, and loss of biodiversity. In 2013, eco architecture writer Bridgette Meinhold surveyed emergency and long-term sustainable housing projects that were developed in response to these crises in her book, "Urgent Architecture: 40 Sustainable Housing Solutions for a Changing World." Featured projects focus on green building, sustainable design, eco-friendly materials, affordability, material reuse, and humanitarian relief. Construction methods and materials include repurposed shipping containers, straw bale construction, sandbag homes, and floating homes. The limits of sustainable design are shrinking. Because growth in goods and services consistently outpaces gains in efficiency. As a result, the net effect of sustainable design has simply been to improve the efficiency of rapidly increasing impacts. 
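The closing observation above — efficiency gains being outrun by growth in the volume of goods and services delivered — is easy to illustrate with generic compound-growth arithmetic. The growth and efficiency rates below are assumptions chosen only to show the mechanism, not data from any study.

# Invented rates, for illustration only: if total demand for goods and services grows
# faster than efficiency improves, absolute impacts still rise -- the situation the
# paragraph above describes.
years = 20
demand_growth = 0.03      # 3% per year growth in goods/services delivered (assumed)
efficiency_gain = 0.02    # 2% per year reduction in impact per unit delivered (assumed)

impact = 1.0  # normalised total environmental impact in year 0
for _ in range(years):
    impact *= (1 + demand_growth) * (1 - efficiency_gain)

print(f"After {years} years, total impact is {impact:.2f}x the starting level")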
This problem is not solved by the current approach, which focuses on the efficiency of delivering individual goods and services. The fundamental dilemmas are as follows: the increasing complexity of efficiency improvements; the difficulty of implementing new technologies in societies built around old ones; the fact that the physical impacts of delivering goods and services are not localized, but are distributed across economies; and the fact that the scale of resource use is growing and not stabilizing. Sustainable architecture Sustainable architecture is the design of sustainable buildings. Sustainable architecture attempts to reduce the collective environmental impacts during the production of building components, during the construction process, as well as during the lifecycle of the building (heating, electricity use, carpet cleaning etc.) This design practice emphasizes efficiency of heating and cooling systems; alternative energy sources such as solar hot water, appropriate building siting, reused or recycled building materials; on-site power generation - solar technology, ground source heat pumps, wind power; rainwater harvesting for gardening, washing and aquifer recharge; and on-site waste management such as green roofs that filter and control stormwater runoff. This requires close cooperation of the design team, the architects, the engineers, and the client at all project stages, from site selection, scheme formation, material selection and procurement, to project implementation. This is also called a charrette. Appropriate building siting and smaller building footprints are vital to an environmentally sustainable design. Oftentimes, a building may be very well designed, and energy efficient but its location requires people to travel far back and forth – increasing pollution that may not be building produced but is directly as a result of the building anyway. Sustainable architecture must also cover the building beyond its useful life. Its disposal or recycling aspects also come under the wing of sustainability. Often, modular buildings are better to take apart and less energy intensive to put together too. The waste from the demolition site must be disposed of correctly and everything that can be harvested and used again should be designed to be extricated from the structure with ease, preventing unnecessary wastage when decommissioning the building. Another important aspect of sustainable architecture stems from the question of whether a structure is needed. Sometimes the best that can be done to make a structure sustainable is retrofitting or upgrading the building services and supplies instead of tearing it down. Abu Dhabi, for example has undergone and is undergoing major retrofitting to slash its energy and water consumption rather than demolishing and rebuilding new structures. Sustainable architects design with sustainable living in mind. Sustainable vs green design is the challenge that designs not only reflect healthy processes and uses but are powered by renewable energies and site specific resources. A test for sustainable design is — can the design function for its intended use without fossil fuel — unplugged. This challenge suggests architects and planners design solutions that can function without pollution rather than just reducing pollution. 
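The "unplugged" test suggested above can be phrased, very crudely, as an annual energy balance: can on-site renewable generation cover the building's loads over a year? The loads, PV size and yield in this sketch are placeholder assumptions; a serious check would use hourly simulation, storage and local climate data.

# A crude annual-balance version of the "unplugged" test described above.
# Loads and yields are invented placeholders, not measurements.
annual_loads_kwh = {
    "heating (heat pump)": 4000,
    "hot water": 2500,
    "lighting and appliances": 3500,
}
pv_capacity_kw = 8.0          # assumed rooftop PV size
pv_yield_kwh_per_kw = 1100    # assumed annual yield per kW for the site

demand = sum(annual_loads_kwh.values())
generation = pv_capacity_kw * pv_yield_kwh_per_kw

print(f"Annual demand:      {demand} kWh")
print(f"On-site generation: {generation:.0f} kWh")
print("Passes the crude annual 'unplugged' test" if generation >= demand
      else "Falls short -- reduce loads or add generation")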
As technology progresses in architecture and design theories and as examples are built and tested, architects will soon be able to create not only passive, null-emission buildings, but rather be able to integrate the entire power system into the building design. In 2004 the 59 home housing community, the Solar Settlement, and a integrated retail, commercial and residential building, the Sun Ship, were completed by architect Rolf Disch in Freiburg, Germany. The Solar Settlement is the first housing community worldwide in which every home, all 59, produce a positive energy balance. An essential element of Sustainable Building Design is indoor environmental quality including air quality, illumination, thermal conditions, and acoustics. The integrated design of the indoor environment is essential and must be part of the integrated design of the entire structure. ASHRAE Guideline 10-2011 addresses the interactions among indoor environmental factors and goes beyond traditional standards. Concurrently, the recent movements of New Urbanism and New Classical Architecture promote a sustainable approach towards construction, that appreciates and develops smart growth, architectural tradition and classical design. This in contrast to modernist and globally uniform architecture, as well as leaning against solitary housing estates and suburban sprawl. Both trends started in the 1980s. The Driehaus Architecture Prize is an award that recognizes efforts in New Urbanism and New Classical Architecture, and is endowed with a prize money twice as high as that of the modernist Pritzker Prize. Several advances in sustainable architecture emerged in the late 20th Century that are now widely known by ordinary practitioners. These overlapping but distinct paradigms include Biophilic Urbanism, Permaculture, Biomimicry, Bioregional Planning, Regenerative Design, Circular Systems approaches ranging from Cradle to Cradle product design to the Circular Economy, Nature-Based Design, Net-zero Design, Nature Positive Design, and Net-Positive Design. These paradigms go beyond traditional sustainable design, which simply integrates sustainable design techniques and technologies into conventional urban planning patterns and building design templates. Instead, they represent a broader societal shift (from aiming for resource and energy efficiency) to creating environments that contribute towards net outcomes, such as 'net-positive sustainability'. Net-positive architecture aims to reverse planetary overshoot as well as improving socio-ecological conditions by changing the nature of built environment decision making, design and assessment. Green Design Green design has often been used interchangeably with environmentally sustainable design. It is the practice of creating structures by using environment friendly processes. There is a popular debate about this with several arguing that green design is in effect narrower than sustainable design, which takes into account a larger system. Green design focuses on the short-term goals and while it is a worthy goal, a larger impact is possible using sustainable design. It is included in the process of creating a sustainable design. Another factor to be considered is that green design has been stigmatized by popular personalities such as Pritzker Architecture Prize winner Frank Gehry, but this branding hasn't reached sustainable design. 
A large part of that is because of how environmentally sustainable design is generally used hand in hand with economically sustainable design and socially sustainable design. Finally, green design is although unintentionally, often associated only with architecture while sustainable design has been considered under a much larger scope. Engineering Design Sustainable engineering is the process of designing or operating systems such that they use energy and resources sustainably, in other words, at a rate that does not compromise the natural environment, or the ability of future generations to meet their own needs. Common engineering focuses revolve around water supply, production, sanitation, cleaning up of pollution and waste sites, restoring natural habitats etc. Sustainable Interior Design Achieving a healthy and aesthetic environment for the occupants of a space is one of the basic rules in the art of Interior design. When applying focus onto the sustainable aspects of the art, Interior Design can incorporate the study and involvement of functionality, accessibility, and aesthetics to environmentally friendly materials. The integrated design of the indoor environment is essential and must be part of the integrated design of the entire structure. Goals of Sustainable Interior Design Improving the overall building performance through the reduction of negative impacts on the environment is the primary goal. According to the Environmental Protection Agency (EPA), Americans spend approximately 90% of their time indoors, where the concentrations of some toxins and impurities are frequently two to five times higher than they are outside. Sustainable interior design solutions strive to create truly inspirational rooms while simultaneously enhancing indoor air quality and mitigating the environmental impact of interior design procedures. This requires interior designers to make ethical design choices and include environmental concerns into their work, as interiors and the environment are closely intertwined. Reducing consumption of non-renewable resources, minimizing waste and creating healthy, productive environments are the primary objectives of sustainability. Optimizing site potential, minimizing non-renewable energy consumption, using environmentally preferable products, protecting and conserving water, enhancing indoor environmental quality, and optimizing operational and maintenance practices are some of the primary principles. An essential element of Sustainable Building Design is indoor environmental quality including air quality, illumination, thermal conditions, and acoustic. Interior design, when done correctly, can harness the true power of sustainable architecture. Incorporating Sustainable Interior Design Sustainable Interior Design can be incorporated through various techniques: water efficiency, energy efficiency, using non-toxic, sustainable or recycled materials, using manufactured processes and producing products with more energy efficiency, building longer lasting and better functioning products, designing reusable and recyclable products, following the sustainable design standards and guidelines, and more. For example, a room with large windows to allow for maximum sunlight should have neutral colored interiors to help bounce the light around and increase comfort levels while reducing light energy requirement. The size should, however, be carefully considered to avoid window glare. 
Interior Designers must take types of paints, adhesives, and more into consideration during their designing and manufacturing phase so they do not contribute to harmful environmental factors. Choosing whether to use a wood floor to marble tiled floor or carpeted floor can reduce energy consumption by the level of insulation that they provide. Utilizing materials that can withhold 24-hour health care facilities, such as linoleum, scrubbable cotton wall coverings, recycled carpeting, low toxic adhesive, and more. Furthermore, incorporating sustainability can begin before the construction process begins. Purchasing items from sustainable local businesses, analyzing the longevity of a product, taking part in recycling by purchasing recycled materials, and more should be taken into consideration. Supporting local, sustainable businesses is the first step, as this not only increases the demand for sustainable products, but also reduces unsustainable methods. Traveling all over to find specific products or purchasing products from overseas contributes to carbon emissions in the atmosphere, pulling further away from the sustainable aspect. Once the products are found, it is important to check if the selection follows the Cradle-to-cradle design (C2C) method and they are also able to be reclaimed, recycled, and reused. Also paying close attention to energy-efficient products during this entire process contributes to the sustainability factors. The aesthetic of a space does not have to be sacrificed in order to achieve sustainable interior design. Every environment and space can incorporate materials and choices to reducing environmental impact, while still providing durability and functionality. Promotion of Sustainable Interior Design The mission to incorporate sustainable interior design into every aspect of life is slowly becoming a reality. The commercial Interior Design Association (IIDA) created the sustainability forum to encourage, support, and educate the design community and the public about sustainability. The Athena Sustainable Materials Institute ensures enabling smaller footprints by working with sustainability leaders in various ways in producing and consuming materials. Building Green considers themselves the most trusted voice for sustainable and healthy design, as they offer a variety of resources to dive deep into sustainability. Various acts, such as the Energy Policy Act (EPAct) of 2005 and the Energy Independence and Security Act (EISA) of 2007 have been revised and passed to achieve better efforts towards sustainable design. Federal efforts, such as the signing of a Memorandum of Understanding to the commitment of sustainable design and the Executive Order 13693 have also worked to achieve these concepts. Various guideline and standard documents have been published for the sake of sustainable interior design and companies like LEED (Leadership in Energy and Environmental Design) are guiding and certifying efforts put into motion to contribute to the mission. When the thought of incorporating sustainable design into an interior's design is kept as a top goal for a designer, creating an overall healthy and environmentally friendly space can be achieved. Global Examples of Sustainable Interior Design Proximity Hotel in North Carolina, United States of America: The Proximity Hotel was the first hotel to be granted the LEED Platinum certification from the U.S. Green Building Council. 
Shanghai Natural History Museum in Shanghai, China: This new museum incorporates evaporative cooling and maintained temperatures through is design and structure. Vancouver Convention Centre West in Vancouver, British Columbia, Canada: The West location of the Vancouver Convention Centre was the first convention center in the world to be granted LEED Platinum. Bullitt Center in Seattle, Washington, United States of America: Considered "The Greenest Commercial Building in the World," it is the first to achieve the Living Building Challenge certification. Sydney, Australia became the first city in the country to contribute Green roof and Green wall to their architecture following their "Sustainable Sydney 2030" set of goals. Sustainable urban planning Sustainable design of cities is the task of designing and planning the outline of cities such that they have a low carbon footprint, have better air quality, rely on more sustainable sources of energy, and have a healthy relationship with the environment. Sustainable urban planning involves many disciplines, including architecture, engineering, biology, environmental science, materials science, law, transportation, technology, economic development, accounting and finance, and government, among others. This kind of planning also develops innovative and practical approaches to land use and its impact on natural resources. New sustainable solutions for urban planning problems can include green buildings and housing, mixed-use developments, walkability, greenways and open spaces, alternative energy sources such as solar and wind, and transportation options. Good sustainable land use planning helps improve the welfare of people and their communities, shaping their urban areas and neighborhoods into healthier, more efficient spaces. Design and planning of neighbourhoods are a major challenge when creating a favourable urban environment. The challenge is based on the principles of integrated approach to different demands: social, architectural, artistic, economic, sanitary and hygienic. Social demands are aimed at constructing network and placing buildings in order to create favourable conditions for their convenient use. Architectural-artistic solutions are aimed at single spatial composition of an area with the surrounding landscape. Economic demands include rational utilization of area territories. Sanitary and hygienic demands are of more interest in terms of creating sustainable urban areas. Sustainable landscape and garden design Sustainable landscape architecture is a category of sustainable design and energy-efficient landscaping concerned with the planning and design of outdoor space. Plants and materials may be bought from local growers to reduce energy used in transportation. Design techniques include planting trees to shade buildings from the sun or protect them from wind, using local materials, and on-site composting and chipping not only to reduce green waste hauling but to increase organic matter and therefore carbon in the soil. Some designers and gardeners such as Beth Chatto also use drought-resistant plants in arid areas (xeriscaping) and elsewhere so that water is not taken from local landscapes and habitats for irrigation. Water from building roofs may be collected in rain gardens so that the groundwater is recharged, instead of rainfall becoming surface runoff and increasing the risk of flooding. Areas of the garden and landscape can also be allowed to grow wild to encourage bio-diversity. 
Native animals may also be encouraged in many other ways: by plants which provide food such as nectar and pollen for insects, or roosting or nesting habitats such as trees, or habitats such as ponds for amphibians and aquatic insects. Pesticides, especially persistent pesticides, must be avoided to avoid killing wildlife. Soil fertility can be managed sustainably by the use of many layers of vegetation from trees to ground-cover plants and mulches to increase organic matter and therefore earthworms and mycorrhiza; nitrogen-fixing plants instead of synthetic nitrogen fertilizers; and sustainably harvested seaweed extract to replace micronutrients. Sustainable landscapes and gardens can be productive as well as ornamental, growing food, firewood and craft materials from beautiful places. Sustainable landscape approaches and labels include organic farming and growing, permaculture, agroforestry, forest gardens, agroecology, vegan organic gardening, ecological gardening and climate-friendly gardening. Sustainable agriculture Sustainable agriculture adheres to three main goals: Environmental health, Economic profitability, Social and economic equity. A variety of philosophies, policies and practices have contributed to these goals. People in many different capacities, from farmers to consumers, have shared this vision and contributed to it. Despite the diversity of people and perspectives, the following themes commonly weave through definitions of sustainable agriculture. There are strenuous discussions — among others by the agricultural sector and authorities — if existing pesticide protocols and methods of soil conservation adequately protect topsoil and wildlife. Doubt has risen if these are sustainable, and if agrarian reforms would permit an efficient agriculture with fewer pesticides, therefore reducing the damage to the ecosystem. Energy sector Sustainable technology in the energy sector is based on utilizing renewable sources of energy such as solar, wind, hydro, bioenergy, geothermal, and hydrogen. Wind energy is the world's fastest growing energy source; it has been in use for centuries in Europe and more recently in the United States and other nations. Wind energy is captured through the use of wind turbines that generate and transfer electricity for utilities, homeowners and remote villages. Solar power can be harnessed through photovoltaics, concentrating solar, or solar hot water and is also a rapidly growing energy source. Advancements in the technology and modifications to photovoltaics cells provide a more in depth untouched method for creating and producing solar power. Researchers have found a potential way to use the photogalvanic effect to transform sunlight into electric energy. The availability, potential, and feasibility of primary renewable energy resources must be analyzed early in the planning process as part of a comprehensive energy plan. The plan must justify energy demand and supply and assess the actual costs and benefits to the local, regional, and global environments. Responsible energy use is fundamental to sustainable development and a sustainable future. Energy management must balance justifiable energy demand with appropriate energy supply. The process couples energy awareness, energy conservation, and energy efficiency with the use of primary renewable energy resources. 
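The demand-and-supply comparison that such a comprehensive energy plan starts from can be sketched with simple capacity-factor arithmetic. The capacities, capacity factors and community demand below are illustrative assumptions rather than data from the text.

# Sketch of the first-pass demand/supply comparison an energy plan might start from.
# Capacities, capacity factors and demand are illustrative assumptions, not data.
HOURS_PER_YEAR = 8760

annual_demand_mwh = 50_000        # assumed community demand

candidates = {
    # technology: (installed capacity in MW, assumed capacity factor)
    "wind": (10.0, 0.35),
    "solar PV": (15.0, 0.18),
}

total_supply = 0.0
for tech, (capacity_mw, cf) in candidates.items():
    energy = capacity_mw * cf * HOURS_PER_YEAR
    total_supply += energy
    print(f"{tech}: ~{energy:,.0f} MWh/yr")

print(f"Total renewable supply: ~{total_supply:,.0f} MWh/yr "
      f"({100 * total_supply / annual_demand_mwh:.0f}% of assumed demand)")

A real plan would go on to test seasonal and hourly matching, storage and costs, but even a first pass of this kind shows whether a proposed mix is in the right range for the stated demand.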
Water sector Sustainable water technologies have become an important industry segment with several companies now providing important and scalable solutions to supply water in a sustainable manner. Beyond the use of certain technologies, Sustainable Design in Water Management also consists very importantly in correct implementation of concepts. Among these principal concepts is the fact normally in developed countries 100% of water destined for consumption, that is not necessarily for drinking purposes, is of potable water quality. This concept of differentiating qualities of water for different purposes has been called "fit-for-purpose". This more rational use of water achieves several economies, that are not only related to water itself, but also the consumption of energy, as to achieve water of drinking quality can be extremely energy intensive for several reasons. Domestic machinery and furniture Automobiles, home appliances and furnitures can be designed for repair and disassembly (for recycling), and constructed from recyclable materials such as steel, aluminum and glass, and renewable materials, such as wood and plastics from natural feedstocks. Careful selection of materials and manufacturing processes can often create products comparable in price and performance to non-sustainable products. Even mild design efforts can greatly increase the sustainable content of manufactured items. Improvements to heating, cooling, ventilation and water heating Absorption refrigerator Annualized geothermal solar Earth cooling tubes Geothermal heat pump Heat recovery ventilation Hot water heat recycling Passive cooling Renewable heat Seasonal thermal energy storage (STES) Solar air conditioning Solar hot water Superinsulation Design for sustainable manufacturing Sustainable manufacturing can be defined as the creation of a manufactured product through a concurrent improvement in the resulting effect on factory and product sustainability. The concept of sustainable manufacturing demands a renewed design of production systems in order to condition the related sustainability on product life cycle and Factory operations. Designing sustainable production systems imply, on the one hand, the analysis and optimization of intra-factory aspects that are related to manufacturing plants. Such aspects can regard the resource consumption restrain, the process efficiency, the ergonomics for the factory workers, the elimination of hazardous substances, the minimization of factory emissions and waste as well as internal emissions, the integrated management of information in the production facilities, and the technological updating of machines and plants. Other inter-factories aspects concern the sustainable design of manufactured products, product chain dematerialisation, management of the background and foreground supply chains, support of circular economy paradigm, and the labelling for sustainability. 
Advantageous reasons for why companies might choose to sustainably manufacture either their products or use a sustainable manufacturing process are: Increase operational efficiency by reducing costs and waste Respond to or reach new customers and increase competitive advantage Protect and strengthen brand and reputation and build public trust Build long-term business viability and success Respond to regulatory constraints and opportunities Sustainable technologies Sustainable technologies use less energy, fewer limited resources, do not deplete natural resources, do not directly or indirectly pollute the environment, and can be reused or recycled at the end of their useful life. They may also be technology that help identify areas of growth by giving feedback in terms of data or alerts allowed to be analyzed to improve environmental footprints. There is significant overlap with appropriate technology, which emphasizes the suitability of technology to the context, in particular considering the needs of people in developing countries. The most appropriate technology may not be the most sustainable one; and a sustainable technology may have high cost or maintenance requirements that make it unsuitable as an "appropriate technology", as that term is commonly used. "Technology is deeply entrenched in our society; without it, society would immediately collapse. Moreover, technological changes can be perceived as easier to accomplish than lifestyle changes that might be required to solve the problems that we face." The design of sustainable technology relies heavily on the flow of new information. Sustainable technology such as smart metering systems and intelligent sensors reduce energy consumption and help conserve water. These systems are ones that have more fundamental changes, rather than just switching to simple sustainable designs. Such designing requires constant updates and evolutions, to ensure true environmental sustainability, because the concept of sustainability is ever changing – with regards to our relationship with the environment. A large part of designing sustainable technology involves giving control to the users for their comfort and operation. For example, dimming controls help people adjust the light levels to their comfort. Sectioned lighting and lighting controls let people manipulate their lighting needs without worrying about affecting others – therefore reducing lighting loads. Innovation and development The precursor step to environmentally sustainable development must be a sustainable design. By definition, design is defined as purpose, planning, or intention that exists or is thought to exist behind an action, fact, or material object. Development utilizes design and executes it, helping areas, cities, or places to advance. Sustainable development is that development which adheres to the values of sustainability and provide for the society without endangering the ecosystem and its services. "Without development, design is useless. Without design, development is unusable." – Florian Popescu, How to bridge the gap between design and development. Eco-innovation is the design and development of products and processes that contribute to sustainable development, applying the commercial application of knowledge to elicit direct or indirect ecological improvements. This includes a range of related ideas, from environmentally friendly technological advances to socially acceptable innovative paths towards sustainability. 
WIPO GREEN is an online global marketplace for technology exchange connecting providers and seekers of inventions and innovations in sustainable technology innovations. Several factors drive design innovation in the environmental sphere. These include growing consumer awareness and demand for green products and services, development and (re)discovery of renewable materials, sustainable refurbishment, new technologies for manufacturing and growing use of artificial intelligence-based tools based to map needs and identify areas for improved efficiency. Whatever the industry or product, design rights (whether registered or unregistered) can harness innovative design. Design rights (known as design patents in some jurisdictions) are widely used to protect everything from marketing logos and packaging to the shape of furniture and vehicles and the user interfaces of computers and smartphones. Design rights are available in many jurisdictions and through regional systems. Protection can also be obtained internationally using the WIPO-administered Hague System for the International Registration of Designs. See also Active daylighting Bright green environmentalism Building Information Modeling Building services engineering Circles of Sustainability Climate-friendly gardening Cool roof Cradle to Cradle Daylighting Earth embassy Ecodistrict Ecological Restoration Ecosa Institute Ecosystem services Energy plus house Green chemistry Green transport Healthy building Landscape ecology Leadership in Energy and Environmental Design List of energy storage projects List of low-energy building techniques Metadesign Principles of Intelligent Urbanism Source reduction Sustainable art Terreform ONE Urban vitality Vertical garden Zero energy building References
Biomedicine
Biomedicine (also referred to as Western medicine, mainstream medicine or conventional medicine) is a branch of medical science that applies biological and physiological principles to clinical practice. Biomedicine stresses standardized, evidence-based treatment validated through biological research, with treatment administered by formally trained doctors, nurses, and other licensed practitioners. Biomedicine also relates to many other categories in health- and biology-related fields. It has been the dominant system of medicine in the Western world for more than a century. It includes many biomedical disciplines and areas of specialty that typically contain the "bio-" prefix, such as molecular biology, biochemistry, biotechnology, cell biology, embryology, nanobiotechnology, biological engineering, laboratory medical biology, cytogenetics, genetics, gene therapy, bioinformatics, biostatistics, systems biology, neuroscience, microbiology, virology, immunology, parasitology, physiology, pathology, anatomy, toxicology, and many others that generally concern the life sciences as applied to medicine.
Overview
Biomedicine is the cornerstone of modern health care and laboratory diagnostics. It concerns a wide range of scientific and technological approaches: from in vitro diagnostics to in vitro fertilisation, from the molecular mechanisms of cystic fibrosis to the population dynamics of HIV, from the understanding of molecular interactions to the study of carcinogenesis, and from single-nucleotide polymorphisms (SNPs) to gene therapy. Biomedicine is based on molecular biology and combines the concerns of molecular medicine with the large-scale structural and functional relationships of the human genome, transcriptome, proteome, physiome and metabolome, with the particular aim of devising new technologies for prediction, diagnosis and therapy. Biomedicine involves the study of (patho)physiological processes with methods from biology and physiology. Approaches range from understanding molecular interactions to the study of their consequences at the in vivo level, with the particular aim of devising new strategies for diagnosis and therapy. Depending on the severity of the disease, biomedicine pinpoints a problem within a patient and fixes the problem through medical intervention; the emphasis is on curing disease rather than on improving health more broadly. In the social sciences, biomedicine is described somewhat differently. Through an anthropological lens, biomedicine extends beyond the realm of biology and scientific facts; it is a socio-cultural system which collectively represents reality. While biomedicine is traditionally thought to be free of bias because of its evidence-based practices, Gaines & Davis-Floyd (2004) highlight that biomedicine itself has a cultural basis, because it reflects the norms and values of its creators.
Molecular biology
Molecular biology is the study of the synthesis and regulation of a cell's DNA, RNA, and proteins. It employs a range of techniques, including the polymerase chain reaction (PCR), gel electrophoresis, and macromolecule blotting, to manipulate DNA. The polymerase chain reaction is carried out by placing a mixture of the target DNA, DNA polymerase, primers, and free nucleotide bases into a thermal cycler. The machine cycles through higher and lower temperatures: heating breaks the hydrogen bonds holding the two DNA strands together, and cooling then allows nucleotide bases to be added onto each of the separated DNA templates.
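The exponential character of PCR amplification described above can be illustrated with a short calculation. This is a minimal sketch rather than a model of a real reaction: it assumes a hypothetical ideal case in which every cycle copies every template, whereas real reactions run below 100% efficiency and eventually plateau as reagents are exhausted.

```python
# Minimal sketch of PCR amplification: near-ideal doubling each thermal cycle.
# Real reactions are less than 100% efficient and plateau as reagents run out.

def pcr_copies(initial_templates: int, cycles: int, efficiency: float = 1.0) -> float:
    """Approximate copy number after a given number of cycles.

    efficiency = 1.0 means perfect doubling; 0.9 means 90% of templates
    are copied each cycle (both values are illustrative assumptions).
    """
    return initial_templates * (1 + efficiency) ** cycles

for cycles in (10, 20, 30):
    ideal = pcr_copies(10, cycles)            # 10 starting template molecules
    realistic = pcr_copies(10, cycles, 0.9)   # assumed 90% per-cycle efficiency
    print(f"{cycles} cycles: ~{ideal:,.0f} copies (ideal), ~{realistic:,.0f} (90% efficiency)")
```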
Gel electrophoresis is a technique used to identify similarities between two unknown samples of DNA. The process begins by preparing an agarose gel, a jelly-like sheet with wells into which the DNA samples are loaded. An electric current is applied so that the DNA, which is negatively charged due to its phosphate groups, is attracted to the positive electrode. Different DNA fragments move at different speeds because some pieces are larger than others and travel more slowly through the gel. Thus, if two DNA samples show a similar band pattern on the gel, one can tell that the samples match. Macromolecule blotting is a process performed after gel electrophoresis. An alkaline solution is prepared in a container, a sponge is placed into the solution, and the agarose gel is placed on top of the sponge. Next, nitrocellulose paper is placed on top of the agarose gel and paper towels are added on top of the nitrocellulose paper to apply pressure. The alkaline solution is drawn upwards towards the paper towels; during this process the DNA denatures in the alkaline solution and is carried upwards onto the nitrocellulose paper. The paper is then placed into a plastic bag filled with a solution containing copies of the DNA fragment of interest, called the probe. The probes anneal to the complementary DNA of the bands already present on the nitrocellulose paper. Afterwards, the probes are washed off, and the only ones remaining are those that have annealed to complementary DNA on the paper. Next, the paper is exposed to X-ray film. The radioactivity of the probes creates black bands on the film, called an autoradiograph, so only DNA patterns similar to that of the probe appear on the film. This allows similar DNA sequences in multiple samples to be compared and gives a precise reading of the similarities and differences between them.
Biochemistry
Biochemistry is the science of the chemical processes which take place within living organisms. Living organisms need essential elements to survive, among which are carbon, hydrogen, nitrogen, oxygen, calcium, and phosphorus. These elements make up the four macromolecules that living organisms need to survive: carbohydrates, lipids, proteins, and nucleic acids. Carbohydrates, made up of carbon, hydrogen, and oxygen, are energy-storing molecules. One of the simplest carbohydrates, glucose (C6H12O6), is used in cellular respiration to produce ATP (adenosine triphosphate), which supplies cells with energy. Proteins are chains of amino acids that function, among other things, to contract skeletal muscle, as catalysts, as transport molecules, and as storage molecules. Protein catalysts can facilitate biochemical processes by lowering the activation energy of a reaction. Hemoglobin, which carries oxygen to an organism's cells, is also a protein. Lipids, also known as fats, are small molecules derived from biochemical subunits of either the ketoacyl or the isoprene groups, giving eight distinct categories: fatty acids, glycerolipids, glycerophospholipids, sphingolipids, saccharolipids, and polyketides (derived from condensation of ketoacyl subunits); and sterol lipids and prenol lipids (derived from condensation of isoprene subunits). Their primary purpose is to store energy over the long term. Due to their unique structure, lipids provide more than twice the amount of energy that carbohydrates do. Lipids can also be used as insulation.
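The claim just made, that lipids store more than twice as much energy per gram as carbohydrates, can be made concrete with the commonly used approximate Atwater factors (about 4 kcal/g for carbohydrate and protein, about 9 kcal/g for fat). The sketch below uses those rounded conventional values and an invented example meal purely for illustration.

```python
# Rough energy estimate using approximate Atwater factors (kcal per gram).
# The factors are rounded conventions, not exact values for any specific food.
KCAL_PER_GRAM = {"carbohydrate": 4, "protein": 4, "fat": 9}

def energy_kcal(grams_by_macro: dict) -> float:
    """Total energy of a food from its macronutrient composition."""
    return sum(KCAL_PER_GRAM[m] * g for m, g in grams_by_macro.items())

# 10 g of fat vs 10 g of carbohydrate: fat stores more than twice the energy.
print(energy_kcal({"fat": 10}))            # 90 kcal
print(energy_kcal({"carbohydrate": 10}))   # 40 kcal

# Hypothetical meal
meal = {"carbohydrate": 60, "protein": 25, "fat": 15}
print(f"Estimated energy: {energy_kcal(meal):.0f} kcal")  # 60*4 + 25*4 + 15*9 = 475
```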
Moreover, lipids can be used in hormone production to maintain a healthy hormonal balance and to provide structure to cell membranes. Nucleic acids include DNA, the main genetic information-storing molecule, which is usually found in the cell nucleus and controls the metabolic processes of the cell. DNA consists of two complementary, antiparallel strands made up of varying sequences of nucleotides. RNA is a single-stranded nucleic acid that is transcribed from DNA and used in translation, the process of making proteins from RNA sequences.
References
Human biology
Human biology is an interdisciplinary area of academic study that examines humans through the influences and interplay of many diverse fields such as genetics, evolution, physiology, anatomy, epidemiology, anthropology, ecology, nutrition, population genetics, and sociocultural influences. It is closely related to the biomedical sciences, biological anthropology and other biological fields tying in various aspects of human functionality. It was not until the 20th century that the biogerontologist Raymond Pearl, founder of the journal Human Biology, used the term "human biology" to describe a distinct subfield within biology. It is also a portmanteau term that describes all biological aspects of the human body, typically using the human body as a type organism for Mammalia, and in that context it is the basis for many undergraduate university degrees and modules. Most aspects of human biology are identical or very similar to general mammalian biology. In particular, and as examples, humans:
maintain their body temperature
have an internal skeleton
have a circulatory system
have a nervous system to provide sensory information and to operate and coordinate muscular activity
have a reproductive system in which they bear live young and produce milk
have an endocrine system and produce and eliminate hormones and other biochemical signalling agents
have a respiratory system in which air is inhaled into the lungs and oxygen is used to produce energy
have an immune system to protect against disease
excrete waste as urine and feces
History
The study of integrated human biology started in the 1920s, sparked by Charles Darwin's theories, which were re-conceptualized by many scientists. Human attributes, such as child growth and genetics, were examined afresh, and human biology emerged as a field in its own right.
Typical human attributes
The key aspects of human biology are those ways in which humans are substantially different from other mammals. Humans have a very large brain in a head that is very large for the size of the animal. This large brain has enabled a range of unique attributes, including the development of complex languages and the ability to make and use a complex range of tools. The upright stance and bipedal locomotion are not unique to humans, but humans are the only species to rely almost exclusively on this mode of locomotion. This has resulted in significant changes in the structure of the skeleton, including the articulation of the pelvis and the femur and the articulation of the head. In comparison with most other mammals, humans are very long lived, with an average age at death in the developed world of nearly 80 years. Humans also have the longest childhood of any mammal, with sexual maturity taking on average 12 to 16 years to be reached. Humans lack fur. Although there is a residual covering of fine hair, which may be more developed in some people, and localised hair covering on the head and in the axillary and pubic regions, in terms of protection from cold humans are almost naked. The reason for this development is still much debated. The human eye can see objects in colour but is not well adapted to low-light conditions. The senses of smell and taste are present but are relatively inferior to those of a wide range of other mammals. Human hearing is efficient but lacks the acuity of some other mammals.
Similarly, the human sense of touch is well developed, especially in the hands, where dextrous tasks are performed, but its sensitivity is still significantly less than in other animals, particularly those equipped with sensory whiskers, such as cats.
Scientific investigation
As a scientific discipline, human biology tries to understand humans as living beings and to promote research on them. It makes use of various scientific methods, such as experiments and observations, to detail the biochemical and biophysical foundations of human life and to describe and model the underlying processes. As a basic science, it provides the knowledge base for medicine. Its sub-disciplines include anatomy, cytology, histology and morphology.
Medicine
The capabilities of the human brain, and human dexterity in making and using tools, have enabled humans to understand their own biology through scientific experiment, including dissection, autopsy and prophylactic medicine; this has, in turn, enabled humans to extend their life-span by understanding and mitigating the effects of diseases. Understanding human biology has enabled and fostered a wider understanding of mammalian biology and, by extension, the biology of all living organisms.
Nutrition
Human nutrition is typical of mammalian omnivorous nutrition, requiring a balanced input of carbohydrates, fats, proteins, vitamins, and minerals. However, the human diet has a few very specific requirements. These include two specific essential fatty acids, alpha-linolenic acid and linoleic acid, without which life is not sustainable in the medium to long term; all other fatty acids can be synthesized from dietary fats. Similarly, human life requires a range of vitamins to be present in food, and if these are missing or supplied at unacceptably low levels, metabolic disorders result which can end in death. Human metabolism is similar to that of most other mammals, except for the need for an intake of Vitamin C to prevent scurvy and other deficiency diseases. Unusually amongst mammals, a human can synthesize Vitamin D3 using natural UV light from the sun on the skin. This capability may be widespread in the mammalian world, but few other mammals share the almost naked skin of humans. The darker a human's skin, the less Vitamin D3 it can manufacture.
Other organisms
Human biology also encompasses all those organisms that live on or in the human body. Such organisms range from parasitic arthropods such as fleas and ticks, and parasitic helminths such as liver flukes, through to bacterial and viral pathogens. Many of the organisms associated with human biology belong to the specialised microbiome of the large intestine and the biotic flora of the skin and of the pharyngeal and nasal regions. Many of these biotic assemblages help protect humans from harm and assist in digestion, and are now known to have complex effects on mood and well-being.
Social behaviour
Humans in all civilizations are social animals and use their language skills and tool-making skills to communicate. These communication skills enable civilizations to grow and allow for the production of art, literature and music, and for the development of technology. All of these are wholly dependent on human biological specialisms. The deployment of these skills has allowed the human race to dominate the terrestrial biome, to the detriment of most other species.
References
External links
Human Biology Association
Biology Dictionary
Schneider's dynamic model
Edgar W. Schneider's dynamic model of postcolonial Englishes adopts an evolutionary perspective emphasizing language ecologies. It shows how language evolves through a process of 'competition-and-selection', and how certain linguistic features emerge. The Dynamic Model illustrates how histories and ecologies determine language structures in the different varieties of English, and how linguistic and social identities are maintained.
Underlying principles
Five underlying principles underscore the Dynamic Model:
The closer the contact, or the higher the degree of bilingualism or multilingualism in a community, the stronger the effects of contact.
The structural effects of language contact depend on social conditions; history therefore plays an important part.
Contact-induced changes can be achieved by a variety of mechanisms, from code-switching to code alternation to acquisition strategies.
Language evolution, and the emergence of contact-induced varieties, can be regarded as speakers making selections from a pool of linguistic variants made available to them.
Which features are ultimately adopted depends on the complete "ecology" of the contact situation, including factors such as demography, social relationships, and surface similarities between languages.
The Dynamic Model outlines five major stages of the evolution of world Englishes. These stages take into account the perspectives of the two major parties of agents – settlers (STL) and indigenous residents (IDG). Each phase is defined by four parameters:
Extralinguistic factors (e.g. historical events)
Characteristic identity constructions for both parties
Sociolinguistic determinants of the contact setting
Structural effects that emerge
See also
Bilingualism
Identity (social science)
Indigenous languages
Language change
Language contact
World Englishes
References
Protein biosynthesis
Protein biosynthesis (or protein synthesis) is a core biological process, occurring inside cells, balancing the loss of cellular proteins (via degradation or export) through the production of new proteins. Proteins perform a number of critical functions as enzymes, structural proteins or hormones. Protein synthesis is a very similar process for both prokaryotes and eukaryotes but there are some distinct differences. Protein synthesis can be divided broadly into two phases: transcription and translation. During transcription, a section of DNA encoding a protein, known as a gene, is converted into a template molecule called messenger RNA (mRNA). This conversion is carried out by enzymes, known as RNA polymerases, in the nucleus of the cell. In eukaryotes, this mRNA is initially produced in a premature form (pre-mRNA) which undergoes post-transcriptional modifications to produce mature mRNA. The mature mRNA is exported from the cell nucleus via nuclear pores to the cytoplasm of the cell for translation to occur. During translation, the mRNA is read by ribosomes which use the nucleotide sequence of the mRNA to determine the sequence of amino acids. The ribosomes catalyze the formation of covalent peptide bonds between the encoded amino acids to form a polypeptide chain. Following translation the polypeptide chain must fold to form a functional protein; for example, to function as an enzyme the polypeptide chain must fold correctly to produce a functional active site. To adopt a functional three-dimensional shape, the polypeptide chain must first form a series of smaller underlying structures called secondary structures. The polypeptide chain in these secondary structures then folds to produce the overall 3D tertiary structure. Once correctly folded, the protein can undergo further maturation through different post-translational modifications, which can alter the protein's ability to function, its location within the cell (e.g. cytoplasm or nucleus) and its ability to interact with other proteins. Protein biosynthesis has a key role in disease as changes and errors in this process, through underlying DNA mutations or protein misfolding, are often the underlying causes of a disease. DNA mutations change the subsequent mRNA sequence, which then alters the mRNA encoded amino acid sequence. Mutations can cause the polypeptide chain to be shorter by generating a stop sequence which causes early termination of translation. Alternatively, a mutation in the mRNA sequence changes the specific amino acid encoded at that position in the polypeptide chain. This amino acid change can impact the protein's ability to function or to fold correctly. Misfolded proteins have a tendency to form dense protein clumps, which are often implicated in diseases, particularly neurological disorders including Alzheimer's and Parkinson's disease. Transcription Transcription occurs in the nucleus using DNA as a template to produce mRNA. In eukaryotes, this mRNA molecule is known as pre-mRNA as it undergoes post-transcriptional modifications in the nucleus to produce a mature mRNA molecule. However, in prokaryotes post-transcriptional modifications are not required so the mature mRNA molecule is immediately produced by transcription. Initially, an enzyme known as a helicase acts on the molecule of DNA. DNA has an antiparallel, double helix structure composed of two, complementary polynucleotide strands, held together by hydrogen bonds between the base pairs. 
The helicase disrupts the hydrogen bonds, causing a region of DNA, corresponding to a gene, to unwind, separating the two DNA strands and exposing a series of bases. Despite DNA being a double-stranded molecule, only one of the strands acts as a template for pre-mRNA synthesis; this strand is known as the template strand. The other DNA strand (which is complementary to the template strand) is known as the coding strand. Both DNA and RNA have intrinsic directionality, meaning there are two distinct ends of the molecule. This property of directionality is due to the asymmetrical underlying nucleotide subunits, with a phosphate group on one side of the pentose sugar and a base on the other. The five carbons in the pentose sugar are numbered from 1' (where ' means prime) to 5'. Therefore, the phosphodiester bonds connecting the nucleotides are formed by joining the hydroxyl group on the 3' carbon of one nucleotide to the phosphate group on the 5' carbon of another nucleotide. Hence, the coding strand of DNA runs in a 5' to 3' direction and the complementary template DNA strand runs in the opposite direction, from 3' to 5'. The enzyme RNA polymerase binds to the exposed template strand and reads from the gene in the 3' to 5' direction. Simultaneously, the RNA polymerase synthesizes a single strand of pre-mRNA in the 5' to 3' direction by catalysing the formation of phosphodiester bonds between activated nucleotides (free in the nucleus) that are capable of complementary base pairing with the template strand. Behind the moving RNA polymerase, the two strands of DNA rejoin, so only 12 base pairs of DNA are exposed at one time. RNA polymerase builds the pre-mRNA molecule at a rate of 20 nucleotides per second, enabling the production of thousands of pre-mRNA molecules from the same gene in an hour. Despite the fast rate of synthesis, the RNA polymerase enzyme contains its own proofreading mechanism, which allows it to remove incorrect nucleotides (those not complementary to the template strand of DNA) from the growing pre-mRNA molecule through an excision reaction. When RNA polymerase reaches a specific DNA sequence which terminates transcription, it detaches and pre-mRNA synthesis is complete. The pre-mRNA molecule synthesized is complementary to the template DNA strand and shares the same nucleotide sequence as the coding DNA strand. However, there is one crucial difference in the nucleotide composition of DNA and mRNA molecules: DNA is composed of the bases guanine, cytosine, adenine and thymine (G, C, A and T), whereas RNA is composed of guanine, cytosine, adenine and uracil. In RNA molecules, the DNA base thymine is replaced by uracil, which is able to base pair with adenine. Therefore, in the pre-mRNA molecule, all bases that would be thymine in the coding DNA strand are replaced by uracil.
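The pairing rules just described (the template read 3' to 5', the transcript assembled 5' to 3', and thymine replaced by uracil) can be sketched in a few lines of Python. This is a toy illustration with an invented sequence; it ignores promoters, the helicase and polymerase machinery, and proofreading.

```python
# Toy sketch of transcription: build pre-mRNA from a template DNA strand.
# The template string is written 3'->5', so pairing it base by base yields
# the transcript written 5'->3'. Pairing: A->U, T->A, C->G, G->C.

RNA_PAIR = {"A": "U", "T": "A", "C": "G", "G": "C"}
DNA_PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def transcribe(template_3_to_5: str) -> str:
    """Return the pre-mRNA (5'->3') for a template strand given 3'->5'."""
    return "".join(RNA_PAIR[base] for base in template_3_to_5)

template = "TACGGCTAA"                     # invented gene fragment, 3'->5'
mrna = transcribe(template)                # "AUGCCGAUU"

# The transcript matches the coding strand except that T is replaced by U.
coding_5_to_3 = "".join(DNA_PAIR[base] for base in template)
assert mrna == coding_5_to_3.replace("T", "U")
print(mrna)
```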
Post-transcriptional modifications
Once transcription is complete, the pre-mRNA molecule undergoes post-transcriptional modifications to produce a mature mRNA molecule. There are three key steps:
Addition of a 5' cap to the 5' end of the pre-mRNA molecule
Addition of a 3' poly(A) tail to the 3' end of the pre-mRNA molecule
Removal of introns via RNA splicing
The 5' cap is added to the 5' end of the pre-mRNA molecule and is composed of a guanine nucleotide modified through methylation. The purpose of the 5' cap is to prevent breakdown of mature mRNA molecules before translation; the cap also aids binding of the ribosome to the mRNA to start translation and enables mRNA to be differentiated from other RNAs in the cell. The 3' poly(A) tail, in turn, is added to the 3' end of the mRNA molecule and is composed of 100-200 adenine bases. These distinct mRNA modifications enable the cell to detect that the full mRNA message is intact if both the 5' cap and the 3' tail are present. This modified pre-mRNA molecule then undergoes the process of RNA splicing. Genes are composed of a series of introns and exons: introns are nucleotide sequences that do not encode a protein, while exons are nucleotide sequences that directly encode a protein. Introns and exons are present in both the underlying DNA sequence and the pre-mRNA molecule; therefore, to produce a mature mRNA molecule encoding a protein, splicing must occur. During splicing, the intervening introns are removed from the pre-mRNA molecule by a multi-protein complex known as a spliceosome (composed of over 150 proteins and RNA). This mature mRNA molecule is then exported into the cytoplasm through nuclear pores in the envelope of the nucleus.
Translation
During translation, ribosomes synthesize polypeptide chains from mRNA template molecules. In eukaryotes, translation occurs in the cytoplasm of the cell, where the ribosomes are either free floating or attached to the endoplasmic reticulum. In prokaryotes, which lack a nucleus, both transcription and translation occur in the cytoplasm. Ribosomes are complex molecular machines, made of a mixture of protein and ribosomal RNA, arranged into two subunits (a large and a small subunit), which surround the mRNA molecule. The ribosome reads the mRNA molecule in a 5' to 3' direction and uses it as a template to determine the order of amino acids in the polypeptide chain. To translate the mRNA molecule, the ribosome uses small molecules, known as transfer RNAs (tRNAs), to deliver the correct amino acids to the ribosome. Each tRNA is composed of 70-80 nucleotides and adopts a characteristic cloverleaf structure due to the formation of hydrogen bonds between the nucleotides within the molecule. There are around 60 different types of tRNA; each binds to a specific sequence of three nucleotides (known as a codon) within the mRNA molecule and delivers a specific amino acid. The ribosome initially attaches to the mRNA at the start codon (AUG) and begins to translate the molecule. The mRNA nucleotide sequence is read in triplets: three adjacent nucleotides in the mRNA molecule correspond to a single codon. Each tRNA has an exposed sequence of three nucleotides, known as the anticodon, which is complementary in sequence to a specific codon that may be present in the mRNA. For example, the first codon encountered is the start codon, composed of the nucleotides AUG. The correct tRNA, with the anticodon UAC (the complementary three-nucleotide sequence), binds to the mRNA within the ribosome. This tRNA delivers the amino acid corresponding to the mRNA codon; in the case of the start codon, this is the amino acid methionine. The next codon (adjacent to the start codon) is then bound by the correct tRNA with the complementary anticodon, delivering the next amino acid to the ribosome. The ribosome then uses its peptidyl transferase enzymatic activity to catalyze the formation of the covalent peptide bond between the two adjacent amino acids.
The ribosome then moves along the mRNA molecule to the third codon and releases the first tRNA molecule, as only two tRNA molecules can be brought together by a single ribosome at one time. The next tRNA, with the anticodon complementary to the third codon, is selected, delivering the next amino acid to the ribosome, where it is covalently joined to the growing polypeptide chain. This process continues, with the ribosome moving along the mRNA molecule and adding up to 15 amino acids per second to the polypeptide chain. Behind the first ribosome, up to 50 additional ribosomes can bind to the mRNA molecule, forming a polysome; this enables the simultaneous synthesis of multiple identical polypeptide chains. Termination of the growing polypeptide chain occurs when the ribosome encounters a stop codon (UAA, UAG, or UGA) in the mRNA molecule. When this occurs, no tRNA can recognise it, and a release factor induces the release of the complete polypeptide chain from the ribosome. Dr. Har Gobind Khorana, a scientist originating from India, decoded the RNA sequences for about 20 amino acids; he was awarded the Nobel Prize in 1968, along with two other scientists, for this work.
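The codon-by-codon reading just described can be mimicked with a short function. The codon table below is deliberately tiny, just enough entries for the invented example, rather than the full 64-codon genetic code, and the sketch ignores tRNA selection, ribosome structure and everything downstream of peptide-bond formation.

```python
# Toy sketch of translation: read an mRNA in triplets from the start codon (AUG)
# and stop at a stop codon (UAA, UAG or UGA). Only a small subset of the
# 64-codon genetic code is included, enough for the invented example below.

CODON_TABLE = {
    "AUG": "Met",  # start codon, methionine
    "CCG": "Pro",
    "GAG": "Glu",
    "GUG": "Val",
    "AAA": "Lys",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna: str) -> list[str]:
    """Return the amino acids encoded from the first AUG up to a stop codon."""
    start = mrna.find("AUG")
    if start == -1:
        return []
    peptide = []
    for i in range(start, len(mrna) - 2, 3):          # read in triplets
        codon = mrna[i:i + 3]
        residue = CODON_TABLE.get(codon, "???")       # unknown codons flagged
        if residue == "STOP":
            break                                     # a release factor would act here
        peptide.append(residue)
    return peptide

# Invented mRNA: AUG (Met) - GAG (Glu) - CCG (Pro) - UAA (stop)
print(translate("GCAUGGAGCCGUAAAA"))   # ['Met', 'Glu', 'Pro']
```

The GAG and GUG entries are included because the sickle cell example discussed below hinges on exactly that pair: a single-base change in one codon swaps glutamic acid for valine.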
Protein folding
Once synthesis of the polypeptide chain is complete, the chain folds to adopt a specific structure which enables the protein to carry out its functions. The basic form of protein structure is known as the primary structure, which is simply the polypeptide chain, i.e. a sequence of covalently bonded amino acids. The primary structure of a protein is encoded by a gene; therefore, any changes to the sequence of the gene can alter the primary structure of the protein and all subsequent levels of protein structure, ultimately changing the overall structure and function. The primary structure of a protein (the polypeptide chain) can then fold or coil to form the secondary structure of the protein. The most common types of secondary structure are the alpha helix and the beta sheet; these are small structures produced by hydrogen bonds forming within the polypeptide chain. This secondary structure then folds to produce the tertiary structure of the protein. The tertiary structure is the protein's overall 3D structure, which is made of different secondary structures folding together. In the tertiary structure, key protein features, e.g. the active site, are folded and formed, enabling the protein to function. Finally, some proteins may adopt a complex quaternary structure. Most proteins are made of a single polypeptide chain; however, some proteins are composed of multiple polypeptide chains (known as subunits) which fold and interact to form the quaternary structure. Hence, the overall protein is a multi-subunit complex composed of multiple folded polypeptide-chain subunits, e.g. haemoglobin.
Post-translation events
There are events that follow protein biosynthesis, such as proteolysis and protein folding. Proteolysis refers to the cleavage of proteins by proteases and the breakdown of proteins into amino acids by the action of enzymes.
Post-translational modifications
When protein folding into the mature, functional 3D state is complete, it is not necessarily the end of the protein maturation pathway. A folded protein can still undergo further processing through post-translational modifications. There are over 200 known types of post-translational modification; these modifications can alter protein activity, the ability of the protein to interact with other proteins, and where the protein is found within the cell, e.g. in the cell nucleus or cytoplasm. Through post-translational modifications, the diversity of proteins encoded by the genome is expanded by 2 to 3 orders of magnitude. There are four key classes of post-translational modification:
Cleavage
Addition of chemical groups
Addition of complex molecules
Formation of intramolecular bonds
Cleavage
Cleavage of proteins is an irreversible post-translational modification carried out by enzymes known as proteases. These proteases are often highly specific and cause hydrolysis of a limited number of peptide bonds within the target protein. The resulting shortened protein has an altered polypeptide chain with different amino acids at the start and end of the chain. This post-translational modification often alters the protein's function; the protein can be inactivated or activated by the cleavage and can display new biological activities.
Addition of chemical groups
Following translation, small chemical groups can be added onto amino acids within the mature protein structure. Examples of processes which add chemical groups to the target protein include methylation, acetylation and phosphorylation. Methylation is the reversible addition of a methyl group onto an amino acid, catalyzed by methyltransferase enzymes. Methylation occurs on at least 9 of the 20 common amino acids; however, it mainly occurs on the amino acids lysine and arginine. One example of a protein which is commonly methylated is a histone. Histones are proteins found in the nucleus of the cell. DNA is tightly wrapped around histones and held in place by other proteins and by interactions between negative charges in the DNA and positive charges on the histone. A highly specific pattern of amino acid methylation on the histone proteins is used to determine which regions of DNA are tightly wound and unable to be transcribed and which regions are loosely wound and able to be transcribed. Histone-based regulation of DNA transcription is also modified by acetylation. Acetylation is the reversible covalent addition of an acetyl group onto a lysine amino acid by the enzyme acetyltransferase. The acetyl group is removed from a donor molecule known as acetyl coenzyme A and transferred onto the target protein. Histones undergo acetylation on their lysine residues by enzymes known as histone acetyltransferases. The effect of acetylation is to weaken the charge interactions between the histone and the DNA, thereby making more genes in the DNA accessible for transcription. The final prevalent chemical-group modification is phosphorylation: the reversible, covalent addition of a phosphate group to specific amino acids (serine, threonine and tyrosine) within the protein. The phosphate group is removed from the donor molecule ATP by a protein kinase and transferred onto the hydroxyl group of the target amino acid; this produces adenosine diphosphate as a byproduct. The process can be reversed, and the phosphate group removed, by the enzyme protein phosphatase. Phosphorylation can create a binding site on the phosphorylated protein which enables it to interact with other proteins and generate large, multi-protein complexes. Alternatively, phosphorylation can change the level of protein activity by altering the ability of the protein to bind its substrate.
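As a rough illustration of the chemical-group modifications above, the sketch below scans a peptide sequence for residues of the kinds just mentioned (serine, threonine and tyrosine for phosphorylation; lysine and arginine for methylation; lysine for acetylation). It only lists candidate residues; whether a site is actually modified in a real protein depends on sequence context, structure and the enzymes present, none of which the sketch models. The example peptide is an invented fragment loosely resembling a histone N-terminal tail.

```python
# Toy scan for residues that *could* carry the modifications described above.
# One-letter amino acid codes: S = serine, T = threonine, Y = tyrosine,
# K = lysine, R = arginine. Real modification sites depend on context and
# the enzymes present; this only lists candidate residues.

CANDIDATE_SITES = {
    "phosphorylation": set("STY"),   # added by kinases, removed by phosphatases
    "methylation": set("KR"),        # mainly lysine and arginine
    "acetylation": set("K"),         # lysine, e.g. on histone tails
}

def candidate_sites(peptide: str) -> dict[str, list[int]]:
    """Map each modification type to the 1-based positions of candidate residues."""
    hits = {mod: [] for mod in CANDIDATE_SITES}
    for pos, residue in enumerate(peptide, start=1):
        for mod, residues in CANDIDATE_SITES.items():
            if residue in residues:
                hits[mod].append(pos)
    return hits

print(candidate_sites("ARTKQTARKSTGGKA"))   # invented histone-tail-like fragment
```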
Addition of complex molecules
Post-translational modifications can incorporate more complex, large molecules into the folded protein structure. One common example of this is glycosylation, the addition of a polysaccharide molecule, which is widely considered to be the most common post-translational modification. In glycosylation, a polysaccharide molecule (known as a glycan) is covalently added to the target protein by glycosyltransferase enzymes and modified by glycosidases in the endoplasmic reticulum and Golgi apparatus. Glycosylation can have a critical role in determining the final, folded 3D structure of the target protein; in some cases glycosylation is necessary for correct folding. N-linked glycosylation promotes protein folding by increasing solubility and mediates the binding of the protein to protein chaperones. Chaperones are proteins responsible for folding and maintaining the structure of other proteins. There are broadly two types of glycosylation: N-linked glycosylation and O-linked glycosylation. N-linked glycosylation starts in the endoplasmic reticulum with the addition of a precursor glycan. The precursor glycan is modified in the Golgi apparatus to produce a complex glycan bound covalently to the nitrogen in an asparagine amino acid. In contrast, O-linked glycosylation is the sequential covalent addition of individual sugars onto the oxygen in the amino acids serine and threonine within the mature protein structure.
Formation of covalent bonds
Many proteins produced within the cell are secreted outside the cell to function as extracellular proteins. Extracellular proteins are exposed to a wide variety of conditions. To stabilize the 3D protein structure, covalent bonds are formed either within the protein or between the different polypeptide chains in the quaternary structure. The most prevalent type is the disulfide bond (also known as a disulfide bridge). A disulfide bond is formed between two cysteine amino acids using the chemical groups in their side chains that contain a sulfur atom; these groups are known as thiol functional groups. Disulfide bonds act to stabilize the pre-existing structure of the protein. Disulfide bonds are formed in an oxidation reaction between two thiol groups and therefore need an oxidizing environment to react. As a result, disulfide bonds are typically formed in the oxidizing environment of the endoplasmic reticulum, catalyzed by enzymes called protein disulfide isomerases. Disulfide bonds are rarely formed in the cytoplasm, as it is a reducing environment.
Role of protein synthesis in disease
Many diseases are caused by mutations in genes, due to the direct connection between the DNA nucleotide sequence and the amino acid sequence of the encoded protein. Changes to the primary structure of the protein can result in the protein misfolding or malfunctioning. Diseases caused by mutations within a single gene, such as sickle cell disease, are known as single-gene disorders.
Sickle cell disease
Sickle cell disease is a group of diseases caused by a mutation in a subunit of hemoglobin, a protein found in red blood cells that is responsible for transporting oxygen. The most dangerous of the sickle cell diseases is known as sickle cell anemia. Sickle cell anemia is the most common homozygous recessive single-gene disorder, meaning the affected individual must carry a mutation in both copies of the affected gene (one inherited from each parent) to experience the disease.
Hemoglobin has a complex quaternary structure and is composed of four polypeptide subunits: two A subunits and two B subunits. Patients with sickle cell anemia have a missense (substitution) mutation in the gene encoding the hemoglobin B subunit polypeptide chain. A missense mutation means that the nucleotide mutation alters the overall codon triplet such that a different amino acid is paired with the new codon. In the case of sickle cell anemia, the most common missense mutation is a single nucleotide mutation from thymine to adenine in the hemoglobin B subunit gene. This changes codon 6 from encoding the amino acid glutamic acid to encoding valine. This change in the primary structure of the hemoglobin B subunit polypeptide chain alters the functionality of the hemoglobin multi-subunit complex in low-oxygen conditions. When red blood cells unload oxygen into the tissues of the body, the mutated hemoglobin protein starts to stick together, forming a semi-solid structure within the red blood cell. This distorts the shape of the red blood cell, resulting in the characteristic "sickle" shape, and reduces cell flexibility. The rigid, distorted red blood cells can accumulate in blood vessels, creating a blockage. The blockage prevents blood flow to tissues and can lead to tissue death, which causes great pain to the individual.
Cancer
Cancers form as a result of gene mutations as well as improper protein translation. In addition to proliferating abnormally, cancer cells dysregulate the expression of pro-apoptotic and anti-apoptotic genes and proteins. Many cancer cells carry a mutation in the signaling protein Ras, which functions as an on/off signal transducer in cells. In such cells, the mutated Ras protein becomes persistently active, promoting the proliferation of the cell in the absence of normal regulation. Additionally, many cancer cells carry two mutant copies of the regulator gene p53, which acts as a gatekeeper for damaged genes and initiates apoptosis in malignant cells. In its absence, the cell cannot initiate apoptosis or signal for other cells to destroy it. As the tumor cells proliferate, they either remain confined to one area and are called benign, or become malignant cells that migrate to other areas of the body. Oftentimes, these malignant cells secrete proteases that break apart the extracellular matrix of tissues. This allows the cancer to enter its final stage, called metastasis, in which the cells enter the bloodstream or the lymphatic system to travel to a new part of the body.
See also
Central dogma of molecular biology
Genetic code
References
External links
A more advanced video detailing the different types of post-translational modifications and their chemical structures
A useful video visualising the process of converting DNA to protein via transcription and translation
Video visualising the process of protein folding from the non-functional primary structure to a mature, folded 3D protein structure, with reference to the role of mutations and protein mis-folding in disease
History of biology
The history of biology traces the study of the living world from ancient to modern times. Although the concept of biology as a single coherent field arose in the 19th century, the biological sciences emerged from traditions of medicine and natural history reaching back to Ayurveda, ancient Egyptian medicine and the works of Aristotle, Theophrastus and Galen in the ancient Greco-Roman world. This ancient work was further developed in the Middle Ages by Muslim physicians and scholars such as Avicenna. During the European Renaissance and early modern period, biological thought was revolutionized in Europe by a renewed interest in empiricism and the discovery of many novel organisms. Prominent in this movement were Vesalius and Harvey, who used experimentation and careful observation in physiology, and naturalists such as Linnaeus and Buffon who began to classify the diversity of life and the fossil record, as well as the development and behavior of organisms. Antonie van Leeuwenhoek revealed by means of microscopy the previously unknown world of microorganisms, laying the groundwork for cell theory. The growing importance of natural theology, partly a response to the rise of mechanical philosophy, encouraged the growth of natural history (although it entrenched the argument from design). Over the 18th and 19th centuries, biological sciences such as botany and zoology became increasingly professional scientific disciplines. Lavoisier and other physical scientists began to connect the animate and inanimate worlds through physics and chemistry. Explorer-naturalists such as Alexander von Humboldt investigated the interaction between organisms and their environment, and the ways this relationship depends on geography—laying the foundations for biogeography, ecology and ethology. Naturalists began to reject essentialism and consider the importance of extinction and the mutability of species. Cell theory provided a new perspective on the fundamental basis of life. These developments, as well as the results from embryology and paleontology, were synthesized in Charles Darwin's theory of evolution by natural selection. The end of the 19th century saw the fall of spontaneous generation and the rise of the germ theory of disease, though the mechanism of inheritance remained a mystery. In the early 20th century, the rediscovery of Mendel's work in botany by Carl Correns led to the rapid development of genetics applied to fruit flies by Thomas Hunt Morgan and his students, and by the 1930s the combination of population genetics and natural selection in the "neo-Darwinian synthesis". New disciplines developed rapidly, especially after Watson and Crick proposed the structure of DNA. Following the establishment of the Central Dogma and the cracking of the genetic code, biology was largely split between organismal biology—the fields that deal with whole organisms and groups of organisms—and the fields related to cellular and molecular biology. By the late 20th century, new fields like genomics and proteomics were reversing this trend, with organismal biologists using molecular techniques, and molecular and cell biologists investigating the interplay between genes and the environment, as well as the genetics of natural populations of organisms. Prehistoric times The earliest humans must have had and passed on knowledge about plants and animals to increase their chances of survival. This may have included knowledge of human and animal anatomy and aspects of animal behavior (such as migration patterns). 
However, the first major turning point in biological knowledge came with the Neolithic Revolution about 10,000 years ago. Humans first domesticated plants for farming, then livestock animals to accompany the resulting sedentary societies. Earliest roots Between around 3000 and 1200 BCE, the Ancient Egyptians and Mesopotamians made contributions to astronomy, mathematics, and medicine, which later entered and shaped Greek natural philosophy of classical antiquity, a period that profoundly influenced the development of what came to be known as biology. Ancient Egypt Over a dozen medical papyri have been preserved, most notably the Edwin Smith Papyrus (the oldest extant surgical handbook) and the Ebers Papyrus (a handbook of preparing and using materia medica for various diseases), both from around 1600 BCE. Ancient Egypt is also known for developing embalming, which was used for mummification, in order to preserve human remains and forestall decomposition. Mesopotamia The Mesopotamians seem to have had little interest in the natural world as such, preferring to study how the gods had ordered the universe. Animal physiology was studied for divination, including especially the anatomy of the liver, seen as an important organ in haruspicy. Animal behavior too was studied for divinatory purposes. Most information about the training and domestication of animals was probably transmitted orally, but one text dealing with the training of horses has survived. The ancient Mesopotamians had no distinction between "rational science" and magic. When a person became ill, doctors prescribed both magical formulas to be recited and medicinal treatments. The earliest medical prescriptions appear in Sumerian during the Third Dynasty of Ur. The most extensive Babylonian medical text, however, is the Diagnostic Handbook written by the ummânū, or chief scholar, Esagil-kin-apli of Borsippa, during the reign of the Babylonian king Adad-apla-iddina (1069 – 1046 BCE). In East Semitic cultures, the main medicinal authority was an exorcist-healer known as an āšipu. The profession was passed down from father to son and was held in high regard. Of less frequent recourse was the asu, a healer who treated physical symptoms using remedies composed of herbs, animal products, and minerals, as well as potions, enemas, and ointments or poultices. These physicians, who could be either male or female, also dressed wounds, set limbs, and performed simple surgeries. The ancient Mesopotamians also practiced prophylaxis and took measures to prevent the spread of disease. Separate developments in China and India Observations and theories regarding nature and human health, separate from Western traditions, had emerged independently in other civilizations such as those in China and the Indian subcontinent. In ancient China, earlier conceptions can be found dispersed across several different disciplines, including the work of herbologists, physicians, alchemists, and philosophers. The Taoist tradition of Chinese alchemy, for example, emphasized health (with the ultimate goal being the elixir of life). The system of classical Chinese medicine usually revolved around the theory of yin and yang, and the five phases. Taoist philosophers, such as Zhuangzi in the 4th century BCE, also expressed ideas related to evolution, such as denying the fixity of biological species and speculating that species had developed differing attributes in response to differing environments. 
One of the oldest organised systems of medicine is known from ancient India in the form of Ayurveda, which originated around 1500 BCE from Atharvaveda (one of the four most ancient books of Indian knowledge, wisdom and culture). The ancient Indian Ayurveda tradition independently developed the concept of three humours, resembling that of the four humours of ancient Greek medicine, though the Ayurvedic system included further complications, such as the body being composed of five elements and seven basic tissues. Ayurvedic writers also classified living things into four categories based on the method of birth (from the womb, eggs, heat & moisture, and seeds) and explained the conception of a fetus in detail. They also made considerable advances in the field of surgery, often without the use of human dissection or animal vivisection. One of the earliest Ayurvedic treatises was the Sushruta Samhita, attributed to Sushruta in the 6th century BCE. It was also an early materia medica, describing 700 medicinal plants, 64 preparations from mineral sources, and 57 preparations based on animal sources. Classical antiquity The pre-Socratic philosophers asked many questions about life but produced little systematic knowledge of specifically biological interest—though the attempts of the atomists to explain life in purely physical terms would recur periodically through the history of biology. However, the medical theories of Hippocrates and his followers, especially humorism, had a lasting impact. The philosopher Aristotle was the most influential scholar of the living world from classical antiquity. Though his early work in natural philosophy was speculative, Aristotle's later biological writings were more empirical, focusing on biological causation and the diversity of life. He made countless observations of nature, especially the habits and attributes of plants and animals in the world around him, which he devoted considerable attention to categorizing. In all, Aristotle classified 540 animal species, and dissected at least 50. He believed that intellectual purposes, formal causes, guided all natural processes. Aristotle's successor at the Lyceum, Theophrastus, wrote a series of books on botany, the History of Plants, which survived as the most important contribution of antiquity to botany, even into the Middle Ages. Many of Theophrastus' names survive into modern times, such as karpós for fruit, and perikárpion for seed vessel. Dioscorides wrote a pioneering and encyclopedic pharmacopoeia, De materia medica, incorporating descriptions of some 600 plants and their uses in medicine. Pliny the Elder, in his Natural History, assembled a similarly encyclopaedic account of things in nature, including accounts of many plants and animals. Aristotle, and nearly all Western scholars after him until the 18th century, believed that creatures were arranged in a graded scale of perfection rising from plants on up to humans: the scala naturae or Great Chain of Being. A few scholars in the Hellenistic period under the Ptolemies—particularly Herophilus of Chalcedon and Erasistratus of Chios—amended Aristotle's physiological work, even performing dissections and vivisections. Claudius Galen became the most important authority on medicine and anatomy. 
Though a few ancient atomists such as Lucretius challenged the teleological Aristotelian viewpoint that all aspects of life are the result of design or purpose, teleology (and after the rise of Christianity, natural theology) would remain central to biological thought essentially until the 18th and 19th centuries. Ernst W. Mayr argued that "Nothing of any real consequence happened in biology after Lucretius and Galen until the Renaissance." The ideas of the Greek traditions of natural history and medicine survived, but they were generally taken unquestioningly in medieval Europe. Middle Ages The decline of the Roman Empire led to the disappearance or destruction of much knowledge, though physicians still incorporated many aspects of the Greek tradition into training and practice. In Byzantium and the Islamic world, many of the Greek works were translated into Arabic and many of the works of Aristotle were preserved. During the High Middle Ages, a few European scholars such as Hildegard of Bingen, Albertus Magnus and Frederick II wrote on natural history. The rise of European universities, though important for the development of physics and philosophy, had little impact on biological scholarship. Renaissance The European Renaissance brought expanded interest in both empirical natural history and physiology. In 1543, Andreas Vesalius inaugurated the modern era of Western medicine with his seminal human anatomy treatise De humani corporis fabrica, which was based on dissection of corpses. Vesalius was the first in a series of anatomists who gradually replaced scholasticism with empiricism in physiology and medicine, relying on first-hand experience rather than authority and abstract reasoning. Via herbalism, medicine was also indirectly the source of renewed empiricism in the study of plants. Otto Brunfels, Hieronymus Bock and Leonhart Fuchs wrote extensively on wild plants, the beginning of a nature-based approach to the full range of plant life. Bestiaries—a genre that combines both the natural and figurative knowledge of animals—also became more sophisticated, especially with the work of William Turner, Pierre Belon, Guillaume Rondelet, Conrad Gessner, and Ulisse Aldrovandi. Artists such as Albrecht Dürer and Leonardo da Vinci, often working with naturalists, were also interested in the bodies of animals and humans, studying physiology in detail and contributing to the growth of anatomical knowledge. The traditions of alchemy and natural magic, especially in the work of Paracelsus, also laid claim to knowledge of the living world. Alchemists subjected organic matter to chemical analysis and experimented liberally with both biological and mineral pharmacology. This was part of a larger transition in world views (the rise of the mechanical philosophy) that continued into the 17th century, as the traditional metaphor of nature as organism was replaced by the nature as machine metaphor. Age of Enlightenment Systematizing, naming and classifying dominated natural history throughout much of the 17th and 18th centuries. Carl Linnaeus published a basic taxonomy for the natural world in 1735 (variations of which have been in use ever since), and in the 1750s introduced scientific names for all his species. While Linnaeus conceived of species as unchanging parts of a designed hierarchy, the other great naturalist of the 18th century, Georges-Louis Leclerc, Comte de Buffon, treated species as artificial categories and living forms as malleable—even suggesting the possibility of common descent. 
Though he was opposed to evolution, Buffon is a key figure in the history of evolutionary thought; his work would influence the evolutionary theories of both Lamarck and Darwin. The discovery and description of new species and the collection of specimens became a passion of scientific gentlemen and a lucrative enterprise for entrepreneurs; many naturalists traveled the globe in search of scientific knowledge and adventure. Extending the work of Vesalius into experiments on still living bodies (of both humans and animals), William Harvey and other natural philosophers investigated the roles of blood, veins and arteries. Harvey's De motu cordis in 1628 was the beginning of the end for Galenic theory, and alongside Santorio Santorio's studies of metabolism, it served as an influential model of quantitative approaches to physiology. In the early 17th century, the micro-world of biology was just beginning to open up. A few lensmakers and natural philosophers had been creating crude microscopes since the late 16th century, and Robert Hooke published the seminal Micrographia based on observations with his own compound microscope in 1665. But it was not until Antonie van Leeuwenhoek's dramatic improvements in lensmaking beginning in the 1670s—ultimately producing up to 200-fold magnification with a single lens—that scholars discovered spermatozoa, bacteria, infusoria and the sheer strangeness and diversity of microscopic life. Similar investigations by Jan Swammerdam led to a new interest in entomology and built the basic techniques of microscopic dissection and staining. As the microscopic world was expanding, the macroscopic world was shrinking. Botanists such as John Ray worked to incorporate the flood of newly discovered organisms shipped from across the globe into a coherent taxonomy, and a coherent theology (natural theology). Debate over another flood, the Noachian, catalyzed the development of paleontology; in 1669 Nicholas Steno published an essay on how the remains of living organisms could be trapped in layers of sediment and mineralized to produce fossils. Although Steno's ideas about fossilization were well known and much debated among natural philosophers, an organic origin for all fossils would not be accepted by all naturalists until the end of the 18th century due to philosophical and theological debate about issues such as the age of the earth and extinction. 19th century: the emergence of biological disciplines Up through the 19th century, the scope of biology was largely divided between medicine, which investigated questions of form and function (i.e., physiology), and natural history, which was concerned with the diversity of life and interactions among different forms of life and between life and non-life. By 1900, much of these domains overlapped, while natural history (and its counterpart natural philosophy) had largely given way to more specialized scientific disciplines—cytology, bacteriology, morphology, embryology, geography, and geology. Use of the term biology The term biology in its modern sense appears to have been introduced independently by Thomas Beddoes (in 1799), Karl Friedrich Burdach (in 1800), Gottfried Reinhold Treviranus (Biologie oder Philosophie der lebenden Natur, 1802) and Jean-Baptiste Lamarck (Hydrogéologie, 1802). The word itself appears in the title of Volume 3 of Michael Christoph Hanow's Philosophiae naturalis sive physicae dogmaticae: Geologia, biologia, phytologia generalis et dendrologia, published in 1766. 
The term biology derives from the Greek βίος (bíos) 'life', and λογία (logia) 'branch of study'. Before biology, there were several terms used for the study of animals and plants. Natural history referred to the descriptive aspects of biology, though it also included mineralogy and other non-biological fields; from the Middle Ages through the Renaissance, the unifying framework of natural history was the scala naturae or Great Chain of Being. Natural philosophy and natural theology encompassed the conceptual and metaphysical basis of plant and animal life, dealing with problems of why organisms exist and behave the way they do, though these subjects also included what is now geology, physics, chemistry, and astronomy. Physiology and (botanical) pharmacology were the province of medicine. Botany, Zoology, and (in the case of fossils) Geology replaced natural history and natural philosophy in the 18th and 19th centuries before biology was widely adopted. To this day, "botany" and "zoology" are widely used, although they have been joined by other sub-disciplines of biology. Natural history and natural philosophy Widespread travel by naturalists in the early-to-mid-19th century resulted in a wealth of new information about the diversity and distribution of living organisms. Of particular importance was the work of Alexander von Humboldt, which analyzed the relationship between organisms and their environment (i.e., the domain of natural history) using the quantitative approaches of natural philosophy (i.e., physics and chemistry). Humboldt's work laid the foundations of biogeography and inspired several generations of scientists. Geology and paleontology The emerging discipline of geology also brought natural history and natural philosophy closer together; the establishment of the stratigraphic column linked the spatial distribution of organisms to their temporal distribution, a key precursor to concepts of evolution. Georges Cuvier and others made great strides in comparative anatomy and paleontology in the late 1790s and early 19th century. In a series of lectures and papers that made detailed comparisons between living mammals and fossil remains Cuvier was able to establish that the fossils were remains of species that had become extinct—rather than being remains of species still alive elsewhere in the world, as had been widely believed. Fossils discovered and described by Gideon Mantell, William Buckland, Mary Anning, and Richard Owen among others helped establish that there had been an 'age of reptiles' that had preceded even the prehistoric mammals. These discoveries captured the public imagination and focused attention on the history of life on earth. Most of these geologists held to catastrophism, but Charles Lyell's influential Principles of Geology (1830) popularised Hutton's uniformitarianism, a theory that explained the geological past and present on equal terms. Evolution and biogeography The most significant evolutionary theory before Darwin's was that of Jean-Baptiste Lamarck; based on the inheritance of acquired characteristics (an inheritance mechanism that was widely accepted until the 20th century), it described a chain of development stretching from the lowliest microbe to humans. 
The British naturalist Charles Darwin, combining the biogeographical approach of Humboldt, the uniformitarian geology of Lyell, Thomas Malthus's writings on population growth, and his own morphological expertise, created a more successful evolutionary theory based on natural selection; similar evidence led Alfred Russel Wallace to independently reach the same conclusions. The 1859 publication of Darwin's theory in On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life is often considered the central event in the history of modern biology. Darwin's established credibility as a naturalist, the sober tone of the work, and most of all the sheer strength and volume of evidence presented, allowed Origin to succeed where previous evolutionary works such as the anonymous Vestiges of Creation had failed. Most scientists were convinced of evolution and common descent by the end of the 19th century. However, natural selection would not be accepted as the primary mechanism of evolution until well into the 20th century, as most contemporary theories of heredity seemed incompatible with the inheritance of random variation. Wallace, following on earlier work by de Candolle, Humboldt and Darwin, made major contributions to zoogeography. Because of his interest in the transmutation hypothesis, he paid particular attention to the geographical distribution of closely allied species during his field work first in South America and then in the Malay Archipelago. While in the archipelago he identified the Wallace line, which runs through the Spice Islands dividing the fauna of the archipelago between an Asian zone and a New Guinea/Australian zone. His key question, as to why the fauna of islands with such similar climates should be so different, could only be answered by considering their origin. In 1876 he wrote The Geographical Distribution of Animals, which was the standard reference work for over half a century, and a sequel, Island Life, in 1880 that focused on island biogeography. He extended the six-zone system developed by Philip Sclater for describing the geographical distribution of birds to animals of all kinds. His method of tabulating data on animal groups in geographic zones highlighted the discontinuities; and his appreciation of evolution allowed him to propose rational explanations, which had not been done before. The scientific study of heredity grew rapidly in the wake of Darwin's Origin of Species with the work of Francis Galton and the biometricians. The origin of genetics is usually traced to the 1866 work of the monk Gregor Mendel, who would later be credited with the laws of inheritance. However, his work was not recognized as significant until 35 years afterward. In the meantime, a variety of theories of inheritance (based on pangenesis, orthogenesis, or other mechanisms) were debated and investigated vigorously. Embryology and ecology also became central biological fields, especially as linked to evolution and popularized in the work of Ernst Haeckel. Most of the 19th century work on heredity, however, was not in the realm of natural history, but that of experimental physiology. Physiology Over the course of the 19th century, the scope of physiology expanded greatly, from a primarily medically oriented field to a wide-ranging investigation of the physical and chemical processes of life—including plants, animals, and even microorganisms in addition to man. 
Living things as machines became a dominant metaphor in biological (and social) thinking. Cell theory, embryology and germ theory Advances in microscopy also had a profound impact on biological thinking. In the early 19th century, a number of biologists pointed to the central importance of the cell. In 1838 and 1839, Schleiden and Schwann began promoting the ideas that (1) the basic unit of organisms is the cell and (2) that individual cells have all the characteristics of life, though they opposed the idea that (3) all cells come from the division of other cells. Thanks to the work of Robert Remak and Rudolf Virchow, however, by the 1860s most biologists accepted all three tenets of what came to be known as cell theory. Cell theory led biologists to re-envision individual organisms as interdependent assemblages of individual cells. Scientists in the rising field of cytology, armed with increasingly powerful microscopes and new staining methods, soon found that even single cells were far more complex than the homogeneous fluid-filled chambers described by earlier microscopists. Robert Brown had described the nucleus in 1831, and by the end of the 19th century cytologists identified many of the key cell components: chromosomes, centrosomes mitochondria, chloroplasts, and other structures made visible through staining. Between 1874 and 1884 Walther Flemming described the discrete stages of mitosis, showing that they were not artifacts of staining but occurred in living cells, and moreover, that chromosomes doubled in number just before the cell divided and a daughter cell was produced. Much of the research on cell reproduction came together in August Weismann's theory of heredity: he identified the nucleus (in particular chromosomes) as the hereditary material, proposed the distinction between somatic cells and germ cells (arguing that chromosome number must be halved for germ cells, a precursor to the concept of meiosis), and adopted Hugo de Vries's theory of pangenes. Weismannism was extremely influential, especially in the new field of experimental embryology. By the mid-1850s the miasma theory of disease was largely superseded by the germ theory of disease, creating extensive interest in microorganisms and their interactions with other forms of life. By the 1880s, bacteriology was becoming a coherent discipline, especially through the work of Robert Koch, who introduced methods for growing pure cultures on agar gels containing specific nutrients in Petri dishes. The long-held idea that living organisms could easily originate from nonliving matter (spontaneous generation) was attacked in a series of experiments carried out by Louis Pasteur, while debates over vitalism vs. mechanism (a perennial issue since the time of Aristotle and the Greek atomists) continued apace. Rise of organic chemistry and experimental physiology In chemistry, one central issue was the distinction between organic and inorganic substances, especially in the context of organic transformations such as fermentation and putrefaction. Since Aristotle these had been considered essentially biological (vital) processes. However, Friedrich Wöhler, Justus Liebig and other pioneers of the rising field of organic chemistry—building on the work of Lavoisier—showed that the organic world could often be analyzed by physical and chemical methods. In 1828 Wöhler showed that the organic substance urea could be created by chemical means that do not involve life, providing a powerful challenge to vitalism. 
Cell extracts ("ferments") that could effect chemical transformations were discovered, beginning with diastase in 1833. By the end of the 19th century the concept of enzymes was well established, though equations of chemical kinetics would not be applied to enzymatic reactions until the early 20th century. Physiologists such as Claude Bernard explored (through vivisection and other experimental methods) the chemical and physical functions of living bodies to an unprecedented degree, laying the groundwork for endocrinology (a field that developed quickly after the discovery of the first hormone, secretin, in 1902), biomechanics, and the study of nutrition and digestion. The importance and diversity of experimental physiology methods, within both medicine and biology, grew dramatically over the second half of the 19th century. The control and manipulation of life processes became a central concern, and experiment was placed at the center of biological education. Twentieth century biological sciences At the beginning of the 20th century, biological research was largely a professional endeavour. Most work was still done in the natural history mode, which emphasized morphological and phylogenetic analysis over experiment-based causal explanations. However, anti-vitalist experimental physiologists and embryologists, especially in Europe, were increasingly influential. The tremendous success of experimental approaches to development, heredity, and metabolism in the 1900s and 1910s demonstrated the power of experimentation in biology. In the following decades, experimental work replaced natural history as the dominant mode of research. Ecology and environmental science In the early 20th century, naturalists were faced with increasing pressure to add rigor and preferably experimentation to their methods, as the newly prominent laboratory-based biological disciplines had done. Ecology had emerged as a combination of biogeography with the biogeochemical cycle concept pioneered by chemists; field biologists developed quantitative methods such as the quadrat and adapted laboratory instruments and cameras for the field to further set their work apart from traditional natural history. Zoologists and botanists did what they could to mitigate the unpredictability of the living world, performing laboratory experiments and studying semi-controlled natural environments such as gardens; new institutions like the Carnegie Station for Experimental Evolution and the Marine Biological Laboratory provided more controlled environments for studying organisms through their entire life cycles. The ecological succession concept, pioneered in the 1900s and 1910s by Henry Chandler Cowles and Frederic Clements, was important in early plant ecology. Alfred Lotka's predator-prey equations, G. Evelyn Hutchinson's studies of the biogeography and biogeochemical structure of lakes and rivers (limnology) and Charles Elton's studies of animal food chains were pioneers among the succession of quantitative methods that colonized the developing ecological specialties. Ecology became an independent discipline in the 1940s and 1950s after Eugene P. Odum synthesized many of the concepts of ecosystem ecology, placing relationships between groups of organisms (especially material and energy relationships) at the center of the field. In the 1960s, as evolutionary theorists explored the possibility of multiple units of selection, ecologists turned to evolutionary approaches. 
In population ecology, debate over group selection was brief but vigorous; by 1970, most biologists agreed that natural selection was rarely effective above the level of individual organisms. The evolution of ecosystems, however, became a lasting research focus. Ecology expanded rapidly with the rise of the environmental movement; the International Biological Program attempted to apply the methods of big science (which had been so successful in the physical sciences) to ecosystem ecology and pressing environmental issues, while smaller-scale independent efforts such as island biogeography and the Hubbard Brook Experimental Forest helped redefine the scope of an increasingly diverse discipline. Classical genetics, the modern synthesis, and evolutionary theory 1900 marked the so-called rediscovery of Mendel: Hugo de Vries, Carl Correns and Erich von Tschermak independently arrived at Mendel's laws (which were not actually present in Mendel's work). Soon after, cytologists (cell biologists) proposed that chromosomes were the hereditary material. Between 1910 and 1915, Thomas Hunt Morgan and the "Drosophilists" in his fly lab took up these ideas and forged them into the "Mendelian-chromosome theory" of heredity. They hypothesized crossing over to explain linkage and constructed genetic maps of the fruit fly Drosophila melanogaster, which became a widely used model organism. Hugo de Vries tried to link the new genetics with evolution; building on his work with heredity and hybridization, he proposed a theory of mutationism, which was widely accepted in the early 20th century. Lamarckism, or the theory of inheritance of acquired characteristics, also had many adherents. Darwinism was seen as incompatible with the continuously variable traits studied by biometricians, which seemed only partially heritable. In the 1920s and 1930s—following the acceptance of the Mendelian-chromosome theory—the discipline of population genetics emerged through the work of R.A. Fisher, J.B.S. Haldane and Sewall Wright, unifying the idea of evolution by natural selection with Mendelian genetics and producing the modern synthesis. The inheritance of acquired characters was rejected, while mutationism gave way as genetic theories matured. In the second half of the century the ideas of population genetics began to be applied in the new discipline of the genetics of behavior, sociobiology, and, especially in humans, evolutionary psychology. In the 1960s W.D. Hamilton and others developed game theory approaches to explain altruism from an evolutionary perspective through kin selection. The possible origin of higher organisms through endosymbiosis, and contrasting approaches to molecular evolution in the gene-centered view (which held selection as the predominant cause of evolution) and the neutral theory (which made genetic drift a key factor), spawned perennial debates over the proper balance of adaptationism and contingency in evolutionary theory. In the 1970s Stephen Jay Gould and Niles Eldredge proposed the theory of punctuated equilibrium, which holds that stasis is the most prominent feature of the fossil record, and that most evolutionary changes occur rapidly over relatively short periods of time. In 1980 Luis Alvarez and Walter Alvarez proposed the hypothesis that an impact event was responsible for the Cretaceous–Paleogene extinction event. Also in the early 1980s, statistical analysis of the fossil record of marine organisms published by Jack Sepkoski and David M.
Raup led to a better appreciation of the importance of mass extinction events to the history of life on earth. Biochemistry, microbiology, and molecular biology By the end of the 19th century all of the major pathways of drug metabolism had been discovered, along with the outlines of protein and fatty acid metabolism and urea synthesis. In the early decades of the 20th century, the minor components of foods in human nutrition, the vitamins, began to be isolated and synthesized. Improved laboratory techniques such as chromatography and electrophoresis led to rapid advances in physiological chemistry, which—as biochemistry—began to achieve independence from its medical origins. In the 1920s and 1930s, biochemists—led by Hans Krebs and Carl and Gerty Cori—began to work out many of the central metabolic pathways of life: the citric acid cycle, glycogenesis and glycolysis, and the synthesis of steroids and porphyrins. Between the 1930s and 1950s, Fritz Lipmann and others established the role of ATP as the universal carrier of energy in the cell, and mitochondria as the powerhouse of the cell. Such traditionally biochemical work continued to be very actively pursued throughout the 20th century and into the 21st. Origins of molecular biology Following the rise of classical genetics, many biologists—including a new wave of physical scientists in biology—pursued the question of the gene and its physical nature. Warren Weaver—head of the science division of the Rockefeller Foundation—issued grants to promote research that applied the methods of physics and chemistry to basic biological problems, coining the term molecular biology for this approach in 1938; many of the significant biological breakthroughs of the 1930s and 1940s were funded by the Rockefeller Foundation. Like biochemistry, the overlapping disciplines of bacteriology and virology (later combined as microbiology), situated between science and medicine, developed rapidly in the early 20th century. Félix d'Herelle's isolation of bacteriophage during World War I initiated a long line of research focused on phage viruses and the bacteria they infect. The development of standard, genetically uniform organisms that could produce repeatable experimental results was essential for the development of molecular genetics. After early work with Drosophila and maize, the adoption of simpler model systems like the bread mold Neurospora crassa made it possible to connect genetics to biochemistry, most importantly with Beadle and Tatum's one gene-one enzyme hypothesis in 1941. Genetics experiments on even simpler systems like tobacco mosaic virus and bacteriophage, aided by the new technologies of electron microscopy and ultracentrifugation, forced scientists to re-evaluate the literal meaning of life; virus heredity and reproducing nucleoprotein cell structures outside the nucleus ("plasmagenes") complicated the accepted Mendelian-chromosome theory. Oswald Avery showed in 1943 that DNA was likely the genetic material of the chromosome, not its protein; the issue was settled decisively with the 1952 Hershey–Chase experiment—one of many contributions from the so-called phage group centered around physicist-turned-biologist Max Delbrück. In 1953 James Watson and Francis Crick, building on the work of Maurice Wilkins and Rosalind Franklin, suggested that the structure of DNA was a double helix. 
In their famous paper "Molecular structure of Nucleic Acids", Watson and Crick noted coyly, "It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material." After the 1958 Meselson–Stahl experiment confirmed the semiconservative replication of DNA, it was clear to most biologists that nucleic acid sequence must somehow determine amino acid sequence in proteins; physicist George Gamow proposed that a fixed genetic code connected proteins and DNA. Between 1953 and 1961, there were few known biological sequences—either DNA or protein—but an abundance of proposed code systems, a situation made even more complicated by expanding knowledge of the intermediate role of RNA. In 1961, it was demonstrated that when a gene encodes a protein, three sequential bases of a gene’s DNA specify each successive amino acid of the protein. Thus the genetic code is a triplet code, where each triplet (called a codon) specifies a particular amino acid. Furthermore, it was shown that the codons do not overlap with each other in the DNA sequence encoding a protein, and that each sequence is read from a fixed starting point. To actually decipher the code, it took an extensive series of experiments in biochemistry and bacterial genetics, between 1961 and 1966—most importantly the work of Nirenberg and Khorana. During 1962-1964, numerous conditional lethal mutants of a bacterial virus were isolated. These mutants were used in several different labs to advance fundamental understanding of the functions and interactions of the proteins employed in the machinery of DNA replication, DNA repair, DNA recombination, and in the assembly of molecular structures. Expansion of molecular biology In addition to the Division of Biology at Caltech, the Laboratory of Molecular Biology (and its precursors) at Cambridge, and a handful of other institutions, the Pasteur Institute became a major center for molecular biology research in the late 1950s. Scientists at Cambridge, led by Max Perutz and John Kendrew, focused on the rapidly developing field of structural biology, combining X-ray crystallography with Molecular modelling and the new computational possibilities of digital computing (benefiting both directly and indirectly from the military funding of science). A number of biochemists led by Frederick Sanger later joined the Cambridge lab, bringing together the study of macromolecular structure and function. At the Pasteur Institute, François Jacob and Jacques Monod followed the 1959 PaJaMo experiment with a series of publications regarding the lac operon that established the concept of gene regulation and identified what came to be known as messenger RNA. By the mid-1960s, the intellectual core of molecular biology—a model for the molecular basis of metabolism and reproduction— was largely complete. The late 1950s to the early 1970s was a period of intense research and institutional expansion for molecular biology, which had only recently become a somewhat coherent discipline. In what organismic biologist E. O. Wilson called "The Molecular Wars", the methods and practitioners of molecular biology spread rapidly, often coming to dominate departments and even entire disciplines. 
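As a small illustration of the triplet code described earlier in this section, the sketch below reads a messenger-RNA-like sequence as non-overlapping codons from a fixed starting point and translates it using a handful of entries from the standard genetic code table; the codon dictionary is deliberately incomplete and the sequence is invented for the example.

# Minimal sketch: translating a coding sequence as non-overlapping triplets
# read from a fixed starting point, as established experimentally in 1961-1966.
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "GGC": "Gly",
    "AAA": "Lys", "GCU": "Ala", "UAA": "STOP",  # illustrative subset only
}

def translate(mrna):
    """Translate an mRNA string codon by codon until a stop codon is reached."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):        # fixed reading frame, no overlap
        amino_acid = CODON_TABLE[mrna[i:i + 3]]
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

print(translate("AUGUUUGGCAAAGCUUAA"))  # ['Met', 'Phe', 'Gly', 'Lys', 'Ala']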
Molecularization was particularly important in genetics, immunology, embryology, and neurobiology, while the idea that life is controlled by a "genetic program"—a metaphor Jacob and Monod introduced from the emerging fields of cybernetics and computer science—became an influential perspective throughout biology. Immunology in particular became linked with molecular biology, with innovation flowing both ways: the clonal selection theory developed by Niels Jerne and Frank Macfarlane Burnet in the mid-1950s helped shed light on the general mechanisms of protein synthesis. Resistance to the growing influence of molecular biology was especially evident in evolutionary biology. Protein sequencing had great potential for the quantitative study of evolution (through the molecular clock hypothesis), but leading evolutionary biologists questioned the relevance of molecular biology for answering the big questions of evolutionary causation. Departments and disciplines fractured as organismic biologists asserted their importance and independence: Theodosius Dobzhansky made the famous statement that "nothing in biology makes sense except in the light of evolution" as a response to the molecular challenge. The issue became even more critical after 1968; Motoo Kimura's neutral theory of molecular evolution suggested that natural selection was not the ubiquitous cause of evolution, at least at the molecular level, and that molecular evolution might be a fundamentally different process from morphological evolution. (Resolving this "molecular/morphological paradox" has been a central focus of molecular evolution research since the 1960s.) Biotechnology, genetic engineering, and genomics Biotechnology in the general sense has been an important part of biology since the late 19th century. With the industrialization of brewing and agriculture, chemists and biologists became aware of the great potential of human-controlled biological processes. In particular, fermentation proved a great boon to chemical industries. By the early 1970s, a wide range of biotechnologies were being developed, from drugs like penicillin and steroids to foods like Chlorella and single-cell protein to gasohol—as well as a wide range of hybrid high-yield crops and agricultural technologies, the basis for the Green Revolution. Recombinant DNA Biotechnology in the modern sense of genetic engineering began in the 1970s, with the invention of recombinant DNA techniques. Restriction enzymes were discovered and characterized in the late 1960s, following on the heels of the isolation, then duplication, then synthesis of viral genes. Beginning with the lab of Paul Berg in 1972 (aided by EcoRI from Herbert Boyer's lab, building on work with ligase by Arthur Kornberg's lab), molecular biologists put these pieces together to produce the first transgenic organisms. Soon after, others began using plasmid vectors and adding genes for antibiotic resistance, greatly increasing the reach of the recombinant techniques. Wary of the potential dangers (particularly the possibility of a prolific bacteria with a viral cancer-causing gene), the scientific community as well as a wide range of scientific outsiders reacted to these developments with both enthusiasm and fearful restraint. Prominent molecular biologists led by Berg suggested a temporary moratorium on recombinant DNA research until the dangers could be assessed and policies could be created. 
This moratorium was largely respected until the participants in the 1975 Asilomar Conference on Recombinant DNA created policy recommendations and concluded that the technology could be used safely. Following Asilomar, new genetic engineering techniques and applications developed rapidly. DNA sequencing methods improved greatly (pioneered by Frederick Sanger and Walter Gilbert), as did oligonucleotide synthesis and transfection techniques. Researchers learned to control the expression of transgenes, and were soon racing—in both academic and industrial contexts—to create organisms capable of expressing human genes for the production of human hormones. However, this was a more daunting task than molecular biologists had expected; developments between 1977 and 1980 showed that, due to the phenomena of split genes and splicing, higher organisms had a much more complex system of gene expression than the bacteria models of earlier studies. The first such race, for synthesizing human insulin, was won by Genentech. This marked the beginning of the biotech boom (and with it, the era of gene patents), with an unprecedented level of overlap between biology, industry, and law. Molecular systematics and genomics By the 1980s, protein sequencing had already transformed methods of scientific classification of organisms (especially cladistics), but biologists soon began to use RNA and DNA sequences as characters; this expanded the significance of molecular evolution within evolutionary biology, as the results of molecular systematics could be compared with traditional evolutionary trees based on morphology. Following the pioneering ideas of Lynn Margulis on endosymbiotic theory, which holds that some of the organelles of eukaryotic cells originated from free-living prokaryotic organisms through symbiotic relationships, even the overall division of the tree of life was revised. Into the 1990s, the five kingdoms (Plants, Animals, Fungi, Protists, and Monerans) gave way to three domains (the Archaea, the Bacteria, and the Eukarya) based on Carl Woese's pioneering molecular systematics work with 16S rRNA sequencing. The development and popularization of the polymerase chain reaction (PCR) in the mid-1980s (by Kary Mullis and others at Cetus Corp.) marked another watershed in the history of modern biotechnology, greatly increasing the ease and speed of genetic analysis. Coupled with the use of expressed sequence tags, PCR led to the discovery of many more genes than could be found through traditional biochemical or genetic methods and opened the possibility of sequencing entire genomes. The unity of much of the morphogenesis of organisms from fertilized egg to adult began to be unraveled after the discovery of the homeobox genes, first in fruit flies, then in other insects and animals, including humans. These developments led to advances in the field of evolutionary developmental biology towards understanding how the various body plans of the animal phyla have evolved and how they are related to one another. The Human Genome Project—the largest, most costly single biological study ever undertaken—began in 1988 under the leadership of James D. Watson, after preliminary work with genetically simpler model organisms such as E. coli, S. cerevisiae and C. elegans.
Shotgun sequencing and gene discovery methods pioneered by Craig Venter—and fueled by the financial promise of gene patents with Celera Genomics—led to a public–private sequencing competition that ended in compromise with the first draft of the human DNA sequence announced in 2000. Twenty-first century biological sciences At the beginning of the 21st century, the biological sciences converged with previously distinct disciplines such as physics, giving rise to research fields like biophysics. Advances in analytical chemistry and in physical instrumentation—improved sensors, optics, tracers, signal processing, networks, robots, satellites, and computing power for data collection, storage, analysis, modeling, visualization, and simulation—transformed both theoretical and experimental research. These technologies enabled worldwide access to better measurements and theoretical models, complex simulations of biological systems and ecosystems, experimental testing of predictive models, internet-based reporting of observational data, open peer review, collaboration, and online publication. New fields of biological research emerged, including bioinformatics, neuroscience, theoretical biology, computational genomics, astrobiology and synthetic biology. See also History of botany Outline of biology Timeline of biology and organic chemistry References Citations Sources Agar, Jon. Science in the Twentieth Century and Beyond. Polity Press: Cambridge, 2012. Allen, Garland E. Thomas Hunt Morgan: The Man and His Science. Princeton University Press: Princeton, 1978. Allen, Garland E. Life Science in the Twentieth Century. Cambridge University Press, 1975. Annas, Julia. Classical Greek Philosophy. In Boardman, John; Griffin, Jasper; Murray, Oswyn (ed.) The Oxford History of the Classical World. Oxford University Press: New York, 1986. Barnes, Jonathan. Hellenistic Philosophy and Science. In Boardman, John; Griffin, Jasper; Murray, Oswyn (ed.) The Oxford History of the Classical World. Oxford University Press: New York, 1986. Bowler, Peter J. The Earth Encompassed: A History of the Environmental Sciences. W. W. Norton & Company: New York, 1992. Bowler, Peter J. The Eclipse of Darwinism: Anti-Darwinian Evolution Theories in the Decades around 1900. The Johns Hopkins University Press: Baltimore, 1983. Bowler, Peter J. Evolution: The History of an Idea. University of California Press, 2003. Browne, Janet. The Secular Ark: Studies in the History of Biogeography. Yale University Press: New Haven, 1983. Bud, Robert. The Uses of Life: A History of Biotechnology. Cambridge University Press: London, 1993. Caldwell, John. "Drug metabolism and pharmacogenetics: the British contribution to fields of international significance." British Journal of Pharmacology, Vol. 147, Issue S1 (January 2006), pp. S89–S99. Coleman, William. Biology in the Nineteenth Century: Problems of Form, Function, and Transformation. Cambridge University Press: New York, 1977. Creager, Angela N. H. The Life of a Virus: Tobacco Mosaic Virus as an Experimental Model, 1930–1965. University of Chicago Press: Chicago, 2002. Creager, Angela N. H. "Building Biology across the Atlantic," essay review in Journal of the History of Biology, Vol. 36, No. 3 (September 2003), pp. 579–589. de Chadarevian, Soraya. Designs for Life: Molecular Biology after World War II. Cambridge University Press: Cambridge, 2002. Dietrich, Michael R.
"Paradox and Persuasion: Negotiating the Place of Molecular Evolution within Evolutionary Biology," in Journal of the History of Biology, Vol. 31 (1998), pp. 85–111. Davies, Kevin. Cracking the Genome: Inside the Race to Unlock Human DNA. The Free Press: New York, 2001. Fruton, Joseph S. Proteins, Enzymes, Genes: The Interplay of Chemistry and Biology. Yale University Press: New Haven, 1999. Gottweis, Herbert. Governing Molecules: The Discursive Politics of Genetic Engineering in Europe and the United States. MIT Press: Cambridge, MA, 1998. Gould, Stephen Jay. The Structure of Evolutionary Theory. The Belknap Press of Harvard University Press: Cambridge, 2002. Hagen, Joel B. An Entangled Bank: The Origins of Ecosystem Ecology. Rutgers University Press: New Brunswick, 1992. Hall, Stephen S. Invisible Frontiers: The Race to Synthesize a Human Gene. Atlantic Monthly Press: New York, 1987. Holmes, Frederic Lawrence. Meselson, Stahl, and the Replication of DNA: A History of "The Most Beautiful Experiment in Biology". Yale University Press: New Haven, 2001. Junker, Thomas. Geschichte der Biologie. C. H. Beck: München, 2004. Kay, Lily E. The Molecular Vision of Life: Caltech, The Rockefeller Foundation, and the Rise of the New Biology. Oxford University Press: New York, 1993. Kohler, Robert E. Lords of the Fly: Drosophila Genetics and the Experimental Life. Chicago University Press: Chicago, 1994. Kohler, Robert E. Landscapes and Labscapes: Exploring the Lab-Field Border in Biology. University of Chicago Press: Chicago, 2002. Krimsky, Sheldon. Biotechnics and Society: The Rise of Industrial Genetics. Praeger Publishers: New York, 1991. Larson, Edward J. Evolution: The Remarkable History of a Scientific Theory. The Modern Library: New York, 2004. Lovejoy, Arthur O. The Great Chain of Being: A Study of the History of an Idea. Harvard University Press, 1936. Reprinted by Harper & Row, , 2005 paperback: . Magner, Lois N. A History of the Life Sciences, third edition. Marcel Dekker, Inc.: New York, 2002. Mason, Stephen F. A History of the Sciences. Collier Books: New York, 1956. Mayr, Ernst. The Growth of Biological Thought: Diversity, Evolution, and Inheritance. The Belknap Press of Harvard University Press: Cambridge, Massachusetts, 1982. Mayr, Ernst and William B. Provine, eds. The Evolutionary Synthesis: Perspectives on the Unification of Biology. Harvard University Press: Cambridge, 1998. Morange, Michel. A History of Molecular Biology, translated by Matthew Cobb. Harvard University Press: Cambridge, 1998. Rabinbach, Anson. The Human Motor: Energy, Fatigue, and the Origins of Modernity. University of California Press, 1992. Rabinow, Paul. Making PCR: A Story of Biotechnology. University of Chicago Press: Chicago, 1996. Rudwick, Martin J.S. The Meaning of Fossils. The University of Chicago Press: Chicago, 1972. Raby, Peter. Bright Paradise: Victorian Scientific Travellers. Princeton University Press: Princeton, 1997. Rothman, Sheila M. and David J. Rothman. The Pursuit of Perfection: The Promise and Perils of Medical Enhancement. Vintage Books: New York, 2003. Sapp, Jan. Genesis: The Evolution of Biology. Oxford University Press: New York, 2003. Secord, James A. Victorian Sensation: The Extraordinary Publication, Reception, and Secret Authorship of Vestiges of the Natural History of Creation. University of Chicago Press: Chicago, 2000. Serafini, Anthony The Epic History of Biology, Perseus Publishing, 1993. Sulston, John. 
The Common Thread: A Story of Science, Politics, Ethics and the Human Genome. National Academy Press, 2002. Smocovitis, Vassiliki Betty. Unifying Biology: The Evolutionary Synthesis and Evolutionary Biology. Princeton University Press: Princeton, 1996. Summers, William C. Félix d'Herelle and the Origins of Molecular Biology, Yale University Press: New Haven, 1999. Sturtevant, A. H. A History of Genetics. Cold Spring Harbor Laboratory Press: Cold Spring Harbor, 2001. Thackray, Arnold, ed. Private Science: Biotechnology and the Rise of the Molecular Sciences. University of Pennsylvania Press: Philadelphia, 1998. Wilson, Edward O. Naturalist. Island Press, 1994. Zimmer, Carl. Evolution: the triumph of an idea. HarperCollins: New York, 2001. External links International Society for History, Philosophy, and Social Studies of Biology – professional history of biology organization History of Biology – Historyworld article History of Biology at Bioexplorer.Net – a collection of history of biology links Biology – historically oriented article on Citizendium Miall, L. C. (1911) History of biology. Watts & Co. London
0.787193
0.992406
0.781215
Colonisation (biology)
Colonisation or colonization is the spread and development of an organism in a new area or habitat. Colonization comprises the physical arrival of a species in a new area, but also its successful establishment within the local community. In ecology, it is represented by the symbol λ (lowercase lambda) to denote the long-term intrinsic growth rate of a population (a brief worked example is given at the end of this article). The surrounding theory and applicable processes are introduced below; these include dispersal, the competition–colonisation trade-off, and prominent examples that have been studied previously. One classic scientific model in biogeography posits that a species must continue to colonize new areas through its life cycle (called a taxon cycle) in order to persist. Accordingly, colonisation and extinction are key components of island biogeography, a theory that has many applications in ecology, such as metapopulations. Another factor included in this scientific model is the competition–colonisation trade-off, which concerns the driving forces behind colonisation among species that all share a need to expand. Scale Colonisation occurs on several scales. In its most basic form it occurs as biofilm, the formation of communities of microorganisms on surfaces; this microbiological colonisation also takes place within each animal or plant, forming its microbiome. At small scales it involves colonising new sites, perhaps as a result of environmental change, while at larger scales a species expands its range to encompass new areas. This can happen through a series of small encroachments, such as in woody plant encroachment, or by long-distance dispersal. The term range expansion is also used. Dispersal Dispersal in biology is the dissemination, or scattering, of organisms over time within a given area or across the Earth. The dispersal of species into new locations can have many causes. Often species disperse naturally because physiological adaptations allow a higher survival rate of progeny in new ecosystems; at other times the driving factors are environmental, for example global warming, disease, competition, or predation. Dispersal can take many forms, such as flight across long distances, wind dispersal of plant and fungal propagules, or long-distance travel in packs. Competition-Colonisation Trade-off The competition–colonisation trade-off is a driving factor with a large influence on diversity and how it is maintained in a community. It is considered a driving factor because every species must either engage in competition with others in the community or disperse from the community in the hope of a more favourable environment. The competition can involve available nutrient sources, light exposure, oxygen availability, reproductive competition, and so on. These trade-offs are critical in explaining colonisation and why it happens. Use The term is generally only used to refer to the spread of a species into new areas by natural means, as opposed to unnatural introduction or translocation by humans, which may lead to invasive species. Colonisation events Large-scale notable pre-historic colonisation events include: Arthropods the colonisation of the Earth's land by the first animals, the arthropods. The first fossils of land animals come from millipedes and date to about 450 million years ago.
Humans the early human migration and colonisation of areas outside Africa according to the recent African origin paradigm, resulting in the extinction of Pleistocene megafauna, although the role of humans in this event is controversial. Birds the colonisation of the New World by the cattle egret and the little egret the colonisation of Britain by the little egret the colonisation of western North America by the barred owl the colonisation of the East Coast of North America by the Brewer's blackbird the westwards spread across Europe of the collared dove the spread across the eastern USA of the house finch the expansion into the southern and western areas of South Africa by the Hadeda Ibis Reptiles the colonisation of Anguilla by green iguanas following a rafting event in 1995 the colonisation of the Florida Everglades by Burmese pythons. The snakes were originally imported to be bred and sold as exotic pets; as they grew, owners became unable to care for the animals and began to release them into the Everglades. Dragonflies the colonisation of Britain by the small red-eyed damselfly Moths the colonisation of Britain by Blair's shoulder-knot Land vertebrates The colonisation of Madagascar by land-bound vertebrates. Plants The colonisation of new areas by Pinus species through wind dispersal. See also Colony (biology) Invasive species Pioneer species References Further reading
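As a brief worked illustration of the λ notation introduced at the start of this article (the numbers are invented for the example), λ is the factor by which an established population multiplies per unit of time:

\[
N_{t+1} = \lambda N_t, \qquad N_t = N_0 \lambda^{t}.
\]

A colonising population of \(N_0 = 20\) individuals with \(\lambda = 1.5\) per year would grow to \(N_5 = 20 \times 1.5^{5} \approx 152\) individuals after five years; λ > 1 indicates successful establishment and growth, while λ < 1 indicates decline towards local extinction.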
0.800057
0.976428
0.781199
Mechanism (biology)
In biology, a mechanism is a system of causally interacting parts and processes that produce one or more effects. Phenomena can be explained by describing their mechanisms. For example, natural selection is a mechanism of evolution; other mechanisms of evolution include genetic drift, mutation, and gene flow. In ecology, mechanisms such as predation and host-parasite interactions produce change in ecological systems. In practice, no description of a mechanism is ever complete because not all details of the parts and processes of a mechanism are fully known. For example, natural selection is a mechanism of evolution that includes countless, inter-individual interactions with other individuals, components, and processes of the environment in which natural selection operates. Characterizations/ definitions Many characterizations/definitions of mechanisms in the philosophy of science/biology have been provided in the past decades. For example, one influential characterization of neuro- and molecular biological mechanisms by Peter K. Machamer, Lindley Darden and Carl Craver is as follows: mechanisms are entities and activities organized such that they are productive of regular changes from start to termination conditions. Other characterizations have been proposed by Stuart Glennan (1996, 2002), who articulates an interactionist account of mechanisms, and William Bechtel (1993, 2006), who emphasizes parts and operations. The characterization by Machemer et al. is as follows: mechanisms are entities and activities organized such that they are productive of changes from start conditions to termination conditions. There are three distinguishable aspects of this characterization: Ontic aspect The ontic constituency of biological mechanisms includes entities and activities. Thus, this conception postulates a dualistic ontology of mechanisms, where entities are substantial components, and activities are reified components of mechanisms. This augmented ontology increases the explanatory power of this conception. Descriptive aspect Most descriptions of mechanisms (as found in the scientific literature) include specifications of the entities and activities involved, as well as the start and termination conditions. This aspect is mostly limited to linear mechanisms, which have relatively unambiguous beginning and end points between which they produce their phenomenon, although it may be possible to arbitrarily select such points in cyclical mechanisms (e.g., the Krebs cycle). Epistemic aspect Mechanisms are dynamic producers of phenomena. This conception emphasizes activities, which are causes that are reified. It is because of activities that this conception of mechanisms is able to capture the dynamicity of mechanisms as they bring about a phenomenon. Analysis Mechanisms in science/biology have reappeared as a subject of philosophical analysis and discussion in the last several decades because of a variety of factors, many of which relate to metascientific issues such as explanation and causation. For example, the decline of Covering Law (CL) models of explanation, e.g., Hempel's deductive-nomological model, has stimulated interest how mechanisms might play an explanatory role in certain domains of science, especially higher-level disciplines such as biology (i.e., neurobiology, molecular biology, neuroscience, and so on). 
This is not just because of the philosophical problem, which CL models encounter, of giving some account of what "laws of nature" are, but also because of the incontrovertible fact that most biological phenomena are not characterizable in nomological terms (i.e., in terms of lawful relationships). For example, protein biosynthesis does not occur according to any law, and therefore, on the DN model, no explanation for the biosynthesis phenomenon could be given. Explanations Mechanistic explanations come in many forms. Wesley Salmon proposed what he called the "ontic" conception of explanation, which states that explanations are mechanisms and causal processes in the world. There are two such kinds of explanation: etiological and constitutive. Salmon focused primarily on etiological explanation, with respect to which one explains some phenomenon P by identifying its causes (and, thus, locating it within the causal structure of the world). Constitutive (or componential) explanation, on the other hand, involves describing the components of a mechanism M that is productive of (or causes) P. Indeed, whereas (a) one may differentiate between descriptive and explanatory adequacy, where the former is characterized as the adequacy of a theory to account for at least all the items in the domain (which need explaining), and the latter as the adequacy of a theory to account for no more than those domain items, and (b) past philosophies of science differentiate between descriptions of phenomena and explanations of those phenomena, in the non-ontic context of the mechanism literature, descriptions and explanations seem to be identical. That is to say, to explain a mechanism M is to describe it (specify its components, as well as the background, enabling, and other conditions that constitute, in the case of a linear mechanism, its "start conditions"). See also Aristotle's biology Notes and references
0.808335
0.9663
0.781094
Organism
An organism is defined in a medical dictionary as any living thing that functions as an individual. Such a definition raises more problems than it solves, not least because the concept of an individual is also difficult. Many criteria, few of them widely accepted, have been proposed to define what an organism is. Among the most common is that an organism has autonomous reproduction, growth, and metabolism. This would exclude viruses, despite the fact that they evolve like organisms. Other problematic cases include colonial organisms; a colony of eusocial insects is organised adaptively, and has germ-soma specialisation, with some insects reproducing, others not, like cells in an animal's body. The body of a siphonophore, a jelly-like marine animal, is composed of organism-like zooids, but the whole structure looks and functions much like an animal such as a jellyfish, the parts collaborating to provide the functions of the colonial organism. The evolutionary biologists David Queller and Joan Strassmann state that "organismality", the qualities or attributes that define an entity as an organism, has evolved socially as groups of simpler units (from cells upwards) came to cooperate without conflicts. They propose that cooperation should be used as the "defining trait" of an organism. This would treat many types of collaboration, including the fungus/alga partnership of different species in a lichen, or the permanent sexual partnership of an anglerfish, as an organism. Etymology The term "organism" (from the Ancient Greek , derived from , meaning instrument, implement, tool, organ of sense or apprehension) first appeared in the English language in the 1660s with the now-obsolete meaning of an organic structure or organization. It is related to the verb "organize". In his 1790 Critique of Judgment, Immanuel Kant defined an organism as "both an organized and a self-organizing being". Whether criteria exist, or are needed Among the criteria that have been proposed for being an organism are: autonomous reproduction, growth, and metabolism noncompartmentability – structure cannot be divided without losing functionality. Richard Dawkins stated this as "the quality of being sufficiently heterogeneous in form to be rendered non-functional if cut in half". However, many organisms can be cut into pieces which then grow into whole organisms. individuality – the entity has simultaneous holdings of genetic uniqueness, genetic homogeneity and autonomy an immune response, separating self from foreign "anti-entropy", the ability to maintain order, a concept first proposed by Erwin Schrödinger; or in another form, that Claude Shannon's information theory can be used to identify organisms as capable of self-maintaining their information content Other scientists think that the concept of the organism is inadequate in biology; that the concept of individuality is problematic; and from a philosophical point of view, question whether such a definition is necessary. Problematic cases include colonial organisms: for instance, a colony of eusocial insects fulfills criteria such as adaptive organisation and germ-soma specialisation. If so, the same argument, or a criterion of high co-operation and low conflict, would include some mutualistic (e.g. lichens) and sexual partnerships (e.g. anglerfish) as organisms. If group selection occurs, then a group could be viewed as a superorganism, optimized by group adaptation. 
Another view is that attributes like autonomy, genetic homogeneity and genetic uniqueness should be examined separately rather than demanding that an organism should have all of them; if so, there are multiple dimensions to biological individuality, resulting in several types of organism. Organisms at differing levels of biological organisation A unicellular organism is a microorganism such as a protist, bacterium, or archaean, composed of a single cell, which may contain functional structures called organelles. A multicellular organism such as an animal, plant, fungus, or alga is composed of many cells, often specialised. A colonial organism such as a siphonophore is a being which functions as an individual but is composed of communicating individuals. A superorganism is a colony, such as of ants, consisting of many individuals working together as a single functional or social unit. A mutualism is a partnership of two or more species which each provide some of the needs of the other. A lichen consists of fungi and algae or cyanobacteria, with a bacterial microbiome; together, they are able to flourish as a kind of organism, the components having different functions, in habitats such as dry rocks where neither could grow alone. The evolutionary biologists David Queller and Joan Strassmann state that "organismality" has evolved socially, as groups of simpler units (from cells upwards) came to cooperate without conflicts. They propose that cooperation should be used as the "defining trait" of an organism. Samuel Díaz‐Muñoz and colleagues (2016) accept Queller and Strassmann's view that organismality can be measured wholly by degrees of cooperation and of conflict. They state that this situates organisms in evolutionary time, so that organismality is context dependent. They suggest that highly integrated life forms, which are not context dependent, may evolve through context-dependent stages towards complete unification. Boundary cases Viruses Viruses are not typically considered to be organisms, because they are incapable of autonomous reproduction, growth, metabolism, or homeostasis. Although viruses have a few enzymes and molecules like those in living organisms, they have no metabolism of their own; they cannot synthesize the organic compounds from which they are formed. In this sense, they are similar to inanimate matter. Viruses have their own genes, and they evolve. Thus, an argument that viruses should be classed as living organisms is their ability to undergo evolution and replicate through self-assembly. However, some scientists argue that viruses neither evolve nor self-reproduce. Instead, viruses are evolved by their host cells, meaning that there was co-evolution of viruses and host cells. If host cells did not exist, viral evolution would be impossible. As for reproduction, viruses rely on hosts' machinery to replicate. The discovery of viruses with genes coding for energy metabolism and protein synthesis fuelled the debate about whether viruses are living organisms, but the genes have a cellular origin. Most likely, they were acquired through horizontal gene transfer from viral hosts. There is an argument for viewing viruses as cellular organisms. Some researchers perceive viruses not as virions alone, which they believe are just spores of an organism, but as a virocell - an ontologically mature viral organism that has cellular structure. 
Such a virus is the result of the infection of a cell and shows the major physiological properties of other organisms—metabolism, growth, and reproduction—and is therefore, on this view, effectively alive. Organism-like colonies The philosopher Jack A. Wilson examines some boundary cases to demonstrate that the concept of organism is not sharply defined. In his view, sponges, lichens, siphonophores, slime moulds, and eusocial colonies such as those of ants or naked molerats, all lie in the boundary zone between being definite colonies and definite organisms (or superorganisms). Synthetic organisms Scientists and bio-engineers are experimenting with different types of synthetic organism, from chimaeras composed of cells from two or more species, to cyborgs including electromechanical limbs, to hybrots containing both electronic and biological elements, and other combinations of systems that have variously evolved and been designed. An evolved organism takes its form by the partially understood mechanisms of evolutionary developmental biology, in which the genome directs an elaborated series of interactions to produce successively more elaborate structures. The existence of chimaeras and hybrids demonstrates that these mechanisms are "intelligently" robust in the face of radically altered circumstances at all levels from molecular to organismal. Synthetic organisms already take diverse forms, and their diversity will increase. What they all have in common is a teleonomic or goal-seeking behaviour that enables them to correct errors of many kinds so as to achieve whatever result they are designed for. Such behaviour is reminiscent of intelligent action by organisms; intelligence is seen as an embodied form of cognition. Early evolution of organisms All organisms that exist today possess a self-replicating informational molecule (genome), and such an informational molecule is likely intrinsic to life. Thus, the earliest organisms also presumably possessed a self-replicating informational molecule (genome), perhaps RNA or an informational molecule more primitive than RNA. The specific nucleotide sequences in all currently extant organisms contain information that functions to promote survival, reproduction, and the ability to acquire resources necessary for reproduction, and sequences with such functions probably emerged early in the evolution of life. It is also likely that survival sequences present early in the evolution of organisms included sequences that facilitate the avoidance of damage to the self-replicating molecule and promote the capability to repair such damage when it does occur. Repair of some of this genome damage in early organisms may have involved the capacity to use undamaged information from another similar genome by a process of recombination (a primitive form of sexual interaction). References
Ecological engineering
Ecological engineering uses ecology and engineering to predict, design, construct or restore, and manage ecosystems that integrate "human society with its natural environment for the benefit of both". Origins, key concepts, definitions, and applications Ecological engineering emerged as a new idea in the early 1960s, but its definition has taken several decades to refine. Its implementation is still undergoing adjustment, and its broader recognition as a new paradigm is relatively recent. Ecological engineering was introduced by Howard Odum and others as utilizing natural energy sources as the predominant input to manipulate and control environmental systems. The origins of ecological engineering are in Odum's work with ecological modeling and ecosystem simulation to capture holistic macro-patterns of energy and material flows affecting the efficient use of resources. Mitsch and Jorgensen summarized five basic concepts that differentiate ecological engineering from other approaches to addressing problems to benefit society and nature: 1) it is based on the self-designing capacity of ecosystems; 2) it can be the field (or acid) test of ecological theories; 3) it relies on system approaches; 4) it conserves non-renewable energy sources; and 5) it supports ecosystem and biological conservation. Mitsch and Jorgensen were the first to define ecological engineering as designing societal services such that they benefit society and nature, and later noted the design should be systems based, sustainable, and integrate society with its natural environment. Bergen et al. defined ecological engineering as: 1) utilizing ecological science and theory; 2) applying to all types of ecosystems; 3) adapting engineering design methods; and 4) acknowledging a guiding value system. Barrett (1999) offers a more literal definition of the term: "the design, construction, operation and management (that is, engineering) of landscape/aquatic structures and associated plant and animal communities (that is, ecosystems) to benefit humanity and, often, nature." Barrett continues: "other terms with equivalent or similar meanings include ecotechnology and two terms most often used in the erosion control field: soil bioengineering and biotechnical engineering. However, ecological engineering should not be confused with 'biotechnology' when describing genetic engineering at the cellular level, or 'bioengineering' meaning construction of artificial body parts." The applications in ecological engineering can be classified into 3 spatial scales: 1) mesocosms (~0.1 to hundreds of meters); 2) ecosystems (~one to tens of km); and 3) regional systems (>tens of km). The complexity of the design likely increases with the spatial scale. Applications are increasing in breadth and depth, and likely impacting the field's definition, as more opportunities to design and use ecosystems as interfaces between society and nature are explored. Implementation of ecological engineering has focused on the creation or restoration of ecosystems, from degraded wetlands to multi-celled tubs and greenhouses that integrate microbial, fish, and plant services to process human wastewater into products such as fertilizers, flowers, and drinking water. 
Applications of ecological engineering in cities have emerged from collaboration with other fields such as landscape architecture, urban planning, and urban horticulture, to address human health and biodiversity, as targeted by the UN Sustainable Development Goals, with holistic projects such as stormwater management. Applications of ecological engineering in rural landscapes have included wetland treatment and community reforestation through traditional ecological knowledge. Permaculture is an example of broader applications that have emerged as distinct disciplines from ecological engineering, where David Holmgren cites the influence of Howard Odum in development of permaculture. Design guidelines, functional classes, and design principles Ecological engineering design will combine systems ecology with the process of engineering design. Engineering design typically involves problem formulation (goal), problem analysis (constraints), alternative solutions search, decision among alternatives, and specification of a complete solution. A temporal design framework is provided by Matlock et al., stating the design solutions are considered in ecological time. In selecting between alternatives, the design should incorporate ecological economics in design evaluation and acknowledge a guiding value system which promotes biological conservation, benefiting society and nature. Ecological engineering utilizes systems ecology with engineering design to obtain a holistic view of the interactions within and between society and nature. Ecosystem simulation with Energy Systems Language (also known as energy circuit language or energese) by Howard Odum is one illustration of this systems ecology approach. This holistic model development and simulation defines the system of interest, identifies the system's boundary, and diagrams how energy and material moves into, within, and out of, a system in order to identify how to use renewable resources through ecosystem processes and increase sustainability. The system it describes is a collection (i.e., group) of components (i.e., parts), connected by some type of interaction or interrelationship, that collectively responds to some stimulus or demand and fulfills some specific purpose or function. By understanding systems ecology the ecological engineer can more efficiently design with ecosystem components and processes within the design, utilize renewable energy and resources, and increase sustainability. Mitsch and Jorgensen identified five Functional Classes for ecological engineering designs: Ecosystem utilized to reduce/solve pollution problem. Example: phytoremediation, wastewater wetland, and bioretention of stormwater to filter excess nutrients and metals pollution Ecosystem imitated or copied to address resource problem. Example: forest restoration, replacement wetlands, and installing street side rain gardens to extend canopy cover to optimize residential and urban cooling Ecosystem recovered after disturbance. Example: mine land restoration, lake restoration, and channel aquatic restoration with mature riparian corridors Ecosystem modified in ecologically sound way. Example: selective timber harvest, biomanipulation, and introduction of predator fish to reduce planktivorous fish, increase zooplankton, consume algae or phytoplankton, and clarify the water. Ecosystems used for benefit without destroying balance. 
Example: sustainable agro-ecosystems, multispecies aquaculture, and introducing agroforestry plots into residential property to generate primary production at multiple vertical levels. Mitsch and Jorgensen identified 19 Design Principles for ecological engineering, although not all are expected to contribute to any single design: Ecosystem structure and function are determined by the forcing functions of the system; Energy inputs to ecosystems and the available storage of the ecosystem are limited; Ecosystems are open and dissipative systems (not a thermodynamic balance of energy, matter, and entropy, but the spontaneous appearance of complex, chaotic structure); Attention to a limited number of governing/controlling factors is most strategic in preventing pollution or restoring ecosystems; Ecosystems have some homeostatic capability that results in smoothing out and depressing the effects of strongly variable inputs; Match recycling pathways to the rates of ecosystems to reduce pollution effects; Design for pulsing systems wherever possible; Ecosystems are self-designing systems; Processes of ecosystems have characteristic time and space scales that should be accounted for in environmental management; Biodiversity should be championed to maintain an ecosystem's self-design capacity; Ecotones, or transition zones, are as important for ecosystems as membranes are for cells; Coupling between ecosystems should be utilized wherever possible; The components of an ecosystem are interconnected, interrelated, and form a network, so direct as well as indirect effects of ecosystem development should be considered; An ecosystem has a history of development; Ecosystems and species are most vulnerable at their geographical edges; Ecosystems are hierarchical systems and are parts of a larger landscape; Physical and biological processes are interactive; it is important to know both physical and biological interactions and to interpret them properly; Eco-technology requires a holistic approach that integrates all interacting parts and processes as far as possible; Information in ecosystems is stored in structures. Mitsch and Jorgensen identified the following considerations prior to implementing an ecological engineering design: create a conceptual model to determine the parts of nature connected to the project; implement a computer model to simulate the impacts and uncertainty of the project; and optimize the project to reduce uncertainty and increase beneficial impacts. Academic curriculum (colleges) An academic curriculum has been proposed for ecological engineering, and institutions around the world are starting programs. Key elements of this curriculum are: environmental engineering; systems ecology; restoration ecology; ecological modeling; quantitative ecology; economics of ecological engineering; and technical electives. The world's first B.S. Ecological Engineering program was formalized in 2009 at Oregon State University. Complementing this set of courses are prerequisite courses in physical, biological, and chemical subject areas, and integrated design experiences. According to Matlock et al., the design should identify constraints, characterize solutions in ecological time, and incorporate ecological economics in design evaluation. Economics of ecological engineering has been demonstrated using energy principles for a wetland, and using nutrient valuation for a dairy farm (C. Pizarro and others, "An Economic Assessment of Algal Turf Scrubber Technology for Treatment of Dairy Manure Effluent," Ecological Engineering 26(12): 321-327).
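The design considerations listed above call for a computer model of a project's impacts and uncertainty before optimization. A minimal sketch of what such a screening model might look like, in Python, is given below; it assumes a hypothetical constructed wetland removing nitrogen by first-order decay, with the inflow concentration and rate constant treated as uncertain inputs sampled in a Monte Carlo loop. The parameter values and the first-order formulation are illustrative assumptions, not figures from Mitsch and Jorgensen or Matlock et al.

# Illustrative Monte Carlo screening model for a hypothetical constructed wetland.
# Assumes first-order removal of total nitrogen: C_out = C_in * exp(-k * HRT),
# with uncertain inflow concentration C_in and rate constant k.
import math
import random

def simulate(n_runs=10000, hrt_days=5.0):
    """Return mean and 90th percentile of simulated outflow concentrations (mg/L)."""
    outflows = []
    for _ in range(n_runs):
        c_in = random.gauss(40.0, 8.0)   # inflow total N, mg/L (assumed distribution)
        k = random.uniform(0.15, 0.35)   # first-order rate constant, 1/day (assumed range)
        c_out = max(c_in, 0.0) * math.exp(-k * hrt_days)
        outflows.append(c_out)
    outflows.sort()
    mean = sum(outflows) / len(outflows)
    p90 = outflows[int(0.9 * len(outflows))]
    return mean, p90

if __name__ == "__main__":
    mean, p90 = simulate()
    print(f"Mean outflow N: {mean:.1f} mg/L; 90th percentile: {p90:.1f} mg/L")

A run of this kind can then feed the optimization step, for example by increasing the hydraulic retention time until the 90th-percentile outflow concentration meets a discharge target.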
See also Afforestation Agroecology Agroforestry Analog forestry Biomass (ecology) Buffer strip Constructed wetland Energy-efficient landscaping Environmental engineering Forest farming Forest gardening Great Green Wall Great Plains Shelterbelt (1934- ) Great Plan for the Transformation of Nature, an example of applied ecological engineering in the 1940s and 1950s Hedgerow Home gardens Human ecology Macro-engineering Sand fence Seawater greenhouse Sustainable agriculture Terra preta Three-North Shelter Forest Program Wildcrafting Windbreak Literature Howard T. Odum (1963), "Man and Ecosystem," Proceedings, Lockwood Conference on the Suburban Forest and Ecology, in: Bulletin Connecticut Agric. Station. W.J. Mitsch (1993), "Ecological engineering: a cooperative role with the planetary life-support systems," Environmental Science & Technology 27:438-445. H.D. van Bohemen (2004), Ecological Engineering and Civil Engineering Works, doctoral thesis, TU Delft, The Netherlands. References External links What is "ecological engineering"? Webtext, Ecological Engineering Group, 2007. Ecological Engineering Student Society Website, EESS, Oregon State University, 2011. Ecological Engineering webtext by the Howard T. Odum Center for Wetlands at the University of Florida, 2007. Organizations American Ecological Engineering Society, homepage. American Society of Professional Wetland Engineers, homepage, wiki. Ecological Engineering Group, homepage. International Ecological Engineering Society, homepage. Scientific journals Ecological Engineering, since 1992, with a general description of the field. Landscape and Ecological Engineering, since 2005. Journal of Ecological Engineering Design, officially launched in 2021, offers a diamond open access format (free to the reader, free to the authors); it is the official journal of the American Ecological Engineering Society with production support from the University of Vermont Libraries.
Computational biology
Computational biology refers to the use of data analysis, mathematical modeling and computational simulations to understand biological systems and relationships. An intersection of computer science, biology, and big data, the field also has foundations in applied mathematics, chemistry, and genetics. It differs from biological computing, a subfield of computer science and engineering which uses bioengineering to build computers. History Bioinformatics, the analysis of informatics processes in biological systems, began in the early 1970s. At this time, research in artificial intelligence was using network models of the human brain in order to generate new algorithms. This use of biological data pushed biological researchers to use computers to evaluate and compare large data sets in their own field. By 1982, researchers were sharing information via punch cards. The amount of data grew exponentially by the end of the 1980s, requiring new computational methods for quickly interpreting relevant information. Perhaps the best-known example of computational biology, the Human Genome Project, officially began in 1990. By 2003, the project had mapped around 85% of the human genome, satisfying its initial goals. Work continued, however, and by 2021 the level of a "complete genome" was reached, with only 0.3% of the remaining bases covered by potential issues. The missing Y chromosome was added in January 2022. Since the late 1990s, computational biology has become an important part of biology, leading to numerous subfields. Today, the International Society for Computational Biology recognizes 21 different 'Communities of Special Interest', each representing a slice of the larger field. In addition to helping sequence the human genome, computational biology has helped create accurate models of the human brain, map the 3D structure of genomes, and model biological systems. Global contributions Colombia In 2000, despite a lack of initial expertise in programming and data management, Colombia began applying computational biology from an industrial perspective, focusing on plant diseases. This research has contributed to understanding how to counteract diseases in crops like potatoes and studying the genetic diversity of coffee plants. By 2007, concerns about alternative energy sources and global climate change prompted biologists to collaborate with systems and computer engineers. Together, they developed a robust computational network and database to address these challenges. In 2009, in partnership with the University of Los Angeles, Colombia also created a Virtual Learning Environment (VLE) to improve the integration of computational biology and bioinformatics. Poland In Poland, computational biology is closely linked to mathematics and computational science, serving as a foundation for bioinformatics and biological physics. The field is divided into two main areas: one focusing on physics and simulation and the other on biological sequences. The application of statistical models in Poland has advanced techniques for studying proteins and RNA, contributing to global scientific progress. Polish scientists have also been instrumental in evaluating protein prediction methods, significantly enhancing the field of computational biology. Over time, they have expanded their research to cover topics such as protein-coding analysis and hybrid structures, further solidifying Poland's influence on the development of bioinformatics worldwide.
Applications Anatomy Computational anatomy is the study of anatomical shape and form at the visible or gross anatomical scale of morphology. It involves the development of computational mathematical and data-analytical methods for modeling and simulating biological structures. It focuses on the anatomical structures being imaged, rather than the medical imaging devices. Due to the availability of dense 3D measurements via technologies such as magnetic resonance imaging, computational anatomy has emerged as a subfield of medical imaging and bioengineering for extracting anatomical coordinate systems at the morpheme scale in 3D. The original formulation of computational anatomy is as a generative model of shape and form from exemplars acted upon via transformations. The diffeomorphism group is used to study different coordinate systems via coordinate transformations as generated via the Lagrangian and Eulerian velocities of flow from one anatomical configuration in to another. It relates with shape statistics and morphometrics, with the distinction that diffeomorphisms are used to map coordinate systems, whose study is known as diffeomorphometry. Data and modeling Mathematical biology is the use of mathematical models of living organisms to examine the systems that govern structure, development, and behavior in biological systems. This entails a more theoretical approach to problems, rather than its more empirically-minded counterpart of experimental biology. Mathematical biology draws on discrete mathematics, topology (also useful for computational modeling), Bayesian statistics, linear algebra and Boolean algebra. These mathematical approaches have enabled the creation of databases and other methods for storing, retrieving, and analyzing biological data, a field known as bioinformatics. Usually, this process involves genetics and analyzing genes. Gathering and analyzing large datasets have made room for growing research fields such as data mining, and computational biomodeling, which refers to building computer models and visual simulations of biological systems. This allows researchers to predict how such systems will react to different environments, which is useful for determining if a system can "maintain their state and functions against external and internal perturbations". While current techniques focus on small biological systems, researchers are working on approaches that will allow for larger networks to be analyzed and modeled. A majority of researchers believe this will be essential in developing modern medical approaches to creating new drugs and gene therapy. A useful modeling approach is to use Petri nets via tools such as esyN. Along similar lines, until recent decades theoretical ecology has largely dealt with analytic models that were detached from the statistical models used by empirical ecologists. However, computational methods have aided in developing ecological theory via simulation of ecological systems, in addition to increasing application of methods from computational statistics in ecological analyses. Systems Biology Systems biology consists of computing the interactions between various biological systems ranging from the cellular level to entire populations with the goal of discovering emergent properties. This process usually involves networking cell signaling and metabolic pathways. Systems biology often uses computational techniques from biological modeling and graph theory to study these complex interactions at cellular levels. 
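Since the passage above describes computational biomodeling and the simulation of ecological systems, a minimal sketch may help make the idea concrete. The example below integrates the classical Lotka-Volterra predator-prey equations with a simple Euler scheme; the parameter values are arbitrary illustrations, and the model is offered as a generic example of biomodeling rather than as a method used by any of the projects mentioned.

# Minimal simulation of the classical Lotka-Volterra predator-prey model,
# integrated with a crude forward Euler scheme; parameter values are illustrative only.
def lotka_volterra(prey=10.0, pred=5.0, alpha=1.1, beta=0.4, delta=0.1, gamma=0.4,
                   dt=0.01, steps=5000):
    """Return lists of prey and predator abundances over time."""
    prey_hist, pred_hist = [prey], [pred]
    for _ in range(steps):
        d_prey = alpha * prey - beta * prey * pred    # prey growth minus predation
        d_pred = delta * prey * pred - gamma * pred   # predator growth minus mortality
        prey += d_prey * dt
        pred += d_pred * dt
        prey_hist.append(prey)
        pred_hist.append(pred)
    return prey_hist, pred_hist

prey_hist, pred_hist = lotka_volterra()
print(f"Final abundances: prey={prey_hist[-1]:.2f}, predators={pred_hist[-1]:.2f}")

Replacing the hand-rolled Euler loop with an ODE solver such as scipy.integrate.solve_ivp would be the usual next step, but the structure (state variables, rate equations, repeated updates) is the same in larger systems-biology models.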
Evolutionary biology Computational biology has assisted evolutionary biology by: Using DNA data to reconstruct the tree of life with computational phylogenetics Fitting population genetics models (either forward time or backward time) to DNA data to make inferences about demographic or selective history Building population genetics models of evolutionary systems from first principles in order to predict what is likely to evolve Genomics Computational genomics is the study of the genomes of cells and organisms. The Human Genome Project is one example of computational genomics. This project looks to sequence the entire human genome into a set of data. Once fully implemented, this could allow for doctors to analyze the genome of an individual patient. This opens the possibility of personalized medicine, prescribing treatments based on an individual's pre-existing genetic patterns. Researchers are looking to sequence the genomes of animals, plants, bacteria, and all other types of life. One of the main ways that genomes are compared is by sequence homology. Homology is the study of biological structures and nucleotide sequences in different organisms that come from a common ancestor. Research suggests that between 80 and 90% of genes in newly sequenced prokaryotic genomes can be identified this way. Sequence alignment is another process for comparing and detecting similarities between biological sequences or genes. Sequence alignment is useful in a number of bioinformatics applications, such as computing the longest common subsequence of two genes or comparing variants of certain diseases. An untouched project in computational genomics is the analysis of intergenic regions, which comprise roughly 97% of the human genome. Researchers are working to understand the functions of non-coding regions of the human genome through the development of computational and statistical methods and via large consortia projects such as ENCODE and the Roadmap Epigenomics Project. Understanding how individual genes contribute to the biology of an organism at the molecular, cellular, and organism levels is known as gene ontology. The Gene Ontology Consortium's mission is to develop an up-to-date, comprehensive, computational model of biological systems, from the molecular level to larger pathways, cellular, and organism-level systems. The Gene Ontology resource provides a computational representation of current scientific knowledge about the functions of genes (or, more properly, the protein and non-coding RNA molecules produced by genes) from many different organisms, from humans to bacteria. 3D genomics is a subsection in computational biology that focuses on the organization and interaction of genes within a eukaryotic cell. One method used to gather 3D genomic data is through Genome Architecture Mapping (GAM). GAM measures 3D distances of chromatin and DNA in the genome by combining cryosectioning, the process of cutting a strip from the nucleus to examine the DNA, with laser microdissection. A nuclear profile is simply this strip or slice that is taken from the nucleus. Each nuclear profile contains genomic windows, which are certain sequences of nucleotides - the base unit of DNA. GAM captures a genome network of complex, multi enhancer chromatin contacts throughout a cell. Neuroscience Computational neuroscience is the study of brain function in terms of the information processing properties of the nervous system. 
A subset of neuroscience, it looks to model the brain to examine specific aspects of the neurological system. Models of the brain include: Realistic Brain Models: These models look to represent every aspect of the brain, including as much detail at the cellular level as possible. Realistic models provide the most information about the brain, but also have the largest margin for error. More variables in a brain model create the possibility for more error to occur. These models do not account for parts of the cellular structure that scientists do not know about. Realistic brain models are the most computationally heavy and the most expensive to implement. Simplifying Brain Models: These models look to limit the scope of a model in order to assess a specific physical property of the neurological system. This allows for the intensive computational problems to be solved, and reduces the amount of potential error from a realistic brain model. It is the work of computational neuroscientists to improve the algorithms and data structures currently used to increase the speed of such calculations. Computational neuropsychiatry is an emerging field that uses mathematical and computer-assisted modeling of brain mechanisms involved in mental disorders. Several initiatives have demonstrated that computational modeling is an important contribution to understand neuronal circuits that could generate mental functions and dysfunctions. Pharmacology Computational pharmacology is "the study of the effects of genomic data to find links between specific genotypes and diseases and then screening drug data". The pharmaceutical industry requires a shift in methods to analyze drug data. Pharmacologists were able to use Microsoft Excel to compare chemical and genomic data related to the effectiveness of drugs. However, the industry has reached what is referred to as the Excel barricade. This arises from the limited number of cells accessible on a spreadsheet. This development led to the need for computational pharmacology. Scientists and researchers develop computational methods to analyze these massive data sets. This allows for an efficient comparison between the notable data points and allows for more accurate drugs to be developed. Analysts project that if major medications fail due to patents, that computational biology will be necessary to replace current drugs on the market. Doctoral students in computational biology are being encouraged to pursue careers in industry rather than take Post-Doctoral positions. This is a direct result of major pharmaceutical companies needing more qualified analysts of the large data sets required for producing new drugs. Oncology Computational biology plays a crucial role in discovering signs of new, previously unknown living creatures and in cancer research. This field involves large-scale measurements of cellular processes, including RNA, DNA, and proteins, which pose significant computational challenges. To overcome these, biologists rely on computational tools to accurately measure and analyze biological data. In cancer research, computational biology aids in the complex analysis of tumor samples, helping researchers develop new ways to characterize tumors and understand various cellular properties. The use of high-throughput measurements, involving millions of data points from DNA, RNA, and other biological structures, helps in diagnosing cancer at early stages and in understanding the key factors that contribute to cancer development. 
Areas of focus include analyzing molecules that are deterministic in causing cancer and understanding how the human genome relates to tumor causation. Techniques Computational biologists use a wide range of software and algorithms to carry out their research. Unsupervised Learning Unsupervised learning is a type of algorithm that finds patterns in unlabeled data. One example is k-means clustering, which aims to partition n data points into k clusters, in which each data point belongs to the cluster with the nearest mean. Another version is the k-medoids algorithm, which, when selecting a cluster center, picks one of the actual data points in the set rather than an average of the cluster. The algorithm follows these steps: Randomly select k distinct data points; these are the initial cluster centers (medoids). Measure the distance between each point and each of the k medoids. Assign each point to the cluster of the nearest medoid. Recompute the center (medoid) of each cluster. Repeat until the cluster assignments no longer change. Assess the quality of the clustering by adding up the variation within each cluster, repeat the process with different values of k, and pick the best value of k by finding the "elbow" in the plot of within-cluster variation against k. One example of this in biology is in the 3D mapping of a genome: information on a mouse's HIST1 region of chromosome 13 is gathered from the Gene Expression Omnibus. This information contains data on which nuclear profiles show up in certain genomic regions. With this information, the Jaccard distance can be used to find a normalized distance between all the loci. Graph Analytics Graph analytics, or network analysis, is the study of graphs that represent connections between different objects. Graphs can represent all kinds of networks in biology, such as protein-protein interaction networks, regulatory networks, and metabolic and biochemical networks. There are many ways to analyze these networks, one of which is looking at centrality. Finding centrality in graphs assigns rankings to nodes based on their popularity or centrality in the graph, which can be useful in finding which nodes are most important. For example, given data on the activity of genes over a time period, degree centrality can be used to see which genes are most active throughout the network, or which genes interact with others the most throughout the network. This contributes to the understanding of the roles certain genes play in the network. There are many ways to calculate centrality in graphs, all of which can give different kinds of information on centrality. Finding centralities in biology can be applied in many different circumstances, some of which are gene regulatory, protein interaction, and metabolic networks. Supervised Learning Supervised learning is a type of algorithm that learns from labeled data and learns how to assign labels to future data that is unlabeled. In biology, supervised learning can be helpful when we have data that we know how to categorize and we would like to categorize more data into those categories. A common supervised learning algorithm is the random forest, which uses numerous decision trees to train a model to classify a dataset. Forming the basis of the random forest, a decision tree is a structure which aims to classify, or label, some set of data using certain known features of that data.
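Before the worked biological example that follows, a short sketch shows how a random forest of decision trees is typically trained and queried in code. It assumes the scikit-learn and NumPy libraries and uses a synthetic genotype matrix with a made-up labelling rule, so the data and the choice of "important" variants are illustrative only, not real measurements.

# Sketch of supervised classification with a random forest, assuming scikit-learn
# is available; the genotype matrix and disease labels below are synthetic toy data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(200, 50))    # 200 individuals, 50 variants coded 0/1/2
y = (X[:, 0] + X[:, 7] > 2).astype(int)   # toy rule: two variants drive the label

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
print("top features:", np.argsort(model.feature_importances_)[::-1][:5])

The feature importances reported by the fitted forest are one way to see which variants the trees relied on most, which parallels the idea, discussed below, of identifying the best predictors of the target variable.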
A practical biological example of a decision tree would be taking an individual's genetic data and predicting whether or not that individual is predisposed to develop a certain disease or cancer. At each internal node the algorithm checks the dataset for exactly one feature, a specific gene in the previous example, and then branches left or right based on the result. Then at each leaf node, the decision tree assigns a class label to the dataset. So in practice, the algorithm walks a specific root-to-leaf path through the decision tree based on the input dataset, which results in the classification of that dataset. Commonly, decision trees have target variables that take on discrete values, like yes/no, in which case the tree is referred to as a classification tree; if the target variable is continuous, it is called a regression tree. To construct a decision tree, it must first be trained using a training set to identify which features are the best predictors of the target variable. Open source software Open source software provides a platform for computational biology where everyone can access and benefit from software developed in research. PLOS cites four main reasons for the use of open source software: Reproducibility: this allows researchers to use the exact methods used to calculate the relations between biological data. Faster development: developers and researchers do not have to reinvent existing code for minor tasks. Instead they can use pre-existing programs to save time on the development and implementation of larger projects. Increased quality: having input from multiple researchers studying the same topic provides a layer of assurance that errors will not be in the code. Long-term availability: open source programs are not tied to any businesses or patents. This allows them to be posted to multiple web pages and ensures that they remain available in the future. Research There are several large conferences that are concerned with computational biology. Some notable examples are Intelligent Systems for Molecular Biology, the European Conference on Computational Biology, and Research in Computational Molecular Biology. There are also numerous journals dedicated to computational biology. Some notable examples include the Journal of Computational Biology and PLOS Computational Biology, a peer-reviewed open access journal that has published many notable research projects in the field of computational biology. They provide reviews on software, tutorials for open source software, and information on upcoming computational biology conferences. Other journals relevant to this field include Bioinformatics, Computers in Biology and Medicine, BMC Bioinformatics, Nature Methods, Nature Communications, Scientific Reports, PLOS One, and others. Related fields Computational biology, bioinformatics and mathematical biology are all interdisciplinary approaches to the life sciences that draw from quantitative disciplines such as mathematics and information science. The NIH describes computational/mathematical biology as the use of computational/mathematical approaches to address theoretical and experimental questions in biology and, by contrast, bioinformatics as the application of information science to understand complex life-sciences data. While each field is distinct, there may be significant overlap at their interface, so much so that to many, bioinformatics and computational biology are terms that are used interchangeably.
The terms computational biology and evolutionary computation have similar names, but they are not to be confused. Unlike computational biology, evolutionary computation is not concerned with modeling and analyzing biological data. Instead, it creates algorithms based on the ideas of evolution across species. Sometimes referred to as genetic algorithms, the research of this field can be applied to computational biology. While evolutionary computation is not inherently a part of computational biology, computational evolutionary biology is a subfield of it. See also References External links bioinformatics.org
Holism
Holism is the interdisciplinary idea that systems possess properties as wholes apart from the properties of their component parts. The aphorism "The whole is greater than the sum of its parts", typically attributed to Aristotle, is often given as a glib summary of this proposal. The concept of holism can inform the methodology for a broad array of scientific fields and lifestyle practices. When applications of holism are said to reveal properties of a whole system beyond those of its parts, these qualities are referred to as emergent properties of that system. Holism in all contexts is often placed in opposition to reductionism, a dominant notion in the philosophy of science that systems containing parts contain no unique properties beyond those parts. Proponents of holism consider the search for emergent properties within systems to be demonstrative of their perspective. Background The term "holism" was coined by Jan Smuts (1870–1950) in his 1926 book Holism and Evolution. While he never assigned a consistent meaning to the word, Smuts used holism to represent at least three features of reality. First, holism claims that every scientifically measurable thing, either physical or psychological, does possess a nature as a whole beyond its parts. His examples include atoms, cells, or an individual's personality. Smuts discussed this sense of holism in his claim that an individual's body and mind are not completely separated but instead connect and represent the holistic idea of a person. In his second sense, Smuts referred to holism as the cause of evolution. He argued that evolution is neither an accident nor is it brought about by the actions of some transcendent force, such as a God. Smuts criticized writers who emphasized Darwinian concepts of natural selection and genetic variation to support an accidental view of natural processes within the universe. Smuts perceived evolution as the process of nature correcting itself creatively and intentionally. In this way, holism is described as the tendency of a whole system to creatively respond to environmental stressors, a process in which parts naturally work together to bring the whole into more advanced states. Smuts used Pavlovian studies to argue that the inheritance of behavioral changes supports his idea of creative evolution as opposed to purely accidental development in nature. Smuts believed that this creative process was intrinsic within all physical systems of parts and ruled out indirect, transcendent forces. Finally, Smuts used holism to explain the concrete (nontranscendent) nature of the universe in general. In his words, holism is "the ultimate synthetic, ordering, organizing, regulative activity in the universe which accounts for all the structural groupings and syntheses in it." Smuts argued that a holistic view of the universe explains its processes and their evolution more effectively than a reductive view. Professional philosophers of science and linguistics did not consider Holism and Evolution seriously upon its initial publication in 1926 and the work has received criticism for a lack of theoretical coherence. Some biological scientists, however, did offer favorable assessments shortly after its first print. Over time, the meaning of the word holism became most closely associated with Smuts' first conception of the term, yet without any metaphysical commitments to monism, dualism, or similar concepts which can be inferred from his work.
Scientific applications Physics Nonseparability The advent of holism in the 20th century coincided with the gradual development of quantum mechanics. Holism in physics is the nonseparability of physical systems from their parts, especially quantum phenomena. Classical physics cannot be regarded as holistic, as the behavior of individual parts represents the whole. However, the state of a system in quantum theory resists a certain kind of reductive analysis. For example, two spatially separated quantum systems are described as "entangled," or nonseparable from each other, when a meaningful analysis of one system is indistinguishable from that of the other. There are different conceptions of nonseparability in physics, and its exploration is considered to broadly present insight into the ontological problem. Variants In one sense, holism for physics is a perspective about the best way to understand the nature of a physical system. In this sense, holism is the methodological claim that systems are accurately understood according to their properties as a whole. A methodological reductionist in physics might seek to explain, for example, the behavior of a liquid by examining its component molecules, atoms, ions or electrons. A methodological holist, on the other hand, believes there is something misguided about this approach; as one proponent, a condensed matter physicist, puts it: “the most important advances in this area come about by the emergence of qualitatively new concepts at the intermediate or macroscopic levels—concepts which, one hopes, will be compatible with one’s information about the microscopic constituents, but which are in no sense logically dependent on it.” This perspective is considered a conventional attitude among contemporary physicists. In another sense, holism is a metaphysical claim that the nature of a system is not determined by the properties of its component parts. There are three varieties of this sense of physical holism. Ontological holism: some systems are not merely composed of their physical parts. Property holism: some systems have properties independent of their physical parts. Nomological holism: some systems follow physical laws beyond the laws followed by their physical parts. The metaphysical claim does not assert that physical systems involve abstract properties beyond the composition of their physical parts, but that there are concrete properties aside from those of their basic physical parts. The theoretical physicist David Bohm (1917–1992) supported this view directly. Bohm believed that a complete description of the universe would have to go beyond a simple list of all its particles and their positions; there would also have to be a physical quantum field associated with the properties of those particles, guiding their trajectories. Bohm's ontological holism concerning the nature of whole physical systems was literal. Niels Bohr (1885–1962), on the other hand, held ontological holism from an epistemological angle, rather than a literal one. Bohr saw an observational apparatus as part of a system under observation, in addition to the basic physical parts themselves. His theory agrees with Bohm's in holding that whole systems are not merely composed of their parts, and it identifies properties such as position and momentum as properties of whole systems beyond those of their components.
But Bohr states that these holistic properties are only meaningful in experimental contexts when physical systems are under observation and that these systems, when not under observation, cannot be said to have meaningful properties, even if these properties took place outside our observation. While Bohr claims these holistic properties exist only insofar as they can be observed, Bohm took his ontological holism one step further by claiming these properties must exist regardless. Linguistics Semantic holism suggests that the meaning of individual words depends on the meaning of other words, forming a large web of interconnections. In general, meaning holism states that the properties which determine the meaning of a word are connected such that if the meaning of one word changes, the meaning of every other word in the web changes as well. The set of words that alter in meaning due to a change in the meaning of some other is not necessarily specified in meaning holism, but typically such a change is taken straightforwardly to affect the meaning of every word in the language. In scientific disciplines, reductionism is the opposing viewpoint to holism. But in the context of linguistics or the philosophy of language, reductionism is typically referred to as atomism. Specifically, atomism states that each word's meaning is independent and so there are no emergent properties within a language. Additionally, there is meaning molecularism which states that a change in one word alters the meaning of only a relatively small set of other words. The linguistic perspective of meaning holism is traced back to Quine but was subsequently formalized by analytic philosophers Michael Dummett, Jerry Fodor, and Ernest Lepore. While this holistic approach attempts to resolve a classical problem for the philosophy of language concerning how words convey meaning, there is debate over its validity mostly from two angles of criticism: opposition to compositionality and, especially, instability of meaning. The first claims that meaning holism conflicts with the compositionality of language. Meaning in some languages is compositional in that meaning comes from the structure of an expression's parts. Meaning holism suggests that the meaning of words plays an inferential role in the meaning of other words: "pet fish" might infer a meaning of "less than 3 ounces." Since holistic views of meaning assume meaning depends on which words are used and how those words infer meaning onto other words, rather than how they are structured, meaning holism stands in conflict with compositionalism and leaves statements with potentially ambiguous meanings. The second criticism claims that meaning holism makes meaning in language unstable. If some words must be used to infer the meaning of other words, then in order to communicate a message, the sender and the receiver must share an identical set of inferential assumptions or beliefs. If these beliefs were different, meaning may be lost. Many types of communication would be directly affected by the principles of meaning holism such as informative communication, language learning, and communication about psychological states. Nevertheless, some meaning holists maintain that the instability of meaning holism is an acceptable feature from several different angles. In one example, contextual holists make this point simply by suggesting we often do not actually share identical inferential assumptions but instead rely on context to counter differences of inference and support communication. 
Biology Scientific applications of holism within biology are referred to as systems biology. The analytical approach opposed to systems biology is biological organization, which models biological systems and structures only in terms of their component parts. "The reductionist approach has successfully identified most of the components and many of the interactions but, unfortunately, offers no convincing concepts or methods to understand how system properties emerge...the pluralism of causes and effects in biological networks is better addressed by observing, through quantitative measures, multiple components simultaneously and by rigorous data integration with mathematical models." The objective in systems biology is to advance models of the interactions in a system. Holistic approaches to modelling have involved cellular modelling strategies, genomic interaction analysis, and phenotype prediction. Systems medicine Systems medicine is a practical approach to systems biology and accepts its holistic assumptions. Systems medicine takes the systems of the human body as making up a complete whole and uses this as a starting point in its research and, ultimately, treatment. Lifestyle applications The term holism is also sometimes used in the context of various lifestyle practices, such as dieting, education, and healthcare, to refer to ways of life that either supplement or replace conventional practices. In these contexts, holism is not necessarily a rigorous or well-defined methodology for obtaining a particular lifestyle outcome. It is sometimes simply an adjective to describe practices which account for factors that standard forms of these practices may discount, especially in the context of alternative medicine. See also Confirmation holism Emergentism Holism and Evolution Holism in ecological anthropology Holistic education Holon (philosophy) Holarchy Isomorphism Logical holism (also known as theoretical holism) Mereology Monism Reductionism Systems theory References
Industrial ecology
Industrial ecology (IE) is the study of material and energy flows through industrial systems. The global industrial economy can be modelled as a network of industrial processes that extract resources from the Earth and transform those resources into by-products, products and services which can be bought and sold to meet the needs of humanity. Industrial ecology seeks to quantify the material flows and document the industrial processes that make modern society function. Industrial ecologists are often concerned with the impacts that industrial activities have on the environment, with use of the planet's supply of natural resources, and with problems of waste disposal. Industrial ecology is a young but growing multidisciplinary field of research which combines aspects of engineering, economics, sociology, toxicology and the natural sciences. Industrial ecology has been defined as a "systems-based, multidisciplinary discourse that seeks to understand emergent behavior of complex integrated human/natural systems". The field approaches issues of sustainability by examining problems from multiple perspectives, usually involving aspects of sociology, the environment, economy and technology. The name comes from the idea that the analogy of natural systems should be used as an aid in understanding how to design sustainable industrial systems. Overview Industrial ecology is concerned with the shifting of industrial process from linear (open loop) systems, in which resource and capital investments move through the system to become waste, to a closed loop system where wastes can become inputs for new processes. Much of the research focuses on the following areas: material and energy flow studies ("industrial metabolism") dematerialization and decarbonization technological change and the environment life-cycle planning, design and assessment design for the environment ("eco-design") extended producer responsibility ("product stewardship") eco-industrial parks ("industrial symbiosis") product-oriented environmental policy eco-efficiency Industrial ecology seeks to understand the way in which industrial systems (for example a factory, an ecoregion, or national or global economy) interact with the biosphere. Natural ecosystems provide a metaphor for understanding how different parts of industrial systems interact with one another, in an "ecosystem" based on resources and infrastructural capital rather than on natural capital. It seeks to exploit the idea that natural systems do not have waste in them to inspire sustainable design. Along with more general energy conservation and material conservation goals, and redefining related international trade markets and product stewardship relations strictly as a service economy, industrial ecology is one of the four objectives of Natural Capitalism. This strategy discourages forms of amoral purchasing arising from ignorance of what goes on at a distance and implies a political economy that values natural capital highly and relies on more instructional capital to design and maintain each unique industrial ecology. History Industrial ecology was popularized in 1989 in a Scientific American article by Robert Frosch and Nicholas E. Gallopoulos. Frosch and Gallopoulos' vision was "why would not our industrial system behave like an ecosystem, where the wastes of a species may be resource to another species? Why would not the outputs of an industry be the inputs of another, thus reducing use of raw materials, pollution, and saving on waste treatment?" 
A notable example resides in a Danish industrial park in the city of Kalundborg. Here several linkages of byproducts and waste heat can be found between numerous entities such as a large power plant, an oil refinery, a pharmaceutical plant, a plasterboard factory, an enzyme manufacturer, a waste company and the city itself. Another example is the Rantasalmi EIP in Rantasalmi, Finland. While this country has had previous organically formed EIP's, the park at Rantasalmi is Finland's first planned EIP. The scientific field Industrial Ecology has grown quickly in recent years. The Journal of Industrial Ecology (since 1997), the International Society for Industrial Ecology (since 2001), and the journal Progress in Industrial Ecology (since 2004) give Industrial Ecology a strong and dynamic position in the international scientific community. Industrial Ecology principles are also emerging in various policy realms such as the concept of the circular economy that is being promoted in China. Although the definition of the circular economy has yet to be formalized, generally the focus is on strategies such as creating a circular flow of materials, and cascading energy flows. An example of this would be using waste heat from one process to run another process that requires a lower temperature. The hope is that strategies such as this will create a more efficient economy with fewer pollutants and other unwanted by-products. Principles One of the central principles of Industrial Ecology is the view that societal and technological systems are bounded within the biosphere, and do not exist outside it. Ecology is used as a metaphor due to the observation that natural systems reuse materials and have a largely closed loop cycling of nutrients. Industrial Ecology approaches problems with the hypothesis that by using similar principles as natural systems, industrial systems can be improved to reduce their impact on the natural environment as well. The table shows the general metaphor. IE examines societal issues and their relationship with both technical systems and the environment. Through this holistic view , IE recognizes that solving problems must involve understanding the connections that exist between these systems, various aspects cannot be viewed in isolation. Often changes in one part of the overall system can propagate and cause changes in another part. Thus, you can only understand a problem if you look at its parts in relation to the whole. Based on this framework, IE looks at environmental issues with a systems thinking approach. A good IE example with these societal impacts can be found at the Blue Lagoon in Iceland. The Lagoon uses super-heated water from a local geothermal power plant to fill mineral-rich basins that have become recreational healing centers. In this sense the industrial process of energy production uses its wastewater to provide a crucial resource for the dependent recreational industry. Take a city for instance. A city can be divided into commercial areas, residential areas, offices, services, infrastructures, and so forth. These are all sub-systems of the 'big city' system. Problems can emerge in one sub-system, but the solution has to be global. Let's say the price of housing is rising dramatically because there is too high a demand for housing. One solution would be to build new houses, but this will lead to more people living in the city, leading to the need for more infrastructure like roads, schools, more supermarkets, etc. 
This system is a simplified interpretation of reality whose behaviors can be 'predicted'. In many cases, the systems IE deals with are complex systems. Complexity makes it difficult to understand the behavior of the system and may lead to rebound effects. Due to unforeseen behavioral change of users or consumers, a measure taken to improve environmental performance does not lead to any improvement or may even worsen the situation. Moreover, life cycle thinking is also a very important principle in industrial ecology. It implies that all environmental impacts caused by a product, system, or project during its life cycle are taken into account. In this context life cycle includes Raw material extraction Material processing Manufacture Use Maintenance Disposal The transport necessary between these stages is also taken into account as well as, if relevant, extra stages such as reuse, remanufacture, and recycle. Adopting a life cycle approach is essential to avoid shifting environmental impacts from one life cycle stage to another. This is commonly referred to as problem shifting. For instance, during the re-design of a product, one can choose to reduce its weight, thereby decreasing use of resources. It is possible that the lighter materials used in the new product will be more difficult to dispose of. The environmental impacts of the product gained during the extraction phase are shifted to the disposal phase. Overall environmental improvements are thus null. A final important principle of IE is its integrated approach or multidisciplinarity. IE takes into account three different disciplines: social sciences (including economics), technical sciences and environmental sciences. The challenge is to merge them into a single approach. Examples The Kalundborg industrial park is located in Denmark. This industrial park is special because companies reuse each other's waste (which then becomes by-products). For example, the Energy E2 Asnæs Power Station produces gypsum as a by-product of the electricity generation process; this gypsum becomes a resource for the BPB Gyproc A/S which produces plasterboards. This is one example of a system inspired by the biosphere-technosphere metaphor: in ecosystems, the waste from one organism is used as inputs to other organisms; in industrial systems, waste from a company is used as a resource by others. Apart from the direct benefit of incorporating waste into the loop, the use of an eco-industrial park can be a means of making renewable energy generating plants, like Solar PV, more economical and environmentally friendly. In essence, this assists the growth of the renewable energy industry and the environmental benefits that come with replacing fossil-fuels. Additional examples of industrial ecology include: Substituting the fly ash byproduct of coal burning practices for cement in concrete production Using second generation biofuels. An example of this is converting grease or cooking oil to biodiesels to fuel vehicles. South Africa's National Cleaner Production Center (NCPC) was created in order to make the region's industries more efficient in terms of materials. Results of the use of sustainable methods will include lowered energy costs and improved waste management. The program assesses existing companies to implement change. Onsite non-potable water reuse Biodegradable plastic created from polymerized chicken feathers, which are 90% keratin and account for over 6 million tons of waste in the EU and US annually. 
As agricultural waste, the chicken feathers are recycled into disposable plastic products, which then biodegrade easily into soil.
Toyota Motor Company channels a portion of its emitted greenhouse gases back into its system as recovered thermal energy.
Anheuser-Busch signed a memorandum of understanding with the biochemical company Blue Marble to use brewing wastes as the basis for its "green" products.
Enhanced oil recovery at Petra Nova.
Reusing cork from wine bottles in shoe soles, flooring tiles, building insulation, automotive gaskets, craft materials, and soil conditioner.
The Darling Quarter Commonwealth Bank Place North building in Sydney, Australia, which recycles and reuses its wastewater.
Plant-based plastic packaging that is 100% recyclable and environmentally friendly.
Using food waste for compost, which serves as a natural fertilizer for future food production; food waste that has not been contaminated can additionally be used to feed those experiencing food insecurity.
The Hellisheiði geothermal power station, which uses ground water to produce electricity and hot water for the city of Reykjavík; its carbon by-products are injected back into the Earth and calcified, leaving the station with net zero carbon emissions.
Tools
Future directions
The ecosystem metaphor popularized by Frosch and Gallopoulos has been a valuable creative tool for helping researchers look for novel solutions to difficult problems. It has recently been pointed out that this metaphor is based largely on a model of classical ecology, and that advances in the understanding of ecology based on complexity science have been made by researchers such as C. S. Holling and James J. Kay, and carried further in terms of contemporary ecology by others. For industrial ecology, this may mean a shift from a more mechanistic view of systems to one in which sustainability is viewed as an emergent property of a complex system. To explore this further, several researchers are working with agent-based modeling techniques. Exergy analysis is performed in the field of industrial ecology in order to use energy more efficiently. The term exergy was coined by Zoran Rant in 1956, but the concept had been developed earlier by J. Willard Gibbs. In recent decades, the use of exergy has spread beyond physics and engineering to industrial ecology, ecological economics, systems ecology, and energetics.
Other examples
An example of industrial ecology both in practice and in potential is the Burnside Cleaner Production Centre in Burnside, Nova Scotia. It plays a role in facilitating the 'greening' of the more than 1,200 businesses located in Burnside, Eastern Canada's largest industrial park. The creation of waste exchanges, which promote strong industrial ecology relationships, is a large part of its work. Onsan Industrial Park is a case-study program intended to serve as an example of policies and practices relevant to pursuing a green growth model of development. The potential benefits of the EIP model are being demonstrated in the Republic of Korea, where more than 1,000 businesses from a wide range of industries are based in the Ulsan Mipo and Onsan Industrial Park; with more than 100,000 jobs, the park is South Korea's industrial capital. A common practice in wastewater management is to use the leftover "sludge" as fertilizer, since the wastewater contains large amounts of phosphorus and nitrogen, chemicals that are valuable in fertilizer. Gjenge Makers Ltd is another example of industrial ecology.
The company takes discarded plastics and turns them into bricks. Gjenge Makers receives leftover plastic waste from packaging factories and recycling facilities and sells the resulting pavers.
See also
References
Further reading
The Industrial Green Game: Implications for Environmental Design and Management, Deanna J. Richards (ed.), National Academy Press, Washington DC, USA, 1997.
Handbook of Input-Output Economics in Industrial Ecology, Sangwon Suh (ed.), Springer, 2009.
External links
Articles and books
Industrial Ecology: An Introduction
Industrial Ecology
Industrial Symbiosis Timeline
Journal of Industrial Ecology (Yale University on behalf of the School of Forestry and Environmental Studies)
Industrial Ecology research & articles from The Program for the Human Environment, The Rockefeller University
Education
Industrial Ecology open online course (IEooc)
Erasmus Mundus Master's Programme in Industrial Ecology
Industrial Ecology Programme at NTNU, Trondheim, Norway
Industrial Ecology Master's Programme at Leiden University & TU Delft (joint degree), Leiden/Delft, The Netherlands
Center for Industrial Ecology at Yale University's School of Forestry & Environmental Studies, New Haven, CT, USA
Research material
Inventory of free and open software tools for industrial ecology research
Network
International Society for Industrial Ecology – ISIE
Genomics
Genomics is an interdisciplinary field of molecular biology focusing on the structure, function, evolution, mapping, and editing of genomes. A genome is an organism's complete set of DNA, including all of its genes as well as its hierarchical, three-dimensional structural configuration. In contrast to genetics, which refers to the study of individual genes and their roles in inheritance, genomics aims at the collective characterization and quantification of all of an organism's genes, their interrelations and their influence on the organism. Genes may direct the production of proteins with the assistance of enzymes and messenger molecules. In turn, proteins make up body structures such as organs and tissues, control chemical reactions, and carry signals between cells. Genomics also involves the sequencing and analysis of genomes using high-throughput DNA sequencing and bioinformatics to assemble and analyze the function and structure of entire genomes. Advances in genomics have triggered a revolution in discovery-based research and systems biology, facilitating the understanding of even the most complex biological systems such as the brain. The field also includes studies of intragenomic (within the genome) phenomena such as epistasis (the effect of one gene on another), pleiotropy (one gene affecting more than one trait), heterosis (hybrid vigour), and other interactions between loci and alleles within the genome.
History
Etymology
From the Greek gen (ΓΕΝ), meaning "become, create, creation, birth", and subsequent variants: genealogy, genesis, genetics, genic, genomere, genotype, genus, etc. While the word genome (from the German Genom, attributed to Hans Winkler) was in use in English as early as 1926, the term genomics was coined by Tom Roderick, a geneticist at the Jackson Laboratory (Bar Harbor, Maine), over beers with Jim Womack, Tom Shows and Stephen O'Brien at a 1986 meeting held in Maryland on the mapping of the human genome, first as the name for a new journal and then as the name of a whole new scientific discipline.
Early sequencing efforts
Following Rosalind Franklin's confirmation of the helical structure of DNA, James D. Watson and Francis Crick's publication of the structure of DNA in 1953, and Fred Sanger's publication of the amino acid sequence of insulin in 1955, nucleic acid sequencing became a major target of early molecular biologists. In 1964, Robert W. Holley and colleagues published the first nucleic acid sequence ever determined, the ribonucleotide sequence of alanine transfer RNA. Extending this work, Marshall Nirenberg and Philip Leder revealed the triplet nature of the genetic code and were able to determine the sequences of 54 of the 64 codons in their experiments. In 1972, Walter Fiers and his team at the Laboratory of Molecular Biology of the University of Ghent (Ghent, Belgium) were the first to determine the sequence of a gene: the gene for the bacteriophage MS2 coat protein. Fiers' group then expanded on this work, determining the complete nucleotide sequence of bacteriophage MS2 RNA (whose genome encodes just four genes in 3,569 base pairs [bp]) in 1976 and of Simian virus 40 in 1978.
DNA-sequencing technology developed
In addition to his seminal work on the amino acid sequence of insulin, Frederick Sanger and his colleagues played a key role in the development of the DNA sequencing techniques that enabled the establishment of comprehensive genome sequencing projects.
In 1975, he and Alan Coulson published a sequencing procedure using DNA polymerase with radiolabelled nucleotides that he called the Plus and Minus technique. This involved two closely related methods that generated short oligonucleotides with defined 3' termini. These could be fractionated by electrophoresis on a polyacrylamide gel (called polyacrylamide gel electrophoresis) and visualised using autoradiography. The procedure could sequence up to 80 nucleotides in one go and was a big improvement, but was still very laborious. Nevertheless, in 1977 his group was able to sequence most of the 5,386 nucleotides of the single-stranded bacteriophage φX174, completing the first fully sequenced DNA-based genome. The refinement of the Plus and Minus method resulted in the chain-termination, or Sanger method (see below), which formed the basis of the techniques of DNA sequencing, genome mapping, data storage, and bioinformatic analysis most widely used in the following quarter-century of research. In the same year Walter Gilbert and Allan Maxam of Harvard University independently developed the Maxam-Gilbert method (also known as the chemical method) of DNA sequencing, involving the preferential cleavage of DNA at known bases, a less efficient method. For their groundbreaking work in the sequencing of nucleic acids, Gilbert and Sanger shared half the 1980 Nobel Prize in chemistry with Paul Berg (recombinant DNA). Complete genomes The advent of these technologies resulted in a rapid intensification in the scope and speed of completion of genome sequencing projects. The first complete genome sequence of a eukaryotic organelle, the human mitochondrion (16,568 bp, about 16.6 kb [kilobase]), was reported in 1981, and the first chloroplast genomes followed in 1986. In 1992, the first eukaryotic chromosome, chromosome III of brewer's yeast Saccharomyces cerevisiae (315 kb) was sequenced. The first free-living organism to be sequenced was that of Haemophilus influenzae (1.8 Mb [megabase]) in 1995. The following year a consortium of researchers from laboratories across North America, Europe, and Japan announced the completion of the first complete genome sequence of a eukaryote, S. cerevisiae (12.1 Mb), and since then genomes have continued being sequenced at an exponentially growing pace. , the complete sequences are available for: 2,719 viruses, 1,115 archaea and bacteria, and 36 eukaryotes, of which about half are fungi. Most of the microorganisms whose genomes have been completely sequenced are problematic pathogens, such as Haemophilus influenzae, which has resulted in a pronounced bias in their phylogenetic distribution compared to the breadth of microbial diversity. Of the other sequenced species, most were chosen because they were well-studied model organisms or promised to become good models. Yeast (Saccharomyces cerevisiae) has long been an important model organism for the eukaryotic cell, while the fruit fly Drosophila melanogaster has been a very important tool (notably in early pre-molecular genetics). The worm Caenorhabditis elegans is an often used simple model for multicellular organisms. The zebrafish Brachydanio rerio is used for many developmental studies on the molecular level, and the plant Arabidopsis thaliana is a model organism for flowering plants. The Japanese pufferfish (Takifugu rubripes) and the spotted green pufferfish (Tetraodon nigroviridis) are interesting because of their small and compact genomes, which contain very little noncoding DNA compared to most species. 
The mammals dog (Canis familiaris), brown rat (Rattus norvegicus), mouse (Mus musculus), and chimpanzee (Pan troglodytes) are all important model animals in medical research. A rough draft of the human genome was completed by the Human Genome Project in early 2001, creating much fanfare. This project, completed in 2003, sequenced the entire genome for one specific person, and by 2007 this sequence was declared "finished" (less than one error in 20,000 bases and all chromosomes assembled). In the years since then, the genomes of many other individuals have been sequenced, partly under the auspices of the 1000 Genomes Project, which announced the sequencing of 1,092 genomes in October 2012. Completion of this project was made possible by the development of dramatically more efficient sequencing technologies and required the commitment of significant bioinformatics resources from a large international collaboration. The continued analysis of human genomic data has profound political and social repercussions for human societies. The "omics" revolution The English-language neologism omics informally refers to a field of study in biology ending in -omics, such as genomics, proteomics or metabolomics. The related suffix -ome is used to address the objects of study of such fields, such as the genome, proteome, or metabolome (lipidome) respectively. The suffix -ome as used in molecular biology refers to a totality of some sort; similarly omics has come to refer generally to the study of large, comprehensive biological data sets. While the growth in the use of the term has led some scientists (Jonathan Eisen, among others) to claim that it has been oversold, it reflects the change in orientation towards the quantitative analysis of complete or near-complete assortment of all the constituents of a system. In the study of symbioses, for example, researchers which were once limited to the study of a single gene product can now simultaneously compare the total complement of several types of biological molecules. Genome analysis After an organism has been selected, genome projects involve three components: the sequencing of DNA, the assembly of that sequence to create a representation of the original chromosome, and the annotation and analysis of that representation. Sequencing Historically, sequencing was done in sequencing centers, centralized facilities (ranging from large independent institutions such as Joint Genome Institute which sequence dozens of terabases a year, to local molecular biology core facilities) which contain research laboratories with the costly instrumentation and technical support necessary. As sequencing technology continues to improve, however, a new generation of effective fast turnaround benchtop sequencers has come within reach of the average academic laboratory. On the whole, genome sequencing approaches fall into two broad categories, shotgun and high-throughput (or next-generation) sequencing. Shotgun sequencing Shotgun sequencing is a sequencing method designed for analysis of DNA sequences longer than 1000 base pairs, up to and including entire chromosomes. It is named by analogy with the rapidly expanding, quasi-random firing pattern of a shotgun. Since gel electrophoresis sequencing can only be used for fairly short sequences (100 to 1000 base pairs), longer DNA sequences must be broken into random small segments which are then sequenced to obtain reads. 
Multiple overlapping reads for the target DNA are obtained by performing several rounds of this fragmentation and sequencing. Computer programs then use the overlapping ends of different reads to assemble them into a continuous sequence. Shotgun sequencing is a random sampling process, requiring over-sampling to ensure a given nucleotide is represented in the reconstructed sequence; the average number of reads by which a genome is over-sampled is referred to as coverage. For much of its history, the technology underlying shotgun sequencing was the classical chain-termination method or 'Sanger method', which is based on the selective incorporation of chain-terminating dideoxynucleotides by DNA polymerase during in vitro DNA replication. Recently, shotgun sequencing has been supplanted by high-throughput sequencing methods, especially for large-scale, automated genome analyses. However, the Sanger method remains in wide use, primarily for smaller-scale projects and for obtaining especially long contiguous DNA sequence reads (>500 nucleotides). Chain-termination methods require a single-stranded DNA template, a DNA primer, a DNA polymerase, normal deoxynucleosidetriphosphates (dNTPs), and modified nucleotides (dideoxyNTPs) that terminate DNA strand elongation. These chain-terminating nucleotides lack a 3'-OH group required for the formation of a phosphodiester bond between two nucleotides, causing DNA polymerase to cease extension of DNA when a ddNTP is incorporated. The ddNTPs may be radioactively or fluorescently labelled for detection in DNA sequencers. Typically, these machines can sequence up to 96 DNA samples in a single batch (run) in up to 48 runs a day. High-throughput sequencing The high demand for low-cost sequencing has driven the development of high-throughput sequencing technologies that parallelize the sequencing process, producing thousands or millions of sequences at once. High-throughput sequencing is intended to lower the cost of DNA sequencing beyond what is possible with standard dye-terminator methods. In ultra-high-throughput sequencing, as many as 500,000 sequencing-by-synthesis operations may be run in parallel. The Illumina dye sequencing method is based on reversible dye-terminators and was developed in 1996 at the Geneva Biomedical Research Institute, by Pascal Mayer and Laurent Farinelli. In this method, DNA molecules and primers are first attached on a slide and amplified with polymerase so that local clonal colonies, initially coined "DNA colonies", are formed. To determine the sequence, four types of reversible terminator bases (RT-bases) are added and non-incorporated nucleotides are washed away. Unlike pyrosequencing, the DNA chains are extended one nucleotide at a time and image acquisition can be performed at a delayed moment, allowing for very large arrays of DNA colonies to be captured by sequential images taken from a single camera. Decoupling the enzymatic reaction and the image capture allows for optimal throughput and theoretically unlimited sequencing capacity; with an optimal configuration, the ultimate throughput of the instrument depends only on the A/D conversion rate of the camera. The camera takes images of the fluorescently labeled nucleotides, then the dye along with the terminal 3' blocker is chemically removed from the DNA, allowing the next cycle. An alternative approach, ion semiconductor sequencing, is based on standard DNA replication chemistry. This technology measures the release of a hydrogen ion each time a base is incorporated. 
A microwell containing template DNA is flooded with a single nucleotide species; if the nucleotide is complementary to the template strand, it is incorporated and a hydrogen ion is released. This release triggers an ISFET ion sensor. If a homopolymer is present in the template sequence, multiple nucleotides will be incorporated in a single flood cycle and the detected electrical signal will be proportionally higher.
Assembly
Sequence assembly refers to aligning and merging fragments of a much longer DNA sequence in order to reconstruct the original sequence. This is needed because current DNA sequencing technology cannot read whole genomes as a continuous sequence, but rather reads small pieces of between 20 and 1000 bases, depending on the technology used. Third-generation sequencing technologies such as PacBio or Oxford Nanopore routinely generate sequencing reads 10–100 kb in length; however, they have a high error rate of approximately 1 percent. Typically, the short fragments, called reads, result from shotgun sequencing of genomic DNA or of gene transcripts (ESTs).
Assembly approaches
Assembly can be broadly categorized into two approaches: de novo assembly, for genomes that are not similar to any sequenced in the past, and comparative assembly, which uses the existing sequence of a closely related organism as a reference during assembly. Relative to comparative assembly, de novo assembly is computationally difficult (NP-hard), making it less favourable for short-read NGS technologies. Within the de novo assembly paradigm there are two primary strategies: Eulerian path strategies and overlap-layout-consensus (OLC) strategies. OLC strategies ultimately try to create a Hamiltonian path through an overlap graph, which is an NP-hard problem. Eulerian path strategies are computationally more tractable because they try to find a Eulerian path through a de Bruijn graph.
Finishing
Finished genomes are defined as having a single contiguous sequence with no ambiguities representing each replicon.
Annotation
The DNA sequence assembly alone is of little value without additional analysis. Genome annotation is the process of attaching biological information to sequences, and consists of three main steps: identifying portions of the genome that do not code for proteins; identifying elements on the genome, a process called gene prediction; and attaching biological information to these elements. Automatic annotation tools try to perform these steps in silico, as opposed to manual annotation (a.k.a. curation), which involves human expertise and potential experimental verification. Ideally, these approaches co-exist and complement each other in the same annotation pipeline. Traditionally, the basic level of annotation uses BLAST to find similarities, with genomes then annotated on the basis of the resulting homologues. More recently, additional information is being added to annotation platforms; this additional information allows manual annotators to deconvolute discrepancies between genes that are given the same annotation. Some databases use genome context information, similarity scores, experimental data, and integrations of other resources to provide genome annotations through their Subsystems approach. Other databases (e.g. Ensembl) rely on both curated data sources and a range of software tools in their automated genome annotation pipeline. Structural annotation consists of the identification of genomic elements, primarily ORFs and their localisation, or gene structure; a minimal sketch of this kind of ORF scanning follows.
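As an illustration of structural annotation, the sketch below scans the three forward reading frames of a DNA string for stretches that start with ATG and end at an in-frame stop codon. It is a deliberately minimal toy, not a real gene-prediction tool: the example sequence and the minimum-length threshold are arbitrary assumptions, and real annotation pipelines also handle the reverse strand, splicing, and statistical gene models.

```python
# Toy open-reading-frame (ORF) scanner: finds ATG...stop stretches in the
# three forward reading frames of a DNA string.

STOP_CODONS = {"TAA", "TAG", "TGA"}


def find_orfs(seq: str, min_codons: int = 10):
    """Yield (start, end, frame) for ORFs of at least min_codons codons."""
    seq = seq.upper()
    for frame in range(3):
        start = None
        for i in range(frame, len(seq) - 2, 3):
            codon = seq[i:i + 3]
            if codon == "ATG" and start is None:
                start = i
            elif codon in STOP_CODONS and start is not None:
                if (i + 3 - start) // 3 >= min_codons:
                    yield start, i + 3, frame
                start = None


if __name__ == "__main__":
    # Hypothetical sequence: an ATG, ten filler codons, then a stop codon.
    dna = "CC" + "ATG" + "GCT" * 10 + "TAA" + "GGCATTA"
    for start, end, frame in find_orfs(dna, min_codons=5):
        print(f"ORF in frame {frame}: positions {start}-{end}, "
              f"{(end - start) // 3} codons")
```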
Functional annotation consists of attaching biological information to genomic elements. Sequencing pipelines and databases The need for reproducibility and efficient management of the large amount of data associated with genome projects mean that computational pipelines have important applications in genomics. Research areas Functional genomics Functional genomics is a field of molecular biology that attempts to make use of the vast wealth of data produced by genomic projects (such as genome sequencing projects) to describe gene (and protein) functions and interactions. Functional genomics focuses on the dynamic aspects such as gene transcription, translation, and protein–protein interactions, as opposed to the static aspects of the genomic information such as DNA sequence or structures. Functional genomics attempts to answer questions about the function of DNA at the levels of genes, RNA transcripts, and protein products. A key characteristic of functional genomics studies is their genome-wide approach to these questions, generally involving high-throughput methods rather than a more traditional "gene-by-gene" approach. A major branch of genomics is still concerned with sequencing the genomes of various organisms, but the knowledge of full genomes has created the possibility for the field of functional genomics, mainly concerned with patterns of gene expression during various conditions. The most important tools here are microarrays and bioinformatics. Structural genomics Structural genomics seeks to describe the 3-dimensional structure of every protein encoded by a given genome. This genome-based approach allows for a high-throughput method of structure determination by a combination of experimental and modeling approaches. The principal difference between structural genomics and traditional structural prediction is that structural genomics attempts to determine the structure of every protein encoded by the genome, rather than focusing on one particular protein. With full-genome sequences available, structure prediction can be done more quickly through a combination of experimental and modeling approaches, especially because the availability of large numbers of sequenced genomes and previously solved protein structures allow scientists to model protein structure on the structures of previously solved homologs. Structural genomics involves taking a large number of approaches to structure determination, including experimental methods using genomic sequences or modeling-based approaches based on sequence or structural homology to a protein of known structure or based on chemical and physical principles for a protein with no homology to any known structure. As opposed to traditional structural biology, the determination of a protein structure through a structural genomics effort often (but not always) comes before anything is known regarding the protein function. This raises new challenges in structural bioinformatics, i.e. determining protein function from its 3D structure. Epigenomics Epigenomics is the study of the complete set of epigenetic modifications on the genetic material of a cell, known as the epigenome. Epigenetic modifications are reversible modifications on a cell's DNA or histones that affect gene expression without altering the DNA sequence (Russell 2010 p. 475). Two of the most characterized epigenetic modifications are DNA methylation and histone modification. 
Epigenetic modifications play an important role in gene expression and regulation, and are involved in numerous cellular processes such as in differentiation/development and tumorigenesis. The study of epigenetics on a global level has been made possible only recently through the adaptation of genomic high-throughput assays. Metagenomics Metagenomics is the study of metagenomes, genetic material recovered directly from environmental samples. The broad field may also be referred to as environmental genomics, ecogenomics or community genomics. While traditional microbiology and microbial genome sequencing rely upon cultivated clonal cultures, early environmental gene sequencing cloned specific genes (often the 16S rRNA gene) to produce a profile of diversity in a natural sample. Such work revealed that the vast majority of microbial biodiversity had been missed by cultivation-based methods. Recent studies use "shotgun" Sanger sequencing or massively parallel pyrosequencing to get largely unbiased samples of all genes from all the members of the sampled communities. Because of its power to reveal the previously hidden diversity of microscopic life, metagenomics offers a powerful lens for viewing the microbial world that has the potential to revolutionize understanding of the entire living world. Model systems Viruses and bacteriophages Bacteriophages have played and continue to play a key role in bacterial genetics and molecular biology. Historically, they were used to define gene structure and gene regulation. Also the first genome to be sequenced was a bacteriophage. However, bacteriophage research did not lead the genomics revolution, which is clearly dominated by bacterial genomics. Only very recently has the study of bacteriophage genomes become prominent, thereby enabling researchers to understand the mechanisms underlying phage evolution. Bacteriophage genome sequences can be obtained through direct sequencing of isolated bacteriophages, but can also be derived as part of microbial genomes. Analysis of bacterial genomes has shown that a substantial amount of microbial DNA consists of prophage sequences and prophage-like elements. A detailed database mining of these sequences offers insights into the role of prophages in shaping the bacterial genome: Overall, this method verified many known bacteriophage groups, making this a useful tool for predicting the relationships of prophages from bacterial genomes. Cyanobacteria At present there are 24 cyanobacteria for which a total genome sequence is available. 15 of these cyanobacteria come from the marine environment. These are six Prochlorococcus strains, seven marine Synechococcus strains, Trichodesmium erythraeum IMS101 and Crocosphaera watsonii WH8501. Several studies have demonstrated how these sequences could be used very successfully to infer important ecological and physiological characteristics of marine cyanobacteria. However, there are many more genome projects currently in progress, amongst those there are further Prochlorococcus and marine Synechococcus isolates, Acaryochloris and Prochloron, the N2-fixing filamentous cyanobacteria Nodularia spumigena, Lyngbya aestuarii and Lyngbya majuscula, as well as bacteriophages infecting marine cyanobaceria. Thus, the growing body of genome information can also be tapped in a more general way to address global problems by applying a comparative approach. 
Some new and exciting examples of progress in this field are the identification of genes for regulatory RNAs, insights into the evolutionary origin of photosynthesis, or estimation of the contribution of horizontal gene transfer to the genomes that have been analyzed. Applications Genomics has provided applications in many fields, including medicine, biotechnology, anthropology and other social sciences. Genomic medicine Next-generation genomic technologies allow clinicians and biomedical researchers to drastically increase the amount of genomic data collected on large study populations. When combined with new informatics approaches that integrate many kinds of data with genomic data in disease research, this allows researchers to better understand the genetic bases of drug response and disease. Early efforts to apply the genome to medicine included those by a Stanford team led by Euan Ashley who developed the first tools for the medical interpretation of a human genome. The Genomes2People research program at Brigham and Women’s Hospital, Broad Institute and Harvard Medical School was established in 2012 to conduct empirical research in translating genomics into health. Brigham and Women's Hospital opened a Preventive Genomics Clinic in August 2019, with Massachusetts General Hospital following a month later. The All of Us research program aims to collect genome sequence data from 1 million participants to become a critical component of the precision medicine research platform. Synthetic biology and bioengineering The growth of genomic knowledge has enabled increasingly sophisticated applications of synthetic biology. In 2010 researchers at the J. Craig Venter Institute announced the creation of a partially synthetic species of bacterium, Mycoplasma laboratorium, derived from the genome of Mycoplasma genitalium. Population and conservation genomics Population genomics has developed as a popular field of research, where genomic sequencing methods are used to conduct large-scale comparisons of DNA sequences among populations - beyond the limits of genetic markers such as short-range PCR products or microsatellites traditionally used in population genetics. Population genomics studies genome-wide effects to improve our understanding of microevolution so that we may learn the phylogenetic history and demography of a population. Population genomic methods are used for many different fields including evolutionary biology, ecology, biogeography, conservation biology and fisheries management. Similarly, landscape genomics has developed from landscape genetics to use genomic methods to identify relationships between patterns of environmental and genetic variation. Conservationists can use the information gathered by genomic sequencing in order to better evaluate genetic factors key to species conservation, such as the genetic diversity of a population or whether an individual is heterozygous for a recessive inherited genetic disorder. By using genomic data to evaluate the effects of evolutionary processes and to detect patterns in variation throughout a given population, conservationists can formulate plans to aid a given species without as many variables left unknown as those unaddressed by standard genetic approaches. 
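As a small illustration of the genetic-diversity summaries mentioned above, the sketch below computes expected heterozygosity, a standard measure of diversity at a single locus, as one minus the sum of squared allele frequencies. The genotypes are invented for the example and do not describe any real population.

```python
# Expected heterozygosity (gene diversity) at a single locus:
# H_e = 1 - sum(p_i ** 2), where p_i are allele frequencies.
# Allele counts below are hypothetical, for illustration only.

from collections import Counter


def expected_heterozygosity(allele_counts: dict) -> float:
    total = sum(allele_counts.values())
    return 1.0 - sum((n / total) ** 2 for n in allele_counts.values())


if __name__ == "__main__":
    # Genotypes sampled from an imaginary population (one locus, two alleles).
    genotypes = ["AA", "Aa", "Aa", "aa", "AA", "Aa", "AA", "Aa"]
    counts = Counter(allele for g in genotypes for allele in g)
    he = expected_heterozygosity(counts)
    print(f"Allele counts: {dict(counts)}")
    print(f"Expected heterozygosity: {he:.3f}")
```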
See also
Hi-C (genomic analysis technique)
Cognitive genomics
Computational genomics
Epigenomics
Functional genomics
GeneCalling, an mRNA profiling technology
Genomics of domestication
Genetics in fiction
Glycomics
Immunomics
Metagenomics
Pathogenomics
Personal genomics
Proteomics
Transcriptomics
Venomics
Psychogenomics
Whole genome sequencing
Thomas Roderick
References
Further reading
External links
Annual Review of Genomics and Human Genetics
BMC Genomics: a BMC journal on genomics
Genomics (journal)
Genomics.org: an open, free genomics portal
NHGRI: the US government's genome institute
JCVI Comprehensive Microbial Resource
KoreaGenome.org: the first published Korean genome, with the sequence freely available
GenomicsNetwork: looks at the development and use of the science and technologies of genomics
Institute for Genome Sciences: genomics research
MIT OpenCourseWare HST.512 Genomic Medicine: a free, self-study course in genomic medicine; resources include audio lectures and selected lecture notes
ENCODE threads explorer: machine learning approaches to genomics, Nature (journal)
Global map of genomics laboratories
Genomics: Scitable by Nature Education
Learn All About Genetics Online
Ecocentrism
Ecocentrism (; from Greek: οἶκος oikos, 'house' and κέντρον kentron, 'center') is a term used by environmental philosophers and ecologists to denote a nature-centered, as opposed to human-centered (i.e., anthropocentric), system of values. The justification for ecocentrism usually consists in an ontological belief and subsequent ethical claim. The ontological belief denies that there are any existential divisions between human and non-human nature sufficient to claim that humans are either (a) the sole bearers of intrinsic value or (b) possess greater intrinsic value than non-human nature. Thus the subsequent ethical claim is for an equality of intrinsic value across human and non-human nature, or biospherical egalitarianism. Origin of term The ecocentric ethic was conceived by Aldo Leopold and recognizes that all species, including humans, are the product of a long evolutionary process and are inter-related in their life processes. The writings of Aldo Leopold and his idea of the land ethic and good environmental management are a key element to this philosophy. Ecocentrism focuses on the biotic community as a whole and strives to maintain ecosystem composition and ecological processes. The term also finds expression in the first principle of the deep ecology movement, as formulated by Arne Næss and George Sessions in 1984 which points out that anthropocentrism, which considers humans as the center of the universe and the pinnacle of all creation, is a difficult opponent for ecocentrism. Background Environmental thought and the various branches of the environmental movement are often classified into two intellectual camps: those that are considered anthropocentric, or "human-centred," in orientation and those considered biocentric, or "life-centred". This division has been described in other terminology as "shallow" ecology versus "deep" ecology and as "technocentrism" versus "ecocentrism". Ecocentrism can be seen as one stream of thought within environmentalism, the political and ethical movement that seeks to protect and improve the quality of the natural environment through changes to environmentally harmful human activities by adopting environmentally benign forms of political, economic, and social organization and through a reassessment of humanity's relationship with nature. In various ways, environmentalism claims that non-human organisms and the natural environment as a whole deserve consideration when appraising the morality of political, economic, and social policies. Environmental communication scholars suggest that anthropocentric ways of being and identities are maintained by various modes of cultural disciplinary power such as ridiculing, labelling, and silencing. Accordingly, the transition to more ecocentric ways of being and identities requires not only legal and economic structural change, but also the emergence of ecocultural practices that challenge anthropocentric disciplinary power and lead to the creation of ecocentric cultural norms. Relationship to other similar philosophies Anthropocentrism Ecocentrism is taken by its proponents to constitute a radical challenge to long-standing and deeply rooted anthropocentric attitudes in Western culture, science, and politics. Anthropocentrism is alleged to leave the case for the protection of non-human nature subject to the demands of human utility, and thus never more than contingent on the demands of human welfare. 
An ecocentric ethic, by contrast, is believed to be necessary in order to develop a non-contingent basis for protecting the natural world. Critics of ecocentrism have argued that it opens the doors to an anti-humanist morality that risks sacrificing human well-being for the sake of an ill-defined 'greater good'. The deep ecologist Arne Næss has identified anthropocentrism as a root cause of the ecological crisis, human overpopulation, and the extinction of many non-human species. Lupinacci also points to anthropocentrism as a root cause of environmental degradation. Others point to the gradual historical realization that humans are not the centre of all things: "A few hundred years ago, with some reluctance, Western people admitted that the planets, Sun and stars did not circle around their abode. In short, our thoughts and concepts though irreducibly anthropomorphic need not be anthropocentric."
Industrocentrism
Industrocentrism sees all things on earth as resources to be utilized by humans or to be commodified. This view is the opposite of anthropocentrism and ecocentrism.
Technocentrism
Ecocentrism is also contrasted with technocentrism (values centred on technology), the two constituting opposing perspectives on attitudes towards human technology and its ability to affect, control and even protect the environment. Ecocentrics, including "deep green" ecologists, see themselves as being subject to nature rather than in control of it. They lack faith in modern technology and the bureaucracy attached to it. Ecocentrics argue that the natural world should be respected for its processes and products, and that low-impact technology and self-reliance are more desirable than technological control of nature. Technocentrics, including imperialists, have absolute faith in technology and industry and firmly believe that humans have control over nature. Although technocentrics may accept that environmental problems exist, they do not see them as problems to be solved by a reduction in industry; rather, they hold that the way forward for developed and developing countries, and the solutions to today's environmental problems, lie in scientific and technological advancement.
Biocentrism
The distinction between biocentrism and ecocentrism is ill-defined. Ecocentrism recognizes Earth's interactive living and non-living systems, rather than just the Earth's organisms (biocentrism), as central in importance. The term has been used by those advocating "left biocentrism", which combines deep ecology with an "anti-industrial and anti-capitalist" position (David Orton et al.).
See also
Deep ecology
Earth liberation
Ecosophy
Ecocentric embodied energy analysis
Environmentalism
Ecological humanities
Radical environmentalism
Gaia hypothesis
Holocentric
Sentiocentrism
Social ecology (Bookchin)
Technocentrism
References
Further reading
Bosselmann, K. 1999. When Two Worlds Collide: Society and Ecology.
Eckersley, R. 1992. Environmentalism and Political Theory: Toward an Ecocentric Approach. State University of New York Press.
Hettinger, Ned and Throop, Bill 1999. Refocusing Ecocentrism: De-emphasizing Stability and Defending Wilderness. Environmental Ethics 21: 3-21.
External links
The Ecological Citizen
Ecospheric Ethics
Ecocentric Alliance
Motility
Motility is the ability of an organism to move independently using metabolic energy. This biological concept encompasses movement at various levels, from whole organisms to cells and subcellular components. Motility is observed in animals, microorganisms, and even some plant structures, playing crucial roles in activities such as foraging, reproduction, and cellular functions. It is genetically determined but can be influenced by environmental factors. In multicellular organisms, motility is facilitated by systems like the nervous and musculoskeletal systems, while at the cellular level it involves mechanisms such as amoeboid movement and flagellar propulsion. These cellular movements can be directed by external stimuli, a phenomenon known as taxis. Examples include chemotaxis (movement along chemical gradients) and phototaxis (movement in response to light). Motility also includes physiological processes like gastrointestinal movements and peristalsis. Understanding motility is important in biology, medicine, and ecology, as it impacts processes ranging from bacterial behavior to ecosystem dynamics.
Definitions
Motility, the ability of an organism to move independently using metabolic energy, can be contrasted with sessility, the state of organisms that do not possess a means of self-locomotion and are normally immobile. Motility differs from mobility, the ability of an object to be moved. The term vagility refers to a lifeform that can be moved, but only passively; sessile organisms, including plants and fungi, often have vagile parts such as fruits, seeds, or spores which may be dispersed by other agents such as wind, water, or other organisms. Motility is genetically determined, but may be affected by environmental factors such as toxins. The nervous system and musculoskeletal system provide the majority of mammalian motility. Beyond animal locomotion, most animals are motile, though some are vagile, i.e. capable only of passive locomotion. Many bacteria and other microorganisms, including even some viruses, and multicellular organisms are motile; some mechanisms of fluid flow in multicellular organs and tissue are also considered instances of motility, as with gastrointestinal motility. Motile marine animals are commonly called free-swimming, and motile non-parasitic organisms are called free-living. Motility also includes an organism's ability to move food through its digestive tract. There are two types of intestinal motility: peristalsis and segmentation. This motility is brought about by the contraction of smooth muscles in the gastrointestinal tract, which mix the luminal contents with various secretions (segmentation) and move contents through the digestive tract from the mouth to the anus (peristalsis).
Cellular level
At the cellular level, different modes of movement exist:
amoeboid movement, a crawling-like movement which also makes swimming possible
filopodia, enabling movement of the axonal growth cone
flagellar motility, a swimming-like motion (observed for example in spermatozoa, propelled by the regular beat of their flagellum, or in the E. coli bacterium, which swims by rotating a helical prokaryotic flagellum)
gliding motility
swarming motility
twitching motility, a form of motility used by bacteria to crawl over surfaces using grappling-hook-like filaments called type IV pili
Many cells are not motile, for example Klebsiella pneumoniae and Shigella, or under specific circumstances such as Yersinia pestis at 37 °C.
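Flagellar motility in bacteria such as E. coli is often described as alternating straight 'runs' with reorienting 'tumbles', and directed movement along a chemical gradient (chemotaxis, listed in the next section) emerges when runs up the gradient last longer than runs down it. The sketch below simulates this bias as a one-dimensional random walk; the step size, tumble probabilities, and linear attractant gradient are all invented for illustration and are not fitted to any measured behaviour.

```python
# Toy 1-D run-and-tumble walk: the cell keeps its direction ("run") more often
# when the attractant concentration has just increased, otherwise it reverses
# ("tumble") more often. All parameters are made up.

import random


def concentration(x: float) -> float:
    """Hypothetical linear attractant gradient increasing with x."""
    return 0.1 * x


def run_and_tumble(steps: int = 2000, step_size: float = 1.0) -> float:
    x, direction = 0.0, 1
    prev_c = concentration(x)
    for _ in range(steps):
        x += direction * step_size
        c = concentration(x)
        # Longer runs (lower tumble probability) when conditions are improving.
        p_tumble = 0.1 if c > prev_c else 0.5
        if random.random() < p_tumble:
            direction *= -1
        prev_c = c
    return x


if __name__ == "__main__":
    random.seed(0)
    finals = [run_and_tumble() for _ in range(20)]
    print(f"Mean final position over 20 runs: {sum(finals) / len(finals):.1f}")
    # A positive mean indicates net drift up the (hypothetical) gradient.
```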
Movements
Events perceived as movements can be directed:
along a chemical gradient (see chemotaxis)
along a temperature gradient (see thermotaxis)
along a light gradient (see phototaxis)
along a magnetic field line (see magnetotaxis)
along an electric field (see galvanotaxis)
along the direction of the gravitational force (see gravitaxis)
along a rigidity gradient (see durotaxis)
along a gradient of cell adhesion sites (see haptotaxis)
along other cells or biopolymers
See also
Cell migration
References
Ecosophy
Ecosophy or ecophilosophy (a portmanteau of ecological philosophy) is a philosophy of ecological harmony or equilibrium. The term was coined by the French post-structuralist philosopher and psychoanalyst Félix Guattari and the Norwegian father of deep ecology, Arne Næss. Félix Guattari Ecosophy also refers to a field of practice introduced by psychoanalyst, poststructuralist philosopher, and political activist Félix Guattari. In part Guattari's use of the term demarcates a necessity for the proponents of social liberation, whose struggles in the 20th century were dominated by the paradigm of social revolution, to embed their arguments within an ecological framework which understands the interconnections of social and environmental spheres. Guattari holds that traditional environmentalist perspectives obscure the complexity of the relationship between humans and their natural environment through their maintenance of the dualistic separation of human (cultural) and nonhuman (natural) systems; he envisions ecosophy as a new field with a monistic and pluralistic approach to such study. Ecology in the Guattarian sense, then, is a study of complex phenomena, including human subjectivity, the environment, and social relations, all of which are intimately interconnected. Despite this emphasis on interconnection, throughout his individual writings and more famous collaborations with Gilles Deleuze, Guattari has resisted calls for holism, preferring to emphasize heterogeneity and difference, synthesizing assemblages and multiplicities in order to trace rhizomatic structures rather than creating unified and holistic structures. Guattari's concept of the three interacting and interdependent ecologies of mind, society, and environment stems from the outline of the three ecologies presented in Steps to an Ecology of Mind, a collection of writings by cyberneticist Gregory Bateson. Næss's definition Næss defined ecosophy in the following way: While a professor at the University of Oslo in 1972, Arne Næss, introduced the terms "deep ecology movement" and "ecosophy" into environmental literature. Næss based his article on a talk he gave in Bucharest in 1972 at the Third World Future Research Conference. As Drengson notes in Ecophilosophy, Ecosophy and the Deep Ecology Movement: An Overview, "In his talk, Næss discussed the longer-range background of the ecology movement and its connection with respect for Nature and the inherent worth of other beings." Næss's view of humans as an integral part of a "total-field image" of Nature contrasts with the alternative construction of ecosophy outlined by Guattari. The term ecological wisdom, synonymous with ecosophy, was introduced by Næss in 1973. The concept has become one of the foundations of the deep ecology movement. All expressions of values by Green Parties list ecological wisdom as a key value—it was one of the original Four Pillars of the Green Party and is often considered the most basic value of these parties. It is also often associated with indigenous religion and cultural practices. In its political context, it is necessarily not as easily defined as ecological health or scientific ecology concepts. See also Ecology Environmental philosophy Global Greens Charter Green syndicalism Silvilization Simple living Spiritual ecology Sustainable living Yin and yang Notes References Drengson, A. and Y. Inoue, eds. (1995) The Deep Ecology Movement: An Introductory Anthology. Berkeley: North Atlantic Publishers. 
Guattari, Félix (1992). "Pour une refondation des pratiques sociales". Le Monde Diplomatique (Oct. 1992): 26-27.
Guattari, Félix (1996). "Remaking Social Practices". In Genosko, Gary (ed.), The Guattari Reader. Oxford: Blackwell, pp. 262-273.
Maybury-Lewis, David (1992). "On the Importance of Being Tribal: Tribal Wisdom". Millennium: Tribal Wisdom and the Modern World. Binimun Productions Ltd.
Næss, Arne (1973). "The Shallow and the Deep Long-Range Ecology Movement: A Summary". Inquiry 16: 95-100.
Drengson, A. & B. Devall (eds.) (2008). The Ecology of Wisdom: Writings by Arne Naess. Berkeley: Counterpoint.
Levesque, Simon (2016). "Two versions of ecosophy: Arne Næss, Félix Guattari, and their connection with semiotics". Sign Systems Studies 44(4): 511-541. http://dx.doi.org/10.12697/SSS.2016.44.4.03
External links
Ecophilosophy, Ecosophy and the Deep Ecology Movement: An Overview by Alan Drengson, Ecospherics.net. Accessed 2005-08-14.
The Trumpeter, A Journal of Ecosophy.
Evolutionary psychology
Evolutionary psychology is a theoretical approach in psychology that examines cognition and behavior from a modern evolutionary perspective. It seeks to identify human psychological adaptations with regard to the ancestral problems they evolved to solve. In this framework, psychological traits and mechanisms are either functional products of natural and sexual selection or non-adaptive by-products of other adaptive traits. Adaptationist thinking about physiological mechanisms, such as the heart, lungs, and liver, is common in evolutionary biology. Evolutionary psychologists apply the same thinking in psychology, arguing that just as the heart evolved to pump blood, the liver evolved to detoxify poisons, and the kidneys evolved to filter turbid fluids, so there is modularity of mind, in that different psychological mechanisms evolved to solve different adaptive problems. These evolutionary psychologists argue that much of human behavior is the output of psychological adaptations that evolved to solve recurrent problems in human ancestral environments. Some evolutionary psychologists argue that evolutionary theory can provide a foundational, metatheoretical framework that integrates the entire field of psychology, in the same way evolutionary biology has for biology. Evolutionary psychologists hold that behaviors or traits occurring universally in all cultures are good candidates for evolutionary adaptations, including the abilities to infer others' emotions, discern kin from non-kin, identify and prefer healthier mates, and cooperate with others. Findings have been made regarding human social behaviour related to infanticide, intelligence, marriage patterns, promiscuity, perception of beauty, bride price, and parental investment. The theories and findings of evolutionary psychology have applications in many fields, including economics, environment, health, law, management, psychiatry, politics, and literature. Criticism of evolutionary psychology involves questions of testability; of cognitive and evolutionary assumptions (such as modular functioning of the brain, and large uncertainty about the ancestral environment); of the importance of non-genetic and non-adaptive explanations; and of political and ethical issues arising from interpretations of research results. Evolutionary psychologists frequently engage with and respond to such criticisms.
Scope
Principles
The central assumption of evolutionary psychology is that the human brain is composed of a large number of specialized mechanisms that were shaped by natural selection over a vast period of time to solve the recurrent information-processing problems faced by our ancestors. These problems involve food choices, social hierarchies, distributing resources to offspring, and selecting mates. Proponents suggest that evolutionary psychology seeks to integrate psychology into the other natural sciences, rooting it in the organizing theory of biology (evolutionary theory) and thus understanding psychology as a branch of biology. The anthropologist John Tooby and the psychologist Leda Cosmides note that, just as human physiology and evolutionary physiology have worked to identify physical adaptations of the body that represent "human physiological nature," the purpose of evolutionary psychology is to identify evolved emotional and cognitive adaptations that represent "human psychological nature."
According to Steven Pinker, it is "not a single theory but a large set of hypotheses" and a term that "has also come to refer to a particular way of applying evolutionary theory to the mind, with an emphasis on adaptation, gene-level selection, and modularity." Evolutionary psychology adopts an understanding of the mind that is based on the computational theory of mind. It describes mental processes as computational operations, so that, for example, a fear response is described as arising from a neurological computation that inputs the perceptual data, e.g. a visual image of a spider, and outputs the appropriate reaction, e.g. fear of possibly dangerous animals. Under this view, any domain-general learning is impossible because of the combinatorial explosion; evolutionary psychology specifies the domain as the problems of survival and reproduction. While philosophers have generally considered the human mind to include broad faculties, such as reason and lust, evolutionary psychologists describe evolved psychological mechanisms as narrowly focused to deal with specific issues, such as catching cheaters or choosing mates. The discipline sees the human brain as having evolved specialized functions, called cognitive modules, or psychological adaptations, which are shaped by natural selection. Examples include language-acquisition modules, incest-avoidance mechanisms, cheater-detection mechanisms, intelligence and sex-specific mating preferences, foraging mechanisms, alliance-tracking mechanisms, agent-detection mechanisms, and others. Some mechanisms, termed domain-specific, deal with recurrent adaptive problems over the course of human evolutionary history. Domain-general mechanisms, on the other hand, are proposed to deal with evolutionary novelty. Evolutionary psychology has roots in cognitive psychology and evolutionary biology but also draws on behavioral ecology, artificial intelligence, genetics, ethology, anthropology, archaeology, biology, ecopsychology and zoology. It is closely linked to sociobiology, but there are key differences between them, including the emphasis on domain-specific rather than domain-general mechanisms, the relevance of measures of current fitness, the importance of mismatch theory, and a focus on psychology rather than behavior. Nikolaas Tinbergen's four categories of questions can help to clarify the distinctions between several different, but complementary, types of explanations. Evolutionary psychology focuses primarily on the "why?" questions, while traditional psychology focuses on the "how?" questions.
Premises
Evolutionary psychology is founded on several core premises:
The brain is an information-processing device, and it produces behavior in response to external and internal inputs.
The brain's adaptive mechanisms were shaped by natural and sexual selection.
Different neural mechanisms are specialized for solving problems in humanity's evolutionary past; the brain has evolved specialized neural mechanisms that were designed for solving problems that recurred over deep evolutionary time, giving modern humans stone-age minds.
Most contents and processes of the brain are unconscious, and most mental problems that seem easy to solve are actually extremely difficult problems that are solved unconsciously by complicated neural mechanisms.
Human psychology consists of many specialized mechanisms, each sensitive to different classes of information or inputs; these mechanisms combine to manifest behavior.
A toy sketch of the "module as an input-output computation" framing appears below.
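To make the input-output framing of the premises above concrete, here is a deliberately simple sketch of a 'cheater-detection' style check over social-exchange records. The rule, record format, and example data are invented for illustration; this is a toy rendering of the computational framing, not an implementation of any model from the evolutionary psychology literature.

```python
# Toy "cheater detection" over social-exchange records.
# Rule assumed: anyone who takes the benefit must pay the cost.
# A record violates the rule only when benefit_taken and not cost_paid.

from dataclasses import dataclass


@dataclass
class Exchange:
    person: str
    benefit_taken: bool
    cost_paid: bool


def find_cheaters(records):
    """Return the names of people who took the benefit without paying the cost."""
    return [r.person for r in records if r.benefit_taken and not r.cost_paid]


if __name__ == "__main__":
    records = [
        Exchange("A", benefit_taken=True, cost_paid=True),    # kept the bargain
        Exchange("B", benefit_taken=True, cost_paid=False),   # cheater
        Exchange("C", benefit_taken=False, cost_paid=False),  # rule not engaged
        Exchange("D", benefit_taken=False, cost_paid=True),   # over-pays; not a violation
    ]
    print("Cheaters:", find_cheaters(records))  # -> ['B']
```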
History Evolutionary psychology has its historical roots in Charles Darwin's theory of natural selection. In The Origin of Species, Darwin predicted that psychology would develop an evolutionary basis: Two of his later books were devoted to the study of animal emotions and psychology; The Descent of Man, and Selection in Relation to Sex in 1871 and The Expression of the Emotions in Man and Animals in 1872. Darwin's work inspired William James's functionalist approach to psychology. Darwin's theories of evolution, adaptation, and natural selection have provided insight into why brains function the way they do. The content of evolutionary psychology has derived from, on the one hand, the biological sciences (especially evolutionary theory as it relates to ancient human environments, the study of paleoanthropology and animal behavior) and, on the other, the human sciences, especially psychology. Evolutionary biology as an academic discipline emerged with the modern synthesis in the 1930s and 1940s. In the 1930s the study of animal behavior (ethology) emerged with the work of the Dutch biologist Nikolaas Tinbergen and the Austrian biologists Konrad Lorenz and Karl von Frisch. W.D. Hamilton's (1964) papers on inclusive fitness and Robert Trivers's (1972) theories on reciprocity and parental investment helped to establish evolutionary thinking in psychology and the other social sciences. In 1975, Edward O. Wilson combined evolutionary theory with studies of animal and social behavior, building on the works of Lorenz and Tinbergen, in his book Sociobiology: The New Synthesis. In the 1970s, two major branches developed from ethology. Firstly, the study of animal social behavior (including humans) generated sociobiology, defined by its pre-eminent proponent Edward O. Wilson in 1975 as "the systematic study of the biological basis of all social behavior" and in 1978 as "the extension of population biology and evolutionary theory to social organization." Secondly, there was behavioral ecology which placed less emphasis on social behavior; it focused on the ecological and evolutionary basis of animal and human behavior. In the 1970s and 1980s university departments began to include the term evolutionary biology in their titles. The modern era of evolutionary psychology was ushered in, in particular, by Donald Symons' 1979 book The Evolution of Human Sexuality and Leda Cosmides and John Tooby's 1992 book The Adapted Mind. David Buller observed that the term "evolutionary psychology" is sometimes seen as denoting research based on the specific methodological and theoretical commitments of certain researchers from the Santa Barbara school (University of California), thus some evolutionary psychologists prefer to term their work "human ecology", "human behavioural ecology" or "evolutionary anthropology" instead. From psychology there are the primary streams of developmental, social and cognitive psychology. Establishing some measure of the relative influence of genetics and environment on behavior has been at the core of behavioral genetics and its variants, notably studies at the molecular level that examine the relationship between genes, neurotransmitters and behavior. Dual inheritance theory (DIT), developed in the late 1970s and early 1980s, has a slightly different perspective by trying to explain how human behavior is a product of two different and interacting evolutionary processes: genetic evolution and cultural evolution. 
DIT is seen by some as a "middle-ground" between views that emphasize human universals versus those that emphasize cultural variation. Theoretical foundations The theories on which evolutionary psychology is based originated with Charles Darwin's work, including his speculations about the evolutionary origins of social instincts in humans. Modern evolutionary psychology, however, is possible only because of advances in evolutionary theory in the 20th century. Evolutionary psychologists say that natural selection has provided humans with many psychological adaptations, in much the same way that it generated humans' anatomical and physiological adaptations. As with adaptations in general, psychological adaptations are said to be specialized for the environment in which an organism evolved, the environment of evolutionary adaptedness. Sexual selection provides organisms with adaptations related to mating. For male mammals, which have a relatively high maximal potential reproduction rate, sexual selection leads to adaptations that help them compete for females. For female mammals, with a relatively low maximal potential reproduction rate, sexual selection leads to choosiness, which helps females select higher quality mates. Charles Darwin described both natural selection and sexual selection, and he relied on group selection to explain the evolution of altruistic (self-sacrificing) behavior. But group selection was considered a weak explanation, because in any group the less altruistic individuals will be more likely to survive, and the group will become less self-sacrificing as a whole. In 1964, the evolutionary biologist William D. Hamilton proposed inclusive fitness theory, emphasizing a gene-centered view of evolution. Hamilton noted that genes can increase the replication of copies of themselves into the next generation by influencing the organism's social traits in such a way that (statistically) results in helping the survival and reproduction of other copies of the same genes (most simply, identical copies in the organism's close relatives). According to Hamilton's rule, self-sacrificing behaviors (and the genes influencing them) can evolve if they typically help the organism's close relatives so much that it more than compensates for the individual animal's sacrifice. Inclusive fitness theory resolved the issue of how altruism can evolve. Other theories also help explain the evolution of altruistic behavior, including evolutionary game theory, tit-for-tat reciprocity, and generalized reciprocity. These theories help to explain the development of altruistic behavior, and account for hostility toward cheaters (individuals that take advantage of others' altruism). Several mid-level evolutionary theories inform evolutionary psychology. The r/K selection theory proposes that some species prosper by having many offspring, while others follow the strategy of having fewer offspring but investing much more in each one. Humans follow the second strategy. Parental investment theory explains how parents invest more or less in individual offspring based on how successful those offspring are likely to be, and thus how much they might improve the parents' inclusive fitness. According to the Trivers–Willard hypothesis, parents in good conditions tend to invest more in sons (who are best able to take advantage of good conditions), while parents in poor conditions tend to invest more in daughters (who are best able to have successful offspring even in poor conditions). 
According to life history theory, animals evolve life histories to match their environments, determining details such as age at first reproduction and number of offspring. Dual inheritance theory posits that genes and human culture have interacted, with genes affecting the development of culture, and culture, in turn, affecting human evolution on a genetic level, in a similar way to the Baldwin effect.

Evolved psychological mechanisms
Evolutionary psychology is based on the hypothesis that, just like hearts, lungs, livers, kidneys, and immune systems, cognition has a functional structure that has a genetic basis, and therefore has evolved by natural selection. Like other organs and tissues, this functional structure should be universally shared amongst a species and should solve important problems of survival and reproduction. Evolutionary psychologists seek to understand psychological mechanisms by understanding the survival and reproductive functions they might have served over the course of evolutionary history. These might include abilities to infer others' emotions, discern kin from non-kin, identify and prefer healthier mates, cooperate with others and follow leaders. Consistent with the theory of natural selection, evolutionary psychology sees humans as often in conflict with others, including mates and relatives. For instance, a mother may wish to wean her offspring from breastfeeding earlier than does her infant, which frees up the mother to invest in additional offspring. Evolutionary psychology also recognizes the role of kin selection and reciprocity in evolving prosocial traits such as altruism. Like chimpanzees and bonobos, humans have subtle and flexible social instincts, allowing them to form extended families, lifelong friendships, and political alliances. In studies testing theoretical predictions, evolutionary psychologists have made modest findings on topics such as infanticide, intelligence, marriage patterns, promiscuity, perception of beauty, bride price and parental investment. Another example is the proposed evolved basis of depression: although clinical depression appears maladaptive, evolutionary approaches ask whether the mechanisms underlying it once served adaptive functions. For instance, mammalian infants separated from their caregivers experience distress that activates the hypothalamic–pituitary–adrenal axis and produces emotional and behavioral changes, responses that may have helped ancestral mammals cope with, and signal, separation.

Historical topics
Proponents of evolutionary psychology in the 1990s made some forays into explaining historical events, but the response from historians was highly negative and there has been little effort to continue that line of research. Historian Lynn Hunt notes the historians' complaints about the researchers and states that "the few attempts to build up a subfield of psychohistory collapsed under the weight of its presuppositions." She concludes that, as of 2014, the "'iron curtain' between historians and psychology...remains standing."

Products of evolution: adaptations, exaptations, byproducts, and random variation
Not all traits of organisms are evolutionary adaptations. Traits may also be exaptations, byproducts of adaptations (sometimes called "spandrels"), or random variation between individuals.
Psychological adaptations are hypothesized to be innate or relatively easy to learn and to manifest in cultures worldwide. For example, the ability of toddlers to learn a language with virtually no training is likely to be a psychological adaptation. On the other hand, ancestral humans did not read or write; thus today, learning to read and write requires extensive training and presumably involves the repurposing of cognitive capacities that evolved in response to selection pressures unrelated to written language. However, variations in manifest behavior can result from universal mechanisms interacting with different local environments. For example, Caucasians who move from a northern climate to the equator will have darker skin. The mechanisms regulating their pigmentation do not change; rather, the input to those mechanisms changes, resulting in different outputs.

One of the tasks of evolutionary psychology is to identify which psychological traits are likely to be adaptations, byproducts or random variation. George C. Williams suggested that an "adaptation is a special and onerous concept that should only be used where it is really necessary." As noted by Williams and others, adaptations can be identified by their improbable complexity, species universality, and adaptive functionality.

Obligate and facultative adaptations
A question that may be asked about an adaptation is whether it is generally obligate (relatively robust in the face of typical environmental variation) or facultative (sensitive to typical environmental variation). The sweet taste of sugar and the pain of hitting one's knee against concrete are the result of fairly obligate psychological adaptations; typical environmental variability during development does not much affect their operation. By contrast, facultative adaptations are somewhat like "if-then" statements. Skin tanning, for example, is a facultative adaptation that is conditional on exposure to sunlight. When a psychological adaptation is facultative, evolutionary psychologists concern themselves with how developmental and environmental inputs influence the expression of the adaptation.

Cultural universals
Evolutionary psychologists hold that behaviors or traits that occur universally in all cultures are good candidates for evolutionary adaptations. Cultural universals include behaviors related to language, cognition, social roles, gender roles, and technology. Evolved psychological adaptations (such as the ability to learn a language) interact with cultural inputs to produce specific behaviors (e.g., the specific language learned). Basic gender differences, such as greater eagerness for sex among men and greater coyness among women, are explained as sexually dimorphic psychological adaptations that reflect the different reproductive strategies of males and females. Evolutionary psychologists contrast their approach with what they term the "standard social science model," according to which the mind is a general-purpose cognition device shaped almost entirely by culture.

Environment of evolutionary adaptedness
Evolutionary psychology argues that to properly understand the functions of the brain, one must understand the properties of the environment in which the brain evolved. That environment is often referred to as the "environment of evolutionary adaptedness". The idea of an environment of evolutionary adaptedness was first explored as a part of attachment theory by John Bowlby.
This is the environment to which a particular evolved mechanism is adapted. More specifically, the environment of evolutionary adaptedness is defined as the set of historically recurring selection pressures that formed a given adaptation, as well as those aspects of the environment that were necessary for the proper development and functioning of the adaptation. Humans, the genus Homo, appeared between 1.5 and 2.5 million years ago, a time that roughly coincides with the start of the Pleistocene 2.6 million years ago. Because the Pleistocene ended a mere 12,000 years ago, most human adaptations either newly evolved during the Pleistocene, or were maintained by stabilizing selection during the Pleistocene. Evolutionary psychology, therefore, proposes that the majority of human psychological mechanisms are adapted to reproductive problems frequently encountered in Pleistocene environments. In broad terms, these problems include those of growth, development, differentiation, maintenance, mating, parenting, and social relationships.

The environment of evolutionary adaptedness is significantly different from modern society. The ancestors of modern humans lived in smaller groups, had more cohesive cultures, and had more stable and rich contexts for identity and meaning. Researchers look to existing hunter-gatherer societies for clues as to how hunter-gatherers lived in the environment of evolutionary adaptedness. Unfortunately, the few surviving hunter-gatherer societies are different from each other, and they have been pushed out of the best land and into harsh environments, so it is not clear how closely they reflect ancestral culture. However, all around the world small-band hunter-gatherers offer a similar developmental system for the young ("hunter-gatherer childhood model," Konner, 2005; "evolved developmental niche" or "evolved nest," Narvaez et al., 2013). The characteristics of the niche are largely the same as for social mammals, which evolved over 30 million years ago: soothing perinatal experience, several years of on-request breastfeeding, nearly constant affection or physical proximity, responsiveness to need (mitigating offspring distress), self-directed play, and, for humans, multiple responsive caregivers. Initial studies show the importance of these components in early life for positive child outcomes. Evolutionary psychologists sometimes look to chimpanzees, bonobos, and other great apes for insight into human ancestral behavior.

Mismatches
Since an organism's adaptations were suited to its ancestral environment, a new and different environment can create a mismatch. Because humans are mostly adapted to Pleistocene environments, psychological mechanisms sometimes exhibit "mismatches" to the modern environment. One example is the fact that, although about 10,000 people are killed with guns in the US annually while spiders and snakes kill only a handful, people nonetheless learn to fear spiders and snakes about as easily as they learn to fear a pointed gun, and more easily than an unpointed gun, rabbits or flowers. A potential explanation is that spiders and snakes were a threat to human ancestors throughout the Pleistocene, whereas guns (and rabbits and flowers) were not. There is thus a mismatch between humans' evolved fear-learning psychology and the modern environment. This mismatch also shows up in the phenomenon of the supernormal stimulus, a stimulus that elicits a response more strongly than the stimulus for which the response evolved.
The term was coined by Niko Tinbergen to refer to non-human animal behavior, but psychologist Deirdre Barrett said that supernormal stimulation governs the behavior of humans as powerfully as that of other animals. She explained junk food as an exaggerated stimulus to cravings for salt, sugar, and fats, and she says that television is an exaggeration of social cues of laughter, smiling faces and attention-grabbing action. Magazine centerfolds and double cheeseburgers pull instincts intended for an environment of evolutionary adaptedness where breast development was a sign of health, youth and fertility in a prospective mate, and fat was a rare and vital nutrient. The psychologist Mark van Vugt has argued that modern organizational leadership is a mismatch. His argument is that humans are not adapted to work in large, anonymous bureaucratic structures with formal hierarchies. The human mind still responds to personalized, charismatic leadership primarily in the context of informal, egalitarian settings. Hence the dissatisfaction and alienation that many employees experience. Salaries, bonuses and other privileges exploit instincts for relative status, which particularly attract males to senior executive positions.

Research methods
Evolutionary theory is heuristic in that it may generate hypotheses that might not be developed from other theoretical approaches. One of the major goals of adaptationist research is to identify which organismic traits are likely to be adaptations, and which are byproducts or random variations. As noted earlier, adaptations are expected to show evidence of complexity, functionality, and species universality, while byproducts or random variation will not. In addition, adaptations are expected to manifest as proximate mechanisms that interact with the environment in either a generally obligate or facultative fashion (see above). Evolutionary psychologists are also interested in identifying these proximate mechanisms (sometimes termed "mental mechanisms" or "psychological adaptations") and what type of information they take as input, how they process that information, and their outputs. Evolutionary developmental psychology, or "evo-devo," focuses on how adaptations may be activated at certain developmental times (e.g., losing baby teeth, adolescence, etc.) or how events during the development of an individual may alter life-history trajectories. Evolutionary psychologists use several strategies to develop and test hypotheses about whether a psychological trait is likely to be an evolved adaptation; Buss (2011) outlines several such methods. Evolutionary psychologists also use various sources of data for testing, including experiments, archaeological records, data from hunter-gatherer societies, observational studies, neuroscience data, self-reports and surveys, public records, and human products. Recently, additional methods and tools have been introduced based on fictional scenarios, mathematical models, and multi-agent computer simulations.

Main areas of research
Foundational areas of research in evolutionary psychology can be divided into broad categories of adaptive problems that arise from evolutionary theory itself: survival, mating, parenting, family and kinship, interactions with non-kin, and cultural evolution.

Survival and individual-level psychological adaptations
Problems of survival are clear targets for the evolution of physical and psychological adaptations.
Major problems the ancestors of present-day humans faced included food selection and acquisition; territory selection and physical shelter; and avoiding predators and other environmental threats. Consciousness Consciousness meets George Williams' criteria of species universality, complexity, and functionality, and it is a trait that apparently increases fitness. In his paper "Evolution of consciousness," John Eccles argues that special anatomical and physical adaptations of the mammalian cerebral cortex gave rise to consciousness. In contrast, others have argued that the recursive circuitry underwriting consciousness is much more primitive, having evolved initially in pre-mammalian species because it improves the capacity for interaction with both social and natural environments by providing an energy-saving "neutral" gear in an otherwise energy-expensive motor output machine. Once in place, this recursive circuitry may well have provided a basis for the subsequent development of many of the functions that consciousness facilitates in higher organisms, as outlined by Bernard J. Baars. Richard Dawkins suggested that humans evolved consciousness in order to make themselves the subjects of thought. Daniel Povinelli suggests that large, tree-climbing apes evolved consciousness to take into account one's own mass when moving safely among tree branches. Consistent with this hypothesis, Gordon Gallup found that chimpanzees and orangutans, but not little monkeys or terrestrial gorillas, demonstrated self-awareness in mirror tests. The concept of consciousness can refer to voluntary action, awareness, or wakefulness. However, even voluntary behavior involves unconscious mechanisms. Many cognitive processes take place in the cognitive unconscious, unavailable to conscious awareness. Some behaviors are conscious when learned but then become unconscious, seemingly automatic. Learning, especially implicitly learning a skill, can take place seemingly outside of consciousness. For example, plenty of people know how to turn right when they ride a bike, but very few can accurately explain how they actually do so. Evolutionary psychology approaches self-deception as an adaptation that can improve one's results in social exchanges. Sleep may have evolved to conserve energy when activity would be less fruitful or more dangerous, such as at night, and especially during the winter season. Sensation and perception Many experts, such as Jerry Fodor, write that the purpose of perception is knowledge, but evolutionary psychologists hold that its primary purpose is to guide action. For example, they say, depth perception seems to have evolved not to help us know the distances to other objects but rather to help us move around in space. Evolutionary psychologists say that animals from fiddler crabs to humans use eyesight for collision avoidance, suggesting that vision is basically for directing action, not providing knowledge. Building and maintaining sense organs is metabolically expensive, so these organs evolve only when they improve an organism's fitness. More than half the brain is devoted to processing sensory information, and the brain itself consumes roughly one-fourth of one's metabolic resources, so the senses must provide exceptional benefits to fitness. Perception accurately mirrors the world; animals get useful, accurate information through their senses. Scientists who study perception and sensation have long understood the human senses as adaptations to their surrounding worlds. 
Depth perception consists of processing over half a dozen visual cues, each of which is based on a regularity of the physical world. Vision evolved to respond to the narrow range of electromagnetic energy that is plentiful and that does not pass through objects. Sound waves go around corners and interact with obstacles, creating a complex pattern that includes useful information about the sources of and distances to objects. Larger animals naturally make lower-pitched sounds as a consequence of their size. The range over which an animal hears, on the other hand, is determined by adaptation. Homing pigeons, for example, can hear the very low-pitched sound (infrasound) that carries great distances, even though most smaller animals detect higher-pitched sounds. Taste and smell respond to chemicals in the environment that are thought to have been significant for fitness in the environment of evolutionary adaptedness. For example, salt and sugar were apparently both valuable to the human or pre-human inhabitants of the environment of evolutionary adaptedness, so present-day humans have an intrinsic hunger for salty and sweet tastes. The sense of touch is actually many senses, including pressure, heat, cold, tickle, and pain. Pain, while unpleasant, is adaptive. An important adaptation for senses is range shifting, by which the organism becomes temporarily more or less sensitive to sensation. For example, one's eyes automatically adjust to dim or bright ambient light. Sensory abilities of different organisms often coevolve, as is the case with the hearing of echolocating bats and that of the moths that have evolved to respond to the sounds that the bats make. Evolutionary psychologists contend that perception demonstrates the principle of modularity, with specialized mechanisms handling particular perception tasks. For example, people with damage to a particular part of the brain have the specific defect of not being able to recognize faces (prosopagnosia). Evolutionary psychology suggests that this indicates a so-called face-reading module. Learning and facultative adaptations In evolutionary psychology, learning is said to be accomplished through evolved capacities, specifically facultative adaptations. Facultative adaptations express themselves differently depending on input from the environment. Sometimes the input comes during development and helps shape that development. For example, migrating birds learn to orient themselves by the stars during a critical period in their maturation. Evolutionary psychologists believe that humans also learn language along an evolved program, also with critical periods. The input can also come during daily tasks, helping the organism cope with changing environmental conditions. For example, animals evolved Pavlovian conditioning in order to solve problems about causal relationships. Animals accomplish learning tasks most easily when those tasks resemble problems that they faced in their evolutionary past, such as a rat learning where to find food or water. Learning capacities sometimes demonstrate differences between the sexes. In many animal species, for example, males can solve spatial problems faster and more accurately than females, due to the effects of male hormones during development. The same might be true of humans. Emotion and motivation Motivations direct and energize behavior, while emotions provide the affective component to motivation, positive or negative. 
In the early 1970s, Paul Ekman and colleagues began a line of research that suggests that many emotions are universal. He found evidence that humans share at least five basic emotions: fear, sadness, happiness, anger, and disgust. Social emotions evidently evolved to motivate social behaviors that were adaptive in the environment of evolutionary adaptedness. For example, spite seems to work against the individual, but it can establish an individual's reputation as someone to be feared. Shame and pride can motivate behaviors that help one maintain one's standing in a community, and self-esteem is one's estimate of one's status. Motivation has a neurobiological basis in the reward system of the brain. Recently, it has been suggested that reward systems may evolve in such a way that there may be an inherent or unavoidable trade-off in the motivational system for activities of short versus long duration.

Cognition
Cognition refers to internal representations of the world and internal information processing. From an evolutionary psychology perspective, cognition is not "general purpose", but uses heuristics, or strategies, that generally increase the likelihood of solving problems that the ancestors of present-day humans routinely faced. For example, present-day humans are far more likely to solve logic problems that involve detecting cheating (a common problem given humans' social nature) than the same logic problem put in purely abstract terms. Since the ancestors of present-day humans did not encounter truly random events, present-day humans may be cognitively predisposed to incorrectly identify patterns in random sequences. The gambler's fallacy is one example of this: gamblers may falsely believe that they have hit a "lucky streak" even when each outcome is actually random and independent of previous trials. Most people believe that if a fair coin has been flipped nine times and heads has appeared each time, then on the tenth flip there is a greater than 50% chance of getting tails. Humans find it far easier to make diagnoses or predictions using frequency data than when the same information is presented as probabilities or percentages, presumably because the ancestors of present-day humans lived in relatively small tribes (usually with fewer than 150 people) where frequency information was more readily available.

Personality
Evolutionary psychology is primarily interested in finding commonalities between people, or basic human psychological nature. From an evolutionary perspective, the fact that people have fundamental differences in personality traits initially presents something of a puzzle. (Note: The field of behavioral genetics is concerned with statistically partitioning differences between people into genetic and environmental sources of variance. However, understanding the concept of heritability can be tricky – heritability refers only to the differences between people, never the degree to which the traits of an individual are due to environmental or genetic factors, since traits are always a complex interweaving of both.) Personality traits are conceptualized by evolutionary psychologists as due to normal variation around an optimum, due to frequency-dependent selection (behavioral polymorphisms), or as facultative adaptations. Like variability in height, some personality traits may simply reflect inter-individual variability around a general optimum.
Or, personality traits may represent different genetically predisposed "behavioral morphs" – alternate behavioral strategies that depend on the frequency of competing behavioral strategies in the population. For example, if most of the population is generally trusting and gullible, the behavioral morph of being a "cheater" (or, in the extreme case, a sociopath) may be advantageous. Finally, like many other psychological adaptations, personality traits may be facultative – sensitive to typical variations in the social environment, especially during early development. For example, later-born children are more likely than firstborns to be rebellious, less conscientious and more open to new experiences, which may be advantageous to them given their particular niche in family structure. Shared environmental influences do play a role in personality and are not always of less importance than genetic factors. However, shared environmental influences often decrease to near zero after adolescence but do not completely disappear. Language According to Steven Pinker, who builds on the work by Noam Chomsky, the universal human ability to learn to talk between the ages of 1 – 4, basically without training, suggests that language acquisition is a distinctly human psychological adaptation (see, in particular, Pinker's The Language Instinct). Pinker and Bloom (1990) argue that language as a mental faculty shares many likenesses with the complex organs of the body which suggests that, like these organs, language has evolved as an adaptation, since this is the only known mechanism by which such complex organs can develop. Pinker follows Chomsky in arguing that the fact that children can learn any human language with no explicit instruction suggests that language, including most of grammar, is basically innate and that it only needs to be activated by interaction. Chomsky himself does not believe language to have evolved as an adaptation, but suggests that it likely evolved as a byproduct of some other adaptation, a so-called spandrel. But Pinker and Bloom argue that the organic nature of language strongly suggests that it has an adaptational origin. Evolutionary psychologists hold that the FOXP2 gene may well be associated with the evolution of human language. In the 1980s, psycholinguist Myrna Gopnik identified a dominant gene that causes language impairment in the KE family of Britain. This gene turned out to be a mutation of the FOXP2 gene. Humans have a unique allele of this gene, which has otherwise been closely conserved through most of mammalian evolutionary history. This unique allele seems to have first appeared between 100 and 200 thousand years ago, and it is now all but universal in humans. However, the once-popular idea that FOXP2 is a 'grammar gene' or that it triggered the emergence of language in Homo sapiens is now widely discredited. Currently, several competing theories about the evolutionary origin of language coexist, none of them having achieved a general consensus. Researchers of language acquisition in primates and humans such as Michael Tomasello and Talmy Givón, argue that the innatist framework has understated the role of imitation in learning and that it is not at all necessary to posit the existence of an innate grammar module to explain human language acquisition. 
Tomasello argues that studies of how children and primates actually acquire communicative skills suggest that humans learn complex behavior through experience, so that instead of a module specifically dedicated to language acquisition, language is acquired by the same cognitive mechanisms that are used to acquire all other kinds of socially transmitted behavior. On the issue of whether language is best seen as having evolved as an adaptation or as a spandrel, evolutionary biologist W. Tecumseh Fitch, following Stephen J. Gould, argues that it is unwarranted to assume that every aspect of language is an adaptation, or that language as a whole is an adaptation. He criticizes some strands of evolutionary psychology for suggesting a pan-adaptationist view of evolution, and dismisses Pinker and Bloom's question of whether "Language has evolved as an adaptation" as misleading. He argues instead that, from a biological viewpoint, the evolutionary origins of language are best conceptualized as the probable result of a convergence of many separate adaptations into a complex system. A similar argument is made by Terrence Deacon, who in The Symbolic Species argues that the different features of language have co-evolved with the evolution of the mind and that the ability to use symbolic communication is integrated in all other cognitive processes. If the theory that language could have evolved as a single adaptation is accepted, the question becomes which of its many functions has been the basis of adaptation. Several evolutionary hypotheses have been posited: that language evolved for the purpose of social grooming, that it evolved as a way to show mating potential, or that it evolved to form social contracts. Evolutionary psychologists recognize that these theories are all speculative and that much more evidence is required to understand how language might have been selectively adapted.

Mating
Given that sexual reproduction is the means by which genes are propagated into future generations, sexual selection plays a large role in human evolution. Human mating, then, is of interest to evolutionary psychologists who aim to investigate evolved mechanisms to attract and secure mates. Several lines of research have stemmed from this interest, such as studies of mate selection, mate poaching, mate retention, mating preferences and conflict between the sexes. In 1972, Robert Trivers published an influential paper on sex differences that is now referred to as parental investment theory. The size difference between gametes (anisogamy) is the fundamental, defining difference between males (small gametes – sperm) and females (large gametes – ova). Trivers noted that anisogamy typically results in different levels of parental investment between the sexes, with females initially investing more. Trivers proposed that this difference in parental investment leads to the sexual selection of different reproductive strategies between the sexes and to sexual conflict. For example, he suggested that the sex that invests less in offspring will generally compete for access to the higher-investing sex to increase their inclusive fitness. Trivers posited that differential parental investment led to the evolution of sexual dimorphisms in mate choice, intra- and inter-sexual reproductive competition, and courtship displays. In mammals, including humans, females make a much larger parental investment than males (i.e. gestation followed by childbirth and lactation). Parental investment theory is a branch of life history theory.
Buss and Schmitt's (1993) Sexual Strategies Theory proposed that, due to differential parental investment, humans have evolved sexually dimorphic adaptations related to "sexual accessibility, fertility assessment, commitment seeking and avoidance, immediate and enduring resource procurement, paternity certainty, assessment of mate value, and parental investment." Their Strategic Interference Theory suggested that conflict between the sexes occurs when the preferred reproductive strategies of one sex interfere with those of the other sex, resulting in the activation of emotional responses such as anger or jealousy. Women are generally more selective when choosing mates, especially under long-term mating conditions. However, under some circumstances, short term mating can provide benefits to women as well, such as fertility insurance, trading up to better genes, reducing the risk of inbreeding, and insurance protection of her offspring. Due to male paternity uncertainty, sex differences have been found in the domains of sexual jealousy. Females generally react more adversely to emotional infidelity and males will react more to sexual infidelity. This particular pattern is predicted because the costs involved in mating for each sex are distinct. Women, on average, should prefer a mate who can offer resources (e.g., financial, commitment), thus, a woman risks losing such resources with a mate who commits emotional infidelity. Men, on the other hand, are never certain of the genetic paternity of their children because they do not bear the offspring themselves. This suggests that for men sexual infidelity would generally be more aversive than emotional infidelity because investing resources in another man's offspring does not lead to the propagation of their own genes. Another interesting line of research is that which examines women's mate preferences across the ovulatory cycle. The theoretical underpinning of this research is that ancestral women would have evolved mechanisms to select mates with certain traits depending on their hormonal status. Known as the ovulatory shift hypothesis, the theory posits that, during the ovulatory phase of a woman's cycle (approximately days 10–15 of a woman's cycle), a woman who mated with a male with high genetic quality would have been more likely, on average, to produce and bear a healthy offspring than a woman who mated with a male with low genetic quality. These putative preferences are predicted to be especially apparent for short-term mating domains because a potential male mate would only be offering genes to a potential offspring. This hypothesis allows researchers to examine whether women select mates who have characteristics that indicate high genetic quality during the high fertility phase of their ovulatory cycles. Indeed, studies have shown that women's preferences vary across the ovulatory cycle. In particular, Haselton and Miller (2006) showed that highly fertile women prefer creative but poor men as short-term mates. Creativity may be a proxy for good genes. Research by Gangestad et al. (2004) indicates that highly fertile women prefer men who display social presence and intrasexual competition; these traits may act as cues that would help women predict which men may have, or would be able to acquire, resources. Parenting Reproduction is always costly for women, and can also be for men. 
Individuals are limited in the degree to which they can devote time and resources to producing and raising their young, and such expenditure may also be detrimental to their future condition, survival and further reproductive output. Parental investment is any parental expenditure (time, energy etc.) that benefits one offspring at a cost to parents' ability to invest in other components of fitness (Clutton-Brock 1991: 9; Trivers 1972). Components of fitness (Beatty 1992) include the well-being of existing offspring, parents' future reproduction, and inclusive fitness through aid to kin (Hamilton, 1964). Parental investment theory is a branch of life history theory.

The benefits of parental investment to the offspring are large and are associated with effects on condition, growth, survival and, ultimately, the reproductive success of the offspring. However, these benefits can come at the cost of the parent's ability to reproduce in the future, e.g. through the increased risk of injury when defending offspring against predators, the loss of mating opportunities whilst rearing offspring, and an increase in the time to the next reproduction. Overall, parents are selected to maximize the difference between the benefits and the costs, and parental care will likely evolve when the benefits exceed the costs.

The Cinderella effect is the claim that stepchildren are physically, emotionally or sexually abused, neglected, murdered, or otherwise mistreated at the hands of their stepparents at significantly higher rates than children are by their genetic parents. It takes its name from the fairy tale character Cinderella, who in the story was cruelly mistreated by her stepmother and stepsisters. Daly and Wilson (1996) noted: "Evolutionary thinking led to the discovery of the most important risk factor for child homicide – the presence of a stepparent. Parental efforts and investments are valuable resources, and selection favors those parental psyches that allocate effort effectively to promote fitness. The adaptive problems that challenge parental decision-making include both the accurate identification of one's offspring and the allocation of one's resources among them with sensitivity to their needs and abilities to convert parental investment into fitness increments…. Stepchildren were seldom or never so valuable to one's expected fitness as one's own offspring would be, and those parental psyches that were easily parasitized by just any appealing youngster must always have incurred a selective disadvantage" (Daly & Wilson, 1996, pp. 64–65). However, they note that not all stepparents will "want" to abuse their partner's children, nor is genetic parenthood any insurance against abuse. They see stepparental care as primarily "mating effort" towards the genetic parent.

Family and kin
Inclusive fitness is the sum of an organism's classical fitness (how many of its own offspring it produces and supports) and the number of equivalents of its own offspring it can add to the population by supporting others. The first component is called classical fitness by Hamilton (1964). From the gene's point of view, evolutionary success ultimately depends on leaving behind the maximum number of copies of itself in the population. Until 1964, it was generally believed that genes only achieved this by causing the individual to leave the maximum number of viable offspring. However, in 1964 W. D.
Hamilton proved mathematically that, because close relatives of an organism share some identical genes, a gene can also increase its evolutionary success by promoting the reproduction and survival of these related or otherwise similar individuals. Hamilton concluded that this leads natural selection to favor organisms that behave in ways that maximize their inclusive fitness. It is also true that natural selection favors behavior that maximizes personal fitness.

Hamilton's rule describes mathematically whether or not a gene for altruistic behavior will spread in a population: rb > c, where c is the reproductive cost to the altruist, b is the reproductive benefit to the recipient of the altruistic behavior, and r is the probability, above the population average, of the individuals sharing an altruistic gene – commonly viewed as "degree of relatedness". The concept serves to explain how natural selection can perpetuate altruism. If there is an "altruism gene" (or complex of genes) that influences an organism's behavior to be helpful and protective of relatives and their offspring, this behavior also increases the proportion of the altruism gene in the population, because relatives are likely to share genes with the altruist due to common descent. Altruists may also have some way to recognize altruistic behavior in unrelated individuals and be inclined to support them. As Dawkins points out in The Selfish Gene (Chapter 6) and The Extended Phenotype, this must be distinguished from the green-beard effect.

Although it is generally true that humans tend to be more altruistic toward their kin than toward non-kin, the relevant proximate mechanisms that mediate this cooperation have been debated (see kin recognition), with some arguing that kin status is determined primarily via social and cultural factors (such as co-residence, maternal association of sibs, etc.), while others have argued that kin recognition can also be mediated by biological factors such as facial resemblance and immunogenetic similarity of the major histocompatibility complex (MHC). For a discussion of the interaction of these social and biological kin recognition factors see Lieberman, Tooby, and Cosmides (2007). Whatever the proximate mechanisms of kin recognition, there is substantial evidence that humans act generally more altruistically toward close genetic kin than toward genetic non-kin.

Interactions with non-kin / reciprocity
Although interactions with non-kin are generally less altruistic than those with kin, cooperation can be maintained with non-kin via mutually beneficial reciprocity, as proposed by Robert Trivers. If there are repeated encounters between the same two players in an evolutionary game in which each of them can choose either to "cooperate" or "defect", then a strategy of mutual cooperation may be favored even if it pays each player, in the short term, to defect when the other cooperates. Direct reciprocity can lead to the evolution of cooperation only if the probability, w, of another encounter between the same two individuals exceeds the cost-to-benefit ratio of the altruistic act: w > c/b (a simple numerical illustration of this condition, and of Hamilton's rule, is sketched below). Reciprocity can also be indirect if information about previous interactions is shared. Reputation allows evolution of cooperation by indirect reciprocity. Natural selection favors strategies that base the decision to help on the reputation of the recipient: studies show that people who are more helpful are more likely to receive help.
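Both conditions above reduce to simple inequalities, so they can be checked numerically. The following Python sketch is illustrative only; the function names and example values are hypothetical and not drawn from the literature.

```python
# Minimal sketch of two cooperation conditions discussed above:
#   Hamilton's rule:     r * b > c
#   Direct reciprocity:  w > c / b
# Example values below are hypothetical.

def hamiltons_rule_favors_altruism(r: float, b: float, c: float) -> bool:
    """True if the relatedness-weighted benefit to the recipient exceeds the altruist's cost."""
    return r * b > c


def direct_reciprocity_favors_cooperation(w: float, b: float, c: float) -> bool:
    """True if the probability of another encounter exceeds the cost-to-benefit ratio."""
    return w > c / b


if __name__ == "__main__":
    # Helping a full sibling (r = 0.5) is favored only if the benefit is more
    # than twice the cost; helping a first cousin (r = 0.125) is not here.
    print(hamiltons_rule_favors_altruism(r=0.5, b=3.0, c=1.0))    # True  (1.5 > 1)
    print(hamiltons_rule_favors_altruism(r=0.125, b=3.0, c=1.0))  # False (0.375 < 1)

    # Cooperation with a non-relative is favored only if the chance of another
    # encounter (w) exceeds c/b, here 1/3.
    print(direct_reciprocity_favors_cooperation(w=0.6, b=3.0, c=1.0))  # True
    print(direct_reciprocity_favors_cooperation(w=0.2, b=3.0, c=1.0))  # False
```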
The calculations of indirect reciprocity are complicated and only a tiny fraction of this universe has been uncovered, but again a simple rule has emerged. Indirect reciprocity can only promote cooperation if the probability, q, of knowing someone's reputation exceeds the cost-to-benefit ratio of the altruistic act: q > c/b. One important problem with this explanation is that individuals may be able to evolve the capacity to obscure their reputation, reducing the probability, q, that it will be known.

Trivers argues that friendship and various social emotions evolved in order to manage reciprocity. Liking and disliking, he says, evolved to help present-day humans' ancestors form coalitions with others who reciprocated and to exclude those who did not reciprocate. Moral indignation may have evolved to prevent one's altruism from being exploited by cheaters, and gratitude may have motivated present-day humans' ancestors to reciprocate appropriately after benefiting from others' altruism. Likewise, present-day humans feel guilty when they fail to reciprocate. These social motivations match what evolutionary psychologists expect to see in adaptations that evolved to maximize the benefits and minimize the drawbacks of reciprocity.

Evolutionary psychologists say that humans have psychological adaptations that evolved specifically to help us identify nonreciprocators, commonly referred to as "cheaters." In 1993, Robert Frank and his associates found that participants in a prisoner's dilemma scenario were often able to predict whether their partners would "cheat", based on a half-hour of unstructured social interaction. In a 1996 experiment, for example, Linda Mealey and her colleagues found that people were better at remembering the faces of people when those faces were associated with stories about those individuals cheating (such as embezzling money from a church).

Strong reciprocity (or "tribal reciprocity")
Humans may have an evolved set of psychological adaptations that predispose them to be more cooperative with members of their tribal in-group than would otherwise be expected, and nastier toward members of tribal out-groups. These adaptations may have been a consequence of tribal warfare. Humans may also have predispositions for "altruistic punishment" – to punish in-group members who violate in-group rules, even when this altruistic behavior cannot be justified in terms of helping those you are related to (kin selection), cooperating with those you will interact with again (direct reciprocity), or cooperating to better your reputation with others (indirect reciprocity).

Evolutionary psychology and culture
Though evolutionary psychology has traditionally focused on individual-level behaviors, determined by species-typical psychological adaptations, considerable work has been done on how these adaptations shape and, ultimately, govern culture (Tooby and Cosmides, 1989). Tooby and Cosmides (1989) argued that the mind consists of many domain-specific psychological adaptations, some of which may constrain what cultural material is learned or taught.
As opposed to a domain-general cultural acquisition program, where an individual passively receives culturally transmitted material from the group, Tooby and Cosmides (1989), among others, argue that "the psyche evolved to generate adaptive rather than repetitive behavior, and hence critically analyzes the behavior of those surrounding it in highly structured and patterned ways, to be used as a rich (but by no means the only) source of information out of which to construct a 'private culture' or individually tailored adaptive system; in consequence, this system may or may not mirror the behavior of others in any given respect." (Tooby and Cosmides 1989).

Biological explanations of human culture also brought criticism to evolutionary psychology: evolutionary psychologists see the human psyche and physiology as a genetic product and assume that genes contain the information for the development and control of the organism and that this information is transmitted from one generation to the next via genes. Evolutionary psychologists thereby see physical and psychological characteristics of humans as genetically programmed. Even when evolutionary psychologists acknowledge the influence of the environment on human development, they understand the environment only as an activator or trigger for the programmed developmental instructions encoded in genes. Evolutionary psychologists, for example, believe that the human brain is made up of innate modules, each of which is specialised only for very specific tasks, e.g. an anxiety module. According to evolutionary psychologists, these modules are given before the organism actually develops and are then activated by some environmental event. Critics object that this view is reductionist and that cognitive specialisation only comes about through the interaction of humans with their real environment, rather than the environment of distant ancestors. Interdisciplinary approaches are increasingly striving to mediate between these opposing points of view and to highlight that biological and cultural causes need not be antithetical in explaining human behaviour and even complex cultural achievements.

In psychology sub-fields

Developmental psychology
According to Paul Baltes, the benefits granted by evolutionary selection decrease with age. Natural selection has not eliminated many harmful conditions and nonadaptive characteristics that appear among older adults, such as Alzheimer's disease. If it were a disease that killed 20-year-olds instead of 70-year-olds, natural selection might have eliminated it ages ago. Thus, unaided by evolutionary pressure against nonadaptive conditions, modern humans suffer the aches, pains, and infirmities of aging, and as the benefits of evolutionary selection decrease with age, the need for modern technological means of countering nonadaptive conditions increases.

Social psychology
As humans are a highly social species, there are many adaptive problems associated with navigating the social world (e.g., maintaining allies, managing status hierarchies, interacting with outgroup members, coordinating social activities, collective decision-making). Researchers in the emerging field of evolutionary social psychology have made many discoveries pertaining to topics traditionally studied by social psychologists, including person perception, social cognition, attitudes, altruism, emotions, group dynamics, leadership, motivation, prejudice, intergroup relations, and cross-cultural differences.
When endeavouring to solve a problem, humans show a determined facial expression from an early age, while chimpanzees have no comparable expression. Researchers suspect that the determined expression evolved because other people will frequently help a human who is visibly working hard on a problem.

Abnormal psychology
Adaptationist hypotheses regarding the etiology of psychological disorders are often based on analogies between physiological and psychological dysfunctions. Prominent theorists and evolutionary psychiatrists include Michael T. McGuire, Anthony Stevens, and Randolph M. Nesse. They, and others, suggest that mental disorders are due to the interactive effects of both nature and nurture, and often have multiple contributing causes.

Evolutionary psychologists have suggested that schizophrenia and bipolar disorder may reflect a side-effect of genes with fitness benefits, such as increased creativity. (Some individuals with bipolar disorder are especially creative during their manic phases, and the close relatives of people with schizophrenia have been found to be more likely to have creative professions.) A 1994 report by the American Psychiatric Association found that schizophrenia occurs at roughly the same rate in Western and non-Western cultures, and in industrialized and pastoral societies, suggesting that schizophrenia is neither a disease of civilization nor an arbitrary social invention. Sociopathy may represent an evolutionarily stable strategy, by which a small number of people who cheat on social contracts benefit in a society consisting mostly of non-sociopaths. Mild depression may be an adaptive response to withdraw from, and re-evaluate, situations that have led to disadvantageous outcomes (the "analytical rumination hypothesis") (see Evolutionary approaches to depression).

Trofimova reviewed the most consistent psychological and behavioural sex differences in abilities and disabilities and linked them to Geodakyan's evolutionary theory of sex (ETS). She pointed out that this pattern of consistent sex differences in physical, verbal and social abilities and disabilities corresponds to the ETS's view of sexual dimorphism as a functional specialization within a species. Sex differentiation, according to the ETS, creates two partitions within a species: (1) a conservational partition (females) and (2) a variational partition (males). In females, superiority in verbal abilities, higher rule obedience, socialisation, empathy and agreeableness can be presented as a reflection of the systemic conservation function of the female sex. Male superiority is mostly noted in exploratory abilities – in risk- and sensation-seeking, spatial orientation and physical strength – along with higher rates of physical aggression. In combination with higher birth and accidental death rates, this pattern might be a reflection of the systemic variational function (testing the boundaries of beneficial characteristics) of the male sex. As a result, psychological sex differences might be influenced by a global tendency within a species to expand its norm of reaction, but at the same time to preserve the beneficial properties of the species. Moreover, Trofimova suggested a "redundancy pruning" hypothesis as an upgrade of the ETS theory. She pointed to higher rates of psychopathy, dyslexia, autism and schizophrenia in males, in comparison to females.
She suggested that the variational function of the "male partition" might also prune irrelevant or redundant elements from an excess in a species' bank of beneficial characteristics, against a continuing resistance to change from the norm-driven conservational partition of the species. This might explain the seemingly contradictory combination in the male sex of a high drive for social status and power with the weaker (of the two sexes) abilities for social interaction. The high rates of communicative disorders and psychopathy in males might facilitate their higher rates of disengagement from normative expectations and their insensitivity to social disapproval when they deliberately do not follow social norms. Some of these speculations have yet to be developed into fully testable hypotheses, and a great deal of research is required to confirm their validity.

Antisocial and criminal behavior
Evolutionary psychology has been applied to explain criminal or otherwise immoral behavior as being adaptive or related to adaptive behaviors. Males are generally more aggressive than females, who are more selective of their partners because of the far greater effort they have to contribute to pregnancy and child-rearing. Greater male aggression is hypothesized to stem from the more intense reproductive competition that males face. Males of low status may be especially vulnerable to being childless. It may have been evolutionarily advantageous for them to engage in highly risky and violently aggressive behavior to increase their status and therefore their reproductive success. This may explain why males are generally involved in more crimes, and why low status and being unmarried are associated with criminality. Furthermore, competition over females is argued to have been particularly intense in late adolescence and young adulthood, which is theorized to explain why crime rates are particularly high during this period. Some sociologists have underlined differential exposure to androgens as the cause of these behaviors, notably Lee Ellis in his evolutionary neuroandrogenic (ENA) theory.

Many conflicts that result in harm and death involve status, reputation, and seemingly trivial insults. Steven Pinker, in his book The Better Angels of Our Nature, argues that in non-state societies without a police force it was very important to have a credible deterrent against aggression. Therefore, it was important to be perceived as having a credible reputation for retaliation, resulting in humans developing instincts for revenge as well as for protecting reputation ("honor"). Pinker argues that the development of the state and the police has dramatically reduced the level of violence compared to the ancestral environment. Whenever the state breaks down, which can happen very locally, such as in poor areas of a city, humans again organize in groups for protection and aggression, and concepts such as violent revenge and protecting honor again become extremely important.

Rape is theorized to be a reproductive strategy that facilitates the propagation of the rapist's progeny. Such a strategy may be adopted by men who otherwise are unlikely to be appealing to women and therefore cannot form legitimate relationships, or by high-status men seeking to increase their reproductive success even further by targeting socially vulnerable women who are unlikely to retaliate.
The sociobiological theories of rape are highly controversial, as traditional theories typically do not consider rape to be a behavioral adaptation, and objections to this theory are made on ethical, religious, political and scientific grounds. Psychology of religion Adaptationist perspectives on religious belief suggest that, like all behavior, religious behaviors are a product of the human brain. As with all other organ functions, cognition's functional structure has been argued to have a genetic foundation, and is therefore subject to the effects of natural selection and sexual selection. Like other organs and tissues, this functional structure should be universally shared amongst humans and should have solved important problems of survival and reproduction in ancestral environments. However, evolutionary psychologists remain divided on whether religious belief is more likely a consequence of evolved psychological adaptations, or a byproduct of other cognitive adaptations. Coalitional psychology Coalitional psychology is an approach that seeks to explain political behaviors between different coalitions, and the conditionality of these behaviors, from an evolutionary psychological perspective. This approach assumes that since human beings appeared on the earth, they have evolved to live in groups instead of living as individuals, to achieve benefits such as more mating opportunities and increased status. Human beings thus naturally think and act in a way that manages and negotiates group dynamics. Coalitional psychology offers falsifiable ex ante predictions by positing five hypotheses on how these psychological adaptations operate: humans represent groups as a special category of individual, unstable and with a short shadow of the future; political entrepreneurs strategically manipulate the coalitional environment, often appealing to emotional devices such as "outrage" to inspire collective action; relative gains dominate relations with enemies, whereas absolute gains characterize relations with allies; coalitional size and male physical strength will positively predict individual support for aggressive foreign policies; and individuals with children, particularly women, will differ from those without progeny in their adoption of aggressive foreign policies. Reception and criticism Critics of evolutionary psychology accuse it of promoting genetic determinism, pan-adaptationism (the idea that all behaviors and anatomical features are adaptations), unfalsifiable hypotheses, distal or ultimate explanations of behavior when proximate explanations are superior, and malevolent political or moral ideas. Ethical implications Critics have argued that evolutionary psychology might be used to justify existing social hierarchies and reactionary policies. It has also been suggested by critics that evolutionary psychologists' theories and interpretations of empirical data rely heavily on ideological assumptions about race and gender. In response to such criticism, evolutionary psychologists often caution against committing the naturalistic fallacy – the assumption that "what is natural" is necessarily a moral good. However, this caution has itself been criticized as a means to stifle legitimate ethical discussion. Contradictions in models Some criticisms of evolutionary psychology point to contradictions between different aspects of the adaptive scenarios it posits.
One example is the evolutionary psychology model of extended social groups selecting for modern human brains. The contradiction raised is that the synaptic function of modern human brains requires high amounts of many specific essential nutrients, so that a transition to higher requirements for the same essential nutrients, shared by all individuals in a population, would decrease the possibility of forming large groups, because bottleneck foods containing rare essential nutrients would cap group sizes. As additional arguments against big brains promoting social networking, critics mention that some insects have societies with different ranks for each individual, and that monkeys remain socially functional after the removal of most of the brain. The model of males as both providers and protectors is criticized for the impossibility of being in two places at once: the male cannot both protect his family at home and be out hunting at the same time. In the case of the claim that a provider male could buy protection for his family from other males by bartering food that he had hunted, critics point out that the most valuable food (the food containing the rarest essential nutrients) would differ across ecologies, being plant-based in some geographical areas and animal-based in others. This would make it impossible for hunting styles relying on physical strength or risk-taking to be universally of similar barter value, and would instead make it inevitable that in some parts of Africa food gathered with no need for major physical strength would be the most valuable to barter for protection. Critics also point to a contradiction between evolutionary psychology's claim that men need to be more sexually visual than women, so as to assess women's fertility faster than women need to assess men's genes, and its claim that male sexual jealousy guards against infidelity: it would be pointless for a male to assess female fertility quickly if he still needed to assess the risk that a jealous male mate was present and, in that case, his chances of defeating him before mating (the pointlessness of assessing one necessary condition faster than another necessary condition can possibly be assessed). Standard social science model Evolutionary psychology has been entangled in the larger philosophical and social science controversies related to the debate on nature versus nurture. Evolutionary psychologists typically contrast evolutionary psychology with what they call the standard social science model (SSSM). They characterize the SSSM as the "blank slate", "relativist", "social constructionist", and "cultural determinist" perspective that they say dominated the social sciences throughout the 20th century and assumed that the mind was shaped almost entirely by culture. Critics have argued that evolutionary psychologists created a false dichotomy between their own view and a caricature of the SSSM. Other critics regard the SSSM as a rhetorical device or a straw man and suggest that the scientists whom evolutionary psychologists associate with the SSSM did not believe that the mind was a blank slate devoid of any natural predispositions. Reductionism and determinism Some critics view evolutionary psychology as a form of genetic reductionism and genetic determinism; a common critique is that evolutionary psychology does not address the complexity of individual development and experience and fails to explain the influence of genes on behavior in individual cases.
Evolutionary psychologists respond that they are working within a nature-nurture interactionist framework that acknowledges that many psychological adaptations are facultative (sensitive to environmental variations during individual development). The discipline is generally not focused on proximate analyses of behavior, but rather its focus is on the study of distal/ultimate causality (the evolution of psychological adaptations). The field of behavioral genetics is focused on the study of the proximate influence of genes on behavior. Testability of hypotheses A frequent critique of the discipline is that the hypotheses of evolutionary psychology are frequently arbitrary and difficult or impossible to adequately test, thus questioning its status as an actual scientific discipline, for example because many current traits probably evolved to serve different functions than they do now. Thus because there are a potentially infinite number of alternative explanations for why a trait evolved, critics contend that it is impossible to determine the exact explanation. While evolutionary psychology hypotheses are difficult to test, evolutionary psychologists assert that it is not impossible. Part of the critique of the scientific base of evolutionary psychology includes a critique of the concept of the Environment of Evolutionary Adaptation (EEA). Some critics have argued that researchers know so little about the environment in which Homo sapiens evolved that explaining specific traits as an adaption to that environment becomes highly speculative. Evolutionary psychologists respond that they do know many things about this environment, including the facts that present day humans' ancestors were hunter-gatherers, that they generally lived in small tribes, etc. Edward Hagen argues that the human past environments were not radically different in the same sense as the Carboniferous or Jurassic periods and that the animal and plant taxa of the era were similar to those of the modern world, as was the geology and ecology. Hagen argues that few would deny that other organs evolved in the EEA (for example, lungs evolving in an oxygen rich atmosphere) yet critics question whether or not the brain's EEA is truly knowable, which he argues constitutes selective scepticism. Hagen also argues that most evolutionary psychology research is based on the fact that females can get pregnant and males cannot, which Hagen observes was also true in the EEA. John Alcock describes this as the "No Time Machine Argument", as critics are arguing that since it is not possible to travel back in time to the EEA, then it cannot be determined what was going on there and thus what was adaptive. Alcock argues that present-day evidence allows researchers to be reasonably confident about the conditions of the EEA and that the fact that so many human behaviours are adaptive in the current environment is evidence that the ancestral environment of humans had much in common with the present one, as these behaviours would have evolved in the ancestral environment. Thus Alcock concludes that researchers can make predictions on the adaptive value of traits. Similarly, Dominic Murphy argues that alternative explanations cannot just be forwarded but instead need their own evidence and predictions - if one explanation makes predictions that the others cannot, it is reasonable to have confidence in that explanation. 
In addition, Murphy argues that other historical sciences also make predictions about modern phenomena in order to arrive at explanations of past phenomena: for example, cosmologists look for evidence of what we would expect to see in the modern day if the Big Bang occurred, while geologists make predictions about modern phenomena to determine whether an asteroid wiped out the dinosaurs. Murphy argues that if other historical disciplines can conduct tests without a time machine, then the onus is on the critics to show why evolutionary psychology is untestable if other historical disciplines are not, as "methods should be judged across the board, not singled out for ridicule in one context." Modularity of mind Evolutionary psychologists generally presume that, like the body, the mind is made up of many evolved modular adaptations, although there is some disagreement within the discipline regarding the degree of general plasticity, or "generality," of some modules. It has been suggested that modularity evolves because, compared with non-modular networks, it would have conferred an advantage in terms of fitness and because connection costs are lower. In contrast, some academics argue that it is unnecessary to posit the existence of highly domain-specific modules, and suggest that the neural anatomy of the brain supports a model based on more domain-general faculties and processes. Moreover, empirical support for the domain-specific theory stems almost entirely from performance on variations of the Wason selection task, which is extremely limited in scope as it tests only one subtype of deductive reasoning. Cultural rather than genetic development of cognitive tools Psychologist Cecilia Heyes has argued that the picture presented by some evolutionary psychology of the human mind as a collection of cognitive instincts (organs of thought shaped by genetic evolution over very long time periods) does not fit research results. She posits instead that humans have cognitive gadgets, "special-purpose organs of thought" built in the course of development through social interaction. Similar criticisms are articulated by Subrena E. Smith of the University of New Hampshire. Response by evolutionary psychologists Evolutionary psychologists have addressed many of their critics (e.g. in books by Segerstråle (2000), Barkow (2005), and Alcock (2001)). Among their rebuttals are that some criticisms are straw men, or are based on an incorrect nature versus nurture dichotomy or on basic misunderstandings of the discipline. Robert Kurzban suggested that "...critics of the field, when they err, are not slightly missing the mark. Their confusion is deep and profound. It's not like they are marksmen who can't quite hit the center of the target; they're holding the gun backwards." Many have written specifically to correct basic misconceptions.
See also Affective neuroscience Behavioural genetics Biocultural evolution Biosocial criminology Collective unconscious Cognitive neuroscience Cultural neuroscience Darwinian Happiness Darwinian literary studies Deep social mind Dunbar's number Evolution of the brain List of evolutionary psychologists Evolutionary origin of religions Evolutionary psychology and culture Molecular evolution Primate cognition Hominid intelligence Human ethology Great ape language Chimpanzee intelligence Cooperative eye hypothesis Id, ego, and superego Intersubjectivity Mirror neuron Origin of language Origin of speech Ovulatory shift hypothesis Primate empathy Shadow (psychology) Simulation theory of empathy Theory of mind Neuroethology Paleolithic diet Paleolithic lifestyle r/K selection theory Social neuroscience Sociobiology Universal Darwinism Notes References Buss, D. M. (1994). The evolution of desire: Strategies of human mating. New York: Basic Books. Gaulin, Steven J. C. and Donald H. McBurney. Evolutionary psychology. Prentice Hall. 2003. Nesse, R.M. (2000). Tingergen's Four Questions Organized . Schacter, Daniel L, Daniel Wegner and Daniel Gilbert. 2007. Psychology. Worth Publishers. . Further reading Heylighen F. (2012). "Evolutionary Psychology", in: A. Michalos (ed.): Encyclopedia of Quality of Life Research (Springer, Berlin). Gerhard Medicus (2017). Being Human – Bridging the Gap between the Sciences of Body and Mind, Berlin VWB Oikkonen, Venla: Gender, Sexuality and Reproduction in Evolutionary Narratives. London: Routledge, 2013. External links PsychTable.org Collaborative effort to catalog human psychological adaptations What Is Evolutionary Psychology? by Clinical Evolutionary Psychologist Dale Glaebach. Evolutionary Psychology – Approaches in Psychology Gerhard Medicus (2017). Being Human – Bridging the Gap between the Sciences of Body and Mind, Berlin VWB Academic societies Human Behavior and Evolution Society; international society dedicated to using evolutionary theory to study human nature The International Society for Human Ethology; promotes ethological perspectives on the study of humans worldwide European Human Behaviour and Evolution Association an interdisciplinary society that supports the activities of European researchers with an interest in evolutionary accounts of human cognition, behavior and society The Association for Politics and the Life Sciences; an international and interdisciplinary association of scholars, scientists, and policymakers concerned with evolutionary, genetic, and ecological knowledge and its bearing on political behavior, public policy and ethics. 
Society for Evolutionary Analysis in Law a scholarly association dedicated to fostering interdisciplinary exploration of issues at the intersection of law, biology, and evolutionary theory The New England Institute for Cognitive Science and Evolutionary Psychology aims to foster research and education into the interdisciplinary nexus of cognitive science and evolutionary studies The NorthEastern Evolutionary Psychology Society; regional society dedicated to encouraging scholarship and dialogue on the topic of evolutionary psychology Feminist Evolutionary Psychology Society researchers that investigate the active role that females have had in human evolution Journals Evolutionary Psychology – free access online scientific journal Evolution and Human Behavior – journal of the Human Behavior and Evolution Society Evolutionary Psychological Science - An international, interdisciplinary forum for original research papers that address evolved psychology. Spans social and life sciences, anthropology, philosophy, criminology, law and the humanities. Politics and the Life Sciences – an interdisciplinary peer-reviewed journal published by the Association for Politics and the Life Sciences Human Nature: An Interdisciplinary Biosocial Perspective – advances the interdisciplinary investigation of the biological, social, and environmental factors that underlie human behavior. It focuses primarily on the functional unity in which these factors are continuously and mutually interactive. These include the evolutionary, biological, and sociological processes as they interact with human social behavior. Biological Theory: Integrating Development, Evolution and Cognition – devoted to theoretical advances in the fields of biology and cognition, with an emphasis on the conceptual integration afforded by evolutionary and developmental approaches. Evolutionary Anthropology Behavioral and Brain Sciences – interdisciplinary articles in psychology, neuroscience, behavioral biology, cognitive science, artificial intelligence, linguistics and philosophy. About 30% of the articles have focused on evolutionary analyses of behavior. Evolution and Development – research relevant to interface of evolutionary and developmental biology The Evolutionary Review – Art, Science, and Culture Videos Brief video clip from the "Evolution" PBS Series TED talk by Steven Pinker about his book The Blank Slate: The Modern Denial of Human Nature RSA talk by evolutionary psychologist Robert Kurzban on modularity of mind, based on his book Why Everyone (Else) is a Hypocrite Richard Dawkins' lecture on natural selection and evolutionary psychology Evolutionary Psychology – Steven Pinker & Frans de Waal Audio recording Stone Age Minds: A conversation with evolutionary psychologists Leda Cosmides and John Tooby Margaret Mead and Samoa. Review of the nature versus nurture debate triggered by Mead's book "Coming of Age in Samoa." "Evolutionary Psychology", In Our Time, BBC Radio 4 discussion with Janet Radcliffe Richards, Nicholas Humphrey and Steven Rose (November 2, 2000) psychology
Philosophy of biology
The philosophy of biology is a subfield of philosophy of science, which deals with epistemological, metaphysical, and ethical issues in the biological and biomedical sciences. Although philosophers of science and philosophers generally have long been interested in biology (e.g., Aristotle, Descartes, and Kant), philosophy of biology only emerged as an independent field of philosophy in the 1960s and 1970s, associated with the research of David Hull. Philosophers of science then began paying increasing attention to biology, from the rise of Neodarwinism in the 1930s and 1940s to the discovery of the structure of DNA in 1953 to more recent advances in genetic engineering. Other key ideas include the reduction of all life processes to biochemical reactions, and the incorporation of psychology into a broader neuroscience. Overview Philosophers of biology examine the practices, theories, and concepts of biologists with a view toward better understanding biology as a scientific discipline (or group of scientific fields). Scientific ideas are philosophically analyzed and their consequences are explored. Philosophers of biology have also explored how our understanding of biology relates to epistemology, ethics, aesthetics, and metaphysics and whether progress in biology should compel modern societies to rethink traditional values concerning all aspects of human life. It is sometimes difficult to separate the philosophy of biology from theoretical biology. "What is a biological species?" "What is natural selection, and how does it operate in nature?" "How should we distinguish disease states from non-disease states?" "What is life?" "What makes humans uniquely human?" "What is the basis of moral thinking?" "Is biological materialism & deterministic molecular biology compatible with free will?" "How is rationality possible, given our biological origins?" "Is evolution compatible with Christianity or other religious systems?" "Are there laws of biology like the laws of physics?" Ideas drawn from philosophical ontology and logic are being used by biologists in the domain of bioinformatics. Ontologies such as the Gene Ontology are being used to annotate the results of biological experiments in model organisms in order to create logically tractable bodies of data for reasoning and search. The ontologies are species-neutral graph-theoretical representations of biological types joined together by formally defined relations. Philosophy of biology has become a visible, well-organized discipline, with its own journals, conferences, and professional organizations. The largest of the latter is the International Society for the History, Philosophy, and Social Studies of Biology (ISHPSSB). Biological laws and autonomy of biology A prominent question in the philosophy of biology is whether biology can be reduced to lower-level sciences such as chemistry and physics. Materialism is the view that every biological system including organisms consists of nothing except the interactions of molecules; it is opposed to vitalism. As a methodology, reduction would mean that biological systems should be studied at the level of chemistry and molecules. In terms of epistemology, reduction means that knowledge of biological processes can be reduced to knowledge of lower-level processes, a controversial claim. Holism in science is the view that emphasizes higher-level processes, phenomena at a larger level that occur due to the pattern of interactions between the elements of a system over time. 
For example, to explain why one species of finch survives a drought while others die out, the holistic method looks at the entire ecosystem. Reducing an ecosystem to its parts in this case would be less effective at explaining overall behavior (in this case, the decrease in biodiversity). As individual organisms must be understood in the context of their ecosystems, holists argue, so must lower-level biological processes be understood in the broader context of the living organism in which they take part. Proponents of this view cite our growing understanding of the multidirectional and multilayered nature of gene modulation (including epigenetic changes) as an area where a reductionist view is inadequate for full explanatory power. All processes in organisms obey physical laws, but some argue that the difference between inanimate and biological processes is that the organisation of biological properties is subject to control by coded information. This has led biologists and philosophers such as Ernst Mayr and David Hull to return to the strictly philosophical reflections of Charles Darwin to resolve some of the problems which confronted them when they tried to employ a philosophy of science derived from classical physics. The old positivist approach used in physics emphasised a strict determinism and led to the discovery of universally applicable laws, testable in the course of experiment. It was difficult for biology to use this approach. Standard philosophy of science seemed to leave out a lot of what characterised living organisms - namely, a historical component in the form of an inherited genotype. Philosophers of biology have also examined the notion of teleology in biology. Some have argued that scientists have had no need for a notion of cosmic teleology that can explain and predict evolution, since one was provided by Darwin. But teleological explanations relating to purpose or function have remained useful in biology, for example, in explaining the structural configuration of macromolecules and the study of co-operation in social systems. By clarifying and restricting the use of the term 'teleology' to describe and explain systems controlled strictly by genetic programmes or other physical systems, teleological questions can be framed and investigated while remaining committed to the physical nature of all underlying organic processes. While some philosophers claim that the ideas of Charles Darwin ended the last remainders of teleology in biology, the matter continues to be debated. Debates in these areas of philosophy of biology turn on how one views reductionism more generally. Ethical implications of biology Sharon Street claims that contemporary evolutionary biological theory creates what she calls a “Darwinian Dilemma” for realists. She argues that this is because it is unlikely that our evaluative judgements about morality are tracking anything true about the world. Rather, she says, it is likely that moral judgements and intuitions that promote our reproductive fitness were selected for, and there is no reason to think that it is the truth of these moral intuitions which accounts for their selection. She notes that a moral intuition most people share, that someone being a close family member is a prima facie good reason to help them, happens to be an intuition likely to increase reproductive fitness, while a moral intuition almost no one has, that someone being a close family member is a reason not to help them, is likely to decrease reproductive fitness. 
David Copp responded to Street by arguing that realists can avoid this so-called dilemma by accepting what he calls a “quasi-tracking” position. Copp explains that what he means by quasi tracking is that it is likely that moral positions in a given society would have evolved to be at least somewhat close to the truth. He justifies this by appealing to the claim that the purpose of morality is to allow a society to meet certain basic needs, such as social stability, and a society with a successful moral codes would be better at doing this. Other perspectives One perspective on the philosophy of biology is how developments in modern biological research and biotechnologies have influenced traditional philosophical ideas about the distinction between biology and technology, as well as implications for ethics, society, and culture. An example is the work of philosopher Eugene Thacker in his book Biomedia. Building on current research in fields such as bioinformatics and biocomputing, as well as on work in the history of science (particularly the work of Georges Canguilhem, Lily E. Kay, and Hans-Jörg Rheinberger), Thacker defines biomedia as entailing "the informatic recontextualization of biological components and processes, for ends that may be medical or non-medical...biomedia continuously make the dual demand that information materialize itself as gene or protein compounds. This point cannot be overstated: biomedia depend upon an understanding of biological as informational but not immaterial." Some approaches to the philosophy of biology incorporate perspectives from science studies and/or science and technology studies, anthropology, sociology of science, and political economy. This includes work by scholars such as Melinda Cooper, Luciana Parisi, Paul Rabinow, Nikolas Rose, and Catherine Waldby. Philosophy of biology was historically associated very closely with theoretical evolutionary biology, but more recently there have been more diverse movements, such as to examine molecular biology. Scientific discovery process Research in biology continues to be less guided by theory than it is in other sciences. This is especially the case where the availability of high throughput screening techniques for the different "-omics" fields such as genomics, whose complexity makes them predominantly data-driven. Such data-intensive scientific discovery is by some considered to be the fourth paradigm, after empiricism, theory and computer simulation. Others reject the idea that data driven research is about to replace theory. As Krakauer et al. put it: "machine learning is a powerful means of preprocessing data in preparation for mechanistic theory building, but should not be considered the final goal of a scientific inquiry." In regard to cancer biology, Raspe et al. state: "A better understanding of tumor biology is fundamental for extracting the relevant information from any high throughput data." The journal Science chose cancer immunotherapy as the breakthrough of 2013. According to their explanation a lesson to be learned from the successes of cancer immunotherapy is that they emerged from decoding of basic biology. Theory in biology is to some extent less strictly formalized than in physics. Besides 1) classic mathematical-analytical theory, as in physics, there is 2) statistics-based, 3) computer simulation and 4) conceptual/verbal analysis. 
Dougherty and Bittner argue that for biology to progress as a science, it has to move to more rigorous mathematical modeling, or otherwise risk being "empty talk". In tumor biology research, the characterization of cellular signaling processes has largely focused on identifying the function of individual genes and proteins. Janes, however, showed the context-dependent nature of signaling in driving cell decisions, demonstrating the need for a more systems-based approach. The lack of attention to context dependency in preclinical research is also illustrated by the observation that preclinical testing rarely includes predictive biomarkers that, when advanced to clinical trials, will help to distinguish those patients who are likely to benefit from a drug. The Darwinian dynamic and the origin of life Organisms that exist today, from viruses to humans, possess a self-replicating informational molecule (genome) that is either DNA (most organisms) or RNA (as in some viruses), and such an informational molecule is likely intrinsic to life. Probably the earliest forms of life were likewise based on a self-replicating informational molecule (genome), perhaps RNA or an informational molecule more primitive than RNA or DNA. It has been argued that the evolution of order in living systems, and in particular physical systems, obeys a common fundamental principle that has been termed the Darwinian dynamic. This principle was formulated by first considering how macroscopic order is generated in a simple non-biological system far from thermodynamic equilibrium, and subsequently extending the consideration to short, replicating RNA molecules. The underlying order-generating process was concluded to be basically similar for both types of systems. Journals and professional organizations Journals History and Philosophy of the Life Sciences Journal of the History of Biology Biology & Philosophy Biological Theory Philosophy, Theory, and Practice in Biology Studies in History and Philosophy of Science Professional organizations International Society for the History, Philosophy, and Social Studies of Biology
Biodiversity
Biodiversity (or biological diversity) is the variety and variability of life on Earth. It can be measured on various levels. There is for example genetic variability, species diversity, ecosystem diversity and phylogenetic diversity. Diversity is not distributed evenly on Earth. It is greater in the tropics as a result of the warm climate and high primary productivity in the region near the equator. Tropical forest ecosystems cover less than one-fifth of Earth's terrestrial area and contain about 50% of the world's species. There are latitudinal gradients in species diversity for both marine and terrestrial taxa. Since life began on Earth, six major mass extinctions and several minor events have led to large and sudden drops in biodiversity. The Phanerozoic aeon (the last 540 million years) marked a rapid growth in biodiversity via the Cambrian explosion. In this period, the majority of multicellular phyla first appeared. The next 400 million years included repeated, massive biodiversity losses. Those events have been classified as mass extinction events. In the Carboniferous, rainforest collapse may have led to a great loss of plant and animal life. The Permian–Triassic extinction event, 251 million years ago, was the worst; vertebrate recovery took 30 million years. Human activities have led to an ongoing biodiversity loss and an accompanying loss of genetic diversity. This process is often referred to as Holocene extinction, or sixth mass extinction. For example, it was estimated in 2007 that up to 30% of all species will be extinct by 2050. Destroying habitats for farming is a key reason why biodiversity is decreasing today. Climate change also plays a role. This can be seen for example in the effects of climate change on biomes. This anthropogenic extinction may have started toward the end of the Pleistocene, as some studies suggest that the megafaunal extinction event that took place around the end of the last ice age partly resulted from overhunting. Definitions Biologists most often define biodiversity as the "totality of genes, species and ecosystems of a region". An advantage of this definition is that it presents a unified view of the traditional types of biological variety previously identified: taxonomic diversity (usually measured at the species diversity level) ecological diversity (often viewed from the perspective of ecosystem diversity) morphological diversity (which stems from genetic diversity and molecular diversity) functional diversity (which is a measure of the number of functionally disparate species within a population (e.g. different feeding mechanism, different motility, predator vs prey, etc.)) Biodiversity is most commonly used to replace the more clearly-defined and long-established terms, species diversity and species richness. However, there is no concrete definition for biodiversity, as its definition continues to be defined. Other definitions include (in chronological order): An explicit definition consistent with this interpretation was first given in a paper by Bruce A. Wilcox commissioned by the International Union for the Conservation of Nature and Natural Resources (IUCN) for the 1982 World National Parks Conference. Wilcox's definition was "Biological diversity is the variety of life forms...at all levels of biological systems (i.e., molecular, organismic, population, species and ecosystem)...". A publication by Wilcox in 1984: Biodiversity can be defined genetically as the diversity of alleles, genes and organisms. 
Geneticists study processes such as mutation and gene transfer that drive evolution. The 1992 United Nations Earth Summit defined biological diversity as "the variability among living organisms from all sources, including, inter alia, terrestrial, marine and other aquatic ecosystems and the ecological complexes of which they are part: this includes diversity within species, between species and of ecosystems". This definition is used in the United Nations Convention on Biological Diversity. Gaston and Spicer's definition in their 2004 book "Biodiversity: an introduction" is "variation of life at all levels of biological organization". The Food and Agriculture Organization of the United Nations (FAO) defined biodiversity in 2019 as "the variability that exists among living organisms (both within and between species) and the ecosystems of which they are part." Number of species According to estimates by Mora et al. (2011), there are approximately 8.7 million species in total, of which about 2.2 million are oceanic. The authors note that these estimates are strongest for eukaryotic organisms and likely represent the lower bound of prokaryote diversity. Other estimates include: 220,000 vascular plants, estimated using the species–area relation method; 0.7–1 million marine species; 10–30 million insects (of which some 0.9 million are known today); 5–10 million bacteria; 1.5–3 million fungi, based on data from the tropics, long-term non-tropical sites and molecular studies that have revealed cryptic speciation (some 0.075 million species of fungi had been documented by 2001); and 1 million mites. The number of microbial species is not reliably known, but the Global Ocean Sampling Expedition dramatically increased the estimates of genetic diversity by identifying an enormous number of new genes from near-surface plankton samples at various marine locations, initially over the 2004–2006 period. The findings may eventually cause a significant change in the way science defines species and other taxonomic categories. Since the rate of extinction has increased, many extant species may become extinct before they are described. Not surprisingly, among animals the most studied groups are birds and mammals, whereas fishes and arthropods are the least studied groups. Current biodiversity loss During the last century, decreases in biodiversity have been increasingly observed. It was estimated in 2007 that up to 30% of all species will be extinct by 2050. Of these, about one eighth of known plant species are threatened with extinction. Estimates of ongoing species loss reach as high as 140,000 species per year, based on species–area theory (a worked sketch of this relation follows below). This figure indicates unsustainable ecological practices, because few species emerge each year. The rate of species loss is greater now than at any time in human history, with extinctions occurring at rates hundreds of times higher than background extinction rates, and the rate is expected to grow in the coming years. As of 2012, some studies suggest that 25% of all mammal species could be extinct in 20 years. In absolute terms, the planet has lost 58% of its biodiversity since 1970 according to a 2016 study by the World Wildlife Fund. The Living Planet Report 2014 claims that "the number of mammals, birds, reptiles, amphibians, and fish across the globe is, on average, about half the size it was 40 years ago". Of that number, 39% accounts for the terrestrial wildlife gone, 39% for the marine wildlife gone and 76% for the freshwater wildlife gone.
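The species-loss estimates cited above rest on the classical species–area relation, in which the number of species S scales with habitat area A roughly as S = cA^z. As a minimal illustrative sketch (the exponent z = 0.25 and the 90% habitat-loss figure are hypothetical values chosen only to show the arithmetic, not numbers taken from the studies cited), the following Python snippet shows how a loss of habitat area translates into a predicted loss of species:

# Species-area relation: S = c * A**z, so the fraction of species retained
# after habitat loss depends only on the fraction of area remaining and on z.
def species_retained(area_fraction_remaining, z=0.25):
    """Fraction of species predicted to persist when only part of the habitat remains."""
    return area_fraction_remaining ** z

remaining_area = 0.10                           # hypothetical: 90% of habitat destroyed
retained = species_retained(remaining_area)
print(f"species retained: {retained:.0%}")      # about 56%
print(f"species lost:     {1 - retained:.0%}")  # about 44%

With these illustrative numbers, destroying 90% of habitat area predicts the loss of roughly 44% of species; the point of the sketch is only that the relation is strongly nonlinear, not to reproduce any particular published estimate such as the 140,000-species-per-year figure.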
According to the same Living Planet Report, biodiversity took the biggest hit in Latin America, plummeting 83 percent. High-income countries showed a 10% increase in biodiversity, which was canceled out by a loss in low-income countries. This is despite the fact that high-income countries use five times the ecological resources of low-income countries, which was explained as a result of a process whereby wealthy nations outsource resource depletion to poorer nations, which suffer the greatest ecosystem losses. A 2017 study published in PLOS One found that the biomass of insect life in Germany had declined by three-quarters in the previous 25 years. Dave Goulson of Sussex University stated that their study suggested that humans "appear to be making vast tracts of land inhospitable to most forms of life, and are currently on course for ecological Armageddon. If we lose the insects then everything is going to collapse." In 2020 the World Wildlife Fund published a report saying that "biodiversity is being destroyed at a rate unprecedented in human history". The report claims that the populations of the examined species declined on average by 68% between 1970 and 2016. Of 70,000 monitored species, around 48% are experiencing population declines from human activity (as of 2023), whereas only 3% have increasing populations. Rates of decline in biodiversity in the current sixth mass extinction match or exceed rates of loss in the five previous mass extinction events in the fossil record. Biodiversity loss is in fact "one of the most critical manifestations of the Anthropocene" (since around the 1950s); the continued decline of biodiversity constitutes "an unprecedented threat" to the continued existence of human civilization. The reduction is caused primarily by human impacts, particularly habitat destruction. Since the Stone Age, species loss has accelerated above the average basal rate, driven by human activity. Estimates of species losses are at a rate 100–10,000 times as fast as is typical in the fossil record. Loss of biodiversity results in the loss of natural capital that supplies ecosystem goods and services. Species today are being wiped out at a rate 100 to 1,000 times higher than baseline, and the rate of extinctions is increasing. This process destroys the resilience and adaptability of life on Earth. In 2006, many species were formally classified as rare, endangered or threatened; moreover, scientists have estimated that millions more species are at risk which have not been formally recognized. About 40 percent of the 40,177 species assessed using the IUCN Red List criteria are now listed as threatened with extinction, a total of 16,119. As of late 2022, 9,251 species were listed as critically endangered on the IUCN Red List. Numerous scientists and the IPBES Global Assessment Report on Biodiversity and Ecosystem Services assert that human population growth and overconsumption are the primary factors in this decline. However, other scientists have criticized this finding and say that loss of habitat caused by "the growth of commodities for export" is the main driver. Some studies have, however, pointed out that habitat destruction for the expansion of agriculture and the overexploitation of wildlife, rather than climate change, are the more significant drivers of contemporary biodiversity loss. Distribution Biodiversity is not evenly distributed; rather, it varies greatly across the globe as well as within regions and seasons.
Among other factors, the diversity of all living things (biota) depends on temperature, precipitation, altitude, soils, geography and the interactions between other species. The study of the spatial distribution of organisms, species and ecosystems, is the science of biogeography. Diversity consistently measures higher in the tropics and in other localized regions such as the Cape Floristic Region and lower in polar regions generally. Rain forests that have had wet climates for a long time, such as Yasuní National Park in Ecuador, have particularly high biodiversity. There is local biodiversity, which directly impacts daily life, affecting the availability of fresh water, food choices, and fuel sources for humans. Regional biodiversity includes habitats and ecosystems that synergizes and either overlaps or differs on a regional scale. National biodiversity within a country determines the ability for a country to thrive according to its habitats and ecosystems on a national scale. Also, within a country, endangered species are initially supported on a national level then internationally. Ecotourism may be utilized to support the economy and encourages tourists to continue to visit and support species and ecosystems they visit, while they enjoy the available amenities provided. International biodiversity impacts global livelihood, food systems, and health. Problematic pollution, over consumption, and climate change can devastate international biodiversity. Nature-based solutions are a critical tool for a global resolution. Many species are in danger of becoming extinct and need world leaders to be proactive with the Kunming-Montreal Global Biodiversity Framework. Terrestrial biodiversity is thought to be up to 25 times greater than ocean biodiversity. Forests harbour most of Earth's terrestrial biodiversity. The conservation of the world's biodiversity is thus utterly dependent on the way in which we interact with and use the world's forests. A new method used in 2011, put the total number of species on Earth at 8.7 million, of which 2.1 million were estimated to live in the ocean. However, this estimate seems to under-represent the diversity of microorganisms. Forests provide habitats for 80 percent of amphibian species, 75 percent of bird species and 68 percent of mammal species. About 60 percent of all vascular plants are found in tropical forests. Mangroves provide breeding grounds and nurseries for numerous species of fish and shellfish and help trap sediments that might otherwise adversely affect seagrass beds and coral reefs, which are habitats for many more marine species. Forests span around 4 billion acres (nearly a third of the Earth's land mass) and are home to approximately 80% of the world's biodiversity. About 1 billion hectares are covered by primary forests. Over 700 million hectares of the world's woods are officially protected. The biodiversity of forests varies considerably according to factors such as forest type, geography, climate and soils – in addition to human use. Most forest habitats in temperate regions support relatively few animal and plant species and species that tend to have large geographical distributions, while the montane forests of Africa, South America and Southeast Asia and lowland forests of Australia, coastal Brazil, the Caribbean islands, Central America and insular Southeast Asia have many species with small geographical distributions. 
Areas with dense human populations and intense agricultural land use, such as Europe, parts of Bangladesh, China, India and North America, are less intact in terms of their biodiversity. Northern Africa, southern Australia, coastal Brazil, Madagascar and South Africa, are also identified as areas with striking losses in biodiversity intactness. European forests in EU and non-EU nations comprise more than 30% of Europe's land mass (around 227 million hectares), representing an almost 10% growth since 1990. Latitudinal gradients Generally, there is an increase in biodiversity from the poles to the tropics. Thus localities at lower latitudes have more species than localities at higher latitudes. This is often referred to as the latitudinal gradient in species diversity. Several ecological factors may contribute to the gradient, but the ultimate factor behind many of them is the greater mean temperature at the equator compared to that at the poles. Even though terrestrial biodiversity declines from the equator to the poles, some studies claim that this characteristic is unverified in aquatic ecosystems, especially in marine ecosystems. The latitudinal distribution of parasites does not appear to follow this rule. Also, in terrestrial ecosystems the soil bacterial diversity has been shown to be highest in temperate climatic zones, and has been attributed to carbon inputs and habitat connectivity. In 2016, an alternative hypothesis ("the fractal biodiversity") was proposed to explain the biodiversity latitudinal gradient. In this study, the species pool size and the fractal nature of ecosystems were combined to clarify some general patterns of this gradient. This hypothesis considers temperature, moisture, and net primary production (NPP) as the main variables of an ecosystem niche and as the axis of the ecological hypervolume. In this way, it is possible to build fractal hyper volumes, whose fractal dimension rises to three moving towards the equator. Biodiversity Hotspots A biodiversity hotspot is a region with a high level of endemic species that have experienced great habitat loss. The term hotspot was introduced in 1988 by Norman Myers. While hotspots are spread all over the world, the majority are forest areas and most are located in the tropics. Brazil's Atlantic Forest is considered one such hotspot, containing roughly 20,000 plant species, 1,350 vertebrates and millions of insects, about half of which occur nowhere else. The island of Madagascar and India are also particularly notable. Colombia is characterized by high biodiversity, with the highest rate of species by area unit worldwide and it has the largest number of endemics (species that are not found naturally anywhere else) of any country. About 10% of the species of the Earth can be found in Colombia, including over 1,900 species of bird, more than in Europe and North America combined, Colombia has 10% of the world's mammals species, 14% of the amphibian species and 18% of the bird species of the world. Madagascar dry deciduous forests and lowland rainforests possess a high ratio of endemism. Since the island separated from mainland Africa 66 million years ago, many species and ecosystems have evolved independently. Indonesia's 17,000 islands cover and contain 10% of the world's flowering plants, 12% of mammals and 17% of reptiles, amphibians and birds—along with nearly 240 million people. 
Many regions of high biodiversity and/or endemism arise from specialized habitats which require unusual adaptations, for example, alpine environments in high mountains, or Northern European peat bogs. Accurately measuring differences in biodiversity can be difficult. Selection bias amongst researchers may contribute to biased empirical research for modern estimates of biodiversity. In 1768, Rev. Gilbert White succinctly observed of his Selborne, Hampshire "all nature is so full, that that district produces the most variety which is the most examined." Evolution over geologic timeframes Biodiversity is the result of 3.5 billion years of evolution. The origin of life has not been established by science, however, some evidence suggests that life may already have been well-established only a few hundred million years after the formation of the Earth. Until approximately 2.5 billion years ago, all life consisted of microorganisms – archaea, bacteria, and single-celled protozoans and protists. Biodiversity grew fast during the Phanerozoic (the last 540 million years), especially during the so-called Cambrian explosion—a period during which nearly every phylum of multicellular organisms first appeared. However, recent studies suggest that this diversification had started earlier, at least in the Ediacaran, and that it continued in the Ordovician. Over the next 400 million years or so, invertebrate diversity showed little overall trend and vertebrate diversity shows an overall exponential trend. This dramatic rise in diversity was marked by periodic, massive losses of diversity classified as mass extinction events. A significant loss occurred in anamniotic limbed vertebrates when rainforests collapsed in the Carboniferous, but amniotes seem to have been little affected by this event; their diversification slowed down later, around the Asselian/Sakmarian boundary, in the early Cisuralian (Early Permian), about 293 Ma ago. The worst was the Permian-Triassic extinction event, 251 million years ago. Vertebrates took 30 million years to recover from this event. The most recent major mass extinction event, the Cretaceous–Paleogene extinction event, occurred 66 million years ago. This period has attracted more attention than others because it resulted in the extinction of the dinosaurs, which were represented by many lineages at the end of the Maastrichtian, just before that extinction event. However, many other taxa were affected by this crisis, which affected even marine taxa, such as ammonites, which also became extinct around that time. The biodiversity of the past is called Paleobiodiversity. The fossil record suggests that the last few million years featured the greatest biodiversity in history. However, not all scientists support this view, since there is uncertainty as to how strongly the fossil record is biased by the greater availability and preservation of recent geologic sections. Some scientists believe that corrected for sampling artifacts, modern biodiversity may not be much different from biodiversity 300 million years ago, whereas others consider the fossil record reasonably reflective of the diversification of life. Estimates of the present global macroscopic species diversity vary from 2 million to 100 million, with a best estimate of somewhere near 9 million, the vast majority arthropods. Diversity appears to increase continually in the absence of natural selection. 
Diversification The existence of a global carrying capacity, limiting the amount of life that can live at once, is debated, as is the question of whether such a limit would also cap the number of species. While records of life in the sea show a logistic pattern of growth, life on land (insects, plants and tetrapods) shows an exponential rise in diversity. As one author states, "Tetrapods have not yet invaded 64 percent of potentially habitable modes and it could be that without human influence the ecological and taxonomic diversity of tetrapods would continue to increase exponentially until most or all of the available eco-space is filled." It also appears that diversity continues to increase over time, especially after mass extinctions. On the other hand, changes through the Phanerozoic correlate much better with the hyperbolic model (widely used in population biology, demography and macrosociology, as well as fossil biodiversity) than with exponential and logistic models. The latter models imply that changes in diversity are guided by a first-order positive feedback (more ancestors, more descendants) and/or a negative feedback arising from resource limitation, whereas the hyperbolic model implies a second-order positive feedback (the standard forms of these models are sketched at the end of this passage). Differences in the strength of the second-order feedback due to different intensities of interspecific competition might explain the faster rediversification of ammonoids in comparison to bivalves after the end-Permian extinction. The hyperbolic pattern of world population growth arises from a second-order positive feedback between population size and the rate of technological growth, and the hyperbolic character of biodiversity growth can be similarly accounted for by a feedback between diversity and community structure complexity. The similarity between the curves of biodiversity and human population probably comes from the fact that both are derived from the interference of the hyperbolic trend with cyclical and stochastic dynamics. Most biologists agree, however, that the period since human emergence is part of a new mass extinction, named the Holocene extinction event, caused primarily by the impact humans are having on the environment. It has been argued that the present rate of extinction is sufficient to eliminate most species on planet Earth within 100 years. New species are regularly discovered (on average between 5,000 and 10,000 new species each year, most of them insects) and many, though discovered, are not yet classified (estimates are that nearly 90% of all arthropods are not yet classified). Most terrestrial diversity is found in tropical forests and, in general, the land has more species than the ocean; some 8.7 million species may exist on Earth, of which some 2.1 million live in the ocean. Species diversity in geologic time frames It is estimated that 5 to 50 billion species have existed on the planet. Assuming that there may be a maximum of about 50 million species currently alive, it stands to reason that greater than 99% of the planet's species went extinct prior to the evolution of humans. Estimates on the number of Earth's current species range from 10 million to 14 million, of which about 1.2 million have been documented and over 86% have not yet been described. However, a May 2016 scientific report estimates that 1 trillion species are currently on Earth, with only one-thousandth of one percent described. The total amount of related DNA base pairs on Earth is estimated at 5.0 x 10^37, weighing 50 billion tonnes.
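To make the contrast between these diversification models concrete, their standard textbook forms can be written as follows; this is a minimal sketch in which N is diversity (or population size) and r, K and k are generic rate, carrying-capacity and feedback parameters, not values drawn from the studies discussed:

\frac{dN}{dt} = rN \qquad \text{(exponential: first-order positive feedback)}

\frac{dN}{dt} = rN\left(1 - \frac{N}{K}\right) \qquad \text{(logistic: first-order feedback limited by resources)}

\frac{dN}{dt} = kN^{2} \qquad \text{(hyperbolic: second-order positive feedback)}

Solving the last equation gives N(t) = 1/(k(t_c - t)), a curve that accelerates toward a finite-time singularity at t = t_c; this is the qualitative signature of the hyperbolic pattern referred to above, in contrast with the unbounded but slower exponential curve and the saturating logistic curve.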
For comparison with that estimate of the DNA on Earth, the total mass of the biosphere has been estimated to be as much as four trillion tons of carbon. In July 2016, scientists reported identifying a set of 355 genes from the last universal common ancestor (LUCA) of all organisms living on Earth. The age of Earth is about 4.54 billion years. The earliest undisputed evidence of life dates at least from 3.7 billion years ago, during the Eoarchean era, after a geological crust started to solidify following the earlier molten Hadean eon. There are microbial mat fossils found in 3.48 billion-year-old sandstone discovered in Western Australia. Other early physical evidence of a biogenic substance is graphite in 3.7 billion-year-old meta-sedimentary rocks discovered in Western Greenland. More recently, in 2015, "remains of biotic life" were found in 4.1 billion-year-old rocks in Western Australia. According to one of the researchers, "If life arose relatively quickly on Earth...then it could be common in the universe." Role and benefits of biodiversity Ecosystem services There have been many claims about biodiversity's effect on ecosystem services, especially provisioning and regulating services. Some of those claims have been validated, some are incorrect and some lack enough evidence to draw definitive conclusions. Ecosystem services have been grouped into three types: provisioning services, which involve the production of renewable resources (e.g. food, wood, fresh water); regulating services, which are those that lessen environmental change (e.g. climate regulation, pest/disease control); and cultural services, which represent human value and enjoyment (e.g. landscape aesthetics, cultural heritage, outdoor recreation and spiritual significance). Experiments with controlled environments have shown that humans cannot easily build ecosystems to support human needs; for example, insect pollination cannot be mimicked, though there have been attempts to create artificial pollinators using unmanned aerial vehicles. The economic activity of pollination alone represented between $2.1 and $14.6 billion in 2003. Other sources have reported somewhat conflicting results, and in 1997 Robert Costanza and his colleagues reported the estimated global value of ecosystem services (not captured in traditional markets) at an average of $33 trillion annually. Provisioning services With regard to provisioning services, greater diversity has the following benefits: greater species diversity of plants increases fodder yield (synthesis of 271 experimental studies); greater genetic diversity of plants (i.e. diversity within a single species) increases overall crop yield (synthesis of 575 experimental studies), although another review of 100 experimental studies reported mixed evidence; and greater species diversity of trees increases overall wood production (synthesis of 53 experimental studies), although there is not enough data to draw a conclusion about the effect of tree trait diversity on wood production.
Regulating services With regard to regulating services, greater species diversity has the following benefits: Greater species diversity of fish increases the stability of fisheries yield (synthesis of 8 observational studies). Greater species diversity of plants increases carbon sequestration, though this finding relates only to the actual uptake of carbon dioxide and not to long-term storage (synthesis of 479 experimental studies). Greater species diversity of plants increases soil nutrient remineralization (synthesis of 103 experimental studies), increases soil organic matter (synthesis of 85 experimental studies) and decreases disease prevalence on plants (synthesis of 107 experimental studies). Greater species diversity of natural pest enemies decreases herbivorous pest populations (data from two separate reviews: a synthesis of 266 experimental and observational studies and a synthesis of 18 observational studies), although another review of 38 experimental studies found mixed support for this claim, suggesting that in cases where mutual intraguild predation occurs, a single predatory species is often more effective. Agriculture Agricultural diversity can be divided into two categories: intraspecific diversity, which includes the genetic variation within a single species, like the potato (Solanum tuberosum), which is composed of many different forms and types (e.g. in the U.S. one might compare russet potatoes with new potatoes or purple potatoes, all different, but all part of the same species, S. tuberosum). The other category of agricultural diversity is called interspecific diversity and refers to the number and types of different species. Agricultural diversity can also be divided by whether it is 'planned' diversity or 'associated' diversity. This is a functional classification that we impose, not an intrinsic feature of life or diversity. Planned diversity includes the crops which a farmer has encouraged, planted or raised (e.g. crops, covers, symbionts, and livestock, among others), which can be contrasted with the associated diversity that arrives among the crops, uninvited (e.g. herbivores, weed species and pathogens, among others). Associated biodiversity can be damaging or beneficial. Beneficial associated biodiversity includes, for instance, wild pollinators such as wild bees and syrphid flies that pollinate crops, as well as natural enemies and antagonists of pests and pathogens. Beneficial associated biodiversity occurs abundantly in crop fields and provides multiple ecosystem services, such as pest control, nutrient cycling and pollination, that support crop production. Although about 80 percent of humans' food supply comes from just 20 kinds of plants, humans use at least 40,000 species. Earth's surviving biodiversity provides resources for increasing the range of food and other products suitable for human use, although the present extinction rate shrinks that potential. Human health Biodiversity's relevance to human health is becoming an international political issue, as scientific evidence builds on the global health implications of biodiversity loss. This issue is closely linked with the issue of climate change, as many of the anticipated health risks of climate change are associated with changes in biodiversity (e.g. changes in populations and distribution of disease vectors, scarcity of fresh water, impacts on agricultural biodiversity and food resources, etc.).
This is because the species most likely to disappear are those that buffer against infectious disease transmission, while surviving species tend to be the ones that increase disease transmission, such as that of West Nile virus, Lyme disease and hantavirus, according to a study co-authored by Felicia Keesing, an ecologist at Bard College, and Drew Harvell, associate director for Environment of the Atkinson Center for a Sustainable Future (ACSF) at Cornell University. Some of the health issues influenced by biodiversity include dietary health and nutrition security, infectious disease, medical science and medicinal resources, and social and psychological health. Biodiversity is also known to have an important role in reducing disaster risk and in post-disaster relief and recovery efforts. Biodiversity provides critical support for drug discovery and the availability of medicinal resources. A significant proportion of drugs are derived, directly or indirectly, from biological sources: at least 50% of the pharmaceutical compounds on the US market are derived from plants, animals and microorganisms, while about 80% of the world population depends on medicines from nature (used in either modern or traditional medical practice) for primary healthcare. Only a tiny fraction of wild species has been investigated for medical potential. Marine ecosystems are particularly important, although inappropriate bioprospecting can increase biodiversity loss, as well as violate the laws of the communities and states from which the resources are taken. Business and industry Many industrial materials derive directly from biological sources. These include building materials, fibers, dyes, rubber, and oil. Biodiversity is also important to the security of resources such as water, timber, paper, fiber, and food. As a result, biodiversity loss is a significant risk factor in business development and a threat to long-term economic sustainability. Cultural and aesthetic value Philosophically, it could be argued that biodiversity has intrinsic aesthetic and spiritual value to mankind in and of itself. This idea can be used as a counterweight to the notion that tropical forests and other ecological realms are only worthy of conservation because of the services they provide. Biodiversity also affords many non-material benefits including spiritual and aesthetic values, knowledge systems and education. Measuring biodiversity Analytical limits Less than 1% of all species that have been described have been studied beyond noting their existence. The vast majority of Earth's species are microbial. Contemporary biodiversity physics is "firmly fixated on the visible [macroscopic] world". For example, microbial life is metabolically and environmentally more diverse than multicellular life (see e.g., extremophile). "On the tree of life, based on analyses of small-subunit ribosomal RNA, visible life consists of barely noticeable twigs. The inverse relationship of size and population recurs higher on the evolutionary ladder—to a first approximation, all multicellular species on Earth are insects". Insect extinction rates are high—supporting the Holocene extinction hypothesis. Biodiversity changes (other than losses) Natural seasonal variations Biodiversity naturally varies due to seasonal shifts. Spring's arrival enhances biodiversity as numerous species breed and feed, while winter's onset temporarily reduces it as some insects perish and migrating animals leave.
Additionally, the seasonal fluctuation in plant and invertebrate populations influences biodiversity. Introduced and invasive species Barriers such as large rivers, seas, oceans, mountains and deserts encourage diversity by enabling independent evolution on either side of the barrier, via the process of allopatric speciation. The term invasive species is applied to species that breach the natural barriers that would normally keep them constrained. Without barriers, such species occupy new territory, often supplanting native species by occupying their niches, or by using resources that would normally sustain native species. Species are increasingly being moved by humans (on purpose and accidentally). Some studies say that diverse ecosystems are more resilient and resist invasive plants and animals. Many studies cite effects of invasive species on natives, but not extinctions. Invasive species seem to increase local diversity (alpha diversity), which decreases the turnover of diversity between sites (beta diversity); a short worked sketch of these terms is given below. Overall gamma diversity may nevertheless be lowered because species are going extinct for other reasons, but even some of the most insidious invaders (e.g. Dutch elm disease, emerald ash borer, chestnut blight in North America) have not caused their host species to become extinct. Extirpation, population decline and homogenization of regional biodiversity are much more common. Human activities have frequently been the cause of invasive species circumventing their barriers, by introducing them for food and other purposes. Human activities therefore allow species to migrate to new areas (and thus become invasive) on time scales much shorter than those historically required for a species to extend its range. At present, several countries have already imported so many exotic species, particularly agricultural and ornamental plants, that their indigenous fauna/flora may be outnumbered. For example, the introduction of kudzu from Southeast Asia to Canada and the United States has threatened biodiversity in certain areas. Other examples are pines, which have invaded forests, shrublands and grasslands in the southern hemisphere. Hybridization and genetic pollution Endemic species can be threatened with extinction through the process of genetic pollution, i.e. uncontrolled hybridization, introgression and genetic swamping. Genetic pollution leads to homogenization or replacement of local genomes as a result of a numerical and/or fitness advantage of an introduced species. Hybridization and introgression are side-effects of introduction and invasion. These phenomena can be especially detrimental to rare species that come into contact with more abundant ones. The abundant species can interbreed with the rare species, swamping its gene pool. This problem is not always apparent from morphological (outward appearance) observations alone. Some degree of gene flow is a normal part of adaptation, and not all gene and genotype constellations can be preserved. However, hybridization with or without introgression may, nevertheless, threaten a rare species' existence. Conservation Conservation biology matured in the mid-20th century as ecologists, naturalists and other scientists began to research and address issues pertaining to global biodiversity declines. The conservation ethic advocates management of natural resources for the purpose of sustaining biodiversity in species, ecosystems, the evolutionary process and human culture and society.
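The worked sketch referenced in the invasive-species passage above: a minimal illustration of alpha, beta and gamma diversity using Whittaker's multiplicative partition, beta = gamma / mean alpha. The sites and species lists are invented purely for illustration.

```python
# Whittaker's multiplicative diversity partition: beta = gamma / mean(alpha).
# The sites and their species lists are invented purely for illustration.

sites = {
    "site_A": {"oak", "maple", "birch"},
    "site_B": {"oak", "pine"},
    "site_C": {"pine", "spruce", "fir", "birch"},
}

alpha_values = [len(species) for species in sites.values()]  # local (alpha) richness
mean_alpha = sum(alpha_values) / len(alpha_values)

gamma = len(set().union(*sites.values()))                    # regional (gamma) richness
beta = gamma / mean_alpha                                    # turnover between sites

print(f"mean alpha = {mean_alpha:.2f}, gamma = {gamma}, beta = {beta:.2f}")

# Adding one widespread invader to every site raises each alpha by 1 but adds
# only one species to gamma, so beta falls -- the homogenization effect
# described in the text.
```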
Conservation biology is reforming around strategic plans to protect biodiversity. Preserving global biodiversity is a priority in strategic conservation plans that are designed to engage public policy and concerns affecting local, regional and global scales of communities, ecosystems and cultures. Action plans identify ways of sustaining human well-being, employing natural capital, macroeconomic policies including economic incentives, and ecosystem services. In the EU Directive 1999/22/EC zoos are described as having a role in the preservation of the biodiversity of wildlife animals by conducting research or participation in breeding programs. Protection and restoration techniques Removal of exotic species will allow the species that they have negatively impacted to recover their ecological niches. Exotic species that have become pests can be identified taxonomically (e.g., with Digital Automated Identification SYstem (DAISY), using the barcode of life). Removal is practical only given large groups of individuals due to the economic cost. As sustainable populations of the remaining native species in an area become assured, "missing" species that are candidates for reintroduction can be identified using databases such as the Encyclopedia of Life and the Global Biodiversity Information Facility. Biodiversity banking places a monetary value on biodiversity. One example is the Australian Native Vegetation Management Framework. Gene banks are collections of specimens and genetic material. Some banks intend to reintroduce banked species to the ecosystem (e.g., via tree nurseries). Reduction and better targeting of pesticides allows more species to survive in agricultural and urbanized areas. Location-specific approaches may be less useful for protecting migratory species. One approach is to create wildlife corridors that correspond to the animals' movements. National and other boundaries can complicate corridor creation. Protected areas Protected areas, including forest reserves and biosphere reserves, serve many functions including for affording protection to wild animals and their habitat. Protected areas have been set up all over the world with the specific aim of protecting and conserving plants and animals. Some scientists have called on the global community to designate as protected areas of 30 percent of the planet by 2030, and 50 percent by 2050, in order to mitigate biodiversity loss from anthropogenic causes. The target of protecting 30% of the area of the planet by the year 2030 (30 by 30) was adopted by almost 200 countries in the 2022 United Nations Biodiversity Conference. At the moment of adoption (December 2022) 17% of land territory and 10% of ocean territory were protected. In a study published 4 September 2020 in Science Advances researchers mapped out regions that can help meet critical conservation and climate goals. Protected areas safeguard nature and cultural resources and contribute to livelihoods, particularly at local level. There are over 238 563 designated protected areas worldwide, equivalent to 14.9 percent of the earth's land surface, varying in their extension, level of protection, and type of management (IUCN, 2018). The benefits of protected areas extend beyond their immediate environment and time. In addition to conserving nature, protected areas are crucial for securing the long-term delivery of ecosystem services. 
They provide numerous benefits including the conservation of genetic resources for food and agriculture, the provision of medicine and health benefits, the provision of water, recreation and tourism, and for acting as a buffer against disaster. Increasingly, there is acknowledgement of the wider socioeconomic values of these natural ecosystems and of the ecosystem services they can provide. National parks and wildlife sanctuaries A national park is a large natural or near natural area set aside to protect large-scale ecological processes, which also provide a foundation for environmentally and culturally compatible, spiritual, scientific, educational, recreational and visitor opportunities. These areas are selected by governments or private organizations to protect natural biodiversity along with its underlying ecological structure and supporting environmental processes, and to promote education and recreation. The International Union for Conservation of Nature (IUCN), and its World Commission on Protected Areas (WCPA), has defined "National Park" as its Category II type of protected areas. Wildlife sanctuaries aim only at the conservation of species Forest protected areas Forest protected areas are a subset of all protected areas in which a significant portion of the area is forest. This may be the whole or only a part of the protected area. Globally, 18 percent of the world's forest area, or more than 700 million hectares, fall within legally established protected areas such as national parks, conservation areas and game reserves. There is an estimated 726 million ha of forest in protected areas worldwide. Of the six major world regions, South America has the highest share of forests in protected areas, 31 percent. The forests play a vital role in harboring more than 45,000 floral and 81,000 faunal species of which 5150 floral and 1837 faunal species are endemic. In addition, there are 60,065 different tree species in the world. Plant and animal species confined to a specific geographical area are called endemic species. In forest reserves, rights to activities like hunting and grazing are sometimes given to communities living on the fringes of the forest, who sustain their livelihood partially or wholly from forest resources or products. Approximately 50 million hectares (or 24%) of European forest land is protected for biodiversity and landscape protection. Forests allocated for soil, water, and other ecosystem services encompass around 72 million hectares (32% of European forest area). Role of society Transformative change In 2019, a summary for policymakers of the largest, most comprehensive study to date of biodiversity and ecosystem services, the Global Assessment Report on Biodiversity and Ecosystem Services, was published by the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES). It stated that "the state of nature has deteriorated at an unprecedented and accelerating rate". To fix the problem, humanity will need a transformative change, including sustainable agriculture, reductions in consumption and waste, fishing quotas and collaborative water management. The concept of nature-positive is playing a role in mainstreaming the goals of the Global Biodiversity Framework (GBF) for biodiversity. The aim of mainstreaming is to embed biodiversity considerations into public and private practice to conserve and sustainably use biodiversity on global and local levels. 
The concept of nature-positive refers to the societal goal of halting and reversing biodiversity loss, measured from a baseline of 2020 levels, and of achieving full so-called "nature recovery" by 2050. Citizen science Citizen science, also known as public participation in scientific research, has been widely used in environmental sciences and is particularly popular in a biodiversity-related context. It has been used to enable scientists to involve the general public in biodiversity research, thereby enabling the scientists to collect data that they would otherwise not have been able to obtain. Volunteer observers have made significant contributions to on-the-ground knowledge about biodiversity, and recent improvements in technology have helped increase the flow and quality of occurrences from citizen sources. A 2016 study published in Biological Conservation documents the massive contributions that citizen scientists already make to data mediated by the Global Biodiversity Information Facility (GBIF). Despite some limitations of the dataset-level analysis, it is clear that nearly half of all occurrence records shared through the GBIF network come from datasets with significant volunteer contributions. Recording and sharing observations are enabled by several global-scale platforms, including iNaturalist and eBird. Legal status International United Nations Convention on Biological Diversity (1992) and Cartagena Protocol on Biosafety; UN BBNJ (High Seas Treaty) 2023, the intergovernmental conference on an international legally binding instrument under UNCLOS on the conservation and sustainable use of marine biological diversity of areas beyond national jurisdiction (GA resolution 72/249); Convention on International Trade in Endangered Species (CITES); Ramsar Convention (wetlands); Bonn Convention on Migratory Species; UNESCO Convention concerning the Protection of the World's Cultural and Natural Heritage (indirectly, by protecting biodiversity habitats); UNESCO Global Geoparks; regional conventions such as the Apia Convention; and bilateral agreements such as the Japan-Australia Migratory Bird Agreement. Global agreements such as the Convention on Biological Diversity give "sovereign national rights over biological resources" (not property). The agreements commit countries to "conserve biodiversity", "develop resources for sustainability" and "share the benefits" resulting from their use. Biodiverse countries that allow bioprospecting or collection of natural products expect a share of the benefits rather than allowing the individual or institution that discovers/exploits the resource to capture them privately. Bioprospecting can become a type of biopiracy when such principles are not respected. Sovereignty principles can rely upon what is better known as Access and Benefit Sharing Agreements (ABAs). The Convention on Biological Diversity implies informed consent between the source country and the collector, to establish which resource will be used and for what, and to settle on a fair agreement on benefit sharing. On 19 December 2022, during the 2022 United Nations Biodiversity Conference, every country on Earth, with the exception of the United States and the Holy See, signed onto the agreement, which includes protecting 30% of land and oceans by 2030 (30 by 30) and 22 other targets intended to reduce biodiversity loss. The agreement also includes restoring 30% of Earth's degraded ecosystems and increasing funding for biodiversity issues.
European Union In May 2020, the European Union published its Biodiversity Strategy for 2030. The biodiversity strategy is an essential part of the climate change mitigation strategy of the European Union. Of the 25% of the European budget that will go to fighting climate change, a large part will go to restoring biodiversity and to nature-based solutions. The EU Biodiversity Strategy for 2030 includes the following targets: Protect 30% of the sea territory and 30% of the land territory, especially old-growth forests. Plant 3 billion trees by 2030. Restore at least 25,000 kilometers of rivers so they become free-flowing. Reduce the use of pesticides by 50% by 2030. Increase organic farming; in the linked EU Farm to Fork programme, the target is to make 25% of EU agriculture organic by 2030. Increase biodiversity in agriculture. Give €20 billion per year to the issue and make it part of business practice. Approximately half of global GDP depends on nature. In Europe, many parts of the economy that generate trillions of euros per year depend on nature. The benefits of Natura 2000 alone in Europe are €200–300 billion per year. National level laws Biodiversity is taken into account in some political and judicial decisions: The relationship between law and ecosystems is very ancient and has consequences for biodiversity. It is related to private and public property rights. It can define protection for threatened ecosystems, but also some rights and duties (for example, fishing and hunting rights). Law regarding species is more recent. It defines species that must be protected because they may be threatened by extinction. The U.S. Endangered Species Act is an example of an attempt to address the "law and species" issue. Laws regarding gene pools are only about a century old. Domestication and plant breeding methods are not new, but advances in genetic engineering have led to tighter laws covering distribution of genetically modified organisms, gene patents and process patents. Governments struggle to decide whether to focus on, for example, genes, genomes, or organisms and species. Uniform approval for use of biodiversity as a legal standard has not been achieved, however. Bosselman argues that biodiversity should not be used as a legal standard, claiming that the remaining areas of scientific uncertainty cause unacceptable administrative waste and increase litigation without promoting preservation goals. India passed the Biological Diversity Act in 2002 for the conservation of biological diversity in India. The Act also provides mechanisms for equitable sharing of benefits from the use of traditional biological resources and knowledge. History of the term 1916 – The term biological diversity was first used by J. Arthur Harris in "The Variable Desert", Scientific American: "The bare statement that the region contains a flora rich in genera and species and of diverse geographic origin or affinity is entirely inadequate as a description of its real biological diversity." 1967 – Raymond F. Dasmann used the term biological diversity in reference to the richness of living nature that conservationists should protect in his book A Different Kind of Country. 1974 – The term natural diversity was introduced by John Terborgh. 1980 – Thomas Lovejoy introduced the term biological diversity to the scientific community in a book. It rapidly became commonly used. 1985 – According to Edward O. Wilson, the contracted form biodiversity was coined by W. G.
Rosen: "The National Forum on BioDiversity ... was conceived by Walter G.Rosen ... Dr. Rosen represented the NRC/NAS throughout the planning stages of the project. Furthermore, he introduced the term biodiversity". 1985 – The term "biodiversity" appears in the article, "A New Plan to Conserve the Earth's Biota" by Laura Tangley. 1988 – The term biodiversity first appeared in publication. 1988 to Present – The United Nations Environment Programme (UNEP) Ad Hoc Working Group of Experts on Biological Diversity in began working in November 1988, leading to the publication of the draft Convention on Biological Diversity in May 1992. Since this time, there have been 15 Conferences of the Parties (COPs) to discuss potential global political responses to biodiversity loss. Most recently COP 15 in Montreal, Canada in 2022. See also Ecological indicator Genetic diversity Global biodiversity Index of biodiversity articles International Day for Biological Diversity Megadiverse countries Soil biodiversity Species diversity 30 by 30 References External links Assessment Report on Diverse Values and Valuation of Nature by the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES), 2022. NatureServe: This site serves as a portal for accessing several types of publicly available biodiversity data Biodiversity Synthesis Report (PDF) by the Millennium Ecosystem Assessment (MA, 2005) World Map of Biodiversity an interactive map from the United Nations Environment Programme World Conservation Monitoring Centre Biodiversity Heritage Library – Open access digital library of historical taxonomic literature Biodiversity PMC – Open access digital library of biodiversity and ecological literature Mapping of biodiversity Encyclopedia of Life – Documenting all species of life on Earth. Biodiversity Biogeography Population genetics Species
0.780817
0.999473
0.780406
Dioecy
Dioecy ( ; ; adj. dioecious, ) is a characteristic of certain species that have distinct unisexual individuals, each producing either male or female gametes, either directly (in animals) or indirectly (in seed plants). Dioecious reproduction is biparental reproduction. Dioecy has costs, since only the female part of the population directly produces offspring. It is one method for excluding self-fertilization and promoting allogamy (outcrossing), and thus tends to reduce the expression of recessive deleterious mutations present in a population. Plants have several other methods of preventing self-fertilization including, for example, dichogamy, herkogamy, and self-incompatibility. In zoology In zoology, dioecy means that an animal is either male or female, in which case the synonym gonochory is more often used. For example, most animal species are gonochoric, almost all vertebrate species are gonochoric, and all bird and mammal species are gonochoric. Dioecy may also describe colonies within a species, such as the colonies of Siphonophorae (Portuguese man-of-war), which may be either dioecious or monoecious. In botany Land plants (embryophytes) differ from animals in that their life cycle involves alternation of generations. In animals, typically an individual produces gametes of one kind, either sperm or egg cells. The gametes have half the number of chromosomes of the individual producing them, so are haploid. Without further dividing, a sperm and an egg cell fuse to form a zygote that develops into a new individual. In land plants, by contrast, one generation – the sporophyte generation – consists of individuals that produce haploid spores rather than haploid gametes. Spores do not fuse, but germinate by dividing repeatedly by mitosis to give rise to haploid multicellular individuals, the gametophytes, which produce gametes. A male gamete and a female gamete then fuse to produce a new diploid sporophyte. In bryophytes (mosses, liverworts and hornworts), the gametophytes are fully independent plants. Seed plant gametophytes are dependent on the sporophyte and develop within the spores, a condition known as endospory. In flowering plants, the male gametophytes develop within pollen grains produced by the sporophyte's stamens, and the female gametophytes develop within ovules produced by the sporophyte's carpels. The sporophyte generation of a seed plant is called "monoecious" when each sporophyte plant has both kinds of spore-producing organ but in separate flowers or cones. For example, a single flowering plant of a monoecious species has both functional stamens and carpels, in separate flowers. The sporophyte generation of seed plants is called "dioecious" when each sporophyte plant has only one kind of spore-producing organ, all of whose spores give rise either to male gametophytes, which produce only male gametes (sperm), or to female gametophytes, which produce only female gametes (egg cells). For example, a single flowering plant sporophyte of a fully dioecious species like holly has either flowers with functional stamens producing pollen containing male gametes (staminate or 'male' flowers), or flowers with functional carpels producing female gametes (carpellate or 'female' flowers), but not both. (See Plant reproductive morphology for further details, including more complex cases, such as gynodioecy and androdioecy.) Slightly different terms, dioicous and monoicous, may be used for the gametophyte generation of non-vascular plants, although dioecious and monoecious are also used. 
A dioicous gametophyte either produces only male gametes (sperm) or produces only female gametes (egg cells). About 60% of liverworts are dioicous. Dioecy occurs in a wide variety of plant groups. Examples of dioecious plant species include ginkgos, willows, cannabis and African teak. As its specific name implies, the perennial stinging nettle Urtica dioica is dioecious, while the annual nettle Urtica urens is monoecious. Dioecious flora are predominant in tropical environments. About 65% of gymnosperm species are dioecious, but almost all conifers are monoecious. In gymnosperms, the sexual systems dioecy and monoecy are strongly correlated with the mode of pollen dispersal, monoecious species are predominantly wind dispersed (anemophily) and dioecious species animal-dispersed (zoophily). About 6 percent of flowering plant species are entirely dioecious and about 7% of angiosperm genera contain some dioecious species. Dioecy is more common in woody plants, and heterotrophic species. In most dioecious plants, whether male or female gametophytes are produced is determined genetically, but in some cases it can be determined by the environment, as in Arisaema species. Certain algae, such as some species of Polysiphonia, are dioecious. Dioecy is prevalent in the brown algae (Phaeophyceae) and may have been the ancestral state in that group. Evolution of dioecy In plants, dioecy has evolved independently multiple times either from hermaphroditic species or from monoecious species. A previously untested hypothesis is that this reduces inbreeding; dioecy has been shown to be associated with increased genetic diversity and greater protection against deleterious mutations. Regardless of the evolutionary pathway the intermediate states need to have fitness advantages compared to cosexual flowers in order to survive. Dioecy evolves due to male or female sterility, although it is unlikely that mutations for male and female sterility occurred at the same time. In angiosperms unisexual flowers evolve from bisexual ones. Dioecy occurs in almost half of plant families, but only in a minority of genera, suggesting recent evolution. For 160 families that have dioecious species, dioecy is thought to have evolved more than 100 times. In the family Caricaceae, dioecy is likely the ancestral sexual system. From monoecy Dioecious flowering plants can evolve from monoecious ancestors that have flowers containing both functional stamens and functional carpels. Some authors argue monoecy and dioecy are related. In the genus Sagittaria, since there is a distribution of sexual systems, it has been postulated that dioecy evolved from monoecy through gynodioecy mainly from mutations that resulted in male sterility. However, since the ancestral state is unclear, more work is needed to clarify the evolution of dioecy via monoecy. From hermaphroditism Dioecy usually evolves from hermaphroditism through gynodioecy but may also evolve through androdioecy, through distyly or through heterostyly. In the Asteraceae, dioecy may have evolved independently from hermaphroditism at least 5 or 9 times. The reverse transition, from dioecy back to hermaphroditism has also been observed, both in Asteraceae and in bryophytes, with a frequency about half of that for the forward transition. In Silene, since there is no monoecy, it is suggested that dioecy evolved through gynodioecy. In mycology Very few dioecious fungi have been discovered. 
Monoecy and dioecy in fungi refer to the donor and recipient roles in mating, where a nucleus is transferred from one haploid hypha to another, and the two nuclei then present in the same cell merge by karyogamy to form a zygote. The definition avoids reference to male and female reproductive structures, which are rare in fungi. An individual of a dioecious fungal species not only requires a partner for mating, but performs only one of the roles in nuclear transfer, as either the donor or the recipient. A monoecious fungal species can perform both roles, but may not be self-compatible. Adaptive benefit Dioecy has the demographic disadvantage compared with hermaphroditism that only about half of reproductive adults are able to produce offspring. Dioecious species must therefore have fitness advantages to compensate for this cost through increased survival, growth, or reproduction. Dioecy excludes self-fertilization and promotes allogamy (outcrossing), and thus tends to reduce the expression of recessive deleterious mutations present in a population. In trees, compensation is realized mainly through increased seed production by females. This in turn is facilitated by a lower contribution of reproduction to population growth, which results in no demonstrable net costs of having males in the population compared to being hermaphroditic. Dioecy may also accelerate or retard lineage diversification in angiosperms. Dioecious lineages are more diversified in certain genera, but less in others. An analysis suggested that dioecy neither consistently places a strong brake on diversification, nor strongly drives it. See also Gonochorism Hermaphrodite Plant reproductive morphology Self-incompatibility in plants Sexual dimorphism Trioecy References Bibliography Plant sexuality Sexual reproduction Reproductive system Sexual system
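To put a number on the demographic cost of dioecy noted under Adaptive benefit above, here is a minimal back-of-the-envelope sketch; the population size, sex ratio and per-plant seed counts are arbitrary assumptions chosen purely for illustration.

```python
# Back-of-the-envelope cost of dioecy: only females set seed, so to match a
# hermaphroditic population of the same size, female seed output must rise.
# All numbers here are arbitrary assumptions chosen for illustration.

adults = 1000                     # reproductive adults in each population
seeds_per_hermaphrodite = 50      # assumed seed set of one hermaphroditic plant

hermaphrodite_output = adults * seeds_per_hermaphrodite

female_fraction = 0.5             # assumed 1:1 sex ratio in the dioecious population
females = int(adults * female_fraction)

required_seeds_per_female = hermaphrodite_output / females

print(f"hermaphroditic population: {hermaphrodite_output} seeds")
print(f"each dioecious female must average {required_seeds_per_female:.0f} seeds "
      f"({required_seeds_per_female / seeds_per_hermaphrodite:.1f}x) just to match it")
```

With a 1:1 sex ratio, matching the hermaphroditic output requires roughly a doubling of seed production per female, which is the kind of compensation through increased female seed production described above for trees.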
0.784932
0.993912
0.780154
Fecundity
Fecundity is defined in two ways: in human demography, it is the potential for reproduction of a recorded population as opposed to a sole organism, while in population biology, it is considered similar to fertility, the natural capability to produce offspring, measured by the number of gametes (eggs), seed set, or asexual propagules. Human demography Human demography considers only human fecundity, at its culturally differing rates, while population biology studies all organisms. The term fecundity in population biology is often used to describe the rate of offspring production after one time step (often annual). In this sense, fecundity may include both birth rates and survival of young to that time step. While levels of fecundity vary geographically, it is generally a consistent feature of each culture. Fecundation is another term for fertilization. In obstetrics and gynecology, fecundability is the probability of becoming pregnant in a single menstrual cycle, and fecundity is the probability of achieving a live birth within a single cycle. Population ecology In ecology, fecundity is a measure of the reproductive capacity of an individual or population, typically restricted to the reproductive individuals. It can be equally applied to sexual and asexual reproduction, as the purpose of fecundity is to measure how many new individuals are being added to a population. Fecundity may be defined differently for different ecological studies to reflect the specific data a study examined. For example, some studies use apparent fecundity to indicate that their data cover a particular moment in time rather than the species' entire life span. In other studies, these definitions are changed to better quantify fecundity for the organism in question. This is particularly true for modular organisms, as their modular organization differs from the more typical unitary organism, for which fecundity is best defined through a count of offspring. Life history patterns (parity) Parity is the organization of fecundity into two distinct types: semelparity and iteroparity. Semelparity occurs when an organism reproduces only once in its lifetime, with death being a part of its reproductive strategy. These species produce many offspring during their one reproductive event, giving them a potential advantage when it comes to fecundity, as they are producing more offspring. Iteroparity is when a species reproduces multiple times over its lifetime. The strategy of these species is to protect against the unpredictable survival of their offspring: if their first litter dies, they can reproduce again and replace it. It also allows the organism to care for its offspring, as the parent will be alive during their development. Factors affecting fecundity A multitude of factors potentially affect rates of fecundity, for example ontogeny, population density and latitude. Ontogeny Fecundity in iteroparous organisms often increases with age but can decline at older ages. Several hypotheses have been proposed to explain this relationship. For species with declining growth rates after maturity, the suggestion is that as the organism's growth rate decreases, more resources can be allocated to reproduction. Other possible explanations exist for this pattern for organisms that do not grow after maturity.
These explanations include: increased competence of older individuals; the fact that less fit individuals have already died off; and, since life expectancy decreases with age, the possibility that older individuals allocate more resources to reproduction at the expense of survival. In semelparous species, age is frequently a poor predictor of fecundity. In these cases, size is likely a better predictor. Population density Population density is often observed to negatively affect fecundity, making fecundity density-dependent. The reasoning behind this observation is that once an area is overcrowded, fewer resources are available for each individual. Thus there may be insufficient energy to reproduce in high numbers, and offspring survival may be low. Occasionally, high density can stimulate the production of offspring, particularly in plant species, because if there are more plants, there is more food to lure pollinators, who will then spread that plant's pollen and allow for more reproduction. Latitude There are many different hypotheses to explain the relationship between latitude and fecundity. One claims that fecundity increases predictably with increasing latitude. Reginald Moreau proposed this hypothesis, the explanation being that there is higher mortality in seasonal environments. A different hypothesis, by David Lack, attributed the positive relationship to the change in daylight hours found with changing latitudes. These differing daylight hours, in turn, change the hours in which a parent can collect food. He also accounts for a drop in fecundity at the poles due to their extreme day lengths, which can exhaust the parent. Fecundity intensity due to seasonality is a hypothesis proposed by Phillip Ashmole. He suggests latitude affects fecundity because seasonality increases with increasing latitude. This theory relies on the mortality concept proposed by Moreau but focuses on how seasonality affects mortality and, in turn, population densities. Thus, in places with higher mortality, there is more food available per survivor, leading to higher fecundity. Another hypothesis claims that seasonality affects fecundity through varying lengths of breeding seasons. This idea suggests that shorter breeding seasons select for a larger clutch size to compensate for the reduced reproduction frequency, thus increasing those species' fecundity. Fecundity and fitness Fecundity is a significant component of fitness. Fecundity selection builds on that idea: it claims that the genetic selection of traits that increase an organism's fecundity is, in turn, advantageous to the organism's fitness. Fecundity schedule Fecundity schedules are data tables that display the patterns of birth amongst individuals of different ages in a population. These are typically found in life tables under the columns Fx and mx. Fx lists the total number of young produced by each age class, and mx is the mean number of young produced, found by dividing the number of young produced by the number of surviving individuals. For example, if an age class has 12 individuals and they produced 16 surviving young, Fx is 16 and mx is about 1.33 (16/12); a short worked sketch is given below. Infecundity Infecundity is a term meaning "inability to conceive after several years of exposure to the risk of pregnancy." This usage is prevalent in medicine, especially reproductive medicine, and in demographics.
Infecundity would be synonymous with infertility, but in demographic and medical use fertility (and thus its antonym infertility) may refer to quantity and rates of offspring produced, rather than any physiological or other limitations on reproduction. Additional information Additionally, social trends and societal norms may influence fecundity, though this influence tends to be temporary. Indeed, it is considered impossible to cease reproduction based on social factors, and fecundity tends to rise after a brief decline. Fecundity has also been shown to increase in ungulates with relation to warmer weather. In sexual evolutionary biology, especially in sexual selection, fecundity is contrasted to reproductivity. See also Biological life cycle Birth rate Fecundity selection Natalism Population ecology References Fertility Population Philosophy of science Human reproduction Demographics Infertility
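The worked sketch referenced in the fecundity schedule paragraph above: a minimal calculation of Fx and mx from hypothetical life-table counts. The age classes and numbers are invented for illustration; the third row reproduces the 12-individual, 16-young example from the text.

```python
# Minimal fecundity-schedule calculation from a hypothetical life table.
# Fx = total young produced by an age class; mx = mean young per surviving individual.

life_table = [
    # (age class, surviving individuals, total young produced by the class)
    (0, 50, 0),
    (1, 30, 45),
    (2, 12, 16),   # the example from the text: mx = 16 / 12 ~ 1.33
    (3, 4, 2),
]

for age, survivors, total_young in life_table:
    fx = total_young
    mx = total_young / survivors if survivors else 0.0
    print(f"age {age}: Fx = {fx:3d}, mx = {mx:.2f}")
```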
0.783565
0.9955
0.780038
Epigenetics
In biology, epigenetics is the study of heritable traits, or stable changes of cell function, that happen without changes to the DNA sequence. The Greek prefix epi- (meaning "over, outside of, around") in epigenetics implies features that are "on top of" or "in addition to" the traditional (DNA sequence based) genetic mechanism of inheritance. Epigenetics usually involves a change that is not erased by cell division, and affects the regulation of gene expression. Such effects on cellular and physiological phenotypic traits may result from environmental factors, or be part of normal development. Epigenetic factors can also lead to cancer. The term also refers to the mechanism of changes: functionally relevant alterations to the genome that do not involve mutation of the nucleotide sequence. Examples of mechanisms that produce such changes are DNA methylation and histone modification, each of which alters how genes are expressed without altering the underlying DNA sequence. Further, non-coding RNA sequences have been shown to play a key role in the regulation of gene expression. Gene expression can be controlled through the action of repressor proteins that attach to silencer regions of the DNA. These epigenetic changes may last through cell divisions for the duration of the cell's life, and may also last for multiple generations, even though they do not involve changes in the underlying DNA sequence of the organism; instead, non-genetic factors cause the organism's genes to behave (or "express themselves") differently. One example of an epigenetic change in eukaryotic biology is the process of cellular differentiation. During morphogenesis, totipotent stem cells become the various pluripotent cell lines of the embryo, which in turn become fully differentiated cells. In other words, as a single fertilized egg cell – the zygote – continues to divide, the resulting daughter cells change into all the different cell types in an organism, including neurons, muscle cells, epithelium, endothelium of blood vessels, etc., by activating some genes while inhibiting the expression of others. Definitions The term epigenesis has a generic meaning of "extra growth" that has been used in English since the 17th century. In scientific publications, the term epigenetics started to appear in the 1930s. However, its contemporary meaning emerged only in the 1990s. A definition of the concept of epigenetic trait as a "stably heritable phenotype resulting from changes in a chromosome without alterations in the DNA sequence" was formulated at a Cold Spring Harbor meeting in 2008, although alternate definitions that include non-heritable traits are still being used widely. Waddington's canalisation, 1940s The hypothesis of epigenetic changes affecting the expression of chromosomes was put forth by the Russian biologist Nikolai Koltsov. From the generic meaning, and the associated adjective epigenetic, British embryologist C. H. Waddington coined the term epigenetics in 1942 as pertaining to epigenesis, in parallel to Valentin Haecker's 'phenogenetics'. Epigenesis in the context of the biology of that period referred to the differentiation of cells from their initial totipotent state during embryonic development. When Waddington coined the term, the physical nature of genes and their role in heredity was not known.
He used it instead as a conceptual model of how genetic components might interact with their surroundings to produce a phenotype; he used the phrase "epigenetic landscape" as a metaphor for biological development. Waddington held that cell fates were established during development in a process he called canalisation much as a marble rolls down to the point of lowest local elevation. Waddington suggested visualising increasing irreversibility of cell type differentiation as ridges rising between the valleys where the marbles (analogous to cells) are travelling. In recent times, Waddington's notion of the epigenetic landscape has been rigorously formalized in the context of the systems dynamics state approach to the study of cell-fate. Cell-fate determination is predicted to exhibit certain dynamics, such as attractor-convergence (the attractor can be an equilibrium point, limit cycle or strange attractor) or oscillatory. Contemporary Robin Holliday defined in 1990 epigenetics as "the study of the mechanisms of temporal and spatial control of gene activity during the development of complex organisms." More recent usage of the word in biology follows stricter definitions. As defined by Arthur Riggs and colleagues, it is "the study of mitotically and/or meiotically heritable changes in gene function that cannot be explained by changes in DNA sequence." The term has also been used, however, to describe processes which have not been demonstrated to be heritable, such as some forms of histone modification. Consequently, there are attempts to redefine "epigenetics" in broader terms that would avoid the constraints of requiring heritability. For example, Adrian Bird defined epigenetics as "the structural adaptation of chromosomal regions so as to register, signal or perpetuate altered activity states." This definition would be inclusive of transient modifications associated with DNA repair or cell-cycle phases as well as stable changes maintained across multiple cell generations, but exclude others such as templating of membrane architecture and prions unless they impinge on chromosome function. Such redefinitions however are not universally accepted and are still subject to debate. The NIH "Roadmap Epigenomics Project", which ran from 2008 to 2017, uses the following definition: "For purposes of this program, epigenetics refers to both heritable changes in gene activity and expression (in the progeny of cells or of individuals) and also stable, long-term alterations in the transcriptional potential of a cell that are not necessarily heritable." In 2008, a consensus definition of the epigenetic trait, a "stably heritable phenotype resulting from changes in a chromosome without alterations in the DNA sequence," was made at a Cold Spring Harbor meeting. The similarity of the word to "genetics" has generated many parallel usages. The "epigenome" is a parallel to the word "genome", referring to the overall epigenetic state of a cell, and epigenomics refers to global analyses of epigenetic changes across the entire genome. The phrase "genetic code" has also been adapted – the "epigenetic code" has been used to describe the set of epigenetic features that create different phenotypes in different cells from the same underlying DNA sequence. 
Taken to its extreme, the "epigenetic code" could represent the total state of the cell, with the position of each molecule accounted for in an epigenomic map, a diagrammatic representation of the gene expression, DNA methylation and histone modification status of a particular genomic region. More typically, the term is used in reference to systematic efforts to measure specific, relevant forms of epigenetic information such as the histone code or DNA methylation patterns. Mechanisms Covalent modification of either DNA (e.g. cytosine methylation and hydroxymethylation) or of histone proteins (e.g. lysine acetylation, lysine and arginine methylation, serine and threonine phosphorylation, and lysine ubiquitination and sumoylation) play central roles in many types of epigenetic inheritance. Therefore, the word "epigenetics" is sometimes used as a synonym for these processes. However, this can be misleading. Chromatin remodeling is not always inherited, and not all epigenetic inheritance involves chromatin remodeling. In 2019, a further lysine modification appeared in the scientific literature linking epigenetics modification to cell metabolism, i.e. lactylation Because the phenotype of a cell or individual is affected by which of its genes are transcribed, heritable transcription states can give rise to epigenetic effects. There are several layers of regulation of gene expression. One way that genes are regulated is through the remodeling of chromatin. Chromatin is the complex of DNA and the histone proteins with which it associates. If the way that DNA is wrapped around the histones changes, gene expression can change as well. Chromatin remodeling is accomplished through two main mechanisms: The first way is post translational modification of the amino acids that make up histone proteins. Histone proteins are made up of long chains of amino acids. If the amino acids that are in the chain are changed, the shape of the histone might be modified. DNA is not completely unwound during replication. It is possible, then, that the modified histones may be carried into each new copy of the DNA. Once there, these histones may act as templates, initiating the surrounding new histones to be shaped in the new manner. By altering the shape of the histones around them, these modified histones would ensure that a lineage-specific transcription program is maintained after cell division. The second way is the addition of methyl groups to the DNA, mostly at CpG sites, to convert cytosine to 5-methylcytosine. 5-Methylcytosine performs much like a regular cytosine, pairing with a guanine in double-stranded DNA. However, when methylated cytosines are present in CpG sites in the promoter and enhancer regions of genes, the genes are often repressed. When methylated cytosines are present in CpG sites in the gene body (in the coding region excluding the transcription start site) expression of the gene is often enhanced. Transcription of a gene usually depends on a transcription factor binding to a (10 base or less) recognition sequence at the enhancer that interacts with the promoter region of that gene (Gene expression#Enhancers, transcription factors, mediator complex and DNA loops in mammalian transcription). About 22% of transcription factors are inhibited from binding when the recognition sequence has a methylated cytosine. In addition, presence of methylated cytosines at a promoter region can attract methyl-CpG-binding domain (MBD) proteins. 
All MBDs interact with nucleosome remodeling and histone deacetylase complexes, which leads to gene silencing. In addition, another covalent modification involving methylated cytosine is its demethylation by TET enzymes. Hundreds of such demethylations occur, for instance, during learning and memory-forming events in neurons. There is frequently a reciprocal relationship between DNA methylation and histone lysine methylation. For instance, the methyl binding domain protein MBD1, attracted to and associating with methylated cytosine in a DNA CpG site, can also associate with H3K9 methyltransferase activity to methylate histone 3 at lysine 9. On the other hand, DNA maintenance methylation by DNMT1 appears to partly rely on recognition of histone methylation on the nucleosome present at the DNA site to carry out cytosine methylation on newly synthesized DNA. There is further crosstalk between DNA methylation carried out by DNMT3A and DNMT3B and histone methylation, so that there is a correlation between the genome-wide distribution of DNA methylation and histone methylation. Mechanisms of heritability of histone state are not well understood; however, much is known about the mechanism of heritability of DNA methylation state during cell division and differentiation. Heritability of methylation state depends on certain enzymes (such as DNMT1) that have a higher affinity for 5-methylcytosine than for cytosine. If this enzyme reaches a "hemimethylated" portion of DNA (where 5-methylcytosine is in only one of the two DNA strands), the enzyme will methylate the other half. However, it is now known that DNMT1 physically interacts with the protein UHRF1. UHRF1 has recently been recognized as essential for DNMT1-mediated maintenance of DNA methylation. UHRF1 is the protein that specifically recognizes hemi-methylated DNA, therefore bringing DNMT1 to its substrate to maintain DNA methylation. Although histone modifications occur throughout the entire sequence, the unstructured N-termini of histones (called histone tails) are particularly highly modified. These modifications include acetylation, methylation, ubiquitylation, phosphorylation, sumoylation, ribosylation and citrullination. Acetylation is the most highly studied of these modifications. For example, acetylation of the K14 and K9 lysines of the tail of histone H3 by histone acetyltransferase enzymes (HATs) is generally related to transcriptional competence. One mode of thinking is that this tendency of acetylation to be associated with "active" transcription is biophysical in nature. Because it normally has a positively charged nitrogen at its end, lysine can bind the negatively charged phosphates of the DNA backbone. The acetylation event converts the positively charged amine group on the side chain into a neutral amide linkage. This removes the positive charge, thus loosening the DNA from the histone. When this occurs, complexes like SWI/SNF and other transcriptional factors can bind to the DNA and allow transcription to occur. This is the "cis" model of epigenetic function. In other words, changes to the histone tails have a direct effect on the DNA itself. Another model of epigenetic function is the "trans" model. In this model, changes to the histone tails act indirectly on the DNA. For example, lysine acetylation may create a binding site for chromatin-modifying enzymes (or transcription machinery as well). This chromatin remodeler can then cause changes to the state of the chromatin.
Indeed, a bromodomain – a protein domain that specifically binds acetyl-lysine – is found in many enzymes that help activate transcription, including the SWI/SNF complex. It may be that acetylation acts in this and the previous way to aid in transcriptional activation. The idea that modifications act as docking modules for related factors is borne out by histone methylation as well. Methylation of lysine 9 of histone H3 has long been associated with constitutively transcriptionally silent chromatin (constitutive heterochromatin). It has been determined that a chromodomain (a domain that specifically binds methyl-lysine) in the transcriptionally repressive protein HP1 recruits HP1 to K9-methylated regions. One example that seems to refute this biophysical model for methylation is that tri-methylation of histone H3 at lysine 4 is strongly associated with (and required for full) transcriptional activation. Tri-methylation, in this case, would introduce a fixed positive charge on the tail. It has been shown that the histone lysine methyltransferase (KMT) is responsible for this methylation activity in the pattern of histones H3 and H4. This enzyme utilizes a catalytically active site called the SET domain (Suppressor of variegation, Enhancer of Zeste, Trithorax). The SET domain is a 130-amino acid sequence involved in modulating gene activities. This domain has been demonstrated to bind to the histone tail and cause methylation of the histone. Differing histone modifications are likely to function in differing ways; acetylation at one position is likely to function differently from acetylation at another position. Also, multiple modifications may occur at the same time, and these modifications may work together to change the behavior of the nucleosome. The idea that multiple dynamic modifications regulate gene transcription in a systematic and reproducible way is called the histone code, although the idea that histone state can be read linearly as a digital information carrier has been largely debunked. One of the best-understood systems that orchestrate chromatin-based silencing is the SIR-protein-based silencing of the yeast hidden mating-type loci HML and HMR. DNA methylation DNA methylation frequently occurs in repeated sequences, and helps to suppress the expression and mobility of transposable elements. Because 5-methylcytosine can be spontaneously deaminated (replacing nitrogen by oxygen) to thymine, CpG sites are frequently mutated and become rare in the genome, except at CpG islands, where they remain unmethylated. Epigenetic changes of this type thus have the potential to direct increased frequencies of permanent genetic mutation. DNA methylation patterns are known to be established and modified in response to environmental factors by a complex interplay of at least three independent DNA methyltransferases, DNMT1, DNMT3A, and DNMT3B, the loss of any of which is lethal in mice. DNMT1 is the most abundant methyltransferase in somatic cells, localizes to replication foci, has a 10–40-fold preference for hemimethylated DNA and interacts with the proliferating cell nuclear antigen (PCNA). By preferentially modifying hemimethylated DNA, DNMT1 transfers patterns of methylation to a newly synthesized strand after DNA replication, and therefore is often referred to as the 'maintenance' methyltransferase. DNMT1 is essential for proper embryonic development, imprinting and X-inactivation.
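A toy sketch of the maintenance behaviour just described, using a made-up sequence and methylation pattern: after replication, a DNMT1-like rule re-methylates the daughter strand at every CpG that was methylated on the parental strand, so the pattern persists. UHRF1 recruitment, de novo methylation by DNMT3A/B and TET demethylation are deliberately left out.

```python
# Toy model of maintenance methylation. After semi-conservative replication the
# daughter strand starts unmethylated, so methylated CpGs are hemimethylated;
# a DNMT1-like rule then restores methylation at exactly those positions.
# UHRF1 recruitment, de novo methylation (DNMT3A/B) and TET demethylation are ignored.

PARENT_SEQ = "ACGTTCGGACGCGA"
PARENT_METHYLATED = {1, 9}   # hypothetical indices of methylated CpG cytosines

def cpg_positions(seq):
    """Return the indices i where seq[i:i+2] == 'CG'."""
    return {i for i in range(len(seq) - 1) if seq[i:i + 2] == "CG"}

def maintain_after_replication(seq, methylated_parent):
    """Copy methylation onto the new strand at hemimethylated CpG sites only."""
    daughter = set()
    for i in cpg_positions(seq):
        if i in methylated_parent:   # hemimethylated site recognized, DNMT1-style
            daughter.add(i)
    return daughter

print("CpG sites:                     ", sorted(cpg_positions(PARENT_SEQ)))
print("methylated (parent):           ", sorted(PARENT_METHYLATED))
print("methylated (after replication):",
      sorted(maintain_after_replication(PARENT_SEQ, PARENT_METHYLATED)))
```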
To emphasize the difference of this molecular mechanism of inheritance from the canonical Watson-Crick base-pairing mechanism of transmission of genetic information, the term 'Epigenetic templating' was introduced. Furthermore, in addition to the maintenance and transmission of methylated DNA states, the same principle could work in the maintenance and transmission of histone modifications and even cytoplasmic (structural) heritable states. RNA methylation RNA methylation of N6-methyladenosine (m6A) as the most abundant eukaryotic RNA modification has recently been recognized as an important gene regulatory mechanism. Histone modifications Histones H3 and H4 can also be manipulated through demethylation using histone lysine demethylase (KDM). This recently identified enzyme has a catalytically active site called the Jumonji domain (JmjC). The demethylation occurs when JmjC utilizes multiple cofactors to hydroxylate the methyl group, thereby removing it. JmjC is capable of demethylating mono-, di-, and tri-methylated substrates. Chromosomal regions can adopt stable and heritable alternative states resulting in bistable gene expression without changes to the DNA sequence. Epigenetic control is often associated with alternative covalent modifications of histones. The stability and heritability of states of larger chromosomal regions are suggested to involve positive feedback where modified nucleosomes recruit enzymes that similarly modify nearby nucleosomes. A simplified stochastic model for this type of epigenetics is found here. It has been suggested that chromatin-based transcriptional regulation could be mediated by the effect of small RNAs. Small interfering RNAs can modulate transcriptional gene expression via epigenetic modulation of targeted promoters. RNA transcripts Sometimes a gene, after being turned on, transcribes a product that (directly or indirectly) maintains the activity of that gene. For example, Hnf4 and MyoD enhance the transcription of many liver-specific and muscle-specific genes, respectively, including their own, through the transcription factor activity of the proteins they encode. RNA signalling includes differential recruitment of a hierarchy of generic chromatin modifying complexes and DNA methyltransferases to specific loci by RNAs during differentiation and development. Other epigenetic changes are mediated by the production of different splice forms of RNA, or by formation of double-stranded RNA (RNAi). Descendants of the cell in which the gene was turned on will inherit this activity, even if the original stimulus for gene-activation is no longer present. These genes are often turned on or off by signal transduction, although in some systems where syncytia or gap junctions are important, RNA may spread directly to other cells or nuclei by diffusion. A large amount of RNA and protein is contributed to the zygote by the mother during oogenesis or via nurse cells, resulting in maternal effect phenotypes. A smaller quantity of sperm RNA is transmitted from the father, but there is recent evidence that this epigenetic information can lead to visible changes in several generations of offspring. MicroRNAs MicroRNAs (miRNAs) are members of non-coding RNAs that range in size from 17 to 25 nucleotides. miRNAs regulate a large variety of biological functions in plants and animals. So far, in 2013, about 2000 miRNAs have been discovered in humans and these can be found online in a miRNA database. 
Each miRNA expressed in a cell may target about 100 to 200 messenger RNAs(mRNAs) that it downregulates. Most of the downregulation of mRNAs occurs by causing the decay of the targeted mRNA, while some downregulation occurs at the level of translation into protein. It appears that about 60% of human protein coding genes are regulated by miRNAs. Many miRNAs are epigenetically regulated. About 50% of miRNA genes are associated with CpG islands, that may be repressed by epigenetic methylation. Transcription from methylated CpG islands is strongly and heritably repressed. Other miRNAs are epigenetically regulated by either histone modifications or by combined DNA methylation and histone modification. mRNA In 2011, it was demonstrated that the methylation of mRNA plays a critical role in human energy homeostasis. The obesity-associated FTO gene is shown to be able to demethylate N6-methyladenosine in RNA. sRNAs sRNAs are small (50–250 nucleotides), highly structured, non-coding RNA fragments found in bacteria. They control gene expression including virulence genes in pathogens and are viewed as new targets in the fight against drug-resistant bacteria. They play an important role in many biological processes, binding to mRNA and protein targets in prokaryotes. Their phylogenetic analyses, for example through sRNA–mRNA target interactions or protein binding properties, are used to build comprehensive databases. sRNA-gene maps based on their targets in microbial genomes are also constructed. Long non-coding RNAs Numerous investigations have demonstrated the pivotal involvement of long non-coding RNAs (lncRNAs) in the regulation of gene expression and chromosomal modifications, thereby exerting significant control over cellular differentiation. These long non-coding RNAs also contribute to genomic imprinting and the inactivation of the X chromosome. In invertebrates such as social insects of honey bees, long non-coding RNAs are detected as a possible epigenetic mechanism via allele-specific genes underlying aggression via reciprocal crosses. Prions Prions are infectious forms of proteins. In general, proteins fold into discrete units that perform distinct cellular functions, but some proteins are also capable of forming an infectious conformational state known as a prion. Although often viewed in the context of infectious disease, prions are more loosely defined by their ability to catalytically convert other native state versions of the same protein to an infectious conformational state. It is in this latter sense that they can be viewed as epigenetic agents capable of inducing a phenotypic change without a modification of the genome. Fungal prions are considered by some to be epigenetic because the infectious phenotype caused by the prion can be inherited without modification of the genome. PSI+ and URE3, discovered in yeast in 1965 and 1971, are the two best studied of this type of prion. Prions can have a phenotypic effect through the sequestration of protein in aggregates, thereby reducing that protein's activity. In PSI+ cells, the loss of the Sup35 protein (which is involved in termination of translation) causes ribosomes to have a higher rate of read-through of stop codons, an effect that results in suppression of nonsense mutations in other genes. The ability of Sup35 to form prions may be a conserved trait. It could confer an adaptive advantage by giving cells the ability to switch into a PSI+ state and express dormant genetic features normally terminated by stop codon mutations. 
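As an illustration of the self-templating behaviour described for prions such as PSI+, the following toy model (all rate constants are hypothetical, not drawn from the literature) shows how a small amount of the prion conformation can convert the native pool in an autocatalytic, switch-like manner.

```python
# Toy model (not from the article): autocatalytic conversion of a native protein into its
# prion conformation, in the spirit of [PSI+]/Sup35. All parameter values are invented and
# chosen only to show the self-propagating behaviour of a templated conformational state.

def simulate_prion(native=1000.0, prion=1.0, steps=200,
                   synthesis=50.0, dilution=0.05, conversion=1e-4):
    history = []
    for _ in range(steps):
        converted = conversion * native * prion      # prion templates convert native protein
        native += synthesis - dilution * native - converted
        prion += converted - dilution * prion        # the prion form is diluted by growth, not cured
        history.append(prion / (native + prion))     # fraction of the protein in the prion state
    return history

trajectory = simulate_prion()
print(f"prion fraction after 200 steps: {trajectory[-1]:.2f}")
```

The point of the sketch is only that conversion scales with the product of the two pools, so the prion state, once seeded, propagates without any change to the genome.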
Prion-based epigenetics has also been observed in Saccharomyces cerevisiae. Molecular basis Epigenetic changes modify the activation of certain genes, but not the genetic code sequence of DNA. The microstructure (not code) of DNA itself or the associated chromatin proteins may be modified, causing activation or silencing. This mechanism enables differentiated cells in a multicellular organism to express only the genes that are necessary for their own activity. Epigenetic changes are preserved when cells divide. Most epigenetic changes only occur within the course of one individual organism's lifetime; however, these epigenetic changes can be transmitted to the organism's offspring through a process called transgenerational epigenetic inheritance. Moreover, if gene inactivation occurs in a sperm or egg cell that results in fertilization, this epigenetic modification may also be transferred to the next generation. Specific epigenetic processes include paramutation, bookmarking, imprinting, gene silencing, X chromosome inactivation, position effect, DNA methylation reprogramming, transvection, maternal effects, the progress of carcinogenesis, many effects of teratogens, regulation of histone modifications and heterochromatin, and technical limitations affecting parthenogenesis and cloning. DNA damage DNA damage can also cause epigenetic changes. DNA damage is very frequent, occurring on average about 60,000 times a day per cell of the human body (see DNA damage (naturally occurring)). These damages are largely repaired, however, epigenetic changes can still remain at the site of DNA repair. In particular, a double strand break in DNA can initiate unprogrammed epigenetic gene silencing both by causing DNA methylation as well as by promoting silencing types of histone modifications (chromatin remodeling - see next section). In addition, the enzyme Parp1 (poly(ADP)-ribose polymerase) and its product poly(ADP)-ribose (PAR) accumulate at sites of DNA damage as part of the repair process. This accumulation, in turn, directs recruitment and activation of the chromatin remodeling protein, ALC1, that can cause nucleosome remodeling. Nucleosome remodeling has been found to cause, for instance, epigenetic silencing of DNA repair gene MLH1. DNA damaging chemicals, such as benzene, hydroquinone, styrene, carbon tetrachloride and trichloroethylene, cause considerable hypomethylation of DNA, some through the activation of oxidative stress pathways. Foods are known to alter the epigenetics of rats on different diets. Some food components epigenetically increase the levels of DNA repair enzymes such as MGMT and MLH1 and p53. Other food components can reduce DNA damage, such as soy isoflavones. In one study, markers for oxidative stress, such as modified nucleotides that can result from DNA damage, were decreased by a 3-week diet supplemented with soy. A decrease in oxidative DNA damage was also observed 2 h after consumption of anthocyanin-rich bilberry (Vaccinium myrtillius L.) pomace extract. DNA repair Damage to DNA is very common and is constantly being repaired. Epigenetic alterations can accompany DNA repair of oxidative damage or double-strand breaks. In human cells, oxidative DNA damage occurs about 10,000 times a day and DNA double-strand breaks occur about 10 to 50 times a cell cycle in somatic replicating cells (see DNA damage (naturally occurring)). The selective advantage of DNA repair is to allow the cell to survive in the face of DNA damage. 
The selective advantage of epigenetic alterations that occur with DNA repair is not clear. Repair of oxidative DNA damage can alter epigenetic markers In the steady state (with endogenous damages occurring and being repaired), there are about 2,400 oxidatively damaged guanines that form 8-oxo-2'-deoxyguanosine (8-OHdG) in the average mammalian cell DNA. 8-OHdG constitutes about 5% of the oxidative damages commonly present in DNA. The oxidized guanines do not occur randomly among all guanines in DNA. There is a sequence preference for the guanine at a methylated CpG site (a cytosine followed by guanine along its 5' → 3' direction and where the cytosine is methylated (5-mCpG)). A 5-mCpG site has the lowest ionization potential for guanine oxidation. Oxidized guanine has mispairing potential and is mutagenic. Oxoguanine glycosylase (OGG1) is the primary enzyme responsible for the excision of the oxidized guanine during DNA repair. OGG1 finds and binds to an 8-OHdG within a few seconds. However, OGG1 does not immediately excise 8-OHdG. In HeLa cells half maximum removal of 8-OHdG occurs in 30 minutes, and in irradiated mice, the 8-OHdGs induced in the mouse liver are removed with a half-life of 11 minutes. When OGG1 is present at an oxidized guanine within a methylated CpG site it recruits TET1 to the 8-OHdG lesion (see Figure). This allows TET1 to demethylate an adjacent methylated cytosine. Demethylation of cytosine is an epigenetic alteration. As an example, when human mammary epithelial cells were treated with H2O2 for six hours, 8-OHdG increased about 3.5-fold in DNA and this caused about 80% demethylation of the 5-methylcytosines in the genome. Demethylation of CpGs in a gene promoter by TET enzyme activity increases transcription of the gene into messenger RNA. In cells treated with H2O2, one particular gene was examined, BACE1. The methylation level of the BACE1 CpG island was reduced (an epigenetic alteration) and this allowed about 6.5 fold increase of expression of BACE1 messenger RNA. While six-hour incubation with H2O2 causes considerable demethylation of 5-mCpG sites, shorter times of H2O2 incubation appear to promote other epigenetic alterations. Treatment of cells with H2O2 for 30 minutes causes the mismatch repair protein heterodimer MSH2-MSH6 to recruit DNA methyltransferase 1 (DNMT1) to sites of some kinds of oxidative DNA damage. This could cause increased methylation of cytosines (epigenetic alterations) at these locations. Jiang et al. treated HEK 293 cells with agents causing oxidative DNA damage, (potassium bromate (KBrO3) or potassium chromate (K2CrO4)). Base excision repair (BER) of oxidative damage occurred with the DNA repair enzyme polymerase beta localizing to oxidized guanines. Polymerase beta is the main human polymerase in short-patch BER of oxidative DNA damage. Jiang et al. also found that polymerase beta recruited the DNA methyltransferase protein DNMT3b to BER repair sites. They then evaluated the methylation pattern at the single nucleotide level in a small region of DNA including the promoter region and the early transcription region of the BRCA1 gene. Oxidative DNA damage from bromate modulated the DNA methylation pattern (caused epigenetic alterations) at CpG sites within the region of DNA studied. In untreated cells, CpGs located at −189, −134, −29, −19, +16, and +19 of the BRCA1 gene had methylated cytosines (where numbering is from the messenger RNA transcription start site, and negative numbers indicate nucleotides in the upstream promoter region). 
Bromate treatment-induced oxidation resulted in the loss of cytosine methylation at −189, −134, +16 and +19 while also leading to the formation of new methylation at the CpGs located at −80, −55, −21 and +8 after DNA repair was allowed. Homologous recombinational repair alters epigenetic markers At least four articles report the recruitment of DNA methyltransferase 1 (DNMT1) to sites of DNA double-strand breaks. During homologous recombinational repair (HR) of the double-strand break, the involvement of DNMT1 causes the two repaired strands of DNA to have different levels of methylated cytosines. One strand becomes frequently methylated at about 21 CpG sites downstream of the repaired double-strand break. The other DNA strand loses methylation at about six CpG sites that were previously methylated downstream of the double-strand break, as well as losing methylation at about five CpG sites that were previously methylated upstream of the double-strand break. When the chromosome is replicated, this gives rise to one daughter chromosome that is heavily methylated downstream of the previous break site and one that is unmethylated in the region both upstream and downstream of the previous break site. With respect to the gene that was broken by the double-strand break, half of the progeny cells express that gene at a high level and in the other half of the progeny cells expression of that gene is repressed. When clones of these cells were maintained for three years, the new methylation patterns were maintained over that time period. In mice with a CRISPR-mediated homology-directed recombination insertion in their genome there were a large number of increased methylations of CpG sites within the double-strand break-associated insertion. Non-homologous end joining can cause some epigenetic marker alterations Non-homologous end joining (NHEJ) repair of a double-strand break can cause a small number of demethylations of pre-existing cytosine DNA methylations downstream of the repaired double-strand break. Further work by Allen et al. showed that NHEJ of a DNA double-strand break in a cell could give rise to some progeny cells having repressed expression of the gene harboring the initial double-strand break and some progeny having high expression of that gene due to epigenetic alterations associated with NHEJ repair. The frequency of epigenetic alterations causing repression of a gene after an NHEJ repair of a DNA double-strand break in that gene may be about 0.9%. Techniques used to study epigenetics Epigenetic research uses a wide range of molecular biological techniques to further understanding of epigenetic phenomena. These techniques include chromatin immunoprecipitation (together with its large-scale variants ChIP-on-chip and ChIP-Seq), fluorescent in situ hybridization, methylation-sensitive restriction enzymes, DNA adenine methyltransferase identification (DamID) and bisulfite sequencing. Furthermore, the use of bioinformatics methods has a role in computational epigenetics. Chromatin Immunoprecipitation Chromatin Immunoprecipitation (ChIP) has helped bridge the gap between DNA and epigenetic interactions. With the use of ChIP, researchers are able to make findings in regards to gene regulation, transcription mechanisms, and chromatin structure. Fluorescent in situ hybridization Fluorescent in situ hybridization (FISH) is very important to understand epigenetic mechanisms. FISH can be used to find the location of genes on chromosomes, as well as finding noncoding RNAs. 
FISH is predominantly used for detecting chromosomal abnormalities in humans. Methylation-sensitive restriction enzymes Methylation-sensitive restriction enzymes paired with PCR are one way to evaluate DNA methylation, specifically at CpG sites. If the DNA is methylated, the restriction enzymes will not cleave the strand. Conversely, if the DNA is not methylated, the enzymes will cleave the strand and the fragment will be amplified by PCR. Bisulfite sequencing Bisulfite sequencing is another way to evaluate DNA methylation. Treatment with sodium bisulfite converts unmethylated cytosine to uracil, whereas methylated cytosines are unaffected. Nanopore sequencing Certain sequencing methods, such as nanopore sequencing, allow sequencing of native DNA. Native (unamplified) DNA retains the epigenetic modifications that would otherwise be lost during the amplification step. Nanopore basecaller models can distinguish between the signals obtained for epigenetically modified bases and unaltered bases, and so provide an epigenetic profile in addition to the sequencing result. Structural inheritance In ciliates such as Tetrahymena and Paramecium, genetically identical cells show heritable differences in the patterns of ciliary rows on their cell surface. Experimentally altered patterns can be transmitted to daughter cells. It seems that existing structures act as templates for new structures. The mechanisms of such inheritance are unclear, but there are reasons to assume that multicellular organisms also use existing cell structures to assemble new ones. Nucleosome positioning Eukaryotic genomes have numerous nucleosomes. Nucleosome positioning is not random and determines the accessibility of DNA to regulatory proteins. Promoters active in different tissues have been shown to have different nucleosome positioning features. This determines differences in gene expression and cell differentiation. It has been shown that at least some nucleosomes are retained in sperm cells (where most, but not all, histones are replaced by protamines). Thus nucleosome positioning is to some degree heritable. Recent studies have uncovered connections between nucleosome positioning and other epigenetic factors, such as DNA methylation and hydroxymethylation. Histone variants Different histone variants are incorporated into specific regions of the genome non-randomly. Their differing biochemical characteristics can affect genome function via their roles in gene regulation and the maintenance of chromosome structure. Genomic architecture The three-dimensional configuration of the genome (the 3D genome) is complex, dynamic and crucial for regulating genomic function and nuclear processes such as DNA replication, transcription and DNA-damage repair. Functions and consequences In the brain Memory Memory formation and maintenance are due to epigenetic alterations that drive the dynamic changes in gene transcription needed to create and renew memories in neurons. An event can set off a chain of reactions that results in altered methylation of a large set of genes in neurons, which gives a representation of the event, a memory. Areas of the brain important in the formation of memories include the hippocampus, medial prefrontal cortex (mPFC), anterior cingulate cortex and amygdala, as shown in the diagram of the human brain in this section.
When a strong memory is created, as in a rat subjected to contextual fear conditioning (CFC), one of the earliest events to occur is that more than 100 DNA double-strand breaks are formed by topoisomerase IIB in neurons of the hippocampus and the medial prefrontal cortex (mPFC). These double-strand breaks are at specific locations that allow activation of transcription of immediate early genes (IEGs) that are important in memory formation, allowing their expression in mRNA, with peak mRNA transcription at seven to ten minutes after CFC. Two important IEGs in memory formation are EGR1 and the alternative promoter variant of DNMT3A, DNMT3A2. EGR1 protein binds to DNA at its binding motifs, 5′-GCGTGGGCG-3′ or 5′-GCGGGGGCGG-3', and there are about 12,000 genome locations at which EGR1 protein can bind. EGR1 protein binds to DNA in gene promoter and enhancer regions. EGR1 recruits the demethylating enzyme TET1 to an association, and brings TET1 to about 600 locations on the genome where TET1 can then demethylate and activate the associated genes. The DNA methyltransferases DNMT3A1, DNMT3A2 and DNMT3B can all methylate cytosines (see image this section) at CpG sites in or near the promoters of genes. As shown by Manzo et al., these three DNA methyltransferases differ in their genomic binding locations and DNA methylation activity at different regulatory sites. Manzo et al. located 3,970 genome regions exclusively enriched for DNMT3A1, 3,838 regions for DNMT3A2 and 3,432 regions for DNMT3B. When DNMT3A2 is newly induced as an IEG (when neurons are activated), many new cytosine methylations occur, presumably in the target regions of DNMT3A2. Oliviera et al. found that the neuronal activity-inducible IEG levels of Dnmt3a2 in the hippocampus determined the ability to form long-term memories. Rats form long-term associative memories after contextual fear conditioning (CFC). Duke et al. found that 24 hours after CFC in rats, in hippocampus neurons, 2,097 genes (9.17% of the genes in the rat genome) had altered methylation. When newly methylated cytosines are present in CpG sites in the promoter regions of genes, the genes are often repressed, and when newly demethylated cytosines are present the genes may be activated. After CFC, there were 1,048 genes with reduced mRNA expression and 564 genes with upregulated mRNA expression. Similarly, when mice undergo CFC, one hour later in the hippocampus region of the mouse brain there are 675 demethylated genes and 613 hypermethylated genes. However, memories do not remain in the hippocampus, but after four or five weeks the memories are stored in the anterior cingulate cortex. In the studies on mice after CFC, Halder et al. showed that four weeks after CFC there were at least 1,000 differentially methylated genes and more than 1,000 differentially expressed genes in the anterior cingulate cortex, while at the same time the altered methylations in the hippocampus were reversed. The epigenetic alteration of methylation after a new memory is established creates a different pool of nuclear mRNAs. As reviewed by Bernstein, the epigenetically determined new mix of nuclear mRNAs are often packaged into neuronal granules, or messenger RNP, consisting of mRNA, small and large ribosomal subunits, translation initiation factors and RNA-binding proteins that regulate mRNA function. These neuronal granules are transported from the neuron nucleus and are directed, according to 3′ untranslated regions of the mRNA in the granules (their "zip codes"), to neuronal dendrites. 
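Returning to the EGR1 binding motifs quoted earlier in this section, the sketch below (hypothetical sequence; the motifs themselves are the ones given above) shows the kind of simple motif scan by which genome-wide counts of candidate binding locations, such as the roughly 12,000 EGR1 sites, are typically estimated; published estimates rely on more permissive motif models and experimental binding data.

```python
# Illustrative sketch: counting occurrences of the two EGR1 binding motifs on both strands
# of a DNA sequence. The test fragment below is hypothetical.

EGR1_MOTIFS = ["GCGTGGGCG", "GCGGGGGCGG"]

def reverse_complement(seq):
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def count_motif_sites(sequence, motifs=EGR1_MOTIFS):
    sequence = sequence.upper()
    hits = 0
    for strand in (sequence, reverse_complement(sequence)):
        for motif in motifs:
            start = strand.find(motif)
            while start != -1:
                hits += 1
                start = strand.find(motif, start + 1)
    return hits

# Hypothetical promoter fragment containing one EGR1 motif.
fragment = "TTAGCGTGGGCGATCCGATTACGCAT"
print(count_motif_sites(fragment), "EGR1 motif occurrence(s) found")
```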
Roughly 2,500 mRNAs may be localized to the dendrites of hippocampal pyramidal neurons and perhaps 450 transcripts are in excitatory presynaptic nerve terminals (dendritic spines). The altered assortments of transcripts (dependent on epigenetic alterations in the neuron nucleus) respond differently to signals, which is the basis of altered synaptic plasticity. Altered synaptic plasticity is often considered the neurochemical foundation of learning and memory. Aging Epigenetics plays a major role in brain aging and age-related cognitive decline, with relevance to life extension. Other and general In adulthood, changes in the epigenome are important for various higher cognitive functions. Dysregulation of epigenetic mechanisms is implicated in neurodegenerative disorders and diseases. Epigenetic modifications in neurons are dynamic and reversible. Epigenetic regulation impacts neuronal action, affecting learning, memory, and other cognitive processes. Early events, including during embryonic development, can influence development, cognition, and health outcomes through epigenetic mechanisms. Epigenetic mechanisms have been proposed as "a potential molecular mechanism for effects of endogenous hormones on the organization of developing brain circuits". Nutrients could interact with the epigenome to "protect or boost cognitive processes across the lifespan". A review suggests that the neurobiological effects of physical exercise, mediated via epigenetics, seem "central to building an 'epigenetic memory' to influence long-term brain function and behavior" and may even be heritable. At the axo-ciliary synapse, serotonergic axons communicate with the antenna-like primary cilia of CA1 pyramidal neurons; this signalling is distinct from signalling at the plasma membrane, acts over a longer term, and alters the neuron's epigenetic state in the nucleus. Epigenetics also plays a major role in brain evolution, including the evolution leading to humans. Development Developmental epigenetics can be divided into predetermined and probabilistic epigenesis. Predetermined epigenesis is a unidirectional movement from structural development in DNA to the functional maturation of the protein. "Predetermined" here means that development is scripted and predictable. Probabilistic epigenesis, on the other hand, is a bidirectional structure-function development in which experience and external factors mold development. Somatic epigenetic inheritance, particularly through DNA and histone covalent modifications and nucleosome repositioning, is very important in the development of multicellular eukaryotic organisms. The genome sequence is static (with some notable exceptions), but cells differentiate into many different types, which perform different functions, and respond differently to the environment and intercellular signaling. Thus, as individuals develop, morphogens activate or silence genes in an epigenetically heritable fashion, giving cells a memory. In mammals, most cells terminally differentiate, with only stem cells retaining the ability to differentiate into several cell types ("totipotency" and "multipotency"). In mammals, some stem cells continue producing newly differentiated cells throughout life, such as in neurogenesis, but mammals cannot replace some lost tissues; for example, they are unable to regenerate limbs, as some other animals can.
Epigenetic modifications regulate the transition from neural stem cells to glial progenitor cells (for example, differentiation into oligodendrocytes is regulated by the deacetylation and methylation of histones). Unlike animals, plant cells do not terminally differentiate, remaining totipotent with the ability to give rise to a new individual plant. While plants do utilize many of the same epigenetic mechanisms as animals, such as chromatin remodeling, it has been hypothesized that some kinds of plant cells do not use or require "cellular memories", resetting their gene expression patterns using positional information from the environment and surrounding cells to determine their fate. Epigenetic changes can occur in response to environmental exposure; for example, maternal dietary supplementation with genistein (250 mg/kg) in mice produces epigenetic changes that affect expression of the agouti gene, which in turn affects the offspring's fur color, weight, and propensity to develop cancer. Ongoing research is focused on exploring the impact of other known teratogens, such as diabetic embryopathy, on methylation signatures. Controversial results from one study suggested that traumatic experiences might produce an epigenetic signal that is capable of being passed to future generations. Mice were trained, using foot shocks, to fear a cherry blossom odor. The investigators reported that the mouse offspring had an increased aversion to this specific odor. They suggested that the effect was due to epigenetic changes that increase expression of M71, a gene governing an odor receptor in the nose that responds specifically to this cherry blossom smell, rather than to changes in the DNA sequence itself. There were physical changes that correlated with olfactory (smell) function in the brains of the trained mice and their descendants. Several criticisms were reported, including that the study's low statistical power was evidence of some irregularity, such as bias in the reporting of results. Because of the limited sample size, there is a probability that an effect will not be demonstrated to within statistical significance even if it exists. The criticism suggested that the probability that all the experiments reported would show positive results if an identical protocol was followed, assuming the claimed effects exist, is merely 0.4%. The authors also did not indicate which mice were siblings, and treated all of the mice as statistically independent. The original researchers pointed out negative results in the paper's appendix that the criticism omitted from its calculations, and undertook to track which mice were siblings in the future. Transgenerational Epigenetic mechanisms were a necessary part of the evolutionary origin of cell differentiation. Although epigenetics in multicellular organisms is generally thought to be a mechanism involved in differentiation, with epigenetic patterns "reset" when organisms reproduce, there have been some observations of transgenerational epigenetic inheritance (e.g., the phenomenon of paramutation observed in maize). Although most of these multigenerational epigenetic traits are gradually lost over several generations, the possibility remains that multigenerational epigenetics could be another aspect of evolution and adaptation. As mentioned above, some define epigenetics as heritable. A sequestered germ line or Weismann barrier is specific to animals, and epigenetic inheritance is more common in plants and microbes. Eva Jablonka, Marion J.
Lamb and Étienne Danchin have argued that these effects may require enhancements to the standard conceptual framework of the modern synthesis and have called for an extended evolutionary synthesis. Other evolutionary biologists, such as John Maynard Smith, have incorporated epigenetic inheritance into population-genetics models, while others, such as Michael Lynch, are openly skeptical of the extended evolutionary synthesis. Thomas Dickins and Qazi Rahman state that epigenetic mechanisms such as DNA methylation and histone modification are genetically inherited under the control of natural selection and therefore fit under the earlier "modern synthesis". Two important ways in which epigenetic inheritance can differ from traditional genetic inheritance, with important consequences for evolution, are that rates of epimutation can be much faster than rates of mutation, and that epimutations are more easily reversible. In plants, heritable DNA methylation mutations are 100,000 times more likely to occur than DNA mutations. An epigenetically inherited element such as the PSI+ system can act as a "stop-gap", good enough for short-term adaptation that allows the lineage to survive for long enough for mutation and/or recombination to genetically assimilate the adaptive phenotypic change. The existence of this possibility increases the evolvability of a species. More than 100 cases of transgenerational epigenetic inheritance phenomena have been reported in a wide range of organisms, including prokaryotes, plants, and animals. For instance, mourning-cloak butterflies will change color through hormone changes in response to experimental variation in temperature. The filamentous fungus Neurospora crassa is a prominent model system for understanding the control and function of cytosine methylation. In this organism, DNA methylation is associated with relics of a genome-defense system called RIP (repeat-induced point mutation) and silences gene expression by inhibiting transcription elongation. The yeast prion PSI is generated by a conformational change of a translation termination factor, which is then inherited by daughter cells. This can provide a survival advantage under adverse conditions, exemplifying epigenetic regulation that enables unicellular organisms to respond rapidly to environmental stress. Prions can be viewed as epigenetic agents capable of inducing a phenotypic change without modification of the genome. Direct detection of epigenetic marks in microorganisms is possible with single-molecule real-time sequencing, in which polymerase sensitivity allows methylation and other modifications to be measured as a DNA molecule is being sequenced. Several projects have demonstrated the ability to collect genome-wide epigenetic data in bacteria. Epigenetics in bacteria While epigenetics is of fundamental importance in eukaryotes, especially metazoans, it plays a different role in bacteria. Most importantly, eukaryotes use epigenetic mechanisms primarily to regulate gene expression, which bacteria rarely do. However, bacteria make widespread use of postreplicative DNA methylation for the epigenetic control of DNA-protein interactions. Bacteria also use DNA adenine methylation (rather than DNA cytosine methylation) as an epigenetic signal. DNA adenine methylation is important in bacterial virulence in organisms such as Escherichia coli, Salmonella, Vibrio, Yersinia, Haemophilus, and Brucella. In Alphaproteobacteria, methylation of adenine regulates the cell cycle and couples gene transcription to DNA replication.
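As a side note on bacterial adenine methylation: in Escherichia coli the Dam methyltransferase methylates the adenine within GATC motifs, and newly replicated DNA is transiently hemimethylated at these sites (general background knowledge, not stated in the text above). A minimal sketch of locating such target sites:

```python
# Background-knowledge sketch: locating GATC motifs, the substrate of Dam adenine
# methylation in E. coli. Immediately after replication these sites are hemimethylated,
# a signal exploited by mismatch repair and replication control.

def gatc_sites(sequence):
    """Return 0-based positions of GATC motifs in the given DNA sequence."""
    sequence = sequence.upper()
    positions, start = [], sequence.find("GATC")
    while start != -1:
        positions.append(start)
        start = sequence.find("GATC", start + 1)
    return positions

# Hypothetical fragment; real origin regions in E. coli are unusually GATC-rich.
fragment = "TTATCCACAGATCTGTTCTATTGTGATCTCTTATTAGGATCGCACTGCCCTGTGGATAA"
print("GATC sites at positions:", gatc_sites(fragment))
```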
In Gammaproteobacteria, adenine methylation provides signals for DNA replication, chromosome segregation, mismatch repair, packaging of bacteriophage, transposase activity and regulation of gene expression. A genetic switch in Streptococcus pneumoniae (the pneumococcus) allows the bacterium to randomly switch its characteristics among six alternative states, a finding that could pave the way to improved vaccines. Each form is randomly generated by a phase-variable methylation system. The ability of the pneumococcus to cause deadly infections differs in each of these six states. Similar systems exist in other bacterial genera. In Bacillota such as Clostridioides difficile, adenine methylation regulates sporulation, biofilm formation and host adaptation. Medicine Epigenetics has many and varied potential medical applications. Twins Direct comparisons of identical twins constitute an optimal model for interrogating environmental epigenetics. In the case of humans with different environmental exposures, monozygotic (identical) twins were epigenetically indistinguishable during their early years, while older twins had remarkable differences in the overall content and genomic distribution of 5-methylcytosine DNA and histone acetylation. The twin pairs who had spent less of their lifetime together and/or had greater differences in their medical histories were those who showed the largest differences in their levels of 5-methylcytosine DNA and acetylation of histones H3 and H4. Dizygotic (fraternal) and monozygotic (identical) twins show evidence of epigenetic influence in humans. DNA sequence differences that would be abundant in a singleton-based study do not interfere with the analysis. Environmental differences can produce long-term epigenetic effects, and different developmental monozygotic twin subtypes may differ in their susceptibility to epigenetic discordance. A high-throughput study, that is, one using technology that surveys extensive genetic markers, focused on epigenetic differences between monozygotic twins to compare global and locus-specific changes in DNA methylation and histone modifications in a sample of 40 monozygotic twin pairs. In this case, only healthy twin pairs were studied, but a wide range of ages was represented, between 3 and 74 years. One of the major conclusions from this study was that there is an age-dependent accumulation of epigenetic differences between the two siblings of twin pairs. This accumulation suggests the existence of epigenetic "drift". Epigenetic drift is the term given to epigenetic modifications that accumulate as a direct function of age. While age is a known risk factor for many diseases, age-related methylation has been found to occur differentially at specific sites along the genome. Over time, this can result in measurable differences between biological and chronological age. Epigenetic changes have been found to be reflective of lifestyle and may act as functional biomarkers of disease before a clinical threshold is reached. A more recent study, in which 114 monozygotic twins and 80 dizygotic twins were analyzed for the DNA methylation status of around 6,000 unique genomic regions, concluded that epigenetic similarity at the time of blastocyst splitting may also contribute to phenotypic similarities in monozygotic co-twins. This supports the notion that the microenvironment at early stages of embryonic development can be quite important for the establishment of epigenetic marks.
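The idea of a measurable gap between biological and chronological age mentioned above underlies so-called epigenetic clocks, which estimate age as a weighted sum of DNA methylation levels at selected CpG sites. The sketch below is purely illustrative: the CpG names and weights are invented and do not come from any published clock such as Horvath's.

```python
# Hypothetical illustration of an "epigenetic clock": biological age is predicted as a
# linear combination of DNA methylation beta values (fractions between 0 and 1) at a
# panel of age-associated CpG sites. Coefficients and site names are made up.

HYPOTHETICAL_CLOCK = {
    "cg_siteA": 25.0,   # weights are illustrative only
    "cg_siteB": -12.0,
    "cg_siteC": 40.0,
}
INTERCEPT = 20.0

def predicted_age(beta_values, clock=HYPOTHETICAL_CLOCK, intercept=INTERCEPT):
    """Weighted sum of methylation beta values plus an intercept."""
    return intercept + sum(weight * beta_values.get(site, 0.0)
                           for site, weight in clock.items())

sample = {"cg_siteA": 0.62, "cg_siteB": 0.30, "cg_siteC": 0.55}
print(f"estimated epigenetic age: {predicted_age(sample):.1f} years")
```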
Congenital genetic disease is well understood and it is clear that epigenetics can play a role, for example, in the case of Angelman syndrome and Prader–Willi syndrome. These are otherwise ordinary genetic diseases caused by gene deletion or inactivation, but they are unusually common because genomic imprinting leaves individuals essentially hemizygous at the affected loci, so a single gene knockout is sufficient to cause the disease, whereas most such conditions would require both copies to be knocked out. Genomic imprinting Some human disorders are associated with genomic imprinting, a phenomenon in mammals where the father and mother contribute different epigenetic patterns for specific genomic loci in their germ cells. The best-known case of imprinting in human disorders is that of Angelman syndrome and Prader–Willi syndrome: both can be produced by the same genetic mutation, a partial deletion of chromosome 15q, and the particular syndrome that develops depends on whether the mutation is inherited from the child's mother or from their father. In the Överkalix study, paternal (but not maternal) grandsons of Swedish men who were exposed during preadolescence to famine in the 19th century were less likely to die of cardiovascular disease. If food was plentiful, then diabetes mortality in the grandchildren increased, suggesting that this was a transgenerational epigenetic inheritance. The opposite effect was observed for females: the paternal (but not maternal) granddaughters of women who experienced famine while in the womb (and therefore while their eggs were being formed) lived shorter lives on average. Examples of drugs altering gene expression from epigenetic events Epigenetic effects of drugs include the alteration of glutamate receptor activity by beta-lactam antibiotics and the action of cyclosporine on multiple transcription factors. Additionally, lithium can affect the autophagy of aberrant proteins, and chronic opioid use can increase the expression of genes associated with addictive phenotypes. Parental nutrition, in utero exposure to stress or endocrine-disrupting chemicals, male-induced maternal effects such as those triggered by the attraction of mates of differing quality, maternal and paternal age, and offspring gender could all possibly influence whether a germline epimutation is ultimately expressed in offspring and the degree to which intergenerational inheritance remains stable throughout posterity. However, whether and to what extent epigenetic effects can be transmitted across generations remains unclear, particularly in humans. Addiction Addiction is a disorder of the brain's reward system which arises through transcriptional and neuroepigenetic mechanisms and occurs over time from chronically high levels of exposure to an addictive stimulus (e.g., morphine, cocaine, sexual intercourse, gambling). Transgenerational epigenetic inheritance of addictive phenotypes has been noted to occur in preclinical studies. However, robust evidence for the persistence of epigenetic effects across multiple generations has yet to be established in humans; an example of such persistence would be an epigenetic effect of prenatal exposure to smoking observed in great-grandchildren who had not themselves been exposed. Research The two forms of heritable information, namely genetic and epigenetic, are collectively called dual inheritance.
Members of the APOBEC/AID family of cytosine deaminases may concurrently influence genetic and epigenetic inheritance using similar molecular mechanisms, and may be a point of crosstalk between these conceptually compartmentalized processes. Fluoroquinolone antibiotics induce epigenetic changes in mammalian cells through iron chelation. This leads to epigenetic effects through inhibition of α-ketoglutarate-dependent dioxygenases that require iron as a co-factor. Various pharmacological agents are used to produce induced pluripotent stem cells (iPSCs) or to maintain the embryonic stem cell (ESC) phenotype via epigenetic approaches. Adult stem cells such as bone marrow stem cells have also shown a potential to differentiate into cardiac-competent cells when treated with the G9a histone methyltransferase inhibitor BIX01294. Cell plasticity, which is the adaptation of cells to stimuli without changes in their genetic code, requires epigenetic changes. These have been observed in cell plasticity in cancer cells during epithelial-to-mesenchymal transition and also in immune cells, such as macrophages. Interestingly, metabolic changes underlie these adaptations, since various metabolites play crucial roles in the chemistry of epigenetic marks. These include, for instance, alpha-ketoglutarate, which is required for histone demethylation, and acetyl-coenzyme A, which is required for histone acetylation. Epigenome editing Forms of epigenetic regulation of gene expression that can be altered or exploited in epigenome editing include mRNA/lncRNA modification, DNA methylation modification and histone modification. CpG sites, SNPs and biological traits Methylation is a widely characterized mechanism of genetic regulation that can determine biological traits. However, strong experimental evidence points to methylation patterns at SNPs as an important additional feature beyond the classical activation/inhibition epigenetic dogma. Molecular interaction data, supported by colocalization analyses, identify multiple nuclear regulatory pathways, linking sequence variation to disturbances in DNA methylation and to molecular and phenotypic variation. UBASH3B locus UBASH3B encodes a protein with tyrosine phosphatase activity, which has previously been linked to advanced neoplasia. SNP rs7115089 was identified as influencing DNA methylation and expression of this locus, as well as body mass index (BMI). In fact, SNP rs7115089 is strongly associated with BMI and with genetic variants linked to other cardiovascular and metabolic traits in GWASs. Newer studies suggest UBASH3B as a potential mediator of adiposity and cardiometabolic disease. In addition, animal models demonstrated that UBASH3B expression is an indicator of caloric restriction that may drive programmed susceptibility to obesity, and its expression is associated with other measures of adiposity in human peripheral blood. NFKBIE locus SNP rs730775 is located in the first intron of NFKBIE and is a cis eQTL for NFKBIE in whole blood. Nuclear factor (NF)-κB inhibitor ε (NFKBIE) directly inhibits NF-κB1 activity and is significantly co-expressed with NF-κB1; it is also associated with rheumatoid arthritis. Colocalization analysis supports the view that, for the majority of the associated CpG sites, variants at rs730775 cause genetic variation at the NFKBIE locus that is suggested to be linked to rheumatoid arthritis through trans-acting regulation of DNA methylation by NF-κB. FADS1 locus Fatty acid desaturase 1 (FADS1) is a key enzyme in the metabolism of fatty acids.
Moreover, rs174548 in the FADS1 gene shows increased correlation with DNA methylation in people with a high abundance of CD8+ T cells. SNP rs174548 is strongly associated with concentrations of arachidonic acid and other metabolites in fatty acid metabolism, blood eosinophil counts, and inflammatory diseases such as asthma. Interaction results indicated a correlation between rs174548 and asthma, providing new insights into the link between fatty acid metabolism in CD8+ T cells and immune phenotypes. Pseudoscience As epigenetics is in the early stages of development as a science and is surrounded by sensationalism in the public media, David Gorski and geneticist Adam Rutherford have advised caution against the proliferation of false and pseudoscientific conclusions by new-age authors making unfounded suggestions that a person's genes and health can be manipulated by mind control. Misuse of the scientific term by quack authors has produced misinformation among the general public. See also Baldwin effect Behavioral epigenetics Biological effects of radiation on the epigenome Computational epigenetics Contribution of epigenetic modifications to evolution DAnCER database (2010) Epigenesis (biology) Epigenetics in forensic science Epigenetics of autoimmune disorders Epiphenotyping Epigenetic therapy Epigenetics of neurodegenerative diseases Genetics Lamarckism Nutriepigenomics Position-effect variegation Preformationism Somatic epitype Synthetic genetic array Sleep epigenetics Transcriptional memory Transgenerational epigenetic inheritance External links The Human Epigenome Project (HEP) The Epigenome Network of Excellence (NoE) – public international site Canadian Epigenetics, Environment and Health Research Consortium (CEEHRC) "DNA Is Not Destiny" – Discover magazine cover story "The Ghost In Your Genes", Horizon (2005), BBC Epigenetics article at Hopkins Medicine Towards a global map of epigenetic variation
Macroevolution
Macroevolution comprises the evolutionary processes and patterns which occur at and above the species level. In contrast, microevolution is evolution occurring within the population(s) of a single species. In other words, microevolution is the scale of evolution that is limited to intraspecific (within-species) variation, while macroevolution extends to interspecific (between-species) variation. The evolution of new species (speciation) is an example of macroevolution. This is the common definition for 'macroevolution' used by contemporary scientists, although the exact usage of the term has varied throughout history. Macroevolution addresses the evolution of species and higher taxonomic groups (genera, families, orders, etc.) and uses evidence from phylogenetics, the fossil record, and molecular biology to answer how different taxonomic groups exhibit different species diversity and/or morphological disparity. Origin and changing meaning of the term After Charles Darwin published his book On the Origin of Species in 1859, evolution was widely accepted to be a real phenomenon. However, many scientists still disagreed with Darwin that natural selection was the primary mechanism to explain evolution. Prior to the Modern Synthesis, during the period from the 1880s to the 1930s (dubbed the 'Eclipse of Darwinism'), many scientists argued in favor of alternative explanations. These included 'orthogenesis', and among its proponents was the Russian entomologist Yuri A. Filipchenko. Filipchenko appears to have been the one who coined the term 'macroevolution' in his book Variabilität und Variation (1927). While introducing the concept, he claimed that the field of genetics was insufficient to explain "the origin of higher systematic units" above the species level. Regarding the origin of higher systematic units, Filipchenko stated his claim that 'like produces like': a taxon must originate from other taxa of equivalent rank. A new species must come from an old species, a genus from an older genus, a family from another family, etc. Filipchenko believed this was the only way to explain the origin of the major characters that define species and especially higher taxonomic groups (genera, families, orders, etc.). For example, the origin of families must require the sudden appearance of new traits that differ in greater magnitude than the characters required for the origin of a genus or species. However, this view is no longer consistent with the contemporary understanding of evolution. Furthermore, the Linnaean ranks of 'genus' (and higher) are not real entities but artificial concepts which break down when they are combined with the process of evolution. Nevertheless, Filipchenko's distinction between microevolution and macroevolution had a major impact on the development of evolutionary science. The term was adopted by Filipchenko's protégé Theodosius Dobzhansky in his book Genetics and the Origin of Species (1937), a seminal piece that contributed to the development of the Modern Synthesis. 'Macroevolution' was also adopted by those who used it to criticize the Modern Synthesis. A notable example of this was the book The Material Basis of Evolution (1940) by the geneticist Richard Goldschmidt, a close friend of Filipchenko. Goldschmidt suggested saltational evolutionary changes either due to mutations that affect the rates of developmental processes or due to alterations in the chromosomal pattern.
The latter idea in particular was widely rejected by the modern synthesis, but the 'hopeful monster' concept, recast in terms of evolutionary developmental biology (evo-devo), has found a moderate revival in recent times. Occasionally such dramatic changes can lead to novel features that survive. As an alternative to saltational evolution, Dobzhansky suggested that the difference between macroevolution and microevolution reflects essentially a difference in time-scales, and that macroevolutionary changes were simply the sum of microevolutionary changes over geologic time. This view became broadly accepted, and accordingly, the term macroevolution has been used widely as a neutral label for the study of evolutionary changes that take place over a very large time-scale. Further, species selection suggests that selection among species is a major evolutionary factor that is independent from and complementary to selection among organisms. Accordingly, the level of selection has become the conceptual basis of a third definition, which defines macroevolution as evolution through selection among interspecific variation. Microevolution vs Macroevolution The fact that both micro- and macroevolution (including common descent) are supported by overwhelming evidence remains uncontroversial within the scientific community. However, there has been considerable debate over the past 80 years regarding the causal and explanatory connection between microevolution and macroevolution. The 'Extrapolation' view holds that there is no fundamental difference between the two aside from scale; i.e. macroevolution is merely cumulative microevolution. Hence, the patterns observed at the macroevolutionary scale can be explained by microevolutionary processes over long periods of time. The 'Decoupled' view holds that microevolutionary processes are decoupled from macroevolutionary processes because there are separate macroevolutionary processes that cannot be sufficiently explained by microevolutionary processes alone. "... macroevolutionary processes are underlain by microevolutionary phenomena and are compatible with microevolutionary theories, but macroevolutionary studies require the formulation of autonomous hypotheses and models (which must be tested using macroevolutionary evidence). In this (epistemologically) very important sense, macroevolution is decoupled from microevolution: macroevolution is an autonomous field of evolutionary study." (Francisco J. Ayala, 1983) Many scientists see macroevolution as a field of study rather than as a distinct process of the same kind as microevolution. Thus, macroevolution is concerned with the history of life, and macroevolutionary explanations encompass ecology, paleontology, mass extinctions, plate tectonics, and unique events such as the Cambrian explosion. Within microevolution, the evolutionary process of changing heritable characteristics (e.g. changes in allele frequencies) is described by population genetics, with mechanisms such as mutation, natural selection, and genetic drift. However, the scope of evolution can be expanded to higher scales where different observations are made. Macroevolutionary mechanisms are invoked to explain these. For example, speciation can be discussed in terms of its 'mode', i.e. how speciation occurs. Different modes of speciation include sympatric and allopatric speciation. Additionally, scientists research the 'tempo' of speciation, i.e. the rate at which species change genetically and/or morphologically.
Classically, competing hypotheses for the tempo of speciation include phyletic gradualism and punctuated equilibrium. Lastly, the causes of speciation are also extensively researched. More questions can be asked regarding the evolution of species and higher taxonomic groups (genera, families, orders, etc.), and how these have evolved across geography and vast spans of geological time. Such questions are researched from various fields of science, which makes the study of 'macroevolution' interdisciplinary. For example: How different species are related to each other via common ancestry. This topic is researched in the field of phylogenetics. The rates of evolutionary change across time in the fossil record. Why do some groups experience a lot of change while others remain morphologically stable? Groups in the latter case are often called 'living fossils'. However, this term is criticized for wrongly implying that such species have not evolved. The term 'stabilomorph' has been proposed instead. The impacts and causes of major events in palaeontological history, including mass extinctions and evolutionary diversifications. Prominent examples of mass extinctions are the Permian-Triassic and Cretaceous-Paleogene events. In contrast, famous evolutionary radiations include the Cambrian Explosion and Cretaceous Terrestrial Revolution. Why different species or higher taxonomic groups (even those of similar age) exhibit different survival/extinction rates, species diversity, and/or morphological disparity. The observation of long-term trends in evolution. Evolutionary trends can be passive (resembling diffusion) or driven (directional). A related question is whether these trends are directed in some way, e.g. towards complexity or simplicity. How the distinctive and complex traits that differentiate species and higher taxa from one another have evolved. Examples of this include gene duplication, heterochrony, evo-devo novelty arising from facilitated variation, and constructive neutral evolution. Macroevolutionary processes Speciation According to the modern definition, the evolutionary transition from the ancestral to the daughter species is microevolutionary, because it results from selection (or, more generally, sorting) among varying organisms. However, speciation also has a macroevolutionary aspect, because it produces the interspecific variation that species selection operates on. Another macroevolutionary aspect of speciation is the rate at which it successfully occurs, analogous to reproductive success in microevolution. Speciation is the process in which populations within one species change to the point at which they become reproductively isolated, that is, they can no longer interbreed. However, this classical concept has been challenged, and more recently a phylogenetic or evolutionary species concept has been adopted. Under these concepts, the main criterion for a new species is that it be diagnosable and monophyletic, that is, that it forms a clearly defined lineage. Charles Darwin first recognized that speciation can be extrapolated, so that species not only evolve into new species, but also into new genera, families and other groups of animals. In other words, macroevolution is reducible to microevolution through selection of traits over long periods of time. In addition, some scholars have argued that selection at the species level is important as well. The advent of genome sequencing enabled the discovery of gradual genetic changes both during speciation and across higher taxa.
For instance, the evolution of humans from ancestral primates or other mammals can be traced to numerous individual mutations. Evolution of new organs and tissues One of the main questions in evolutionary biology is how new structures evolve, such as new organs. Macroevolution is often thought to require the evolution of structures that are 'completely new'. However, fundamentally novel structures are not necessary for dramatic evolutionary change. As can be seen in vertebrate evolution, most "new" organs are actually not new—they are simply modifications of previously existing organs. For instance, the evolution of mammal diversity in the past 100 million years has not required any major innovation. All of this diversity can be explained by modification of existing organs, such as the evolution of elephant tusks from incisors. Other examples include wings (modified limbs), feathers (modified reptile scales), lungs (modified swim bladders, such as those found in fish), or even the heart (a muscularized segment of a vein). The same concept applies to the evolution of "novel" tissues. Even fundamental tissues such as bone can evolve from combining existing proteins (collagen) with calcium phosphate (specifically, hydroxyapatite). This probably happened when certain cells that make collagen also accumulated calcium phosphate, producing a proto-bone cell. Molecular macroevolution Microevolution is facilitated by mutations, the vast majority of which have no or very small effects on gene or protein function. For instance, the activity of an enzyme may be slightly changed or the stability of a protein slightly altered. However, occasionally mutations can dramatically change the structure and function of a protein. This may be called "molecular macroevolution". Protein function. There are countless cases in which protein function is dramatically altered by mutations. For instance, a mutation in acetaldehyde dehydrogenase (EC:1.2.1.10) can change it to a 4-hydroxy-2-oxopentanoate pyruvate lyase (EC:4.1.3.39), i.e., a mutation that changes an enzyme from one EC class to another (there are only 7 main classes of enzymes). Another example is the conversion of a yeast galactokinase (Gal1) to a transcription factor (Gal3), which can be achieved by an insertion of only two amino acids. While some mutations may not change the molecular function of a protein significantly, its biological function may be dramatically changed. For instance, most brain receptors recognize specific neurotransmitters, but that specificity can easily be changed by mutations. This has been shown for acetylcholine receptors, which can be changed into serotonin or glycine receptors that have very different functions. Their similar gene structure also indicates that they must have arisen from gene duplications. Protein structure. Although protein structures are highly conserved, sometimes one or a few mutations can dramatically change a protein. For instance, an IgG-binding 4β+α fold can be transformed into an albumin-binding 3-α fold via a single amino-acid mutation. This example also shows that such a transition can happen with neither function nor native structure being completely lost. In other words, even when multiple mutations are required to convert one protein or structure into another, the structure and function are at least partially retained in the intermediary sequences. Similarly, domains can be converted into other domains (and thus other functions). 
For instance, the structures of SH3 folds can evolve into OB folds, which in turn can evolve into CLB folds. Examples Evolutionary faunas A macroevolutionary benchmark study is Sepkoski's work on marine animal diversity through the Phanerozoic. His iconic diagram of the numbers of marine families from the Cambrian to the Recent illustrates the successive expansion and dwindling of three "evolutionary faunas" that were characterized by differences in origination rates and carrying capacities. Long-term ecological changes and major geological events are postulated to have played crucial roles in shaping these evolutionary faunas. Stanley's rule Macroevolution is driven by differences between species in origination and extinction rates. Remarkably, these two factors are generally positively correlated: taxa that typically have high diversification rates also have high extinction rates. This observation was first described by Steven Stanley, who attributed it to a variety of ecological factors. Yet, a positive correlation of origination and extinction rates is also a prediction of the Red Queen hypothesis, which postulates that evolutionary progress (increase in fitness) of any given species causes a decrease in fitness of other species, ultimately driving to extinction those species that do not adapt rapidly enough. High rates of origination must therefore correlate with high rates of extinction. Stanley's rule, which applies to almost all taxa and geologic ages, is therefore an indication of a dominant role of biotic interactions in macroevolution. "Macromutations": Single mutations leading to dramatic change While the vast majority of mutations are inconsequential, some can have a dramatic effect on morphology or other features of an organism. One of the best-studied cases of a single mutation that leads to massive structural change is the Ultrabithorax mutation in fruit flies. The mutation duplicates the wings of a fly so that it resembles a dragonfly, a member of a different order of insects. Evolution of multicellularity The evolution of multicellular organisms is one of the major breakthroughs in evolution. The first step of converting a unicellular organism into a metazoan (a multicellular organism) is to allow cells to attach to each other. This can be achieved by one or a few mutations. In fact, many bacteria form multicellular assemblies, e.g. cyanobacteria or myxobacteria. Another bacterial species, Jeongeupia sacculi, forms well-ordered sheets of cells, which ultimately develop into a bulbous structure. Similarly, unicellular yeast cells can become multicellular through a single mutation in the ACE2 gene, which causes the cells to grow into a branched multicellular form. Evolution of bat wings The wings of bats have the same structural elements (bones) as any other five-fingered mammal (see periodicity in limb development). However, the finger bones in bats are dramatically elongated, so the question is how these bones became so long. It has been shown that certain growth factors, such as bone morphogenetic proteins (specifically Bmp2), are overexpressed, stimulating the elongation of certain bones. Analyses of the bat genome identified the genetic changes that lead to this phenotype, and the effect has been recapitulated in mice: when specific bat DNA carrying these mutations is inserted into the mouse genome, the bones of the mice grow longer. Limb loss in lizards and snakes Snakes evolved from lizards. 
Phylogenetic analysis shows that snakes are actually nested within the phylogenetic tree of lizards, demonstrating that they share a common ancestor. This split happened about 180 million years ago, and several intermediary fossils document the origin of snakes. In fact, limbs have been lost in numerous clades of reptiles, and there are cases of recent limb loss. For instance, the skink genus Lerista has lost limbs in multiple cases, with all possible intermediary steps: there are species with fully developed limbs as well as species with shorter limbs bearing 5, 4, 3, 2, 1 or no toes at all. Human evolution While human evolution from our primate ancestors did not require massive morphological changes, the human brain changed enough to allow consciousness and intelligence. Although this involved relatively minor morphological change, it resulted in dramatic changes to brain function. Thus, macroevolution does not have to be morphological; it can also be functional. Evolution of viviparity in lizards Most lizards are egg-laying and thus need an environment that is warm enough to incubate their eggs. However, some species have evolved viviparity, that is, they give birth to live young, as almost all mammals do. In several clades of lizards, egg-laying (oviparous) species have evolved into live-bearing ones, apparently with very little genetic change. For instance, the European common lizard, Zootoca vivipara, is viviparous throughout most of its range, but oviparous in the extreme southwest portion. That is, within a single species, a radical change in reproductive behavior has happened. Similar cases are known from South American lizards of the genus Liolaemus, which have egg-laying species at lower altitudes but closely related viviparous species at higher altitudes, suggesting that the switch from oviparous to viviparous reproduction does not require many genetic changes. Behavior: Activity pattern in mice Most animals are either active at night or during the day. However, some species have switched their activity pattern from day to night or vice versa. For instance, the African striped mouse (Rhabdomys pumilio) transitioned from the ancestrally nocturnal behavior of its close relatives to a diurnal one. Genome sequencing and transcriptomics revealed that this transition was achieved by modifying genes in the rod phototransduction pathway, among others. Research topics Subjects studied within macroevolution include: Adaptive radiations such as the Cambrian Explosion. Changes in biodiversity through time. Evo-devo (the connection between evolution and developmental biology) Genome evolution, like horizontal gene transfer, genome fusions in endosymbioses, and adaptive changes in genome size. Mass extinctions. Estimating diversification rates, including rates of speciation and extinction. The debate between punctuated equilibrium and gradualism. The role of development in shaping evolution, particularly such topics as heterochrony and phenotypic plasticity. See also Extinction event Interspecific competition Microevolution Molecular evolution Punctuated equilibrium Red Queen hypothesis Speciation Transitional fossil Unit of selection Notes References Further reading What is macroevolution? (pdf) https://onlinelibrary.wiley.com/doi/full/10.1111/pala.12465 External links Introduction to macroevolution Macroevolution as the common descent of all life Macroevolution in the 21st century Macroevolution as an independent discipline. Macroevolution FAQ Evolutionary biology
Ontology
Ontology is the philosophical study of being. As one of the most fundamental concepts, being encompasses all of reality and every entity within it. To articulate the basic structure of being, ontology examines what all entities have in common and how they are divided into fundamental classes, known as categories. An influential distinction is between particular and universal entities. Particulars are unique, non-repeatable entities, like the person Socrates. Universals are general, repeatable entities, like the color green. Another contrast is between concrete objects existing in space and time, like a tree, and abstract objects existing outside space and time, like the number 7. Systems of categories aim to provide a comprehensive inventory of reality, employing categories such as substance, property, relation, state of affairs, and event. Ontologists disagree about which entities exist on the most basic level. Platonic realism asserts that universals have objective existence. Conceptualism says that universals only exist in the mind while nominalism denies their existence. There are similar disputes about mathematical objects, unobservable objects assumed by scientific theories, and moral facts. Materialism says that, fundamentally, there is only matter while dualism asserts that mind and matter are independent principles. According to some ontologists, there are no objective answers to ontological questions but only perspectives shaped by different linguistic practices. Ontology uses diverse methods of inquiry. They include the analysis of concepts and experience, the use of intuitions and thought experiments, and the integration of findings from natural science. Applied ontology employs ontological theories and principles to study entities belonging to a specific area. It is of particular relevance to information and computer science, which develop conceptual frameworks of limited domains. These frameworks are used to store information in a structured way, such as a college database tracking academic activities. Ontology is closely related to metaphysics and relevant to the fields of logic, theology, and anthropology. The origins of ontology lie in the ancient period with speculations about the nature of being and the source of the universe, including ancient Indian, Chinese, and Greek philosophy. In the modern period, philosophers conceived ontology as a distinct academic discipline and coined its name. Definition Ontology is the study of being. It is the branch of philosophy that investigates the nature of existence, the features all entities have in common, and how they are divided into basic categories of being. It aims to discover the foundational building blocks of the world and characterize reality as a whole in its most general aspects. In this regard, ontology contrasts with individual sciences like biology and astronomy, which restrict themselves to a limited domain of entities, such as living entities and celestial phenomena. In some contexts, the term ontology refers not to the general study of being but to a specific ontological theory within this discipline. It can also mean a conceptual scheme or inventory of a particular domain. Ontology is closely related to metaphysics but the exact relation of these two disciplines is disputed. According to a traditionally influential characterization, metaphysics is the study of fundamental reality in the widest sense while ontology is the subdiscipline of metaphysics that restricts itself to the most general features of reality. 
This view sees ontology as general metaphysics, which is to be distinguished from special metaphysics focused on more specific subject matters, like God, mind, and value. A different conception understands ontology as a preliminary discipline that provides a complete inventory of reality while metaphysics examines the features and structure of the entities in this inventory. Another conception says that metaphysics is about real being while ontology examines possible being or the concept of being. It is not universally accepted that there is a clear boundary between metaphysics and ontology. Some philosophers use both terms as synonyms. The word ontology has its roots in the ancient Greek terms ὄν (on, meaning 'being') and λόγος (logos, meaning 'study' or 'discourse'), literally, 'the study of being'. The ancient Greeks did not use the term ontology, which was coined by philosophers in the 17th century. Basic concepts Being Being, or existence, is the main topic of ontology. It is one of the most general and fundamental concepts, encompassing the whole of reality and every entity within it. In its widest sense, being only contrasts with non-being or nothingness. It is controversial whether a more substantial analysis of the concept or meaning of being is possible. One proposal understands being as a property possessed by every entity. Critics of this view argue that an entity without being cannot have any properties, meaning that being cannot be a property since properties presuppose being. A different suggestion says that all beings share a set of essential features. According to the Eleatic principle, "power is the mark of being", meaning that only entities with a causal influence truly exist. According to a controversial proposal by philosopher George Berkeley, all existence is mental, expressed in his slogan "to be is to be perceived". Depending on the context, the term being is sometimes used with a more limited meaning to refer only to certain aspects of reality. In one sense, being is unchanging and permanent and is distinguished from becoming, which implies change. Another contrast is between being, as what truly exists, and phenomena, as what merely appears to exist. In some contexts, being expresses the fact that something is while essence expresses its qualities or what it is like. Ontologists often divide being into fundamental classes or highest kinds, called categories of being. Proposed categories include substance, property, relation, state of affairs, and event. They can be used to provide systems of categories, which offer a comprehensive inventory of reality in which every entity belongs to exactly one category. Some philosophers, like Aristotle, say that entities belonging to different categories exist in distinct ways. Others, like John Duns Scotus, insist that there are no differences in the mode of being, meaning that everything exists in the same way. A related dispute is whether some entities have a higher degree of being than others, an idea already found in Plato's work. The more common view in contemporary philosophy is that a thing either exists or does not, with no intermediary states or degrees. The relation between being and non-being is a frequent topic in ontology. Influential issues include the status of nonexistent objects and why there is something rather than nothing. Particulars and universals A central distinction in ontology is between particular and universal entities. Particulars, also called individuals, are unique, non-repeatable entities, like Socrates, the Taj Mahal, and Mars. 
Universals are general, repeatable entities, like the color green, the form circularity, and the virtue courage. Universals express aspects or features shared by particulars. For example, Mount Everest and Mount Fuji are particulars characterized by the universal mountain. Universals can take the form of properties or relations. Properties express what entities are like. They are features or qualities possessed by an entity. Properties are often divided into essential and accidental properties. A property is essential if an entity must have it; it is accidental if the entity can exist without it. For instance, having three sides is an essential property of a triangle while being red is an accidental property. Relations are ways how two or more entities stand to one another. Unlike properties, they apply to several entities and characterize them as a group. For example, being a city is a property while being east of is a relation, as in "Kathmandu is a city" and "Kathmandu is east of New Delhi". Relations are often divided into internal and external relations. Internal relations depend only on the properties of the objects they connect, like the relation of resemblance. External relations express characteristics that go beyond what the connected objects are like, such as spatial relations. Substances play an important role in the history of ontology as the particular entities that underlie and support properties and relations. They are often considered the fundamental building blocks of reality that can exist on their own, while entities like properties and relations cannot exist without substances. Substances persist through changes as they acquire or lose properties. For example, when a tomato ripens, it loses the property green and acquires the property red. States of affairs are complex particular entities that have several other entities as their components. The state of affairs "Socrates is wise" has two components: the individual Socrates and the property wise. States of affairs that correspond to reality are called facts. Facts are truthmakers of statements, meaning that whether a statement is true or false depends on the underlying facts. Events are particular entities that occur in time, like the fall of the Berlin Wall and the first moon landing. They usually involve some kind of change, like the lawn becoming dry. In some cases, no change occurs, like the lawn staying wet. Complex events, also called processes, are composed of a sequence of events. Concrete and abstract objects Concrete objects are entities that exist in space and time, such as a tree, a car, and a planet. They have causal powers and can affect each other, like when a car hits a tree and both are deformed in the process. Abstract objects, by contrast, are outside space and time, such as the number 7 and the set of integers. They lack causal powers and do not undergo changes. It is controversial whether or in what sense abstract objects exist and how people can know about them. Concrete objects encountered in everyday life are complex entities composed of various parts. For example, a book is made up of two covers and pages between them. Each of these components is itself constituted of smaller parts, like molecules, atoms, and elementary particles. Mereology studies the relation between parts and wholes. One position in mereology says that every collection of entities forms a whole. 
According to a different view, this is only the case for collections that fulfill certain requirements, for instance, that the entities in the collection touch one another. The problem of material constitution asks whether or in what sense a whole should be considered a new object in addition to the collection of parts composing it. Abstract objects are closely related to fictional and intentional objects. Fictional objects are entities invented in works of fiction. They can be things, like the One Ring in J. R. R. Tolkien's book series The Lord of the Rings, and people, like the Monkey King in the novel Journey to the West. Some philosophers say that fictional objects are one type of abstract object, existing outside space and time. Others understand them as artifacts that are created as the works of fiction are written. Intentional objects are entities that exist within mental states, like perceptions, beliefs, and desires. For example, if a person thinks about the Loch Ness Monster then the Loch Ness Monster is the intentional object of this thought. People can think about existing and non-existing objects, making it difficult to assess the ontological status of intentional objects. Other concepts Ontological dependence is a relation between entities. An entity depends ontologically on another entity if the first entity cannot exist without the second entity. For instance, the surface of an apple cannot exist without the apple. An entity is ontologically independent if it does not depend on anything else, meaning that it is fundamental and can exist on its own. Ontological dependence plays a central role in ontology and its attempt to describe reality on its most fundamental level. It is closely related to metaphysical grounding, which is the relation between a ground and facts it explains. An ontological commitment of a person or a theory is an entity that exists according to them. For instance, a person who believes in God has an ontological commitment to God. Ontological commitments can be used to analyze which ontologies people explicitly defend or implicitly assume. They play a central role in contemporary metaphysics when trying to decide between competing theories. For example, the Quine–Putnam indispensability argument defends mathematical Platonism, asserting that numbers exist because the best scientific theories are ontologically committed to numbers. Possibility and necessity are further topics in ontology. Possibility describes what can be the case, as in "it is possible that extraterrestrial life exists". Necessity describes what must be the case, as in "it is necessary that three plus two equals five". Possibility and necessity contrast with actuality, which describes what is the case, as in "Doha is the capital of Qatar". Ontologists often use the concept of possible worlds to analyze possibility and necessity. A possible world is a complete and consistent way how things could have been. For example, Haruki Murakami was born in 1949 in the actual world but there are possible worlds in which he was born at a different date. Using this idea, possible world semantics says that a sentence is possibly true if it is true in at least one possible world. A sentence is necessarily true if it is true in all possible worlds. In ontology, identity means that two things are the same. Philosophers distinguish between qualitative and numerical identity. Two entities are qualitatively identical if they have exactly the same features, such as perfect identical twins. 
This is also called exact similarity and indiscernibility. Numerical identity, by contrast, means that there is only a single entity. For example, if Fatima is the mother of Leila and Hugo then Leila's mother is numerically identical to Hugo's mother. Another distinction is between synchronic and diachronic identity. Synchronic identity relates an entity to itself at the same time. Diachronic identity relates an entity to itself at different times, as in "the woman who bore Leila three years ago is the same woman who bore Hugo this year". Branches There are different and sometimes overlapping ways to divide ontology into branches. Pure ontology focuses on the most abstract topics associated with the concept and nature of being. It is not restricted to a specific domain of entities and studies existence and the structure of reality as a whole. Pure ontology contrasts with applied ontology, also called domain ontology. Applied ontology examines the application of ontological theories and principles to specific disciplines and domains, often in the field of science. It considers ontological problems in regard to specific entities such as matter, mind, numbers, God, and cultural artifacts. Social ontology, a major subfield of applied ontology, studies social kinds, like money, gender, society, and language. It aims to determine the nature and essential features of these concepts while also examining their mode of existence. According to a common view, social kinds are useful constructions to describe the complexities of social life. This means that they are not pure fictions but, at the same time, lack the objective or mind-independent reality of natural phenomena like elementary particles, lions, and stars. In the fields of computer science, information science, and knowledge representation, applied ontology is interested in the development of formal frameworks to encode and store information about a limited domain of entities in a structured way. A related application in genetics is Gene Ontology, which is a comprehensive framework for the standardized representation of gene-related information across species and databases. Formal ontology is the study of objects in general while focusing on their abstract structures and features. It divides objects into different categories based on the forms they exemplify. Formal ontologists often rely on the tools of formal logic to express their findings in an abstract and general manner. Formal ontology contrasts with material ontology, which distinguishes between different areas of objects and examines the features characteristic of a specific area. Examples are ideal spatial beings in the area of geometry and living beings in the area of biology. Descriptive ontology aims to articulate the conceptual scheme underlying how people ordinarily think about the world. Prescriptive ontology departs from common conceptions of the structure of reality and seeks to formulate a new and better conceptualization. Another contrast is between analytic and speculative ontology. Analytic ontology examines the types and categories of being to determine what kinds of things could exist and what features they would have. Speculative ontology aims to determine which entities actually exist, for example, whether there are numbers or whether time is an illusion. Metaontology studies the underlying concepts, assumptions, and methods of ontology. 
Unlike other forms of ontology, it does not ask "what exists" but "what does it mean for something to exist" and "how can people determine what exists". It is closely related to fundamental ontology, an approach developed by philosopher Martin Heidegger that seeks to uncover the meaning of being. Schools of thought Realism and anti-realism The term realism is used for various theories that affirm that some kind of phenomenon is real or has mind-independent existence. Ontological realism is the view that there are objective facts about what exists and what the nature and categories of being are. Ontological realists do not make claims about what those facts are, for example, whether elementary particles exist. They merely state that there are mind-independent facts that determine which ontological theories are true. This idea is denied by ontological anti-realists, also called ontological deflationists, who say that there are no substantive facts one way or the other. According to philosopher Rudolf Carnap, for example, ontological statements are relative to language and depend on the ontological framework of the speaker. This means that there are no framework-independent ontological facts since different frameworks provide different views while there is no objectively right or wrong framework. In a more narrow sense, realism refers to the existence of certain types of entities. Realists about universals say that universals have mind-independent existence. According to Platonic realists, universals exist not only independent of the mind but also independent of particular objects that exemplify them. This means that the universal red could exist by itself even if there were no red objects in the world. Aristotelian realism, also called moderate realism, rejects this idea and says that universals only exist as long as there are objects that exemplify them. Conceptualism, by contrast, is a form of anti-realism, stating that universals only exist in the mind as concepts that people use to understand and categorize the world. Nominalists defend a strong form of anti-realism by saying that universals have no existence. This means that the world is entirely composed of particular objects. Mathematical realism, a closely related view in the philosophy of mathematics, says that mathematical facts exist independently of human language, thought, and practices and are discovered rather than invented. According to mathematical Platonism, this is the case because of the existence of mathematical objects, like numbers and sets. Mathematical Platonists say that mathematical objects are as real as physical objects, like atoms and stars, even though they are not accessible to empirical observation. Influential forms of mathematical anti-realism include conventionalism, which says that mathematical theories are trivially true simply by how mathematical terms are defined, and game formalism, which understands mathematics not as a theory of reality but as a game governed by rules of string manipulation. Modal realism is the theory that in addition to the actual world, there are countless possible worlds as real and concrete as the actual world. The primary difference is that the actual world is inhabited by us while other possible worlds are inhabited by our counterparts. Modal anti-realists reject this view and argue that possible worlds do not have concrete reality but exist in a different sense, for example, as abstract or fictional objects. 
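The possible-worlds analysis of modality used earlier (a sentence is possibly true if it is true in at least one possible world, and necessarily true if it is true in all of them) can be given a toy formalization. The sketch below is only an illustration: the three hypothetical 'worlds' and the propositions assigned to them are invented for the example and are not drawn from any ontological theory discussed here.

# Each possible world is modeled, very crudely, as the set of propositions true in it.
worlds = [
    {"Doha is the capital of Qatar", "extraterrestrial life exists"},
    {"Doha is the capital of Qatar"},
    {"Doha is the capital of Qatar", "Murakami was born in 1948"},
]

def possibly(proposition):
    # True if the proposition holds in at least one possible world.
    return any(proposition in world for world in worlds)

def necessarily(proposition):
    # True if the proposition holds in every possible world.
    return all(proposition in world for world in worlds)

print(possibly("extraterrestrial life exists"))     # True: it holds in one world
print(necessarily("Doha is the capital of Qatar"))  # True: it holds in all worlds

Whether such worlds are concrete entities (modal realism) or merely abstract or fictional objects (modal anti-realism) is precisely the dispute described above; the sketch stays neutral on that question.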
Scientific realists say that the scientific description of the world is an accurate representation of reality. It is of particular relevance in regard to things that cannot be directly observed by humans but are assumed to exist by scientific theories, like electrons, forces, and laws of nature. Scientific anti-realism says that scientific theories are not descriptions of reality but instruments to predict observations and the outcomes of experiments. Moral realists claim that there exist mind-independent moral facts. According to them, there are objective principles that determine which behavior is morally right. Moral anti-realists either claim that moral principles are subjective and differ between persons and cultures, a position known as moral relativism, or outright deny the existence of moral facts, a view referred to as moral nihilism. By number of categories Monocategorical theories say that there is only one fundamental category, meaning that every single entity belongs to the same universal class. For example, some forms of nominalism state that only concrete particulars exist while some forms of bundle theory state that only properties exist. Polycategorical theories, by contrast, hold that there is more than one basic category, meaning that entities are divided into two or more fundamental classes. They take the form of systems of categories, which list the highest genera of being to provide a comprehensive inventory of everything. The closely related discussion between monism and dualism is about the most fundamental types that make up reality. According to monism, there is only one kind of thing or substance on the most basic level. Materialism is an influential monist view; it says that everything is material. This means that mental phenomena, such as beliefs, emotions, and consciousness, either do not exist or exist as aspects of matter, like brain states. Idealists take the converse perspective, arguing that everything is mental. They may understand physical phenomena, like rocks, trees, and planets, as ideas or perceptions of conscious minds. Neutral monism occupies a middle ground by saying that both mind and matter are derivative phenomena. Dualists state that mind and matter exist as independent principles, either as distinct substances or different types of properties. In a slightly different sense, monism contrasts with pluralism as a view not about the number of basic types but the number of entities. In this sense, monism is the controversial position that only a single all-encompassing entity exists in all of reality. Pluralism is more commonly accepted and says that several distinct entities exist. By fundamental categories The historically influential substance-attribute ontology is a polycategorical theory. It says that reality is at its most fundamental level made up of unanalyzable substances that are characterized by universals, such as the properties an individual substance has or relations that exist between substances. The closely related substratum theory says that each concrete object is made up of properties and a substratum. The difference is that the substratum is not characterized by properties: it is a featureless or bare particular that merely supports the properties. Various alternative ontological theories have been proposed that deny the role of substances as the foundational building blocks of reality. Stuff ontologies say that the world is not populated by distinct entities but by continuous stuff that fills space. 
This stuff may take various forms and is often conceived as infinitely divisible. According to process ontology, processes or events are the fundamental entities. This view usually emphasizes that nothing in reality is static, meaning that being is dynamic and characterized by constant change. Bundle theories state that there are no regular objects but only bundles of co-present properties. For example, a lemon may be understood as a bundle that includes the properties yellow, sour, and round. According to traditional bundle theory, the bundled properties are universals, meaning that the same property may belong to several different bundles. According to trope bundle theory, properties are particular entities that belong to a single bundle. Some ontologies focus not on distinct objects but on interrelatedness. According to relationalism, all of reality is relational at its most fundamental level. Ontic structural realism agrees with this basic idea and focuses on how these relations form complex structures. Some structural realists state that there is nothing but relations, meaning that individual objects do not exist. Others say that individual objects exist but depend on the structures in which they participate. Fact ontologies present a different approach by focusing on how entities belonging to different categories come together to constitute the world. Facts, also known as states of affairs, are complex entities; for example, the fact that the Earth is a planet consists of the particular object the Earth and the property being a planet. Fact ontologies state that facts are the fundamental constituents of reality, meaning that objects, properties, and relations cannot exist on their own and only form part of reality to the extent that they participate in facts. In the history of philosophy, various ontological theories based on several fundamental categories have been proposed. One of the first theories of categories was suggested by Aristotle, whose system includes ten categories: substance, quantity, quality, relation, place, date, posture, state, action, and passion. An early influential system of categories in Indian philosophy, first proposed in the Vaisheshika school, distinguishes between six categories: substance, quality, motion, universal, individuator, and inherence. Immanuel Kant's transcendental idealism includes a system of twelve categories, which Kant saw as pure concepts of understanding. They are subdivided into four classes: quantity, quality, relation, and modality. In more recent philosophy, theories of categories were developed by C. S. Peirce, Edmund Husserl, Samuel Alexander, Roderick Chisholm, and E. J. Lowe. Others The dispute between constituent and relational ontologies concerns the internal structure of concrete particular objects. Constituent ontologies say that objects have an internal structure with properties as their component parts. Bundle theories are an example of this position: they state that objects are bundles of properties. This view is rejected by relational ontologies, which say that objects have no internal structure, meaning that properties do not inhere in them but are externally related to them. According to one analogy, objects are like pin-cushions and properties are pins that can be stuck to objects and removed again without becoming a real part of objects. Relational ontologies are common in certain forms of nominalism that reject the existence of universal properties. 
Hierarchical ontologies state that the world is organized into levels. Entities on all levels are real but low-level entities are more fundamental than high-level entities. This means that they can exist without high-level entities while high-level entities cannot exist without low-level entities. One hierarchical ontology says that elementary particles are more fundamental than the macroscopic objects they compose, like chairs and tables. Other hierarchical theories assert that substances are more fundamental than their properties and that nature is more fundamental than culture. Flat ontologies, by contrast, deny that any entity has a privileged status, meaning that all entities exist on the same level. For them, the main question is only whether something exists rather than identifying the level at which it exists. The ontological theories of endurantism and perdurantism aim to explain how material objects persist through time. Endurantism is the view that material objects are three-dimensional entities that travel through time while being fully present in each moment. They remain the same even when they gain or lose properties as they change. Perdurantism is the view that material objects are four-dimensional entities that extend not just through space but also through time. This means that they are composed of temporal parts and, at any moment, only one part of them is present but not the others. According to perdurantists, change means that an earlier part exhibits different qualities than a later part. When a tree loses its leaves, for instance, there is an earlier temporal part with leaves and a later temporal part without leaves. Differential ontology is a poststructuralist approach interested in the relation between the concepts of identity and difference. It says that traditional ontology sees identity as the more basic term by first characterizing things in terms of their essential features and then elaborating differences based on this conception. Differential ontologists, by contrast, privilege difference and say that the identity of a thing is a secondary determination that depends on how this thing differs from other things. Object-oriented ontology belongs to the school of speculative realism and examines the nature and role of objects. It sees objects as the fundamental building blocks of reality. As a flat ontology, it denies that some entities have a more fundamental form of existence than others. It uses this idea to argue that objects exist independently of human thought and perception. Methods Methods of ontology are ways of conducting ontological inquiry and deciding between competing theories. There is no single standard method; the diverse approaches are studied by metaontology. Conceptual analysis is a method to understand ontological concepts and clarify their meaning. It proceeds by analyzing their component parts and the necessary and sufficient conditions under which a concept applies to an entity. This information can help ontologists decide whether a certain type of entity, such as numbers, exists. Eidetic variation is a related method in phenomenological ontology that aims to identify the essential features of different types of objects. Phenomenologists start by imagining an example of the investigated type. They proceed by varying the imagined features to determine which ones cannot be changed, meaning they are essential. The transcendental method begins with a simple observation that a certain entity exists. 
In the following step, it studies the ontological repercussions of this observation by examining how it is possible or which conditions are required for this entity to exist. Another approach is based on intuitions in the form of non-inferential impressions about the correctness of general principles. These principles can be used as the foundation on which an ontological system is built and expanded using deductive reasoning. A further intuition-based method relies on thought experiments to evoke new intuitions. This happens by imagining a situation relevant to an ontological issue and then employing counterfactual thinking to assess the consequences of this situation. For example, some ontologists examine the relation between mind and matter by imagining creatures identical to humans but without consciousness. Naturalistic methods rely on the insights of the natural sciences to determine what exists. According to an influential approach by Willard Van Orman Quine, ontology can be conducted by analyzing the ontological commitments of scientific theories. This method is based on the idea that scientific theories provide the most reliable description of reality and that their power can be harnessed by investigating the ontological assumptions underlying them. Principles of theory choice offer guidelines for assessing the advantages and disadvantages of ontological theories rather than guiding their construction. The principle of Ockham's Razor says that simple theories are preferable. A theory can be simple in different respects, for example, by using very few basic types or by describing the world with a small number of fundamental entities. Ontologists are also interested in the explanatory power of theories and give preference to theories that can explain many observations. A further factor is how close a theory is to common sense. Some ontologists use this principle as an argument against theories that are very different from how ordinary people think about the issue. In applied ontology, ontological engineering is the process of creating and refining conceptual models of specific domains. Developing a new ontology from scratch involves various preparatory steps, such as delineating the scope of the domain one intends to model and specifying the purpose and use cases of the ontology. Once the foundational concepts within the area have been identified, ontology engineers proceed by defining them and characterizing the relations between them. This is usually done in a formal language to ensure precision and, in some cases, automatic computability. In the following review phase, the validity of the ontology is assessed using test data. Various more specific instructions for how to carry out the different steps have been suggested. They include the Cyc method, Grüninger and Fox's methodology, and so-called METHONTOLOGY. In some cases, it is feasible to adapt a pre-existing ontology to fit a specific domain and purpose rather than creating a new one from scratch. Related fields Ontology overlaps with many disciplines, including logic, the study of correct reasoning. Ontologists often employ logical systems to express their insights, specifically in the field of formal ontology. Of particular interest to them is the existential quantifier, which is used to express what exists. In first-order logic, for example, the formula ∃x Dog(x) states that dogs exist. Some philosophers study ontology by examining the structure of thought and language, saying that they reflect the structure of being. 
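The steps of ontological engineering described above, and the use of the existential quantifier to express what exists, can be made concrete with a small sketch. The class names, relations, and individuals below form a toy college ontology echoing the database example from the introduction; they are invented for illustration and do not follow any of the named methodologies such as the Cyc method or METHONTOLOGY.

# A toy domain ontology in the spirit of knowledge-representation formats such as RDF triples:
# a small class hierarchy and instance data for a hypothetical college domain.
subclass_of = {("Student", "Person"), ("Lecturer", "Person")}
instances = {("alice", "Student"), ("logic101", "Course")}   # individuals and their classes
facts = {("alice", "enrolled_in", "logic101")}               # relations between individuals

def is_a(individual, class_name):
    # Direct classification plus one step of subclass inference (every Student is a Person).
    direct = {cls for ind, cls in instances if ind == individual}
    inferred = {parent for cls in direct for sub, parent in subclass_of if sub == cls}
    return class_name in direct or class_name in inferred

def exists(class_name):
    # A crude analogue of the existential quantifier: is there some x such that x is a class_name?
    return any(is_a(ind, class_name) for ind, _ in instances)

print(exists("Person"))                               # True: alice is a Student, hence a Person
print(exists("Lecturer"))                             # False: no lecturer has been asserted
print(("alice", "enrolled_in", "logic101") in facts)  # True: the asserted relation holds

The exists check mirrors the first-order formula ∃x Course(x) discussed above; in Quine's terms, the small theory encoded by these triples is ontologically committed to students and courses but not, so far, to lecturers.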
Doubts about the accuracy of natural language have led some ontologists to seek a new formal language, termed ontologese, for a better representation of the fundamental structure of reality. Ontologies are often used in information science to provide a conceptual scheme or inventory of a specific domain, making it possible to classify objects and formally represent information about them. This is of specific interest to computer science, which builds databases to store this information and defines computational processes to automatically transform and use it. For instance, to encode and store information about clients and employees in a database, an organization may use an ontology with categories such as person, company, address, and name. In some cases, it is necessary to exchange information belonging to different domains or to integrate databases using distinct ontologies. This can be achieved with the help of upper ontologies, which are not limited to one specific domain. They use general categories that apply to most or all domains, like Suggested Upper Merged Ontology and Basic Formal Ontology. Similar applications of ontology are found in various fields seeking to manage extensive information within a structured framework. Protein Ontology is a formal framework for the standardized representation of protein-related entities and their relationships. Gene Ontology and Sequence Ontology serve a similar purpose in the field of genetics. Environment Ontology is a knowledge representation focused on ecosystems and environmental processes. Friend of a Friend provides a conceptual framework to represent relations between people and their interests and activities. The topic of ontology has received increased attention in anthropology since the 1990s, sometimes termed the "ontological turn". This type of inquiry is focused on how people from different cultures experience and understand the nature of being. Specific interest has been given to the ontological outlook of Indigenous people and how it differs from a Western perspective. As an example of this contrast, it has been argued that various indigenous communities ascribe intentionality to non-human entities, like plants, forests, or rivers. This outlook is known as animism and is also found in Native American ontologies, which emphasize the interconnectedness of all living entities and the importance of balance and harmony with nature. Ontology is closely related to theology and its interest in the existence of God as an ultimate entity. The ontological argument, first proposed by Anselm of Canterbury, attempts to prove the existence of the divine. It defines God as the greatest conceivable being. From this definition it concludes that God must exist since God would not be the greatest conceivable being if God lacked existence. Another overlap in the two disciplines is found in ontological theories that use God or an ultimate being as the foundational principle of reality. Heidegger criticized this approach, terming it ontotheology. History The roots of ontology in ancient philosophy are speculations about the nature of being and the source of the universe. Discussions of the essence of reality are found in the Upanishads, ancient Indian scriptures dating from as early as 700 BCE. They say that the universe has a divine foundation and discuss in what sense ultimate reality is one or many. 
Samkhya, the first orthodox school of Indian philosophy, formulated an atheist dualist ontology based on the Upanishads, identifying pure consciousness and matter as its two foundational principles. The later Vaisheshika school proposed a comprehensive system of categories. In ancient China, Laozi's (6th century BCE) Taoism examines the underlying order of the universe, known as Tao, and how this order is shaped by the interaction of two basic forces, yin and yang. The philosophical movement of Xuanxue emerged in the 3rd century CE and explored the relation between being and non-being. Starting in the 6th century BCE, Presocratic philosophers in ancient Greece aimed to provide rational explanations of the universe. They suggested that a first principle, such as water or fire, is the primal source of all things. Parmenides (c. 515–450 BCE) is sometimes considered the founder of ontology because of his explicit discussion of the concepts of being and non-being. Inspired by Presocratic philosophy, Plato (427–347 BCE) developed his theory of forms. It distinguishes between unchangeable perfect forms and matter, which has a lower degree of existence and imitates the forms. Aristotle (384–322 BCE) suggested an elaborate system of categories that introduced the concept of substance as the primary kind of being. The school of Neoplatonism arose in the 3rd century CE and proposed an ineffable source of everything, called the One, which is more basic than being itself. The problem of universals was an influential topic in medieval ontology. Boethius (477–524 CE) suggested that universals can exist not only in matter but also in the mind. This view inspired Peter Abelard (1079–1142 CE), who proposed that universals exist only in the mind. Thomas Aquinas (1224–1274 CE) developed and refined fundamental ontological distinctions, such as the contrast between existence and essence, between substance and accidents, and between matter and form. He also discussed the transcendentals, which are the most general properties or modes of being. John Duns Scotus (1266–1308) argued that all entities, including God, exist in the same way and that each entity has a unique essence, called haecceity. William of Ockham (c. 1287–1347 CE) proposed that one can decide between competing ontological theories by assessing which one uses the smallest number of elements, a principle known as Ockham's razor. In Arabic-Persian philosophy, Avicenna (980–1037 CE) combined ontology with theology. He identified God as a necessary being that is the source of everything else, which only has contingent existence. In 8th-century Indian philosophy, the school of Advaita Vedanta emerged. It says that only a single all-encompassing entity exists, stating that the impression of a plurality of distinct entities is an illusion. Starting in the 13th century CE, the Navya-Nyāya school built on Vaisheshika ontology with a particular focus on the problem of non-existence and negation. 9th-century China saw the emergence of Neo-Confucianism, which developed the idea that a rational principle, known as li, is the ground of being and order of the cosmos. René Descartes (1596–1650) formulated a dualist ontology at the beginning of the modern period. It distinguishes between mind and matter as distinct substances that causally interact. Rejecting Descartes's dualism, Baruch Spinoza (1632–1677) proposed a monist ontology according to which there is only a single entity that is identical to God and nature. 
Gottfried Wilhelm Leibniz (1646–1716), by contrast, said that the universe is made up of many simple substances, which are synchronized but do not interact with one another. John Locke (1632–1704) proposed his substratum theory, which says that each object has a featureless substratum that supports the object's properties. Christian Wolff (1679–1754) was influential in establishing ontology as a distinct discipline, delimiting its scope from other forms of metaphysical inquiry. George Berkeley (1685–1753) developed an idealist ontology according to which material objects are ideas perceived by minds. Immanuel Kant (1724–1804) rejected the idea that humans can have direct knowledge of independently existing things and their nature, limiting knowledge to the field of appearances. For Kant, ontology does not study external things but provides a system of pure concepts of understanding. Influenced by Kant's philosophy, Georg Wilhelm Friedrich Hegel (1770–1831) linked ontology and logic. He said that being and thought are identical and examined their foundational structures. Arthur Schopenhauer (1788–1860) rejected Hegel's philosophy and proposed that the world is an expression of a blind and irrational will. Francis Herbert Bradley (1846–1924) saw absolute spirit as the ultimate and all-encompassing reality while denying that there are any external relations. At the beginning of the 20th century, Edmund Husserl (1859–1938) developed phenomenology and employed its method, the description of experience, to address ontological problems. This idea inspired his student Martin Heidegger (1889–1976) to clarify the meaning of being by exploring the mode of human existence. Jean-Paul Sartre responded to Heidegger's philosophy by examining the relation between being and nothingness from the perspective of human existence, freedom, and consciousness. Based on the phenomenological method, Nicolai Hartmann (1882–1950) developed a complex hierarchical ontology that divides reality into four levels: inanimate, biological, psychological, and spiritual. Alexius Meinong (1853–1920) articulated a controversial ontological theory that includes nonexistent objects as part of being. Arguing against this theory, Bertrand Russell (1872–1970) formulated a fact ontology known as logical atomism. This idea was further refined by the early Ludwig Wittgenstein (1889–1951) and inspired D. M. Armstrong's (1926–2014) ontology. Alfred North Whitehead (1861–1947), by contrast, developed a process ontology. Rudolf Carnap (1891–1970) questioned the objectivity of ontological theories by claiming that what exists depends on one's linguistic framework. He had a strong influence on Willard Van Orman Quine (1908–2000), who analyzed the ontological commitments of scientific theories to solve ontological problems. Quine's student David Lewis (1941–2001) formulated the position of modal realism, which says that possible worlds are as real and concrete as the actual world. Since the end of the 20th century, interest in applied ontology has risen in computer and information science with the development of conceptual frameworks for specific domains. See also References Notes Citations Sources External links
Cell physiology
Cell physiology is the biological study of the activities that take place in a cell to keep it alive. The term physiology refers to normal functions in a living organism. Animal cells, plant cells and microorganism cells show similarities in their functions even though they vary in structure. General characteristics There are two types of cells: prokaryotes and eukaryotes. Prokaryotes were the first of the two to develop and do not have a self-contained nucleus. Their mechanisms are simpler than those of the later-evolved eukaryotes, which contain a nucleus that envelops the cell's DNA and some organelles. Prokaryotes Prokaryotes have DNA located in an area called the nucleoid, which is not separated from other parts of the cell by a membrane. There are two domains of prokaryotes: bacteria and archaea. Prokaryotes have fewer organelles than eukaryotes. Both have plasma membranes and ribosomes (structures that synthesize proteins and float free in the cytoplasm). Two unique characteristics of prokaryotes are fimbriae (finger-like projections on the surface of a cell) and flagella (threadlike structures that aid movement). Eukaryotes Eukaryotes have a nucleus where DNA is contained. They are usually larger than prokaryotes and contain many more organelles. The nucleus, the feature of a eukaryote that distinguishes it from a prokaryote, contains a nuclear envelope, nucleolus and chromatin. In the cytoplasm, the endoplasmic reticulum (ER) synthesizes membranes and performs other metabolic activities. There are two types: rough ER (containing ribosomes) and smooth ER (lacking ribosomes). The Golgi apparatus consists of multiple membranous sacs, responsible for manufacturing and shipping out materials such as proteins. Lysosomes are structures that use enzymes to break down substances taken in by phagocytosis, a form of endocytosis. In the mitochondria, metabolic processes such as cellular respiration occur. The cytoskeleton is made of fibers that support the structure of the cell and help the cell move. Physiological processes There are different ways through which cells can transport substances across the cell membrane. The two main pathways are passive transport and active transport. Passive transport is more direct and does not require the use of the cell's energy. It relies on a concentration gradient, with substances moving from areas of high concentration to areas of low concentration. Active transport uses adenosine triphosphate (ATP) to transport a substance that moves against its concentration gradient. Movement of proteins The pathway for proteins to move in cells starts at the ER. Lipids and proteins are synthesized in the ER, and carbohydrates are added to make glycoproteins. Glycoproteins undergo further modification in the Golgi apparatus, becoming glycolipids. Both glycoproteins and glycolipids are transported in vesicles to the plasma membrane. The cell releases secretory proteins through a process known as exocytosis. Transport of ions Ions travel across cell membranes through channels, pumps or transporters. In channels, they move down an electrochemical gradient to produce electrical signals. Pumps maintain electrochemical gradients. The main type of pump is the Na/K pump. It moves 3 sodium ions out of a cell and 2 potassium ions into a cell. The process converts one ATP molecule to adenosine diphosphate (ADP) and an inorganic phosphate. In a transporter, ions use more than one gradient to produce electrical signals. 
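The electrochemical gradients that drive ion movement through channels, and that pumps such as the Na/K pump maintain, can be quantified with the standard Nernst equation, which is not discussed in the text above but is conventional background. The ion concentrations in the sketch below are typical textbook values for a mammalian cell, assumed here only for illustration.

import math

def nernst_potential(conc_out_mM, conc_in_mM, valence, temp_K=310.0):
    # E = (R*T)/(z*F) * ln([ion]_out / [ion]_in), in volts.
    R = 8.314       # gas constant, J/(mol*K)
    F = 96485.0     # Faraday constant, C/mol
    return (R * temp_K) / (valence * F) * math.log(conc_out_mM / conc_in_mM)

# Assumed typical K+ concentrations: about 5 mM outside and 140 mM inside the cell.
e_k = nernst_potential(conc_out_mM=5.0, conc_in_mM=140.0, valence=1)
print(f"K+ equilibrium potential: {e_k * 1000:.0f} mV")  # roughly -89 mV at 37 C

A potassium channel lets K+ run down this gradient, while each cycle of the Na/K pump exports three Na+ and imports two K+ at the cost of one ATP, keeping both gradients from dissipating.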
Endocytosis in animal cells Endocytosis is a form of active transport where a cell takes in molecules, using the plasma membrane, and packages them into vesicles. Phagocytosis In phagocytosis, a cell surrounds particles including food particles through an extension of the pseudopods, which are located on the plasma membrane. The pseudopods then package the particles in a food vacuole. The lysosome, which contains hydrolytic enzymes, then fuses with the food vacuole. Hydrolytic enzymes, also known as digestive enzymes, then digest the particles within the food vacuole. Pinocytosis In pinocytosis, a cell takes in ("gulps") extracellular fluid into vesicles, which are formed when plasma membrane surrounds the fluid. The cell can take in any molecule or solute through this process. Receptor-mediated endocytosis Receptor-mediated endocytosis is a form of pinocytosis where a cell takes in specific molecules or solutes. Proteins with receptor sites are located on the plasma membrane, binding to specific solutes. The receptor proteins that are attached to the specific solutes go inside coated pits, forming a vesicle. The vesicles then surround the receptors that are attached to the specific solutes, releasing their molecules. Receptor proteins are recycled back to the plasma membrane by the same vesicle. References External links Overview at Medical College of Georgia (archived) Cell biology Physiology
Taphonomy
Taphonomy is the study of how organisms decay and become fossilized or preserved in the paleontological record. The term taphonomy (from Greek , 'burial' and , 'law') was introduced to paleontology in 1940 by Soviet scientist Ivan Efremov to describe the study of the transition of remains, parts, or products of organisms from the biosphere to the lithosphere. The term taphomorph is used to describe fossil structures that represent poorly-preserved, deteriorated remains of a mixture of taxonomic groups, rather than of a single one. Description Taphonomic phenomena are grouped into two phases: biostratinomy, events that occur between death of the organism and the burial; and diagenesis, events that occur after the burial. Since Efremov's definition, taphonomy has expanded to include the fossilization of organic and inorganic materials through both cultural and environmental influences. Taphonomy is now most widely defined as the study of what happens to objects after they leave the biosphere (living contexts), enter the lithosphere (buried contexts), and are subsequently recovered and studied. This is a multidisciplinary concept and is used in slightly different contexts throughout different fields of study. Fields that employ the concept of taphonomy include: Archaeobotany Archaeology Biology Forensic science Geoarchaeology Geology Paleoecology Paleontology Zooarchaeology There are five main stages of taphonomy: disarticulation, dispersal, accumulation, fossilization, and mechanical alteration. The first stage, disarticulation, occurs as the organism decays and the bones are no longer held together by the flesh and tendons of the organism. Dispersal is the separation of pieces of an organism caused by natural events (i.e. floods, scavengers etc.). Accumulation occurs when there is a buildup of organic and/or inorganic materials in one location (scavengers or human behavior). When mineral rich groundwater permeates organic materials and fills the empty spaces, a fossil is formed. The final stage of taphonomy is mechanical alteration; these are the processes that physically alter the remains (i.e. freeze-thaw, compaction, transport, burial). These stages are not only successive, they interplay. For example, chemical changes occur at every stage of the process, because of bacteria. Changes begin as soon as the death of the organism: enzymes are released that destroy the organic contents of the tissues, and mineralised tissues such as bone, enamel and dentin are a mixture of organic and mineral components. Moreover, most often the organisms (vegetal or animal) are dead because they have been killed by a predator. The digestion modifies the composition of the flesh, but also that of the bones. Research areas Taphonomy has undergone an explosion of interest since the 1980s, with research focusing on certain areas. Microbial, biogeochemical, and larger-scale controls on the preservation of different tissue types; in particular, exceptional preservation in Konzervat-lagerstätten. Covered within this field is the dominance of biological versus physical agents in the destruction of remains from all major taxonomic groups (plants, invertebrates, vertebrates). Processes that concentrate biological remains; especially the degree to which different types of assemblages reflect the species composition and abundance of source faunas and floras. Actualistic taphonomy uses the present to understand past taphonomic events. 
This is often done through controlled experiments, such as the role microbes play in fossilization, the effects of mammalian carnivores on bone, or the burial of bone in a water flume. Computer modeling is also used to explain taphonomic events. Studies on actualistic taphonomy gave rise to the discipline conservation paleobiology. The spatio-temporal resolution and ecological fidelity of species assemblages, particularly the relatively minor role of out-of-habitat transport contrasted with the major effects of time-averaging. The outlines of megabiases in the fossil record, including the evolution of new bauplans and behavioral capabilities, and by broad-scale changes in climate, tectonics, and geochemistry of Earth surface systems. The Mars Science Laboratory mission objectives evolved from assessment of ancient Mars habitability to developing predictive models on taphonomy. Paleontology One motivation behind taphonomy is to understand biases present in the fossil record better. Fossils are ubiquitous in sedimentary rocks, yet paleontologists cannot draw the most accurate conclusions about the lives and ecology of the fossilized organisms without knowing about the processes involved in their fossilization. For example, if a fossil assemblage contains more of one type of fossil than another, one can infer either that the organism was present in greater numbers, or that its remains were more resistant to decomposition. During the late twentieth century, taphonomic data began to be applied to other paleontological subfields such as paleobiology, paleoceanography, ichnology (the study of trace fossils) and biostratigraphy. By coming to understand the oceanographic and ethological implications of observed taphonomic patterns, paleontologists have been able to provide new and meaningful interpretations and correlations that would have otherwise remained obscure in the fossil record. In the marine environment, taphonomy, specifically aragonite loss, poses a major challenge in reconstructing past environments from the modern, notably in settings such as carbonate platforms. Forensic science Forensic taphonomy is a relatively new field that has increased in popularity in the past 15 years. It is a subfield of forensic anthropology focusing specifically on how taphonomic forces have altered criminal evidence. There are two different branches of forensic taphonomy: biotaphonomy and geotaphonomy. Biotaphonomy looks at how the decomposition and/or destruction of the organism has happened. The main factors that affect this branch are categorized into three groups: environmental factors; external variables, individual factors; factors from the organism itself (i.e. body size, age, etc.), and cultural factors; factors specific to any cultural behaviors that would affect the decomposition (burial practices). Geotaphonomy studies how the burial practices and the burial itself affects the surrounding environment. This includes soil disturbances and tool marks from digging the grave, disruption of plant growth and soil pH from the decomposing body, and the alteration of the land and water drainage from introducing an unnatural mass to the area. This field is extremely important because it helps scientists use the taphonomic profile to help determine what happened to the remains at the time of death (perimortem) and after death (postmortem). This can make a huge difference when considering what can be used as evidence in a criminal investigation. 
Archaeology Taphonomy is an important study for archaeologists to better interpret archaeological sites. Since the archaeological record is often incomplete, taphonomy helps explain how it became incomplete. The methodology of taphonomy involves observing transformation processes in order to understand their impact on archaeological material and interpret patterns on real sites. This is mostly in the form of assessing how the deposition of the preserved remains of an organism (usually animal bones) has occurred to better understand a deposit. Whether the deposition was a result of human, animals and/or the environment is often the goal of taphonomic study. Archaeologists typically separate natural from cultural processes when identifying evidence of human interaction with faunal remains. This is done by looking at human processes preceding artifact discard in addition to processes after artifact discard. Changes preceding discard include butchering, skinning, and cooking. Understanding these processes can inform archaeologists on tool use or how an animal was processed. When the artifact is deposited, abiotic and biotic modifications occur. These can include thermal alteration, rodent disturbances, gnaw marks, and the effects of soil pH to name a few. While taphonomic methodology can be applied and used to study a variety of materials such as buried ceramics and lithics, its primary application in archaeology involves the examination of organic residues. Interpretation of the post-mortem, pre-, and post-burial histories of faunal assemblages is critical in determining their association with hominid activity and behaviour. For instance, to distinguish the bone assemblages that are produced by humans from those of non humans, much ethnoarchaeological observation has been done on different human groups and carnivores, to ascertain if there is anything different in the accumulation and fragmentation of bones. This study has also come in the form of excavation of animal dens and burrows to study the discarded bones and experimental breakage of bones with and without stone tools. Studies of this kind by C.K. Brain in South Africa have shown that bone fractures previously attributed to "killer man-apes" were in fact caused by the pressure of overlying rocks and earth in limestone caves. His research has also demonstrated that early hominins, for example australopithecines, were more likely preyed upon by carnivores rather than being hunters themselves, from cave sites such as Swartkrans in South Africa. Outside of Africa Lewis Binford observed the effects of wolves and dogs on bones in Alaska and the American Southwest, differentiating the interference of humans and carnivores on bone remains by the number of bone splinters and the number of intact articular ends. He observed that animals gnaw and attack the articular ends first leaving mostly bone cylinders behind, therefore it can be assumed a deposit with a high number of bone cylinders and a low number of bones with articular ends intact is therefore probably the result of carnivore activity. In practice John Speth applied these criteria to the bones from the Garnsey site in New Mexico. The rarity of bone cylinders indicated that there had been minimal destruction by scavengers, and that the bone assemblage could be assumed to be wholly the result of human activity, butchering the animals for meat and marrow extraction. One of the most important elements in this methodology is replication, to confirm the validity of results. 
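As a rough sketch of how the cylinder-to-articular-end criterion described above might be applied, the snippet below classifies an assemblage from two counts. The function name, the input counts and the 0.5 cut-off are illustrative assumptions, not figures given by Binford or Speth.

```python
# Illustrative sketch of the carnivore-versus-human criterion described above:
# deposits dominated by bone cylinders with few intact articular ends point to
# carnivore gnawing, while rarity of cylinders suggests human butchery.
# The 0.5 cut-off and the labels are assumptions for illustration only.

def classify_assemblage(bone_cylinders: int, intact_articular_ends: int,
                        cutoff: float = 0.5) -> str:
    """Crudely label a faunal assemblage from two taphonomic counts."""
    total = bone_cylinders + intact_articular_ends
    if total == 0:
        return "no diagnostic specimens"
    cylinder_fraction = bone_cylinders / total
    return ("likely carnivore-modified" if cylinder_fraction > cutoff
            else "consistent with human butchery")

# Example in the spirit of the Garnsey result: few cylinders, many intact ends.
print(classify_assemblage(bone_cylinders=4, intact_articular_ends=96))
# -> consistent with human butchery
```

In practice, any such cut-off would itself need to be justified by the kind of replication the preceding paragraph emphasises.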
There are limitations to this kind of taphonomic study in archaeological deposits, as any analysis has to presume that processes in the past were the same as today, e.g. that living carnivores behaved in a similar way to those in prehistoric times. There are wide variations among existing species, so determining the behavioural patterns of extinct species is sometimes hard to justify. Moreover, the differences between faunal assemblages produced by animals and by humans are not always distinct: hyenas and humans display similar patterning in breakage and form similarly shaped fragments, because the ways in which a bone can break are limited. Since large bones survive better than plant remains, this has also created a bias towards big-game hunting rather than gathering when reconstructing prehistoric economies. While all of archaeology studies taphonomy to some extent, certain subfields deal with it more than others. These include zooarchaeology, geoarchaeology, and paleoethnobotany. Microbial mats Modern experiments have been conducted on post-mortem invertebrates and vertebrates to understand how microbial mats and microbial activity influence the formation of fossils and the preservation of soft tissues. In these studies, microbial mats entomb animal carcasses in a sarcophagus of microbes; this entombment delays decay. Entombed carcasses were observed to remain more intact than non-entombed counterparts over timescales of years. Microbial mats maintained and stabilized the articulation of the joints and the skeleton of post-mortem organisms, as seen in frog carcasses for up to 1080 days after coverage by the mats. The environment within the entombed carcasses is typically described as anoxic and acidic during the initial stage of decomposition. These conditions are perpetuated by the exhaustion of oxygen by aerobic bacteria within the carcass, creating an environment ideal for the preservation of soft tissues such as muscle tissue and brain tissue. The anoxic and acidic conditions created by the mats also inhibit the process of autolysis within the carcasses, delaying decay even further. Endogenous gut bacteria have also been described as aiding the preservation of invertebrate soft tissue by delaying decay and stabilizing soft tissue structures. Gut bacteria form pseudomorphs replicating the form of soft tissues within the animal. These pseudomorphs are a possible explanation for the increased occurrence of preserved gut impressions among invertebrates. In the later stages of the prolonged decomposition of the carcasses, the environment within the sarcophagus alters to more oxic and basic conditions, promoting biomineralization and the precipitation of calcium carbonate. Microbial mats additionally play a role in the formation of molds and impressions of carcasses. These molds and impressions replicate and preserve the integument of animal carcasses. The degree to which this occurs has been demonstrated in frog skin preservation. The original morphology of the frog skin, including structures such as warts, was preserved for more than 1.5 years. The microbial mats also aided in the formation of the mineral gypsum embedded within the frog skin. The microbes that constitute the microbial mats, in addition to forming a sarcophagus, secrete exopolymeric substances (EPS) that drive biomineralization. The EPS provides a nucleation center for biomineralization.
During later stages of decomposition heterotrophic microbes degrade the EPS, facilitating the release of calcium ions into the environment and creating a Ca-enriched film. The degradation of the EPS and formation of the Ca-rich film is suggested to aid in the precipitation of calcium carbonate and further the process of biomineralization. Taphonomic biases in the fossil record Because of the very select processes that cause preservation, not all organisms have the same chance of being preserved. Any factor that affects the likelihood that an organism is preserved as a fossil is a potential source of bias. It is thus arguably the most important goal of taphonomy to identify the scope of such biases such that they can be quantified to allow correct interpretations of the relative abundances of organisms that make up a fossil biota. Some of the most common sources of bias are listed below. Physical attributes of the organism itself This perhaps represents the biggest source of bias in the fossil record. First and foremost, organisms that contain hard parts have a far greater chance of being represented in the fossil record than organisms consisting of soft tissue only. As a result, animals with bones or shells are overrepresented in the fossil record, and many plants are only represented by pollen or spores that have hard walls. Soft-bodied organisms may form 30% to 100% of the biota, but most fossil assemblages preserve none of this unseen diversity, which may exclude groups such as fungi and entire animal phyla from the fossil record. Many animals that moult, on the other hand, are overrepresented, as one animal may leave multiple fossils due to its discarded body parts. Among plants, wind-pollinated species produce so much more pollen than animal-pollinated species, the former being overrepresented relative to the latter. Characteristics of the habitat Most fossils form in conditions where material is deposited on the bottom of water bodies. Coastal areas are often prone to high rates of erosion, and rivers flowing into the sea may carry a high particulate load from inland. These sediments will eventually settle out, so organisms living in such environments have a much higher chance of being preserved as fossils after death than do those organisms living in non-depositing conditions. In continental environments, fossilization is likely in lakes and riverbeds that gradually fill in with organic and inorganic material. The organisms of such habitats are also liable to be overrepresented in the fossil record than those living far from these aquatic environments where burial by sediments is unlikely to occur. Mixing of fossils from different places A sedimentary deposit may have experienced a mixing of noncontemporaneous remains within single sedimentary units via physical or biological processes; i.e. a deposit could be ripped up and redeposited elsewhere, meaning that a deposit may contain a large number of fossils from another place (an allochthonous deposit, as opposed to the usual autochthonous). Thus, a question that is often asked of fossil deposits is to what extent does the fossil deposit record the true biota that originally lived there? Many fossils are obviously autochthonous, such as rooted fossils like crinoids, and many fossils are intrinsically obviously allochthonous, such as the presence of photoautotrophic plankton in a benthic deposit that must have sunk to be deposited. A fossil deposit may thus become biased towards exotic species (i.e. 
species not endemic to that area) when the sedimentology is dominated by gravity-driven surges, such as mudslides, or may become biased if there are very few endemic organisms to be preserved. This is a particular problem in palynology. Temporal resolution Because population turnover rates of individual taxa are much less than net rates of sediment accumulation, the biological remains of successive, noncontemporaneous populations of organisms may be admixed within a single bed, known as time-averaging. Because of the slow and episodic nature of the geologic record, two apparently contemporaneous fossils may have actually lived centuries, or even millennia, apart. Moreover, the degree of time-averaging in an assemblage may vary. The degree varies on many factors, such as tissue type, the habitat, the frequency of burial events and exhumation events, and the depth of bioturbation within the sedimentary column relative to net sediment accumulation rates. Like biases in spatial fidelity, there is a bias towards organisms that can survive reworking events, such as shells. An example of a more ideal deposit with respect to time-averaging bias would be a volcanic ash deposit, which captures an entire biota caught in the wrong place at the wrong time (e.g. the Silurian Herefordshire lagerstätte). Gaps in time series The geological record is very discontinuous, and deposition is episodic at all scales. At the largest scale, a sedimentological high-stand period may mean that no deposition may occur for millions of years and, in fact, erosion of the deposit may occur. Such a hiatus is called an unconformity. Conversely, a catastrophic event such as a mudslide may overrepresent a time period. At a shorter scale, scouring processes such as the formation of ripples and dunes and the passing of turbidity currents may cause layers to be removed. Thus the fossil record is biased towards periods of greatest sedimentation; periods of time that have less sedimentation are consequently less well represented in the fossil record. A related problem is the slow changes that occur in the depositional environment of an area; a deposit may experience periods of poor preservation due to, for example, a lack of biomineralizing elements. This causes the taphonomic or diagenetic obliteration of fossils, producing gaps and condensation of the record. Consistency in preservation over geologic time Major shifts in intrinsic and extrinsic properties of organisms, including morphology and behaviour in relation to other organisms or shifts in the global environment, can cause secular or long-term cyclic changes in preservation (megabias). Human biases Much of the incompleteness of the fossil record is due to the fact that only a small amount of rock is ever exposed at the surface of the Earth, and not even most of that has been explored. Our fossil record relies on the small amount of exploration that has been done on this. Unfortunately, paleontologists as humans can be very biased in their methods of collection; a bias that must be identified. Potential sources of bias include, Search images: field experiments have shown that paleontologists working on, say fossil clams are better at collecting clams than anything else because their search image has been shaped to bias them in favour of clams. Relative ease of extraction: fossils that are easy to obtain (such as many phosphatic fossils that are easily extracted en masse by dissolution in acid) are overabundant in the fossil record. 
Taxonomic bias: fossils with easily discernible morphologies will be easy to distinguish as separate species, and will thus have an inflated abundance. Preservation of biopolymers The taphonomic pathways involved in relatively inert substances such as calcite (and to a lesser extent bone) are relatively obvious, as such body parts are stable and change little through time. However, the preservation of "soft tissue" is more interesting, as it requires more peculiar conditions. While usually only biomineralised material survives fossilisation, the preservation of soft tissue is not as rare as sometimes thought. Both DNA and proteins are unstable, and rarely survive more than hundreds of thousands of years before degrading. Polysaccharides also have low preservation potential, unless they are highly cross-linked; this interconnection is most common in structural tissues, and renders them resistant to chemical decay. Such tissues include wood (lignin), spores and pollen (sporopollenin), the cuticles of plants (cutan) and animals, the cell walls of algae (algaenan), and potentially the polysaccharide layer of some lichens. This interconnectedness makes the chemicals less prone to chemical decay, and also means they are a poorer source of energy so less likely to be digested by scavenging organisms. After being subjected to heat and pressure, these cross-linked organic molecules typically "cook" and become kerogen or short (<17 C atoms) aliphatic/aromatic carbon molecules. Other factors affect the likelihood of preservation; for instance sclerotization renders the jaws of polychaetes more readily preserved than the chemically equivalent but non-sclerotized body cuticle. A peer-reviewed study in 2023 was the first to present an in-depth chemical description of how biological tissues and cells potentially preserve into the fossil record. This study generalized the chemistry underlying cell and tissue preservation to explain the phenomenon for potentially any cellular organism. It was thought that only tough, cuticle type soft tissue could be preserved by Burgess Shale type preservation, but an increasing number of organisms are being discovered that lack such cuticle, such as the probable chordate Pikaia and the shellless Odontogriphus. It is a common misconception that anaerobic conditions are necessary for the preservation of soft tissue; indeed much decay is mediated by sulfate reducing bacteria which can only survive in anaerobic conditions. Anoxia does, however, reduce the probability that scavengers will disturb the dead organism, and the activity of other organisms is undoubtedly one of the leading causes of soft-tissue destruction. Plant cuticle is more prone to preservation if it contains cutan, rather than cutin. Plants and algae produce the most preservable compounds, which are listed according to their preservation potential by Tegellaar (see reference). Disintegration How complete fossils are was once thought to be a proxy for the energy of the environment, with stormier waters leaving less articulated carcasses. However, the dominant force actually seems to be predation, with scavengers more likely than rough waters to break up a fresh carcass before it is buried. Sediments cover smaller fossils faster so they are likely to be found fully articulated. However, erosion also tends to destroy smaller fossils more easily. 
Distortion Often fossils, particularly those of vertebrates, are distorted by the subsequent movements of the surrounding sediment, this can include compression of the fossil in a particular axis, as well as shearing. Significance Taphonomic processes allow researchers of multiple fields to identify the past of natural and cultural objects. From the time of death or burial until excavation, taphonomy can aid in the understanding of past environments. When studying the past it is important to gain contextual information in order to have a solid understanding of the data. Often these findings can be used to better understand cultural or environmental shifts within the present day. The term taphomorph is used to collectively describe fossil structures that represent poorly-preserved and deteriorated remains of various taxonomic groups, rather than of a single species. For example, the 579–560 million year old fossil Ediacaran assemblages from Avalonian locations in Newfoundland contain taphomorphs of a mixture of taxa which have collectively been named Ivesheadiomorphs. Originally interpreted as fossils of a single genus, Ivesheadia, they are now thought to be the deteriorated remains of various types of frondose organism. Similarly, Ediacaran fossils from England, once assigned to Blackbrookia, Pseudovendia and Shepshedia, are now all regarded as taphomorphs related to Charnia or Charniodiscus. Fluvial taphonomy Fluvial taphonomy is concerned with the decomposition of organisms in rivers. An organism may sink or float within a river, it may also be carried by the current near the surface of the river or near its bottom. Organisms in terrestrial and fluvial environments will not undergo the same processes. A fluvial environment may be colder than a terrestrial environment. The ecosystem of live organisms that scavenge on the organism in question and the abiotic items in rivers will differ than on land. Organisms within a river may also be physically transported by the flow of the river. The flow of the river can additionally erode the surface of the organisms found within it. The processes an organism may undergo in a fluvial environment will result in a slower rate of decomposition within a river compared to on land. See also Beecher's Trilobite type preservation Bitter Springs type preservation Burgess Shale type preservation Doushantuo type preservation Ediacaran type preservation Fossil record Karen Chin Lagerstätte Permineralization Petrifaction Pseudofossil Trace fossil References Further reading External links The Shelf and Slope Experimental Taphonomy Initiative is the first long-term large-scale deployment and re-collection of organism remains on the sea floor. Journal of Taphonomy Bioerosion Website at the College of Wooster Comprehensive bioerosion bibliography compiled by Mark A. Wilson Taphonomy Minerals and the Origins of Life (Robert Hazen, NASA) (video, 60m, April 2014). 7th International Meeting on Taphonomy and Fossilization (Taphos 2014), at the Università degli studi di Ferrara, Italy, 10–13 September 2014 Archaeological science Methods in archaeology
Water cycle
The water cycle (or hydrologic cycle or hydrological cycle), is a biogeochemical cycle that involves the continuous movement of water on, above and below the surface of the Earth. The mass of water on Earth remains fairly constant over time. However, the partitioning of the water into the major reservoirs of ice, fresh water, salt water and atmospheric water is variable and depends on climatic variables. The water moves from one reservoir to another, such as from river to ocean, or from the ocean to the atmosphere. The processes that drive these movements are evaporation, transpiration, condensation, precipitation, sublimation, infiltration, surface runoff, and subsurface flow. In doing so, the water goes through different forms: liquid, solid (ice) and vapor. The ocean plays a key role in the water cycle as it is the source of 86% of global evaporation. The water cycle involves the exchange of energy, which leads to temperature changes. When water evaporates, it takes up energy from its surroundings and cools the environment. When it condenses, it releases energy and warms the environment. These heat exchanges influence the climate system. The evaporative phase of the cycle purifies water because it causes salts and other solids picked up during the cycle to be left behind. The condensation phase in the atmosphere replenishes the land with freshwater. The flow of liquid water and ice transports minerals across the globe. It also reshapes the geological features of the Earth, through processes including erosion and sedimentation. The water cycle is also essential for the maintenance of most life and ecosystems on the planet. Human actions are greatly affecting the water cycle. Activities such as deforestation, urbanization, and the extraction of groundwater are altering natural landscapes (land use changes) all have an effect on the water cycle. On top of this, climate change is leading to an intensification of the water cycle. Research has shown that global warming is causing shifts in precipitation patterns, increased frequency of extreme weather events, and changes in the timing and intensity of rainfall. These water cycle changes affect ecosystems, water availability, agriculture, and human societies. Description Overall process The water cycle is powered from the energy emitted by the sun. This energy heats water in the ocean and seas. Water evaporates as water vapor into the air. Some ice and snow sublimates directly into water vapor. Evapotranspiration is water transpired from plants and evaporated from the soil. The water molecule has smaller molecular mass than the major components of the atmosphere, nitrogen and oxygen and hence is less dense. Due to the significant difference in density, buoyancy drives humid air higher. As altitude increases, air pressure decreases and the temperature drops (see Gas laws). The lower temperature causes water vapor to condense into tiny liquid water droplets which are heavier than the air, and which fall unless supported by an updraft. A huge concentration of these droplets over a large area in the atmosphere becomes visible as cloud, while condensation near ground level is referred to as fog. Atmospheric circulation moves water vapor around the globe; cloud particles collide, grow, and fall out of the upper atmospheric layers as precipitation. Some precipitation falls as snow, hail, or sleet, and can accumulate in ice caps and glaciers, which can store frozen water for thousands of years. 
Most water falls as rain back into the ocean or onto land, where the water flows over the ground as surface runoff. A portion of this runoff enters rivers, with streamflow moving water towards the oceans. Runoff and water emerging from the ground (groundwater) may be stored as freshwater in lakes. Not all runoff flows into rivers; much of it soaks into the ground as infiltration. Some water infiltrates deep into the ground and replenishes aquifers, which can store freshwater for long periods of time. Some infiltration stays close to the land surface and can seep back into surface-water bodies (and the ocean) as groundwater discharge or be taken up by plants and transferred back to the atmosphere as water vapor by transpiration. Some groundwater finds openings in the land surface and emerges as freshwater springs. In river valleys and floodplains, there is often continuous water exchange between surface water and ground water in the hyporheic zone. Over time, the water returns to the ocean, to continue the water cycle. The ocean plays a key role in the water cycle. The ocean holds "97% of the total water on the planet; 78% of global precipitation occurs over the ocean, and it is the source of 86% of global evaporation". Important physical processes within the water cycle include (in alphabetical order): Advection: The movement of water through the atmosphere. Without advection, water that evaporated over the oceans could not precipitate over land. Atmospheric rivers that move large volumes of water vapor over long distances are an example of advection. Condensation: The transformation of water vapor to liquid water droplets in the air, creating clouds and fog. Evaporation: The transformation of water from liquid to gas phases as it moves from the ground or bodies of water into the overlying atmosphere. The source of energy for evaporation is primarily solar radiation. Evaporation often implicitly includes transpiration from plants, though together they are specifically referred to as evapotranspiration. Total annual evapotranspiration amounts to approximately of water, of which evaporates from the oceans. 86% of global evaporation occurs over the ocean. Infiltration: The flow of water from the ground surface into the ground. Once infiltrated, the water becomes soil moisture or groundwater. A recent global study using water stable isotopes, however, shows that not all soil moisture is equally available for groundwater recharge or for plant transpiration. Percolation: Water flows vertically through the soil and rocks under the influence of gravity. Precipitation: Condensed water vapor that falls to the Earth's surface. Most precipitation occurs as rain, but also includes snow, hail, fog drip, graupel, and sleet. Approximately of water falls as precipitation each year, of it over the oceans. The rain on land contains of water per year and a snowing only . 78% of global precipitation occurs over the ocean. Runoff: The variety of ways by which water moves across the land. This includes both surface runoff and channel runoff. As it flows, the water may seep into the ground, evaporate into the air, become stored in lakes or reservoirs, or be extracted for agricultural or other human uses. Subsurface flow: The flow of water underground, in the vadose zone and aquifers. Subsurface water may return to the surface (e.g. as a spring or by being pumped) or eventually seep into the oceans. 
Water returns to the land surface at lower elevation than where it infiltrated, under the force of gravity or gravity induced pressures. Groundwater tends to move slowly and is replenished slowly, so it can remain in aquifers for thousands of years. Transpiration: The release of water vapor from plants and soil into the air. Residence times The residence time of a reservoir within the hydrologic cycle is the average time a water molecule will spend in that reservoir (see table). It is a measure of the average age of the water in that reservoir. Groundwater can spend over 10,000 years beneath Earth's surface before leaving. Particularly old groundwater is called fossil water. Water stored in the soil remains there very briefly, because it is spread thinly across the Earth, and is readily lost by evaporation, transpiration, stream flow, or groundwater recharge. After evaporating, the residence time in the atmosphere is about 9 days before condensing and falling to the Earth as precipitation. The major ice sheets – Antarctica and Greenland – store ice for very long periods. Ice from Antarctica has been reliably dated to 800,000 years before present, though the average residence time is shorter. In hydrology, residence times can be estimated in two ways. The more common method relies on the principle of conservation of mass (water balance) and assumes the amount of water in a given reservoir is roughly constant. With this method, residence times are estimated by dividing the volume of the reservoir by the rate by which water either enters or exits the reservoir. Conceptually, this is equivalent to timing how long it would take the reservoir to become filled from empty if no water were to leave (or how long it would take the reservoir to empty from full if no water were to enter). An alternative method to estimate residence times, which is gaining in popularity for dating groundwater, is the use of isotopic techniques. This is done in the subfield of isotope hydrology. Water in storage The water cycle describes the processes that drive the movement of water throughout the hydrosphere. However, much more water is "in storage" (or in "pools") for long periods of time than is actually moving through the cycle. The storehouses for the vast majority of all water on Earth are the oceans. It is estimated that of the 1,386,000,000 km3 of the world's water supply, about 1,338,000,000 km3 is stored in oceans, or about 97%. It is also estimated that the oceans supply about 90% of the evaporated water that goes into the water cycle. The Earth's ice caps, glaciers, and permanent snowpack stores another 24,064,000 km3 accounting for only 1.7% of the planet's total water volume. However, this quantity of water is 68.7% of all freshwater on the planet. Changes caused by humans Local or regional impacts Human activities can alter the water cycle at the local or regional level. This happens due to changes in land use and land cover. Such changes affect "precipitation, evaporation, flooding, groundwater, and the availability of freshwater for a variety of uses". Examples for such land use changes are converting fields to urban areas or clearing forests. Such changes can affect the ability of soils to soak up surface water. Deforestation has local as well as regional effects. For example it reduces soil moisture, evaporation and rainfall at the local level. Furthermore, deforestation causes regional temperature changes that can affect rainfall patterns. 
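As an illustration of the water-balance method for residence times described earlier in this section, the sketch below simply divides a reservoir volume by its throughput. The ocean volume is the approximate figure quoted above; the evaporation flux is an assumed placeholder, since the corresponding value is not given in this text.

```python
# Water-balance estimate of residence time: volume of a reservoir divided by
# the rate at which water enters (or leaves) it, assuming the volume is constant.
# Ocean volume is the approximate figure quoted above; the flux value is an
# assumed placeholder for illustration, not a number taken from this article.

def residence_time_years(volume_km3: float, flux_km3_per_year: float) -> float:
    """Average time (in years) a water molecule spends in the reservoir."""
    return volume_km3 / flux_km3_per_year

ocean_volume = 1_338_000_000          # km^3, as quoted in the text above
assumed_evaporation_flux = 450_000    # km^3 per year, illustrative assumption

print(round(residence_time_years(ocean_volume, assumed_evaporation_flux)))
# -> 2973, i.e. roughly three thousand years with these illustrative inputs
```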
Aquifer drawdown or overdrafting and the pumping of fossil water increase the total amount of water in the hydrosphere. This is because the water that was originally in the ground has now become available for evaporation as it is now in contact with the atmosphere. Water cycle intensification due to climate change Since the middle of the 20th century, human-caused climate change has resulted in observable changes in the global water cycle. The IPCC Sixth Assessment Report in 2021 predicted that these changes will continue to grow significantly at the global and regional level. These findings are a continuation of scientific consensus expressed in the IPCC Fifth Assessment Report from 2007 and other special reports by the Intergovernmental Panel on Climate Change which had already stated that the water cycle will continue to intensify throughout the 21st century. Related processes Biogeochemical cycling While the water cycle is itself a biogeochemical cycle, flow of water over and beneath the Earth is a key component of the cycling of other biogeochemicals. Runoff is responsible for almost all of the transport of eroded sediment and phosphorus from land to waterbodies. The salinity of the oceans is derived from erosion and transport of dissolved salts from the land. Cultural eutrophication of lakes is primarily due to phosphorus, applied in excess to agricultural fields in fertilizers, and then transported overland and down rivers. Both runoff and groundwater flow play significant roles in transporting nitrogen from the land to waterbodies. The dead zone at the outlet of the Mississippi River is a consequence of nitrates from fertilizer being carried off agricultural fields and funnelled down the river system to the Gulf of Mexico. Runoff also plays a part in the carbon cycle, again through the transport of eroded rock and soil. Slow loss over geologic time The hydrodynamic wind within the upper portion of a planet's atmosphere allows light chemical elements such as Hydrogen to move up to the exobase, the lower limit of the exosphere, where the gases can then reach escape velocity, entering outer space without impacting other particles of gas. This type of gas loss from a planet into space is known as planetary wind. Planets with hot lower atmospheres could result in humid upper atmospheres that accelerate the loss of hydrogen. Historical interpretations In ancient times, it was widely thought that the land mass floated on a body of water, and that most of the water in rivers has its origin under the earth. Examples of this belief can be found in the works of Homer. In Works and Days (ca. 700 BC), the Greek poet Hesiod outlines the idea of the water cycle: "[Vapour] is drawn from the ever-flowing rivers and is raised high above the earth by windstorm, and sometimes it turns to rain towards evening, and sometimes to wind when Thracian Boreas huddles the thick clouds." In the ancient Near East, Hebrew scholars observed that even though the rivers ran into the sea, the sea never became full. Some scholars conclude that the water cycle was described completely during this time in this passage: "The wind goeth toward the south, and turneth about unto the north; it whirleth about continually, and the wind returneth again according to its circuits. All the rivers run into the sea, yet the sea is not full; unto the place from whence the rivers come, thither they return again" (Ecclesiastes 1:6-7). 
Furthermore, it was also observed that when the clouds were full, they emptied rain on the earth (Ecclesiastes 11:3). In the Adityahridayam (a devotional hymn to the Sun God) of Ramayana, a Hindu epic dated to the 4th century BCE, it is mentioned in the 22nd verse that the Sun heats up water and sends it down as rain. By roughly 500 BCE, Greek scholars were speculating that much of the water in rivers can be attributed to rain. The origin of rain was also known by then. These scholars maintained the belief, however, that water rising up through the earth contributed a great deal to rivers. Examples of this thinking included Anaximander (570 BCE) (who also speculated about the evolution of land animals from fish) and Xenophanes of Colophon (530 BCE). Warring States period Chinese scholars such as Chi Ni Tzu (320 BCE) and Lu Shih Ch'un Ch'iu (239 BCE) had similar thoughts. The idea that the water cycle is a closed cycle can be found in the works of Anaxagoras of Clazomenae (460 BCE) and Diogenes of Apollonia (460 BCE). Both Plato (390 BCE) and Aristotle (350 BCE) speculated about percolation as part of the water cycle. Aristotle correctly hypothesized that the sun played a role in the Earth's hydraulic cycle in his book Meteorology, writing "By it [the sun's] agency the finest and sweetest water is everyday carried up and is dissolved into vapor and rises to the upper regions, where it is condensed again by the cold and so returns to the earth.", and believed that clouds were composed of cooled and condensed water vapor. Much like the earlier Aristotle, the Eastern Han Chinese scientist Wang Chong (27–100 AD) accurately described the water cycle of Earth in his Lunheng but was dismissed by his contemporaries. Up to the time of the Renaissance, it was wrongly assumed that precipitation alone was insufficient to feed rivers, for a complete water cycle, and that underground water pushing upwards from the oceans were the main contributors to river water. Bartholomew of England held this view (1240 CE), as did Leonardo da Vinci (1500 CE) and Athanasius Kircher (1644 CE). Discovery of the correct theory The first published thinker to assert that rainfall alone was sufficient for the maintenance of rivers was Bernard Palissy (1580 CE), who is often credited as the discoverer of the modern theory of the water cycle. Palissy's theories were not tested scientifically until 1674, in a study commonly attributed to Pierre Perrault. Even then, these beliefs were not accepted in mainstream science until the early nineteenth century. See also References External links The Water Cycle, United States Geological Survey The Water Cycle for Kids, United States Geological Survey The Water Cycle: Following The Water (NASA Visualization Explorer with videos) Biogeochemical cycle Forms of water Hydrology Soil physics Water Articles containing video clips Limnology Oceanography
Inbreeding
Inbreeding is the production of offspring from the mating or breeding of individuals or organisms that are closely related genetically. By analogy, the term is used in human reproduction, but more commonly refers to the genetic disorders and other consequences that may arise from expression of deleterious recessive traits resulting from incestuous sexual relationships and consanguinity. Animals avoid inbreeding only rarely. Inbreeding results in homozygosity, which can increase the chances of offspring being affected by recessive traits. In extreme cases, this usually leads to at least temporarily decreased biological fitness of a population (called inbreeding depression), which is its ability to survive and reproduce. An individual who inherits such deleterious traits is colloquially referred to as inbred. The avoidance of expression of such deleterious recessive alleles caused by inbreeding, via inbreeding avoidance mechanisms, is the main selective reason for outcrossing. Crossbreeding between populations sometimes has positive effects on fitness-related traits, but also sometimes leads to negative effects known as outbreeding depression. However, increased homozygosity increases the probability of fixing beneficial alleles and also slightly decreases the probability of fixing deleterious alleles in a population. Inbreeding can result in purging of deleterious alleles from a population through purifying selection. Inbreeding is a technique used in selective breeding. For example, in livestock breeding, breeders may use inbreeding when trying to establish a new and desirable trait in the stock and for producing distinct families within a breed, but will need to watch for undesirable characteristics in offspring, which can then be eliminated through further selective breeding or culling. Inbreeding also helps to ascertain the type of gene action affecting a trait. Inbreeding is also used to reveal deleterious recessive alleles, which can then be eliminated through assortative breeding or through culling. In plant breeding, inbred lines are used as stocks for the creation of hybrid lines to make use of the effects of heterosis. Inbreeding in plants also occurs naturally in the form of self-pollination. Inbreeding can significantly influence gene expression which can prevent inbreeding depression. Overview Offspring of biologically related persons are subject to the possible effects of inbreeding, such as congenital birth defects. The chances of such disorders are increased when the biological parents are more closely related. This is because such pairings have a 25% probability of producing homozygous zygotes, resulting in offspring with two recessive alleles, which can produce disorders when these alleles are deleterious. Because most recessive alleles are rare in populations, it is unlikely that two unrelated partners will both be carriers of the same deleterious allele; however, because close relatives share a large fraction of their alleles, the probability that any such deleterious allele is inherited from the common ancestor through both parents is increased dramatically. For each homozygous recessive individual formed there is an equal chance of producing a homozygous dominant individual — one completely devoid of the harmful allele. 
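The 25% figure mentioned above follows from a cross between two carriers of the same recessive allele. The short sketch below enumerates the four equally likely gamete combinations; the 'A'/'a' genotype labels follow the usual convention and are not specific to this article.

```python
# Enumerate offspring genotypes for a carrier-by-carrier (Aa x Aa) mating.
# Each parent passes on 'A' or 'a' with equal probability, so a quarter of
# zygotes are homozygous recessive ('aa') and a quarter homozygous dominant.
from collections import Counter
from itertools import product

def cross(parent1: str = "Aa", parent2: str = "Aa") -> dict:
    """Return genotype probabilities for offspring of the given parents."""
    combos = ["".join(sorted(p + q)) for p, q in product(parent1, parent2)]
    counts = Counter(combos)
    total = len(combos)
    return {genotype: count / total for genotype, count in counts.items()}

print(cross())   # {'AA': 0.25, 'Aa': 0.5, 'aa': 0.25}
```

The 'aa' quarter corresponds to the homozygous recessive zygotes discussed above, and the 'AA' quarter to the equally likely homozygous dominant offspring that carry no copy of the harmful allele.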
Contrary to common belief, inbreeding does not in itself alter allele frequencies, but rather increases the relative proportion of homozygotes to heterozygotes; however, because the increased proportion of deleterious homozygotes exposes the allele to natural selection, in the long run its frequency decreases more rapidly in inbred populations. In the short term, incestuous reproduction is expected to increase the number of spontaneous abortions of zygotes, perinatal deaths, and postnatal offspring with birth defects. The advantages of inbreeding may be the result of a tendency to preserve the structures of alleles interacting at different loci that have been adapted together by a common selective history. Malformations or harmful traits can stay within a population due to a high homozygosity rate, and this will cause a population to become fixed for certain traits, like having too many bones in an area, like the vertebral column of wolves on Isle Royale or having cranial abnormalities, such as in Northern elephant seals, where their cranial bone length in the lower mandibular tooth row has changed. Having a high homozygosity rate is problematic for a population because it will unmask recessive deleterious alleles generated by mutations, reduce heterozygote advantage, and it is detrimental to the survival of small, endangered animal populations. When deleterious recessive alleles are unmasked due to the increased homozygosity generated by inbreeding, this can cause inbreeding depression. There may also be other deleterious effects besides those caused by recessive diseases. Thus, similar immune systems may be more vulnerable to infectious diseases (see Major histocompatibility complex and sexual selection). Inbreeding history of the population should also be considered when discussing the variation in the severity of inbreeding depression between and within species. With persistent inbreeding, there is evidence that shows that inbreeding depression becomes less severe. This is associated with the unmasking and elimination of severely deleterious recessive alleles. However, inbreeding depression is not a temporary phenomenon because this elimination of deleterious recessive alleles will never be complete. Eliminating slightly deleterious mutations through inbreeding under moderate selection is not as effective. Fixation of alleles most likely occurs through Muller's ratchet, when an asexual population's genome accumulates deleterious mutations that are irreversible. Despite all its disadvantages, inbreeding can also have a variety of advantages, such as ensuring a child produced from the mating contains, and will pass on, a higher percentage of its mother/father's genetics, reducing the recombination load, and allowing the expression of recessive advantageous phenotypes. Some species with a Haplodiploidy mating system depend on the ability to produce sons to mate with as a means of ensuring a mate can be found if no other male is available. It has been proposed that under circumstances when the advantages of inbreeding outweigh the disadvantages, preferential breeding within small groups could be promoted, potentially leading to speciation. Genetic disorders Autosomal recessive disorders occur in individuals who have two copies of an allele for a particular recessive genetic mutation. Except in certain rare circumstances, such as new mutations or uniparental disomy, both parents of an individual with such a disorder will be carriers of the gene. 
These carriers do not display any signs of the mutation and may be unaware that they carry the mutated gene. Since relatives share a higher proportion of their genes than do unrelated people, it is more likely that related parents will both be carriers of the same recessive allele, and therefore their children are at a higher risk of inheriting an autosomal recessive genetic disorder. The extent to which the risk increases depends on the degree of genetic relationship between the parents; the risk is greater when the parents are close relatives and lower for relationships between more distant relatives, such as second cousins, though still greater than for the general population. Children of parent-child or sibling-sibling unions are at an increased risk compared to cousin-cousin unions. Inbreeding may result in a greater than expected phenotypic expression of deleterious recessive alleles within a population. As a result, first-generation inbred individuals are more likely to show physical and health defects. The isolation of a small population for a period of time can lead to inbreeding within that population, resulting in increased genetic relatedness between breeding individuals. Inbreeding depression can also occur in a large population if individuals tend to mate with their relatives, instead of mating randomly. Due to higher prenatal and postnatal mortality rates, some individuals in the first generation of inbreeding will not live on to reproduce. Over time, with isolation, such as a population bottleneck caused by purposeful (assortative) breeding or natural environmental factors, the deleterious inherited traits are culled. Island species are often very inbred, as their isolation from the larger group on a mainland allows natural selection to work on their population. This type of isolation may result in the formation of a race or even speciation, as the inbreeding first removes many deleterious genes and permits the expression of genes that allow a population to adapt to an ecosystem. As the adaptation becomes more pronounced, the new species or race radiates from its entrance into the new space, or dies out if it cannot adapt and, most importantly, reproduce. Reduced genetic diversity, for example due to a bottleneck, will unavoidably increase inbreeding for the entire population. This may mean that a species may not be able to adapt to changes in environmental conditions. Each individual will have a similar immune system, as immune systems are genetically based. When a species becomes endangered, the population may fall below a minimum whereby the forced interbreeding between the remaining animals will result in extinction. Natural breeding includes inbreeding by necessity, and most animals only migrate when necessary. In many cases, the closest available mate is a mother, sister, grandmother, father, brother, or grandfather. In all cases, the environment presents stresses that remove from the population those individuals who cannot survive because of illness. It was long assumed that wild populations do not inbreed, but this is not what is observed in some cases in the wild. However, in species such as horses, animals in wild or feral conditions often drive off the young of both sexes, which is thought to be a mechanism by which the species instinctively avoids some of the genetic consequences of inbreeding. In general, many mammal species, including humanity's closest primate relatives, avoid close inbreeding, possibly due to its deleterious effects.
Examples Although there are several examples of inbred populations of wild animals, the negative consequences of this inbreeding are poorly documented. In the South American sea lion, there was concern that recent population crashes would reduce genetic diversity. Historical analysis indicated that a population expansion from just two matrilineal lines was responsible for most of the individuals within the population. Even so, the diversity within the lines allowed great variation in the gene pool that may help to protect the South American sea lion from extinction. In lions, prides are often followed by related males in bachelor groups. When the dominant male is killed or driven off by one of these bachelors, a father may be replaced by his son. There is no mechanism for preventing inbreeding or to ensure outcrossing. In the prides, most lionesses are related to one another. If there is more than one dominant male, the group of alpha males are usually related. Two lines are then being "line bred". Also, in some populations, such as the Crater lions, it is known that a population bottleneck has occurred. Researchers found far greater genetic heterozygosity than expected. In fact, predators are known for low genetic variance, along with most of the top portion of the trophic levels of an ecosystem. Additionally, the alpha males of two neighboring prides can be from the same litter; one brother may come to acquire leadership over another's pride, and subsequently mate with his 'nieces' or cousins. However, killing another male's cubs, upon the takeover, allows the new selected gene complement of the incoming alpha male to prevail over the previous male. There are genetic assays being scheduled for lions to determine their genetic diversity. The preliminary studies show results inconsistent with the outcrossing paradigm based on individual environments of the studied groups. In Central California, sea otters were thought to have been driven to extinction due to over hunting, until a small colony was discovered in the Point Sur region in the 1930s. Since then, the population has grown and spread along the central Californian coast to around 2,000 individuals, a level that has remained stable for over a decade. Population growth is limited by the fact that all Californian sea otters are descended from the isolated colony, resulting in inbreeding. Cheetahs are another example of inbreeding. Thousands of years ago, the cheetah went through a population bottleneck that reduced its population dramatically so the animals that are alive today are all related to one another. A consequence from inbreeding for this species has been high juvenile mortality, low fecundity, and poor breeding success. In a study on an island population of song sparrows, individuals that were inbred showed significantly lower survival rates than outbred individuals during a severe winter weather related population crash. These studies show that inbreeding depression and ecological factors have an influence on survival. The Florida panther population was reduced to about 30 animals, so inbreeding became a problem. Several females were imported from Texas and now the population is better off genetically. Measures A measure of inbreeding of an individual A is the probability F(A) that both alleles in one locus are derived from the same allele in an ancestor. These two identical alleles that are both derived from a common ancestor are said to be identical by descent. 
This probability F(A) is called the "coefficient of inbreeding". Another useful measure that describes the extent to which two individuals are related (say individuals A and B) is their coancestry coefficient f(A,B), which gives the probability that one randomly selected allele from A and another randomly selected allele from B are identical by descent. This is also denoted as the kinship coefficient between A and B. A particular case is the self-coancestry of individual A with itself, f(A,A), which is the probability that taking one random allele from A and then, independently and with replacement, another random allele also from A, both are identical by descent. Since they can be identical by descent by sampling the same allele or by sampling both alleles that happen to be identical by descent, we have f(A,A) = 1/2 + F(A)/2. Both the inbreeding and the coancestry coefficients can be defined for specific individuals or as average population values. They can be computed from genealogies or estimated from the population size and its breeding properties, but all methods assume no selection and are limited to neutral alleles. There are several methods to compute these coefficients; the two main ones are the path method and the tabular method (a small worked sketch of the tabular recursion is given at the end of this passage). Typical coancestries between relatives are as follows:
Father/daughter or mother/son → 25%
Brother/sister → 25%
Grandfather/granddaughter or grandmother/grandson → 12.5%
Half-brother/half-sister, double cousins → 12.5%
Uncle/niece or aunt/nephew → 12.5%
Great-grandfather/great-granddaughter or great-grandmother/great-grandson → 6.25%
Half-uncle/niece or half-aunt/nephew → 6.25%
First cousins → 6.25%
Animals
Wild animals
Banded mongoose females regularly mate with their fathers and brothers.
Bed bugs: North Carolina State University found that bedbugs, in contrast to most other insects, tolerate incest and are able to genetically withstand the effects of inbreeding quite well.
Common fruit fly females prefer to mate with their own brothers over unrelated males.
Cottony cushion scales: 'It turns out that females in these hermaphrodite insects are not really fertilizing their eggs themselves, but instead are having this done by a parasitic tissue that infects them at birth,' says Laura Ross of Oxford University's Department of Zoology. 'It seems that this infectious tissue derives from left-over sperm from their father, who has found a sneaky way of having more children by mating with his daughters.'
Adactylidium: The single male offspring mite mates with all the daughters when they are still in the mother. The females, now impregnated, cut holes in their mother's body so that they can emerge. The male emerges as well, but does not look for food or new mates, and dies after a few hours. The females die at the age of 4 days, when their own offspring eat them alive from the inside.
Domestic animals
Breeding in domestic animals is primarily assortative breeding (see selective breeding). Without the sorting of individuals by trait, a breed could not be established, nor could poor genetic material be removed. Homozygosity is the case where similar or identical alleles combine to express a trait that is not otherwise expressed (recessiveness). Inbreeding exposes recessive alleles through increasing homozygosity. Breeders must avoid breeding from individuals that demonstrate either homozygosity or heterozygosity for disease-causing alleles.
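The tabular method mentioned under Measures above amounts to a simple recursion on the pedigree: f(A,A) = (1 + F(A))/2, and for distinct individuals f(A,B) = [f(sire(A),B) + f(dam(A),B)]/2, always expanding the younger individual through its parents. The Python sketch below applies that recursion to a small hypothetical pedigree (the names and matings are invented for illustration only); the printed values reproduce the brother/sister and first-cousin entries from the table above.

```python
from functools import lru_cache

# Minimal sketch of the tabular (recursive kinship) method.
# Individuals map to (sire, dam); founders have (None, None).
# Ancestors must be listed before their descendants.
PEDIGREE = {
    "A": (None, None), "B": (None, None),   # founder couple
    "E": (None, None), "H": (None, None),   # unrelated founders
    "C": ("A", "B"),   "D": ("A", "B"),     # full siblings
    "G": ("C", "E"),   "I": ("D", "H"),     # G and I are first cousins
    "X": ("C", "D"),                        # offspring of a full-sib mating
}
ORDER = {name: i for i, name in enumerate(PEDIGREE)}

@lru_cache(maxsize=None)
def kinship(a, b):
    """Coancestry f(a, b): probability that random alleles drawn from a and b
    are identical by descent."""
    if a is None or b is None:
        return 0.0
    if a == b:
        sire, dam = PEDIGREE[a]
        return 0.5 * (1.0 + kinship(sire, dam))   # f(A,A) = (1 + F(A)) / 2
    # Expand the younger individual (listed later) through its parents.
    if ORDER[a] < ORDER[b]:
        a, b = b, a
    sire, dam = PEDIGREE[a]
    return 0.5 * (kinship(sire, b) + kinship(dam, b))

def inbreeding(x):
    """Inbreeding coefficient F(x) = coancestry of x's parents."""
    sire, dam = PEDIGREE[x]
    return kinship(sire, dam)

print("f(C, D) full siblings   =", kinship("C", "D"))   # 0.25
print("f(G, I) first cousins   =", kinship("G", "I"))   # 0.0625
print("F(X) full-sib offspring =", inbreeding("X"))     # 0.25
print("f(X, X) self-coancestry =", kinship("X", "X"))   # 0.625
```

Listing ancestors before descendants is what lets the recursion safely expand the later-listed individual at each step; real pedigree software uses the same idea on much larger tables.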
The goal of preventing the transfer of deleterious alleles may be achieved by reproductive isolation, sterilization, or, in the extreme case, culling. Culling is not strictly necessary if genetics are the only issue in hand. Small animals such as cats and dogs may be sterilized, but in the case of large agricultural animals, such as cattle, culling is usually the only economic option. The issue of casual breeders who inbreed irresponsibly is discussed in the following quotation on cattle: Meanwhile, milk production per cow per lactation increased from 17,444 lbs to 25,013 lbs from 1978 to 1998 for the Holstein breed. Mean breeding values for milk of Holstein cows increased by 4,829 lbs during this period. High producing cows are increasingly difficult to breed and are subject to higher health costs than cows of lower genetic merit for production (Cassell, 2001). Intensive selection for higher yield has increased relationships among animals within breed and increased the rate of casual inbreeding. Many of the traits that affect profitability in crosses of modern dairy breeds have not been studied in designed experiments. Indeed, all crossbreeding research involving North American breeds and strains is very dated (McAllister, 2001) if it exists at all. The BBC produced two documentaries on dog inbreeding titled Pedigree Dogs Exposed and Pedigree Dogs Exposed: Three Years On that document the negative health consequences of excessive inbreeding. Linebreeding Linebreeding is a form of inbreeding. There is no clear distinction between the two terms, but linebreeding may encompass crosses between individuals and their descendants or two cousins. This method can be used to increase a particular animal's contribution to the population. While linebreeding is less likely to cause problems in the first generation than does inbreeding, over time, linebreeding can reduce the genetic diversity of a population and cause problems related to a too-small gene pool that may include an increased prevalence of genetic disorders and inbreeding depression. Outcrossing Outcrossing is where two unrelated individuals are crossed to produce progeny. In outcrossing, unless there is verifiable genetic information, one may find that all individuals are distantly related to an ancient progenitor. If the trait carries throughout a population, all individuals can have this trait. This is called the founder effect. In the well established breeds, that are commonly bred, a large gene pool is present. For example, in 2004, over 18,000 Persian cats were registered. A possibility exists for a complete outcross, if no barriers exist between the individuals to breed. However, it is not always the case, and a form of distant linebreeding occurs. Again it is up to the assortative breeder to know what sort of traits, both positive and negative, exist within the diversity of one breeding. This diversity of genetic expression, within even close relatives, increases the variability and diversity of viable stock. Laboratory animals Systematic inbreeding and maintenance of inbred strains of laboratory mice and rats is of great importance for biomedical research. The inbreeding guarantees a consistent and uniform animal model for experimental purposes and enables genetic studies in congenic and knock-out animals. In order to achieve a mouse strain that is considered inbred, a minimum of 20 sequential generations of sibling matings must occur. 
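The 20-generation requirement can be connected to a concrete number through the standard recurrence for the inbreeding coefficient under repeated full-sib mating, F_t = (1 + 2F_{t-1} + F_{t-2})/4. The sketch below simply iterates this textbook recurrence; it is an illustration of the expected approach to homozygosity, not code taken from any particular breeding protocol.

```python
# Inbreeding coefficient under repeated full-sibling mating.
# Textbook recurrence: F_t = (1 + 2*F_{t-1} + F_{t-2}) / 4, with F_0 = F_-1 = 0.
# The expected fraction of loci homozygous by descent equals F_t.

def sib_mating_inbreeding(generations: int) -> list:
    """Return [F_1, F_2, ..., F_generations] for successive sib matings."""
    f_prev2, f_prev1 = 0.0, 0.0
    values = []
    for _ in range(generations):
        f_t = (1.0 + 2.0 * f_prev1 + f_prev2) / 4.0
        values.append(f_t)
        f_prev2, f_prev1 = f_prev1, f_t
    return values

if __name__ == "__main__":
    series = sib_mating_inbreeding(20)
    for gen in (1, 5, 10, 20):
        print(f"generation {gen:2d}: F = {series[gen - 1]:.4f}")
    # generation 20: F is about 0.986, i.e. most loci homozygous by descent
```

After 20 iterations the coefficient reaches about 0.986, which agrees with the roughly 98.7% homozygosity figure quoted below.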
With each successive generation of breeding, homozygosity in the entire genome increases, eliminating heterozygous loci. With 20 generations of sibling matings, homozygosity occurs at roughly 98.7% of all loci in the genome, allowing these offspring to serve as animal models for genetic studies. The use of inbred strains is also important for genetic studies in animal models, for example to distinguish genetic from environmental effects. Mice that are inbred, however, typically show considerably lower survival rates.
Humans
Effects
Inbreeding increases homozygosity, which can increase the chances of the expression of deleterious or beneficial recessive alleles and therefore has the potential to either decrease or increase the fitness of the offspring. Depending on the rate of inbreeding, natural selection may still be able to eliminate deleterious alleles. With continuous inbreeding, genetic variation is lost and homozygosity is increased, enabling the expression of recessive deleterious alleles in homozygotes. The coefficient of inbreeding, or the degree of inbreeding in an individual, is an estimate of the percentage of homozygous alleles in the overall genome. The more biologically related the parents are, the greater the coefficient of inbreeding, since their genomes have many similarities already. This overall homozygosity becomes an issue when there are deleterious recessive alleles in the gene pool of the family. By pairing chromosomes of similar genomes, the chance for these recessive alleles to pair and become homozygous greatly increases, leading to offspring with autosomal recessive disorders. However, these deleterious effects are common for very close relatives but not for those related at the level of third cousins or beyond, who exhibit increased fitness. Inbreeding is especially problematic in small populations where the genetic variation is already limited. By inbreeding, individuals further decrease genetic variation by increasing homozygosity in the genomes of their offspring. Thus, the likelihood that deleterious recessive alleles will pair is significantly higher in a small inbreeding population than in a larger inbreeding population.
The fitness consequences of consanguineous mating have been studied since their scientific recognition by Charles Darwin in 1839. Some of the most harmful effects known from such breeding include its effects on the mortality rate as well as on the general health of the offspring. Since the 1960s, there have been many studies to support such debilitating effects on the human organism. Specifically, inbreeding has been found to decrease fertility as a direct result of increasing homozygosity of deleterious recessive alleles. Fetuses produced by inbreeding also face a greater risk of spontaneous abortion due to inherent complications in development. Among mothers who experience stillbirths and early infant deaths, those in consanguineous unions have a significantly higher chance of experiencing the same outcomes with future offspring. Additionally, consanguineous parents face a high risk of premature birth and of producing underweight and undersized infants. Viable inbred offspring are also likely to be afflicted with physical deformities and genetically inherited diseases. Studies have confirmed an increase in several genetic disorders due to inbreeding, such as blindness, hearing loss, neonatal diabetes, limb malformations, disorders of sex development, schizophrenia and several others.
Moreover, there is an increased risk of congenital heart disease depending on the inbreeding coefficient (see coefficient of inbreeding) of the offspring, with significant risk at F = 0.125 or higher.
Prevalence
The negative view and avoidance of inbreeding that are prevalent in the Western world today have roots going back more than 2,000 years. Specifically, written documents such as the Bible illustrate that there have been laws and social customs calling for abstention from inbreeding. Along with cultural taboos, parental education and awareness of the consequences of inbreeding have played large roles in minimizing inbreeding frequencies in areas like Europe. Even so, some less urbanized and less populated regions of the world have shown continuity in the practice of inbreeding. This continuity is often either by choice or unavoidable because of geographical limitations. When it is by choice, the rate of consanguinity is highly dependent on religion and culture. In the Western world, some Anabaptist groups are highly inbred because they originate from small founder populations that have bred as a closed population. Of the practicing regions, Middle Eastern and North African territories show the greatest frequencies of consanguinity. Among these populations with high levels of inbreeding, researchers have found several disorders prevalent among inbred offspring. In Lebanon, Saudi Arabia, Egypt, and Israel, the offspring of consanguineous relationships have an increased risk of congenital malformations, congenital heart defects, congenital hydrocephalus and neural tube defects. Furthermore, among inbred children in Palestine and Lebanon, there is a positive association between consanguinity and reported cleft lip/palate cases. Historically, populations of Qatar have engaged in consanguineous relationships of all kinds, leading to a high risk of inherited genetic diseases. As of 2014, around 5% of the Qatari population suffered from hereditary hearing loss; most were descendants of a consanguineous relationship.
Royalty and nobility
Inter-nobility marriage was used as a method of forming political alliances among elites. These ties were often sealed only upon the birth of progeny within the arranged marriage. Thus marriage was seen as a union of lines of nobility and not as a contract between individuals. Royal intermarriage was often practiced among European royal families, usually for interests of state. Over time, due to the relatively limited number of potential consorts, the gene pool of many ruling families grew progressively smaller, until all European royalty was related. This also resulted in many being descended from a certain person through many lines of descent, such as the numerous European royalty and nobility descended from the British Queen Victoria or King Christian IX of Denmark. The House of Habsburg was known for its intermarriages, with the Habsburg lip often cited as an ill effect. The closely related houses of Habsburg, Bourbon, Braganza and Wittelsbach also frequently engaged in first-cousin unions, as well as the occasional double-cousin and uncle–niece marriages. In ancient Egypt, royal women were believed to carry the bloodlines, and so it was advantageous for a pharaoh to marry his sister or half-sister; in such cases a special combination of endogamy and polygamy is found. Normally, the old ruler's eldest son and daughter (who could be either siblings or half-siblings) became the new rulers.
All rulers of the Ptolemaic dynasty from Ptolemy IV onward (Ptolemy II had married his sister, but they had no issue) were married to their brothers and sisters, so as to keep the Ptolemaic blood "pure" and to strengthen the line of succession. King Tutankhamun's mother is reported to have been the half-sister of his father. Cleopatra VII (also called Cleopatra VI) and Ptolemy XIII, who married and became co-rulers of ancient Egypt following their father's death, are the most widely known example.
Biological specificity
Biological specificity is the tendency of a characteristic such as a behavior or a biochemical variation to occur in a particular species. Biochemist Linus Pauling stated that "Biological specificity is the set of characteristics of living organisms or constituents of living organisms of being special or doing something special. Each animal or plant species is special. It differs in some way from all other species...biological specificity is the major problem about understanding life."
Biological specificity within Homo sapiens
Homo sapiens has many characteristics that show biological specificity in the form of behavioral and morphological traits. Morphologically, humans have an enlarged cranial capacity and more gracile features in comparison to other hominins. The reduction of dentition is a feature that allows for the advantage of adaptability in diet and survival. As a species, humans are culture-dependent, and much of human survival relies on culture and social relationships. With the evolutionary reduction of the pelvis and enlargement of cranial capacity, events like childbirth depend on a safe, social setting to assist in the birth; a birthing mother will seek others when going into labor. This is a uniquely human experience, as other animals are able to give birth on their own and often choose to isolate themselves to do so to protect their young. An example of a genetic adaptation unique to humans is the gene apolipoprotein E (APOE4) on chromosome 19. While chimpanzees may have the APOE gene, the study "The apolipoprotein E (APOE) gene appears functionally monomorphic in chimpanzees" shows that the diversity of the APOE gene in humans is unique. The polymorphism in APOE occurs only in humans, who carry the alleles APOE2, APOE3 and APOE4; APOE4, which allows humans to break down fatty protein and eat more protein than their ancestors, is also a genomic risk factor for Alzheimer's disease. There are many behavioral characteristics that are specific to Homo sapiens in addition to childbirth. Specific and elaborate tool creation and use, and language, are other areas. Humans do not simply communicate; language is essential to their survival and complex culture. This culture must be learned, is variable, and is highly malleable to fit distinct social parameters. Humans do not simply communicate with a code or general understanding, but adhere to social standards, hierarchies, technologies and complex systems of regulations, and must maintain many dimensions of relationships in order to survive. This complexity of language and the dependence on culture are uniquely human. Intraspecific behaviors and variations exist within Homo sapiens, which adds to the complexity of culture and language. Intraspecific variations are differences in behavior or biology within a species. These variations and the complexity within our society lead to social constructs such as race, gender, and roles. These add to power dynamics and hierarchies within the already multifaceted society.
Subtopics
Characteristics may further be described as being interspecific, intraspecific, and conspecific.
Interspecific
Interspecificity (literally between/among species), or being interspecific, describes issues between individuals of separate species.
These may include:
Interspecies communication, communication between different species of animals, plants, fungi or bacteria
Interspecific competition, when individuals of different species compete for the same resource in an ecosystem
Interspecific feeding, when adults of one species feed the young of another species
Interspecific hybridization, when two species within the same genus generate offspring. Offspring may develop into adults but may be sterile.
Interspecific interaction, the effects organisms in a community have on one another
Interspecific pregnancy, pregnancy involving an embryo or fetus belonging to another species than the carrier
Intraspecific
Intraspecificity (literally within species), or being intraspecific, describes behaviors, biochemical variations and other issues within individuals of a single species. These may include:
Intraspecific antagonism, when individuals of the same species are hostile to one another
Intraspecific competition, when individuals or groups of individuals from the same species compete for the same resource in an ecosystem
Intraspecific hybridization, hybridization between sub-species within a species
Intraspecific mimicry
Conspecific
Two or more organisms, populations, or taxa are conspecific if they belong to the same species. Where different species can interbreed and their gametes compete, the conspecific gametes take precedence over heterospecific gametes. This is known as conspecific sperm precedence, or conspecific pollen precedence in plants.
Heterospecific
The antonym of conspecificity is the term heterospecificity: two individuals are heterospecific if they are considered to belong to different biological species.
Related concepts
Congeners are organisms within the same genus.
Animal
Animals are multicellular, eukaryotic organisms in the biological kingdom Animalia. With few exceptions, animals consume organic material, breathe oxygen, have myocytes and are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. Animals form a clade, meaning that they arose from a single common ancestor. Over 1.5 million living animal species have been described, of which around 1.05 million are insects, over 85,000 are molluscs, and around 65,000 are vertebrates. It has been estimated there are as many as 7.77 million animal species on Earth. Animal body lengths range from to . They have complex ecologies and interactions with each other and their environments, forming intricate food webs. The scientific study of animals is known as zoology, and the study of animal behaviour is known as ethology. Most living animal species belong to the infrakingdom Bilateria, a highly proliferative clade whose members have a bilaterally symmetric body plan. The vast majority belong to two large superphyla: the protostomes, which includes organisms such as the arthropods, molluscs, flatworms, annelids and nematodes; and the deuterostomes, which include the echinoderms, hemichordates and chordates, the latter of which contains the vertebrates. The simple Xenacoelomorpha have an uncertain position within Bilateria. Animals first appear in the fossil record in the late Cryogenian period and diversified in the subsequent Ediacaran. Earlier evidence of animals is still controversial; the sponge-like organism Otavia has been dated back to the Tonian period at the start of the Neoproterozoic, but its identity as an animal is heavily contested. Nearly all modern animal phyla became clearly established in the fossil record as marine species during the Cambrian explosion, which began around 539 million years ago (Mya), and most classes during the Ordovician radiation 485.4 Mya. 6,331 groups of genes common to all living animals have been identified; these may have arisen from a single common ancestor that lived about 650 Mya during the Cryogenian period. Historically, Aristotle divided animals into those with blood and those without. Carl Linnaeus created the first hierarchical biological classification for animals in 1758 with his Systema Naturae, which Jean-Baptiste Lamarck expanded into 14 phyla by 1809. In 1874, Ernst Haeckel divided the animal kingdom into the multicellular Metazoa (now synonymous with Animalia) and the Protozoa, single-celled organisms no longer considered animals. In modern times, the biological classification of animals relies on advanced techniques, such as molecular phylogenetics, which are effective at demonstrating the evolutionary relationships between taxa. Humans make use of many other animal species for food (including meat, eggs, and dairy products), for materials (such as leather, fur, and wool), as pets and as working animals for transportation, and services. Dogs, the first domesticated animal, have been used in hunting, in security and in warfare, as have horses, pigeons and birds of prey; while other terrestrial and aquatic animals are hunted for sports, trophies or profits. Non-human animals are also an important cultural element of human evolution, having appeared in cave arts and totems since the earliest times, and are frequently featured in mythology, religion, arts, literature, heraldry, politics, and sports. 
Etymology The word animal comes from the Latin noun of the same meaning, which is itself derived from Latin 'having breath or soul'. The biological definition includes all members of the kingdom Animalia. In colloquial usage, the term animal is often used to refer only to nonhuman animals. The term metazoa is derived from Ancient Greek μετα (meta) 'after' (in biology, the prefix meta- stands for 'later') and ζῷᾰ (zōia) 'animals', plural of ζῷον zōion 'animal'. Characteristics Animals have several characteristics that set them apart from other living things. Animals are eukaryotic and multicellular. Unlike plants and algae, which produce their own nutrients, animals are heterotrophic, feeding on organic material and digesting it internally. With very few exceptions, animals respire aerobically. All animals are motile (able to spontaneously move their bodies) during at least part of their life cycle, but some animals, such as sponges, corals, mussels, and barnacles, later become sessile. The blastula is a stage in embryonic development that is unique to animals, allowing cells to be differentiated into specialised tissues and organs. Structure All animals are composed of cells, surrounded by a characteristic extracellular matrix composed of collagen and elastic glycoproteins. During development, the animal extracellular matrix forms a relatively flexible framework upon which cells can move about and be reorganised, making the formation of complex structures possible. This may be calcified, forming structures such as shells, bones, and spicules. In contrast, the cells of other multicellular organisms (primarily algae, plants, and fungi) are held in place by cell walls, and so develop by progressive growth. Animal cells uniquely possess the cell junctions called tight junctions, gap junctions, and desmosomes. With few exceptions—in particular, the sponges and placozoans—animal bodies are differentiated into tissues. These include muscles, which enable locomotion, and nerve tissues, which transmit signals and coordinate the body. Typically, there is also an internal digestive chamber with either one opening (in Ctenophora, Cnidaria, and flatworms) or two openings (in most bilaterians). Reproduction and development Nearly all animals make use of some form of sexual reproduction. They produce haploid gametes by meiosis; the smaller, motile gametes are spermatozoa and the larger, non-motile gametes are ova. These fuse to form zygotes, which develop via mitosis into a hollow sphere, called a blastula. In sponges, blastula larvae swim to a new location, attach to the seabed, and develop into a new sponge. In most other groups, the blastula undergoes more complicated rearrangement. It first invaginates to form a gastrula with a digestive chamber and two separate germ layers, an external ectoderm and an internal endoderm. In most cases, a third germ layer, the mesoderm, also develops between them. These germ layers then differentiate to form tissues and organs. Repeated instances of mating with a close relative during sexual reproduction generally leads to inbreeding depression within a population due to the increased prevalence of harmful recessive traits. Animals have evolved numerous mechanisms for avoiding close inbreeding. Some animals are capable of asexual reproduction, which often results in a genetic clone of the parent. 
This may take place through fragmentation; budding, such as in Hydra and other cnidarians; or parthenogenesis, where fertile eggs are produced without mating, such as in aphids. Ecology Animals are categorised into ecological groups depending on their trophic levels and how they consume organic material. Such groupings include carnivores (further divided into subcategories such as piscivores, insectivores, ovivores, etc.), herbivores (subcategorized into folivores, graminivores, frugivores, granivores, nectarivores, algivores, etc.), omnivores, fungivores, scavengers/detritivores, and parasites. Interactions between animals of each biome form complex food webs within that ecosystem. In carnivorous or omnivorous species, predation is a consumer–resource interaction where the predator feeds on another organism, its prey, who often evolves anti-predator adaptations to avoid being fed upon. Selective pressures imposed on one another lead to an evolutionary arms race between predator and prey, resulting in various antagonistic/competitive coevolutions. Almost all multicellular predators are animals. Some consumers use multiple methods; for example, in parasitoid wasps, the larvae feed on the hosts' living tissues, killing them in the process, but the adults primarily consume nectar from flowers. Other animals may have very specific feeding behaviours, such as hawksbill sea turtles which mainly eat sponges. Most animals rely on biomass and bioenergy produced by plants and phytoplanktons (collectively called producers) through photosynthesis. Herbivores, as primary consumers, eat the plant material directly to digest and absorb the nutrients, while carnivores and other animals on higher trophic levels indirectly acquire the nutrients by eating the herbivores or other animals that have eaten the herbivores. Animals oxidize carbohydrates, lipids, proteins and other biomolecules, which allows the animal to grow and to sustain basal metabolism and fuel other biological processes such as locomotion. Some benthic animals living close to hydrothermal vents and cold seeps on the dark sea floor consume organic matter produced through chemosynthesis (via oxidizing inorganic compounds such as hydrogen sulfide) by archaea and bacteria. Animals evolved in the sea. Lineages of arthropods colonised land around the same time as land plants, probably between 510 and 471 million years ago during the Late Cambrian or Early Ordovician. Vertebrates such as the lobe-finned fish Tiktaalik started to move on to land in the late Devonian, about 375 million years ago. Animals occupy virtually all of earth's habitats and microhabitats, with faunas adapted to salt water, hydrothermal vents, fresh water, hot springs, swamps, forests, pastures, deserts, air, and the interiors of other organisms. Animals are however not particularly heat tolerant; very few of them can survive at constant temperatures above or in the most extreme cold deserts of continental Antarctica. Diversity Size The blue whale (Balaenoptera musculus) is the largest animal that has ever lived, weighing up to 190 tonnes and measuring up to long. The largest extant terrestrial animal is the African bush elephant (Loxodonta africana), weighing up to 12.25 tonnes and measuring up to long. The largest terrestrial animals that ever lived were titanosaur sauropod dinosaurs such as Argentinosaurus, which may have weighed as much as 73 tonnes, and Supersaurus which may have reached 39 meters. 
Several animals are microscopic; some Myxozoa (obligate parasites within the Cnidaria) never grow larger than 20 μm, and one of the smallest species (Myxobolus shekel) is no more than 8.5 μm when fully grown. Numbers and habitats of major phyla The following table lists estimated numbers of described extant species for the major animal phyla, along with their principal habitats (terrestrial, fresh water, and marine), and free-living or parasitic ways of life. Species estimates shown here are based on numbers described scientifically; much larger estimates have been calculated based on various means of prediction, and these can vary wildly. For instance, around 25,000–27,000 species of nematodes have been described, while published estimates of the total number of nematode species include 10,000–20,000; 500,000; 10 million; and 100 million. Using patterns within the taxonomic hierarchy, the total number of animal species—including those not yet described—was calculated to be about 7.77 million in 2011. Evolutionary origin Evidence of animals is found as long ago as the Cryogenian period. 24-Isopropylcholestane (24-ipc) has been found in rocks from roughly 650 million years ago; it is only produced by sponges and pelagophyte algae. Its likely origin is from sponges based on molecular clock estimates for the origin of 24-ipc production in both groups. Analyses of pelagophyte algae consistently recover a Phanerozoic origin, while analyses of sponges recover a Neoproterozoic origin, consistent with the appearance of 24-ipc in the fossil record. The first body fossils of animals appear in the Ediacaran, represented by forms such as Charnia and Spriggina. It had long been doubted whether these fossils truly represented animals, but the discovery of the animal lipid cholesterol in fossils of Dickinsonia establishes their nature. Animals are thought to have originated under low-oxygen conditions, suggesting that they were capable of living entirely by anaerobic respiration, but as they became specialized for aerobic metabolism they became fully dependent on oxygen in their environments. Many animal phyla first appear in the fossil record during the Cambrian explosion, starting about 539 million years ago, in beds such as the Burgess shale. Extant phyla in these rocks include molluscs, brachiopods, onychophorans, tardigrades, arthropods, echinoderms and hemichordates, along with numerous now-extinct forms such as the predatory Anomalocaris. The apparent suddenness of the event may however be an artifact of the fossil record, rather than showing that all these animals appeared simultaneously. That view is supported by the discovery of Auroralumina attenboroughii, the earliest known Ediacaran crown-group cnidarian (557–562 mya, some 20 million years before the Cambrian explosion) from Charnwood Forest, England. It is thought to be one of the earliest predators, catching small prey with its nematocysts as modern cnidarians do. Some palaeontologists have suggested that animals appeared much earlier than the Cambrian explosion, possibly as early as 1 billion years ago. Early fossils that might represent animals appear for example in the 665-million-year-old rocks of the Trezona Formation of South Australia. These fossils are interpreted as most probably being early sponges. Trace fossils such as tracks and burrows found in the Tonian period (from 1 gya) may indicate the presence of triploblastic worm-like animals, roughly as large (about 5 mm wide) and complex as earthworms. 
However, similar tracks are produced by the giant single-celled protist Gromia sphaerica, so the Tonian trace fossils may not indicate early animal evolution. Around the same time, the layered mats of microorganisms called stromatolites decreased in diversity, perhaps due to grazing by newly evolved animals. Objects such as sediment-filled tubes that resemble trace fossils of the burrows of wormlike animals have been found in 1.2 gya rocks in North America, in 1.5 gya rocks in Australia and North America, and in 1.7 gya rocks in Australia. Their interpretation as having an animal origin is disputed, as they might be water-escape or other structures. Phylogeny External phylogeny Animals are monophyletic, meaning they are derived from a common ancestor. Animals are the sister group to the choanoflagellates, with which they form the Choanozoa. The dates on the phylogenetic tree indicate approximately how many millions of years ago the lineages split. Ros-Rocher and colleagues (2021) trace the origins of animals to unicellular ancestors, providing the external phylogeny shown in the cladogram. Uncertainty of relationships is indicated with dashed lines. Internal phylogeny The most basal animals, the Porifera, Ctenophora, Cnidaria, and Placozoa, have body plans that lack bilateral symmetry. Their relationships are still disputed; the sister group to all other animals could be the Porifera or the Ctenophora, both of which lack hox genes, which are important for body plan development. Hox genes are found in the Placozoa, Cnidaria, and Bilateria. 6,331 groups of genes common to all living animals have been identified; these may have arisen from a single common ancestor that lived 650 million years ago in the Precambrian. 25 of these are novel core gene groups, found only in animals; of those, 8 are for essential components of the Wnt and TGF-beta signalling pathways which may have enabled animals to become multicellular by providing a pattern for the body's system of axes (in three dimensions), and another 7 are for transcription factors including homeodomain proteins involved in the control of development. Giribet and Edgecombe (2020) provide what they consider to be a consensus internal phylogeny of the animals, embodying uncertainty about the structure at the base of the tree (dashed lines). An alternative phylogeny, from Kapli and colleagues (2021), proposes a clade Xenambulacraria for the Xenacoelamorpha + Ambulacraria; this is either within Deuterostomia, as sister to Chordata, or the Deuterostomia are recovered as paraphyletic, and Xenambulacraria is sister to the proposed clade Centroneuralia, consisting of Chordata + Protostomia. Eumetazoa, a clade which contains Ctenophora and ParaHoxozoa, has been proposed as a sister group to Porifera. A competing hypothesis is the Benthozoa clade, which would consist of Porifera and ParaHoxozoa as a sister group of Ctenophora. Non-bilateria Several animal phyla lack bilateral symmetry. These are the Porifera (sea sponges), Placozoa, Cnidaria (which includes jellyfish, sea anemones, and corals), and Ctenophora (comb jellies). Sponges are physically very distinct from other animals, and were long thought to have diverged first, representing the oldest animal phylum and forming a sister clade to all other animals. Despite their morphological dissimilarity with all other animals, genetic evidence suggests sponges may be more closely related to other animals than the comb jellies are. 
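The clade relationships discussed in this section can be written down as a small tree data structure and queried programmatically. The Python sketch below uses a deliberately simplified topology that is only roughly consistent with the text (Porifera and Ctenophora branching outside ParaHoxozoa, which here contains Placozoa, Cnidaria and Bilateria, with Bilateria split into protostomes and deuterostomes, each represented by just two phyla); it is not the exact consensus tree of Giribet and Edgecombe, and the structure is included only to illustrate how such hypotheses are encoded.

```python
# Minimal sketch: an animal phylogeny as a nested (name, children) structure,
# with helpers to print the hierarchy and list the phyla under a named clade.
# The topology is a simplification for illustration, not a published tree.

TREE = (
    "Animalia",
    [
        ("Porifera", []),
        ("Ctenophora", []),
        ("ParaHoxozoa", [
            ("Placozoa", []),
            ("Cnidaria", []),
            ("Bilateria", [
                ("Protostomia", [
                    ("Ecdysozoa", [("Arthropoda", []), ("Nematoda", [])]),
                    ("Spiralia", [("Mollusca", []), ("Annelida", [])]),
                ]),
                ("Deuterostomia", [
                    ("Echinodermata", []),
                    ("Chordata", []),
                ]),
            ]),
        ]),
    ],
)

def print_tree(node, depth=0):
    """Depth-first traversal that prints the clade hierarchy with indentation."""
    name, children = node
    print("  " * depth + name)
    for child in children:
        print_tree(child, depth + 1)

def find(node, target):
    """Locate a named clade by depth-first search; returns None if absent."""
    name, children = node
    if name == target:
        return node
    for child in children:
        hit = find(child, target)
        if hit:
            return hit
    return None

def tips(node):
    """Return the terminal taxa (leaves) under a clade."""
    name, children = node
    if not children:
        return [name]
    return [tip for child in children for tip in tips(child)]

print_tree(TREE)
print("Phyla under Bilateria in this sketch:", tips(find(TREE, "Bilateria")))
```

Encodings like this, or the equivalent Newick strings used by phylogenetics software, are what allow questions such as "which phyla fall within Bilateria under this hypothesis" to be answered mechanically.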
Sponges lack the complex organization found in most other animal phyla; their cells are differentiated, but in most cases not organised into distinct tissues, unlike all other animals. They typically feed by drawing in water through pores, filtering out small particles of food. The comb jellies and Cnidaria are radially symmetric and have digestive chambers with a single opening, which serves as both mouth and anus. Animals in both phyla have distinct tissues, but these are not organised into discrete organs. They are diploblastic, having only two main germ layers, ectoderm and endoderm. The tiny placozoans have no permanent digestive chamber and no symmetry; they superficially resemble amoebae. Their phylogeny is poorly defined, and under active research. Bilateria The remaining animals, the great majority—comprising some 29 phyla and over a million species—form the Bilateria clade, which have a bilaterally symmetric body plan. The Bilateria are triploblastic, with three well-developed germ layers, and their tissues form distinct organs. The digestive chamber has two openings, a mouth and an anus, and there is an internal body cavity, a coelom or pseudocoelom. These animals have a head end (anterior) and a tail end (posterior), a back (dorsal) surface and a belly (ventral) surface, and a left and a right side. Having a front end means that this part of the body encounters stimuli, such as food, favouring cephalisation, the development of a head with sense organs and a mouth. Many bilaterians have a combination of circular muscles that constrict the body, making it longer, and an opposing set of longitudinal muscles, that shorten the body; these enable soft-bodied animals with a hydrostatic skeleton to move by peristalsis. They also have a gut that extends through the basically cylindrical body from mouth to anus. Many bilaterian phyla have primary larvae which swim with cilia and have an apical organ containing sensory cells. However, over evolutionary time, descendant spaces have evolved which have lost one or more of each of these characteristics. For example, adult echinoderms are radially symmetric (unlike their larvae), while some parasitic worms have extremely simplified body structures. Genetic studies have considerably changed zoologists' understanding of the relationships within the Bilateria. Most appear to belong to two major lineages, the protostomes and the deuterostomes. It is often suggested that the basalmost bilaterians are the Xenacoelomorpha, with all other bilaterians belonging to the subclade Nephrozoa. However, this suggestion has been contested, with other studies finding that xenacoelomorphs are more closely related to Ambulacraria than to other bilaterians. Protostomes and deuterostomes Protostomes and deuterostomes differ in several ways. Early in development, deuterostome embryos undergo radial cleavage during cell division, while many protostomes (the Spiralia) undergo spiral cleavage. Animals from both groups possess a complete digestive tract, but in protostomes the first opening of the embryonic gut develops into the mouth, and the anus forms secondarily. In deuterostomes, the anus forms first while the mouth develops secondarily. Most protostomes have schizocoelous development, where cells simply fill in the interior of the gastrula to form the mesoderm. In deuterostomes, the mesoderm forms by enterocoelic pouching, through invagination of the endoderm. The main deuterostome phyla are the Echinodermata and the Chordata. 
Echinoderms are exclusively marine and include starfish, sea urchins, and sea cucumbers. The chordates are dominated by the vertebrates (animals with backbones), which consist of fishes, amphibians, reptiles, birds, and mammals. The deuterostomes also include the Hemichordata (acorn worms). Ecdysozoa The Ecdysozoa are protostomes, named after their shared trait of ecdysis, growth by moulting. They include the largest animal phylum, the Arthropoda, which contains insects, spiders, crabs, and their kin. All of these have a body divided into repeating segments, typically with paired appendages. Two smaller phyla, the Onychophora and Tardigrada, are close relatives of the arthropods and share these traits. The ecdysozoans also include the Nematoda or roundworms, perhaps the second largest animal phylum. Roundworms are typically microscopic and occur in nearly every environment where there is water; some are important parasites. Smaller phyla related to them are the Nematomorpha or horsehair worms, and the Kinorhyncha, Priapulida, and Loricifera. These groups have a reduced coelom, called a pseudocoelom. Spiralia The Spiralia are a large group of protostomes that develop by spiral cleavage in the early embryo. The Spiralia's phylogeny has been disputed, but it contains a large clade, the superphylum Lophotrochozoa, and smaller groups of phyla such as the Rouphozoa which includes the gastrotrichs and the flatworms. All of these are grouped as the Platytrochozoa, which has a sister group, the Gnathifera, which includes the rotifers. The Lophotrochozoa includes the molluscs, annelids, brachiopods, nemerteans, bryozoa and entoprocts. The molluscs, the second-largest animal phylum by number of described species, includes snails, clams, and squids, while the annelids are the segmented worms, such as earthworms, lugworms, and leeches. These two groups have long been considered close relatives because they share trochophore larvae. History of classification In the classical era, Aristotle divided animals, based on his own observations, into those with blood (roughly, the vertebrates) and those without. The animals were then arranged on a scale from man (with blood, 2 legs, rational soul) down through the live-bearing tetrapods (with blood, 4 legs, sensitive soul) and other groups such as crustaceans (no blood, many legs, sensitive soul) down to spontaneously generating creatures like sponges (no blood, no legs, vegetable soul). Aristotle was uncertain whether sponges were animals, which in his system ought to have sensation, appetite, and locomotion, or plants, which did not: he knew that sponges could sense touch and would contract if about to be pulled off their rocks, but that they were rooted like plants and never moved about. In 1758, Carl Linnaeus created the first hierarchical classification in his Systema Naturae. In his original scheme, the animals were one of three kingdoms, divided into the classes of Vermes, Insecta, Pisces, Amphibia, Aves, and Mammalia. Since then, the last four have all been subsumed into a single phylum, the Chordata, while his Insecta (which included the crustaceans and arachnids) and Vermes have been renamed or broken up. The process was begun in 1793 by Jean-Baptiste de Lamarck, who called the Vermes une espèce de chaos (a chaotic mess) and split the group into three new phyla: worms, echinoderms, and polyps (which contained corals and jellyfish). 
By 1809, in his Philosophie Zoologique, Lamarck had created 9 phyla apart from vertebrates (where he still had 4 phyla: mammals, birds, reptiles, and fish) and molluscs, namely cirripedes, annelids, crustaceans, arachnids, insects, worms, radiates, polyps, and infusorians. In his 1817 Le Règne Animal, Georges Cuvier used comparative anatomy to group the animals into four embranchements ("branches" with different body plans, roughly corresponding to phyla), namely vertebrates, molluscs, articulated animals (arthropods and annelids), and zoophytes (radiata) (echinoderms, cnidaria and other forms). This division into four was followed by the embryologist Karl Ernst von Baer in 1828, the zoologist Louis Agassiz in 1857, and the comparative anatomist Richard Owen in 1860. In 1874, Ernst Haeckel divided the animal kingdom into two subkingdoms: Metazoa (multicellular animals, with five phyla: coelenterates, echinoderms, articulates, molluscs, and vertebrates) and Protozoa (single-celled animals), including a sixth animal phylum, sponges. The protozoa were later moved to the former kingdom Protista, leaving only the Metazoa as a synonym of Animalia. In human culture Practical uses The human population exploits a large number of other animal species for food, both of domesticated livestock species in animal husbandry and, mainly at sea, by hunting wild species. Marine fish of many species are caught commercially for food. A smaller number of species are farmed commercially. Humans and their livestock make up more than 90% of the biomass of all terrestrial vertebrates, and almost as much as all insects combined. Invertebrates including cephalopods, crustaceans, insects – principally bees and silkworms – and bivalve or gastropod molluscs are hunted or farmed for food, fibres. Chickens, cattle, sheep, pigs, and other animals are raised as livestock for meat across the world. Animal fibres such as wool and silk are used to make textiles, while animal sinews have been used as lashings and bindings, and leather is widely used to make shoes and other items. Animals have been hunted and farmed for their fur to make items such as coats and hats. Dyestuffs including carmine (cochineal), shellac, and kermes have been made from the bodies of insects. Working animals including cattle and horses have been used for work and transport from the first days of agriculture. Animals such as the fruit fly Drosophila melanogaster serve a major role in science as experimental models. Animals have been used to create vaccines since their discovery in the 18th century. Some medicines such as the cancer drug trabectedin are based on toxins or other molecules of animal origin. People have used hunting dogs to help chase down and retrieve animals, and birds of prey to catch birds and mammals, while tethered cormorants have been used to catch fish. Poison dart frogs have been used to poison the tips of blowpipe darts. A wide variety of animals are kept as pets, from invertebrates such as tarantulas, octopuses, and praying mantises, reptiles such as snakes and chameleons, and birds including canaries, parakeets, and parrots all finding a place. However, the most kept pet species are mammals, namely dogs, cats, and rabbits. There is a tension between the role of animals as companions to humans, and their existence as individuals with rights of their own. A wide variety of terrestrial and aquatic animals are hunted for sport. Symbolic uses The signs of the Western and Chinese zodiacs are based on animals. 
In China and Japan, the butterfly has been seen as the personification of a person's soul, and in classical representation the butterfly is also the symbol of the soul. Animals have been the subjects of art from the earliest times, both historical, as in ancient Egypt, and prehistoric, as in the cave paintings at Lascaux. Major animal paintings include Albrecht Dürer's 1515 The Rhinoceros, and George Stubbs's horse portrait Whistlejacket. Insects, birds and mammals play roles in literature and film, such as in giant bug movies. Animals including insects and mammals feature in mythology and religion. The scarab beetle was sacred in ancient Egypt, and the cow is sacred in Hinduism. Among other mammals, deer, horses, lions, bats, bears, and wolves are the subjects of myths and worship.
Biotechnology
Biotechnology is a multidisciplinary field that involves the integration of natural sciences and engineering sciences in order to achieve the application of organisms and parts thereof for products and services. The term biotechnology was first used by Károly Ereky in 1919 to refer to the production of products from raw materials with the aid of living organisms. The core principle of biotechnology involves harnessing biological systems and organisms, such as bacteria, yeast, and plants, to perform specific tasks or produce valuable substances. Biotechnology had a significant impact on many areas of society, from medicine to agriculture to environmental science. One of the key techniques used in biotechnology is genetic engineering, which allows scientists to modify the genetic makeup of organisms to achieve desired outcomes. This can involve inserting genes from one organism into another, and consequently, create new traits or modifying existing ones. Other important techniques used in biotechnology include tissue culture, which allows researchers to grow cells and tissues in the lab for research and medical purposes, and fermentation, which is used to produce a wide range of products such as beer, wine, and cheese. The applications of biotechnology are diverse and have led to the development of essential products like life-saving drugs, biofuels, genetically modified crops, and innovative materials. It has also been used to address environmental challenges, such as developing biodegradable plastics and using microorganisms to clean up contaminated sites. Biotechnology is a rapidly evolving field with significant potential to address pressing global challenges and improve the quality of life for people around the world; however, despite its numerous benefits, it also poses ethical and societal challenges, such as questions around genetic modification and intellectual property rights. As a result, there is ongoing debate and regulation surrounding the use and application of biotechnology in various industries and fields. Definition The concept of biotechnology encompasses a wide range of procedures for modifying living organisms for human purposes, going back to domestication of animals, cultivation of plants, and "improvements" to these through breeding programs that employ artificial selection and hybridization. Modern usage also includes genetic engineering, as well as cell and tissue culture technologies. The American Chemical Society defines biotechnology as the application of biological organisms, systems, or processes by various industries to learning about the science of life and the improvement of the value of materials and organisms, such as pharmaceuticals, crops, and livestock. As per the European Federation of Biotechnology, biotechnology is the integration of natural science and organisms, cells, parts thereof, and molecular analogues for products and services. Biotechnology is based on the basic biological sciences (e.g., molecular biology, biochemistry, cell biology, embryology, genetics, microbiology) and conversely provides methods to support and perform basic research in biology. 
Biotechnology is the research and development in the laboratory using bioinformatics for exploration, extraction, exploitation, and production from any living organisms and any source of biomass by means of biochemical engineering where high value-added products could be planned (reproduced by biosynthesis, for example), forecasted, formulated, developed, manufactured, and marketed for the purpose of sustainable operations (for the return from bottomless initial investment on R & D) and gaining durable patents rights (for exclusives rights for sales, and prior to this to receive national and international approval from the results on animal experiment and human experiment, especially on the pharmaceutical branch of biotechnology to prevent any undetected side-effects or safety concerns by using the products). The utilization of biological processes, organisms or systems to produce products that are anticipated to improve human lives is termed biotechnology. By contrast, bioengineering is generally thought of as a related field that more heavily emphasizes higher systems approaches (not necessarily the altering or using of biological materials directly) for interfacing with and utilizing living things. Bioengineering is the application of the principles of engineering and natural sciences to tissues, cells, and molecules. This can be considered as the use of knowledge from working with and manipulating biology to achieve a result that can improve functions in plants and animals. Relatedly, biomedical engineering is an overlapping field that often draws upon and applies biotechnology (by various definitions), especially in certain sub-fields of biomedical or chemical engineering such as tissue engineering, biopharmaceutical engineering, and genetic engineering. History Although not normally what first comes to mind, many forms of human-derived agriculture clearly fit the broad definition of "utilizing a biotechnological system to make products". Indeed, the cultivation of plants may be viewed as the earliest biotechnological enterprise. Agriculture has been theorized to have become the dominant way of producing food since the Neolithic Revolution. Through early biotechnology, the earliest farmers selected and bred the best-suited crops (e.g., those with the highest yields) to produce enough food to support a growing population. As crops and fields became increasingly large and difficult to maintain, it was discovered that specific organisms and their by-products could effectively fertilize, restore nitrogen, and control pests. Throughout the history of agriculture, farmers have inadvertently altered the genetics of their crops through introducing them to new environments and breeding them with other plants — one of the first forms of biotechnology. These processes also were included in early fermentation of beer. These processes were introduced in early Mesopotamia, Egypt, China and India, and still use the same basic biological methods. In brewing, malted grains (containing enzymes) convert starch from grains into sugar and then adding specific yeasts to produce beer. In this process, carbohydrates in the grains broke down into alcohols, such as ethanol. Later, other cultures produced the process of lactic acid fermentation, which produced other preserved foods, such as soy sauce. Fermentation was also used in this time period to produce leavened bread. 
Although the process of fermentation was not fully understood until Louis Pasteur's work in 1857, it is still the first use of biotechnology to convert a food source into another form. Before the time of Charles Darwin's work and life, animal and plant scientists had already used selective breeding. Darwin added to that body of work with his scientific observations about the ability of selective breeding to change species. These accounts contributed to Darwin's theory of natural selection. For thousands of years, humans have used selective breeding to improve the production of crops and livestock for food. In selective breeding, organisms with desirable characteristics are mated to produce offspring with the same characteristics. For example, this technique was used with corn to produce the largest and sweetest crops. In the early twentieth century scientists gained a greater understanding of microbiology and explored ways of manufacturing specific products. In 1917, Chaim Weizmann first used a pure microbiological culture in an industrial process, fermenting corn starch with Clostridium acetobutylicum to produce acetone, which the United Kingdom desperately needed to manufacture explosives during World War I. Biotechnology has also led to the development of antibiotics. In 1928, Alexander Fleming discovered the mold Penicillium. His work led Howard Florey, Ernst Boris Chain and Norman Heatley to purify the antibiotic compound formed by the mold, producing what we today know as penicillin. In 1940, penicillin became available for medicinal use to treat bacterial infections in humans. The field of modern biotechnology is generally thought of as having been born in 1971 when Paul Berg's (Stanford) experiments in gene splicing had early success. Herbert W. Boyer (Univ. Calif. at San Francisco) and Stanley N. Cohen (Stanford) significantly advanced the new technology in 1972 by transferring genetic material into a bacterium, such that the imported material would be reproduced. The commercial viability of a biotechnology industry was significantly expanded on June 16, 1980, when the United States Supreme Court ruled that a genetically modified microorganism could be patented in the case of Diamond v. Chakrabarty. Indian-born Ananda Chakrabarty, working for General Electric, had modified a bacterium (of the genus Pseudomonas) capable of breaking down crude oil, which he proposed to use in treating oil spills. (Chakrabarty's work did not involve gene manipulation but rather the transfer of entire organelles between strains of the Pseudomonas bacterium.) The MOSFET (metal–oxide–semiconductor field-effect transistor) was invented at Bell Labs between 1955 and 1960. Two years later, in 1962, Leland C. Clark and Champ Lyons invented the first biosensor. Biosensor MOSFETs were later developed, and they have since been widely used to measure physical, chemical, biological and environmental parameters. The first BioFET was the ion-sensitive field-effect transistor (ISFET), invented by Piet Bergveld in 1970. It is a special type of MOSFET, where the metal gate is replaced by an ion-sensitive membrane, electrolyte solution and reference electrode. The ISFET is widely used in biomedical applications, such as the detection of DNA hybridization, biomarker detection from blood, antibody detection, glucose measurement, pH sensing, and genetic technology.
By the mid-1980s, other BioFETs had been developed, including the gas sensor FET (GASFET), pressure sensor FET (PRESSFET), chemical field-effect transistor (ChemFET), reference ISFET (REFET), enzyme-modified FET (ENFET) and immunologically modified FET (IMFET). By the early 2000s, BioFETs such as the DNA field-effect transistor (DNAFET), gene-modified FET (GenFET) and cell-potential BioFET (CPFET) had been developed.
A factor influencing the biotechnology sector's success is improved intellectual property rights legislation, and its enforcement, worldwide, as well as strengthened demand for medical and pharmaceutical products. Rising demand for biofuels is expected to be good news for the biotechnology sector, with the Department of Energy estimating that ethanol usage could reduce U.S. petroleum-derived fuel consumption by up to 30% by 2030. The biotechnology sector has allowed the U.S. farming industry to rapidly increase its supply of corn and soybeans, the main inputs into biofuels, by developing genetically modified seeds that resist pests and drought. By increasing farm productivity, biotechnology boosts biofuel production.

Examples
Biotechnology has applications in four major industrial areas, including health care (medical), crop production and agriculture, non-food (industrial) uses of crops and other products (e.g., biodegradable plastics, vegetable oil, biofuels), and environmental uses. For example, one application of biotechnology is the directed use of microorganisms for the manufacture of organic products (examples include beer and milk products). Another example is the use of naturally present bacteria by the mining industry in bioleaching. Biotechnology is also used to recycle, treat waste, clean up sites contaminated by industrial activities (bioremediation), and produce biological weapons.
A series of derived terms have been coined to identify several branches of biotechnology, for example:
Bioinformatics (or "gold biotechnology") is an interdisciplinary field that addresses biological problems using computational techniques, and makes possible the rapid organization and analysis of biological data. The field may also be referred to as computational biology, and can be defined as "conceptualizing biology in terms of molecules and then applying informatics techniques to understand and organize the information associated with these molecules, on a large scale". Bioinformatics plays a key role in various areas, such as functional genomics, structural genomics, and proteomics, and forms a key component in the biotechnology and pharmaceutical sector.
Blue biotechnology is based on the exploitation of sea resources to create products and industrial applications. This branch is used mostly by the refining and combustion industries, principally for the production of bio-oils from photosynthetic micro-algae.
Green biotechnology is biotechnology applied to agricultural processes. An example would be the selection and domestication of plants via micropropagation. Another example is the designing of transgenic plants to grow under specific environments in the presence (or absence) of chemicals. One hope is that green biotechnology might produce more environmentally friendly solutions than traditional industrial agriculture. An example of this is the engineering of a plant to express a pesticide, thereby ending the need for external application of pesticides. An example of this would be Bt corn.
Whether or not green biotechnology products such as this are ultimately more environmentally friendly is a topic of considerable debate. Green biotechnology is commonly considered the next phase of the green revolution: a platform for tackling world hunger through technologies that produce plants that are more fertile and more resistant to biotic and abiotic stress, and that promote environmentally friendly fertilizers and the use of biopesticides; it is mainly focused on the development of agriculture. On the other hand, some uses of green biotechnology involve microorganisms to clean up and reduce waste.
Red biotechnology is the use of biotechnology in the medical and pharmaceutical industries, and in health preservation. This branch involves the production of vaccines and antibiotics, regenerative therapies, the creation of artificial organs and new diagnostics of diseases, as well as the development of hormones, stem cells, antibodies, siRNA and diagnostic tests.
White biotechnology, also known as industrial biotechnology, is biotechnology applied to industrial processes. An example is the designing of an organism to produce a useful chemical. Another example is the use of enzymes as industrial catalysts to either produce valuable chemicals or destroy hazardous/polluting chemicals. White biotechnology tends to consume fewer resources than traditional processes used to produce industrial goods.
"Yellow biotechnology" refers to the use of biotechnology in food production (food industry), for example in making wine (winemaking), cheese (cheesemaking), and beer (brewing) by fermentation. It has also been used to refer to biotechnology applied to insects. This includes biotechnology-based approaches for the control of harmful insects, the characterisation and utilisation of active ingredients or genes of insects for research, or application in agriculture and medicine, and various other approaches.
Gray biotechnology is dedicated to environmental applications, and is focused on the maintenance of biodiversity and the removal of pollutants.
Brown biotechnology is related to the management of arid lands and deserts. One application is the creation of enhanced seeds that resist the extreme environmental conditions of arid regions; the branch is also concerned with innovation, the creation of agricultural techniques and the management of resources.
Violet biotechnology is concerned with legal, ethical and philosophical issues surrounding biotechnology.
Microbial biotechnology has been proposed for the rapidly emerging area of biotechnology applications in space and microgravity (the space bioeconomy).
Dark biotechnology is the color associated with bioterrorism or biological weapons and biowarfare, which uses microorganisms and toxins to cause disease and death in humans, livestock and crops.

Medicine
In medicine, modern biotechnology has many applications in areas such as pharmaceutical drug discoveries and production, pharmacogenomics, and genetic testing (or genetic screening). In 2021, nearly 40% of the total company value of pharmaceutical biotech companies worldwide was in oncology, with neurology and rare diseases being the other two big applications. Pharmacogenomics (a combination of pharmacology and genomics) is the technology that analyses how genetic makeup affects an individual's response to drugs.
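As a toy illustration of the kind of comparison pharmacogenomics relies on, the sketch below contrasts drug response rates between carriers and non-carriers of a hypothetical variant and reports an odds ratio. The patient counts are invented, and real studies use genome-wide data with formal statistics and covariate adjustment.

```python
# Toy pharmacogenomic association: does carrying a variant allele track with drug response?
counts = {
    # (carries variant, responded to drug): number of patients (invented)
    (True, True): 40, (True, False): 10,
    (False, True): 25, (False, False): 25,
}

def odds_ratio(c: dict) -> float:
    """Cross-product ratio of the 2x2 table of carrier status vs. response."""
    return (c[(True, True)] * c[(False, False)]) / (c[(True, False)] * c[(False, True)])

carrier_rate = counts[(True, True)] / (counts[(True, True)] + counts[(True, False)])
noncarrier_rate = counts[(False, True)] / (counts[(False, True)] + counts[(False, False)])
print(f"response rate, carriers:     {carrier_rate:.0%}")
print(f"response rate, non-carriers: {noncarrier_rate:.0%}")
print(f"odds ratio: {odds_ratio(counts):.1f}")
```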
Researchers in the field investigate the influence of genetic variation on drug responses in patients by correlating gene expression or single-nucleotide polymorphisms with a drug's efficacy or toxicity. The purpose of pharmacogenomics is to develop rational means to optimize drug therapy, with respect to the patients' genotype, to ensure maximum efficacy with minimal adverse effects. Such approaches promise the advent of "personalized medicine", in which drugs and drug combinations are optimized for each individual's unique genetic makeup.
Biotechnology has contributed to the discovery and manufacturing of traditional small molecule pharmaceutical drugs as well as drugs that are themselves products of biotechnology (biopharmaceuticals). Modern biotechnology can be used to manufacture existing medicines relatively easily and cheaply. The first genetically engineered products were medicines designed to treat human diseases. To cite one example, in 1978 Genentech developed synthetic human insulin by joining its gene with a plasmid vector inserted into the bacterium Escherichia coli. Insulin, widely used for the treatment of diabetes, was previously extracted from the pancreas of abattoir animals (cattle or pigs). The genetically engineered bacteria are able to produce large quantities of synthetic human insulin at relatively low cost. Biotechnology has also enabled emerging therapeutics like gene therapy. The application of biotechnology to basic science (for example through the Human Genome Project) has also dramatically improved our understanding of biology and, as our scientific knowledge of normal and disease biology has increased, our ability to develop new medicines to treat previously untreatable diseases has increased as well.
Genetic testing allows the genetic diagnosis of vulnerabilities to inherited diseases, and can also be used to determine a child's parentage (genetic mother and father) or, in general, a person's ancestry. In addition to studying chromosomes to the level of individual genes, genetic testing in a broader sense includes biochemical tests for the possible presence of genetic diseases, or mutant forms of genes associated with increased risk of developing genetic disorders. Genetic testing identifies changes in chromosomes, genes, or proteins. Most of the time, testing is used to find changes that are associated with inherited disorders. The results of a genetic test can confirm or rule out a suspected genetic condition or help determine a person's chance of developing or passing on a genetic disorder. As of 2011, several hundred genetic tests were in use. Since genetic testing may open up ethical or psychological problems, genetic testing is often accompanied by genetic counseling.

Agriculture
Genetically modified crops ("GM crops", or "biotech crops") are plants used in agriculture, the DNA of which has been modified with genetic engineering techniques. In most cases, the main aim is to introduce a new trait that does not occur naturally in the species. Biotechnology firms can contribute to future food security by improving the nutrition and viability of urban agriculture. Furthermore, the protection of intellectual property rights encourages private sector investment in agrobiotechnology. Examples in food crops include resistance to certain pests, diseases and stressful environmental conditions, resistance to chemical treatments (e.g. resistance to a herbicide), reduction of spoilage, and improvement of the nutrient profile of the crop.
Examples in non-food crops include the production of pharmaceutical agents, biofuels, and other industrially useful goods, as well as bioremediation. Farmers have widely adopted GM technology. Between 1996 and 2011, the total surface area of land cultivated with GM crops increased by a factor of 94. 10% of the world's crop lands were planted with GM crops in 2010. As of 2011, 11 different transgenic crops were grown commercially in 29 countries, including the US, Brazil, Argentina, India, Canada, China, Paraguay, Pakistan, South Africa, Uruguay, Bolivia, Australia, Philippines, Myanmar, Burkina Faso, Mexico and Spain.
Genetically modified foods are foods produced from organisms that have had specific changes introduced into their DNA with the methods of genetic engineering. These techniques have allowed for the introduction of new crop traits as well as far greater control over a food's genetic structure than previously afforded by methods such as selective breeding and mutation breeding. Commercial sale of genetically modified foods began in 1994, when Calgene first marketed its Flavr Savr delayed-ripening tomato. To date, most genetic modification of foods has focused on cash crops in high demand by farmers, such as soybean, corn, canola, and cottonseed oil. These have been engineered for resistance to pathogens and herbicides and for better nutrient profiles. GM livestock have also been experimentally developed; as of November 2013 none were available on the market, but in 2015 the FDA approved the first GM salmon for commercial production and consumption.
There is a scientific consensus that currently available food derived from GM crops poses no greater risk to human health than conventional food, but that each GM food needs to be tested on a case-by-case basis before introduction. Nonetheless, members of the public are much less likely than scientists to perceive GM foods as safe. The legal and regulatory status of GM foods varies by country, with some nations banning or restricting them, and others permitting them with widely differing degrees of regulation.
GM crops also provide a number of ecological benefits, if not used in excess. Insect-resistant crops have proven to lower pesticide usage, therefore reducing the environmental impact of pesticides as a whole. However, opponents have objected to GM crops per se on several grounds, including environmental concerns, whether food produced from GM crops is safe, whether GM crops are needed to address the world's food needs, and economic concerns raised by the fact that these organisms are subject to intellectual property law.
Biotechnology has several applications in the realm of food security. Crops like Golden rice are engineered to have higher nutritional content, and there is potential for food products with longer shelf lives. Though not a form of agricultural biotechnology, vaccines can help prevent diseases found in animal agriculture. Additionally, agricultural biotechnology can expedite breeding processes in order to yield faster results and provide greater quantities of food. Transgenic biofortification in cereals has been considered a promising method to combat malnutrition in India and other countries.

Industrial
Industrial biotechnology (known mainly in Europe as white biotechnology) is the application of biotechnology for industrial purposes, including industrial fermentation.
It includes the practice of using cells such as microorganisms, or components of cells like enzymes, to generate industrially useful products in sectors such as chemicals, food and feed, detergents, paper and pulp, textiles and biofuels. In recent decades, significant progress has been made in creating genetically modified organisms (GMOs) that enhance the diversity of applications and the economic viability of industrial biotechnology. By using renewable raw materials to produce a variety of chemicals and fuels, industrial biotechnology is actively advancing towards lowering greenhouse gas emissions and moving away from a petrochemical-based economy.
Synthetic biology is considered one of the essential cornerstones of industrial biotechnology due to its financial and sustainability contributions to the manufacturing sector. Together, biotechnology and synthetic biology play a crucial role in generating cost-effective products with nature-friendly features by using bio-based rather than fossil-based production. Synthetic biology can be used to engineer model microorganisms, such as Escherichia coli, with genome editing tools to enhance their ability to produce bio-based products, such as in the bioproduction of medicines and biofuels. For instance, E. coli and Saccharomyces cerevisiae can be used as industrial microbes in a consortium to produce precursors of the chemotherapeutic agent paclitaxel, applying metabolic engineering in a co-culture approach that exploits the benefits of the two microbes.
Another example of synthetic biology applications in industrial biotechnology is the re-engineering of the metabolic pathways of E. coli by CRISPR and CRISPRi systems toward the production of a chemical known as 1,4-butanediol, which is used in fiber manufacturing. To produce 1,4-butanediol, the authors altered the metabolic regulation of Escherichia coli with CRISPR, inducing a point mutation in the gltA gene, knocking out the sad gene, and knocking in six genes (cat1, sucD, 4hbd, cat2, bld, and bdh), while the CRISPRi system was used to knock down three competing genes (gabD, ybgC, and tesB) that affect the biosynthesis pathway of 1,4-butanediol. As a result, the yield of 1,4-butanediol significantly increased, from 0.9 to 1.8 g/L.

Environmental
Environmental biotechnology includes various disciplines that play an essential role in reducing environmental waste and providing environmentally safe processes, such as biofiltration and biodegradation. The environment can be affected by biotechnologies, both positively and adversely. Vallero and others have argued that the difference between beneficial biotechnology (e.g., bioremediation to clean up an oil spill or hazardous chemical leak) and the adverse effects stemming from biotechnological enterprises (e.g., flow of genetic material from transgenic organisms into wild strains) can be seen as applications and implications, respectively. Cleaning up environmental wastes is an example of an application of environmental biotechnology, whereas loss of biodiversity or loss of containment of a harmful microbe are examples of environmental implications of biotechnology. Many cities have installed CityTrees, which use biotechnology to filter pollutants from urban atmospheres.
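Biodegradation of a pollutant is often approximated, as a first pass, by first-order decay. The sketch below uses that simplification with an invented rate constant to show how such estimates are made; it is not a model of any particular contaminant or remediation project.

```python
import math

def remaining_concentration(c0: float, k_per_day: float, days: float) -> float:
    """First-order decay C(t) = C0 * exp(-k t), a common simplification for
    biodegradation of a pollutant by an acclimated microbial community."""
    return c0 * math.exp(-k_per_day * days)

def days_to_fraction(k_per_day: float, fraction_remaining: float) -> float:
    """Time needed for the concentration to fall to the given fraction of C0."""
    return -math.log(fraction_remaining) / k_per_day

if __name__ == "__main__":
    k = 0.05  # per day; an illustrative rate constant, not measured data
    print(f"After 30 days: {remaining_concentration(100.0, k, 30):.1f} mg/L remain")
    print(f"90% removal takes about {days_to_fraction(k, 0.10):.0f} days")
```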
Regulation
The regulation of genetic engineering concerns approaches taken by governments to assess and manage the risks associated with the use of genetic engineering technology, and the development and release of genetically modified organisms (GMO), including genetically modified crops and genetically modified fish. There are differences in the regulation of GMOs between countries, with some of the most marked differences occurring between the US and Europe. Regulation varies in a given country depending on the intended use of the products of the genetic engineering. For example, a crop not intended for food use is generally not reviewed by authorities responsible for food safety. The European Union differentiates between approval for cultivation within the EU and approval for import and processing. While only a few GMOs have been approved for cultivation in the EU, a number of GMOs have been approved for import and processing. The cultivation of GMOs has triggered a debate about the coexistence of GM and non-GM crops. Depending on the coexistence regulations, incentives for the cultivation of GM crops differ.

Database for the GMOs used in the EU
The EUginius (European GMO Initiative for a Unified Database System) database is intended to help companies, interested private users and competent authorities to find precise information on the presence, detection and identification of GMOs used in the European Union. The information is provided in English.

Learning
In 1988, after prompting from the United States Congress, the National Institute of General Medical Sciences (NIGMS), part of the National Institutes of Health, instituted a funding mechanism for biotechnology training. Universities nationwide compete for these funds to establish Biotechnology Training Programs (BTPs). Each successful application is generally funded for five years and then must be competitively renewed. Graduate students in turn compete for acceptance into a BTP; if accepted, a stipend, tuition and health insurance support are provided for two or three years during the course of their PhD thesis work. Nineteen institutions offer NIGMS-supported BTPs. Biotechnology training is also offered at the undergraduate level and in community colleges.

References and notes

External links
What is Biotechnology? – A curated collection of resources about the people, places and technologies that have enabled biotechnology
Systems science
Systems science, also referred to as systems research or, simply, systems, is a transdisciplinary field concerned with understanding simple and complex systems in nature and society, and with advancing formal, natural, social, and applied knowledge throughout engineering, technology and science itself. To systems scientists, the world can be understood as a system of systems. The field aims to develop transdisciplinary foundations that are applicable in a variety of areas, such as psychology, biology, medicine, communication, business, technology, computer science, engineering, and social sciences.
Themes commonly stressed in systems science are (a) a holistic view, (b) the interaction between a system and its embedding environment, and (c) complex (often subtle) trajectories of dynamic behavior that sometimes are stable (and thus reinforcing), while at various 'boundary conditions' can become wildly unstable (and thus destructive). Concerns about Earth-scale biosphere/geosphere dynamics are an example of the nature of problems to which systems science seeks to contribute meaningful insights.

Associated fields
The systems sciences are a broad array of fields. One way of conceiving of these is in three groups: fields that have developed systems ideas primarily through theory; those that have done so primarily through practical engagements with problem situations; and those that have applied systems ideas within other disciplines.

Theoretical fields
Chaos and dynamical systems
Complexity
Control theory
Affect control theory
Control engineering
Control systems
Cybernetics
Autopoiesis
Conversation Theory
Engineering Cybernetics
Perceptual Control Theory
Management Cybernetics
Second-Order Cybernetics
Cyber-Physical Systems
Artificial Intelligence
Synthetic Intelligence
Information theory
General systems theory
Systems theory in anthropology
Biochemical systems theory
Ecological systems theory
Developmental systems theory
Living systems theory
LTI system theory
Social systems
Sociotechnical systems theory
Mathematical system theory
World-systems theory
Hierarchy Theory

Practical fields
Critical systems thinking
Operations research and management science
Soft systems methodology
The soft systems methodology was developed in England by academics at the University of Lancaster Systems Department through a ten-year action research programme. The main contributor is Peter Checkland (born 18 December 1930, in Birmingham, UK), a British management scientist and emeritus professor of systems at Lancaster University.
Systems analysis
Systems analysis is the branch of systems science that analyzes systems, the interactions within those systems, or their interaction with the environment, often prior to their automation as computer models. Systems analysis is closely associated with the RAND Corporation.
Systemic design
Systemic design integrates methodologies from systems thinking with advanced design practices to address complex, multi-stakeholder situations.
Systems dynamics
System dynamics is an approach to understanding the behavior of complex systems over time. It offers a "simulation technique for modeling business and social systems", which deals with internal feedback loops and time delays that affect the behavior of the entire system. What makes using system dynamics different from other approaches to studying complex systems is the use of feedback loops and stocks and flows.
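To make the stock-and-flow idea concrete, here is a minimal sketch (not drawn from any cited model) that integrates a single stock governed by one reinforcing loop and one balancing loop, i.e. logistic growth; the parameter values are arbitrary.

```python
def simulate_stock(initial_stock: float, growth_rate: float, capacity: float,
                   dt: float = 0.25, steps: int = 200) -> list[float]:
    """Euler integration of one stock with a reinforcing loop (growth proportional
    to the stock) and a balancing loop (crowding as the stock nears capacity)."""
    stock = initial_stock
    trajectory = [stock]
    for _ in range(steps):
        inflow = growth_rate * stock * (1.0 - stock / capacity)  # net flow per time unit
        stock += inflow * dt
        trajectory.append(stock)
    return trajectory

if __name__ == "__main__":
    traj = simulate_stock(initial_stock=10.0, growth_rate=0.5, capacity=1000.0)
    for i in range(0, 201, 40):
        print(f"t={i * 0.25:6.1f}  stock={traj[i]:8.1f}")
```

The reinforcing loop dominates at first (near-exponential growth); the balancing loop then takes over and the stock levels off near its capacity, the kind of shifting loop dominance system dynamics is designed to expose.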
Systems engineering
Systems engineering (SE) is an interdisciplinary field of engineering that focuses on the development and organization of complex systems. It is the "art and science of creating whole solutions to complex problems", for example: signal processing systems, control systems and communication systems, or other forms of high-level modelling and design in specific fields of engineering. Systems science is also foundational to embedded software development, which is grounded in the embedded requirements of systems engineering.
Aerospace systems
Biological systems engineering
Earth systems engineering and management
Electronic systems
Enterprise systems engineering
Software systems
Systems analysis

Applications in other disciplines
Earth system science
Climate systems
Systems geology
Systems biology
Computational systems biology
Synthetic biology
Systems immunology
Systems neuroscience
Systems chemistry
Systems ecology
Ecosystem ecology
Agroecology
Systems psychology
Ergonomics
Family systems theory
Systemic therapy

See also
Antireductionism
Evolutionary prototyping
Holism
Cybernetics
System engineering
System Dynamics
Systemics
System equivalence
Systems theory
Tektology
World-systems theory
Complex Systems

References

Further reading
B. A. Bayraktar, Education in Systems Science, 1979, 369 pp.
Kenneth D. Bailey, "Fifty Years of Systems Science: Further Reflections", Systems Research and Behavioral Science, 22, 2005, pp. 355–361.
Robert L. Flood, Ewart R. Carson, Dealing with Complexity: An Introduction to the Theory and Application of Systems Science (2nd Edition), 1993.
George J. Klir, Facets of Systems Science (2nd Edition), Kluwer Academic/Plenum Publishers, 2001.
Ervin László, Systems Science and World Order: Selected Studies, 1983.
G. E. Mobus & M. C. Kalton, Principles of Systems Science, 2015, New York: Springer.
Anatol Rapoport (ed.), General Systems: Yearbook of the Society for the Advancement of General Systems Theory, Society for General Systems Research, Vol. 1, 1956.
Li D. Xu, "The Contributions of Systems Science to Information Systems Research", Systems Research and Behavioral Science, 17, 2000, pp. 105–116.
Graeme Donald Snooks, "A general theory of complex living systems: Exploring the demand side of dynamics", Complexity, vol. 13, no. 6, July/August 2008.
John N. Warfield, "A proposal for Systems Science", Systems Research and Behavioral Science, 20, 2003, pp. 507–520.
Michael C. Jackson, Critical Systems Thinking and the Management of Complexity, 2019, Wiley.

External links
Principia Cybernetica Web
Institute of System Science Knowledge (ISSK.org)
International Society for the System Sciences
American Society for Cybernetics
UK Systems Society
Cybernetics Society
Genetic diversity
Genetic diversity is the total number of genetic characteristics in the genetic makeup of a species. It ranges widely, from the number of species to differences within species, and can be correlated with the span of survival of a species. It is distinguished from genetic variability, which describes the tendency of genetic characteristics to vary.
Genetic diversity serves as a way for populations to adapt to changing environments. With more variation, it is more likely that some individuals in a population will possess variations of alleles that are suited for the environment. Those individuals are more likely to survive to produce offspring bearing that allele. The population will continue for more generations because of the success of these individuals.
The academic field of population genetics includes several hypotheses and theories regarding genetic diversity. The neutral theory of evolution proposes that diversity is the result of the accumulation of neutral substitutions. Diversifying selection is the hypothesis that two subpopulations of a species live in different environments that select for different alleles at a particular locus. This may occur, for instance, if a species has a large range relative to the mobility of individuals within it. Frequency-dependent selection is the hypothesis that as alleles become more common, they become more vulnerable. This occurs in host–pathogen interactions, where a high frequency of a defensive allele among the host means that it is more likely that a pathogen will spread if it is able to overcome that allele.

Within-species diversity
A study conducted by the National Science Foundation in 2007 found that genetic diversity (within-species diversity) and biodiversity are dependent upon each other, i.e. that diversity within a species is necessary to maintain diversity among species, and vice versa. According to the lead researcher in the study, Dr. Richard Lankau, "If any one type is removed from the system, the cycle can break down, and the community becomes dominated by a single species." Genotypic and phenotypic diversity have been found in all species at the protein, DNA, and organismal levels; in nature, this diversity is nonrandom, heavily structured, and correlated with environmental variation and stress.
The interdependence between genetic and species diversity is delicate. Changes in species diversity lead to changes in the environment, leading to adaptation of the remaining species. Changes in genetic diversity, such as those accompanying the loss of species, lead to a loss of biological diversity. Loss of genetic diversity in domestic animal populations has also been studied and attributed to the extension of markets and economic globalization.

Neutral and adaptive genetic diversity
Neutral genetic diversity consists of genes that do not increase fitness and are not responsible for adaptability. Natural selection does not act on these neutral genes. Adaptive genetic diversity consists of genes that increase fitness and are responsible for adaptability to changes in the environment. Adaptive genes are responsible for ecological, morphological, and behavioral traits. Natural selection acts on adaptive genes, which allows organisms to evolve. The rate of evolution on adaptive genes is greater than on neutral genes due to the influence of selection. However, it has been difficult to identify alleles for adaptive genes, and thus adaptive genetic diversity is most often measured indirectly.
For example, heritability can be measured as h² (the proportion of phenotypic variance attributable to genetic variance), or adaptive population differentiation can be measured as QST. It may be possible to identify adaptive genes through genome-wide association studies by analyzing genomic data at the population level. Identifying adaptive genetic diversity is important for conservation because the adaptive potential of a species may dictate whether it survives or becomes extinct, especially as the climate changes. This is magnified by a lack of understanding of whether low neutral genetic diversity is correlated with high genetic drift and high mutation load. In a review of current research, Teixeira and Huber (2021) found that some species, such as those in the genus Arabidopsis, appear to have high adaptive potential despite suffering from low genetic diversity overall due to severe bottlenecks. Therefore, species with low neutral genetic diversity may possess high adaptive genetic diversity; but since it is difficult to identify adaptive genes, a measurement of overall genetic diversity remains important for planning conservation efforts, and a species that has experienced a rapid decline in genetic diversity may be highly susceptible to extinction.

Evolutionary importance of genetic diversity

Adaptation
Variation in a population's gene pool allows natural selection to act upon traits that allow the population to adapt to changing environments. Selection for or against a trait can occur with a changing environment, resulting in an increase in genetic diversity (if a new mutation is selected for and maintained) or a decrease in genetic diversity (if a disadvantageous allele is selected against). Hence, genetic diversity plays an important role in the survival and adaptability of a species. The capability of the population to adapt to the changing environment will depend on the presence of the necessary genetic diversity. The more genetic diversity a population has, the more likely it is that the population will be able to adapt and survive. Conversely, the vulnerability of a population to changes, such as climate change or novel diseases, will increase with a reduction in genetic diversity. For example, the inability of koalas to adapt to fight Chlamydia and the koala retrovirus (KoRV) has been linked to the koala's low genetic diversity. This low genetic diversity also has geneticists concerned for the koalas' ability to adapt to climate change and human-induced environmental changes in the future.

Small populations
Large populations are more likely to maintain genetic material and thus generally have higher genetic diversity. Small populations are more likely to experience the loss of diversity over time by random chance, which is an example of genetic drift. When an allele (variant of a gene) drifts to fixation, the other allele at the same locus is lost, resulting in a loss of genetic diversity. In small population sizes, inbreeding, or mating between individuals with similar genetic makeup, is more likely to occur, perpetuating more common alleles to the point of fixation and thereby decreasing genetic diversity. Concerns about genetic diversity are therefore especially important with large mammals due to their small population size and high levels of human-caused population effects.[16] A genetic bottleneck can occur when a population goes through a period of low numbers of individuals, resulting in a rapid decrease in genetic diversity.
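A minimal sketch of the genetic drift just described, using a simple Wright-Fisher model with arbitrary parameter values: expected heterozygosity decays as H_t = H_0(1 − 1/(2N))^t, so small populations lose diversity far faster than large ones.

```python
import random

def simulate_allele_frequency(p0: float, n_individuals: int, generations: int,
                              seed: int = 1) -> list[float]:
    """Wright-Fisher drift: each generation, 2N gene copies are drawn at random
    from the previous generation's allele frequency."""
    rng = random.Random(seed)
    p, freqs = p0, [p0]
    for _ in range(generations):
        copies = sum(rng.random() < p for _ in range(2 * n_individuals))
        p = copies / (2 * n_individuals)
        freqs.append(p)
    return freqs

def expected_heterozygosity(h0: float, n_individuals: int, generations: int) -> float:
    """Expected decay of heterozygosity: H_t = H_0 * (1 - 1/(2N))^t."""
    return h0 * (1.0 - 1.0 / (2 * n_individuals)) ** generations

if __name__ == "__main__":
    for n in (25, 500):
        h50 = expected_heterozygosity(0.5, n, 50)
        print(f"N={n:4d}: expected heterozygosity after 50 generations = {h50:.3f}")
    print("one drift trajectory (N=25):",
          [round(f, 2) for f in simulate_allele_frequency(0.5, 25, 10)])
```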
Even with an increase in population size, the genetic diversity often continues to be low if the entire species began with a small population, since beneficial mutations (see below) are rare and the gene pool is limited by the small starting population. This is an important consideration in the area of conservation genetics, when working toward a rescued population or species that is genetically healthy.

Mutation
Random mutations consistently generate genetic variation. A mutation will increase genetic diversity in the short term, as a new gene is introduced to the gene pool. However, the persistence of this gene is dependent on drift and selection (see above). Most new mutations either have a neutral or negative effect on fitness, while some have a positive effect. A beneficial mutation is more likely to persist and thus have a long-term positive effect on genetic diversity. Mutation rates differ across the genome, and larger populations accumulate more new mutations overall. In smaller populations a mutation is less likely to persist because it is more likely to be eliminated by drift.

Gene flow
Gene flow, often by migration, is the movement of genetic material (for example by pollen in the wind, or the migration of a bird). Gene flow can introduce novel alleles to a population. These alleles can be integrated into the population, thus increasing genetic diversity. For example, an insecticide-resistance mutation arose in the African mosquito Anopheles gambiae. Migration of some A. gambiae mosquitoes to a population of Anopheles coluzzii mosquitoes resulted in a transfer of the beneficial resistance gene from one species to the other. The genetic diversity was increased in A. gambiae by mutation and in A. coluzzii by gene flow.

In agriculture

In crops
When humans initially started farming, they used selective breeding to pass on desirable traits of the crops while omitting the undesirable ones. Selective breeding leads to monocultures: entire farms of nearly genetically identical plants. Little to no genetic diversity makes crops extremely susceptible to widespread disease; bacteria morph and change constantly, and when a disease-causing bacterium changes to attack a specific genetic variation, it can easily wipe out vast quantities of the species. If the genetic variation that the bacterium is best at attacking happens to be that which humans have selectively bred to use for harvest, the entire crop will be wiped out.
The nineteenth-century Great Famine in Ireland was caused in part by a lack of biodiversity. Since new potato plants do not come from sexual reproduction, but rather from pieces of the parent plant, no genetic diversity is developed, and the entire crop is essentially a clone of one potato; it is therefore especially susceptible to an epidemic. In the 1840s, much of Ireland's population depended on potatoes for food. They planted mainly the "lumper" variety of potato, which was susceptible to a rot-causing oomycete called Phytophthora infestans. This pathogen destroyed the vast majority of the potato crop, and left one million people to starve to death.
Genetic diversity in agriculture does not only relate to disease, but also to herbivores. Similarly to the above example, monoculture agriculture selects for traits that are uniform throughout the plot. If this genotype is susceptible to certain herbivores, this could result in the loss of a large portion of the crop. One way farmers get around this is through inter-cropping.
By planting rows of unrelated, genetically distinct crops as barriers between herbivores and their preferred host plant, the farmer effectively reduces the ability of the herbivore to spread throughout the entire plot.

In livestock
The genetic diversity of livestock species permits animal husbandry in a range of environments and with a range of different objectives. It provides the raw material for selective breeding programmes and allows livestock populations to adapt as environmental conditions change. Livestock biodiversity can be lost as a result of breed extinctions and other forms of genetic erosion. As of June 2014, among the 8,774 breeds recorded in the Domestic Animal Diversity Information System (DAD-IS), operated by the Food and Agriculture Organization of the United Nations (FAO), 17 percent were classified as being at risk of extinction and 7 percent as already extinct. A Global Plan of Action for Animal Genetic Resources, developed under the auspices of the Commission on Genetic Resources for Food and Agriculture in 2007, now provides a framework and guidelines for the management of animal genetic resources. Awareness of the importance of maintaining animal genetic resources has increased over time. FAO has published two reports on the state of the world's animal genetic resources for food and agriculture, which provide detailed analyses of global livestock diversity and of our ability to manage and conserve it.

Viral implications
High genetic diversity in viruses must be considered when designing vaccinations. High genetic diversity makes it difficult to design targeted vaccines and allows viruses to evolve quickly to resist the effects of vaccination. For example, malaria vaccinations are impacted by high levels of genetic diversity in the protein antigens. In addition, HIV-1 genetic diversity limits the use of currently available viral load and resistance tests.
Coronavirus populations have considerable evolutionary diversity due to mutation and homologous recombination. For example, the sequencing of 86 SARS-CoV-2 coronavirus samples obtained from infected patients revealed 93 mutations, indicating the presence of considerable genetic diversity. Replication of the coronavirus RNA genome is catalyzed by an RNA-dependent RNA polymerase. During replication this polymerase may undergo template switching, a form of homologous recombination. This process, which also generates genetic diversity, appears to be an adaptation for coping with RNA genome damage.

Coping with low genetic diversity

Natural
The natural world has several ways of preserving or increasing genetic diversity. Among oceanic plankton, viruses aid in the genetic shifting process. Ocean viruses, which infect the plankton, carry genes of other organisms in addition to their own. When a virus containing the genes of one cell infects another, the genetic makeup of the latter changes. This constant shift of genetic makeup helps to maintain a healthy population of plankton despite complex and unpredictable environmental changes.
Cheetahs are a threatened species. Low genetic diversity and the resulting poor sperm quality have made breeding and survival difficult for cheetahs. Moreover, only about 5% of cheetahs survive to adulthood. However, it has recently been discovered that female cheetahs can mate with more than one male per litter of cubs. They undergo induced ovulation, which means that a new egg is produced every time a female mates.
By mating with multiple males, the mother increases the genetic diversity within a single litter of cubs.

Human intervention
The attempt to increase the viability of a species by increasing its genetic diversity is called genetic rescue. For example, eight panthers from Texas were introduced to the Florida panther population, which was declining and suffering from inbreeding depression. Genetic variation was thus increased, resulting in a significant increase in population growth of the Florida panther. Creating or maintaining high genetic diversity is an important consideration in species rescue efforts, in order to ensure the longevity of a population.

Measures
Genetic diversity of a population can be assessed by some simple measures. Gene diversity is the proportion of polymorphic loci across the genome. Heterozygosity is the fraction of individuals in a population that are heterozygous for a particular locus. Alleles per locus is also used to demonstrate variability. Nucleotide diversity is the extent of nucleotide polymorphisms within a population, and is commonly measured through molecular markers such as micro- and minisatellite sequences, mitochondrial DNA, and single-nucleotide polymorphisms (SNPs). Furthermore, stochastic simulation software is commonly used to predict the future of a population given measures such as allele frequency and population size.
Diversity at the species level, which is related but distinct, can also be measured. Recorded measures include:
Species richness, a measure of the number of species
Species abundance, a relative measure of the abundance of species
Species density, an evaluation of the total number of species per unit area

See also
Biodiversity
Genetic variance
Center of diversity
Genetic variation
Genetic resources
Human genetic variation
Human Variome Project
International HapMap Project
Conservation biology
QST (genetics)

References

External links
Implementing the Global Plan of Action on Animal Genetic Resources
Domestic Animal Diversity Information System
Commission on Genetic Resources for Food and Agriculture
Heterosis
Heterosis, hybrid vigor, or outbreeding enhancement is the improved or increased function of any biological quality in a hybrid offspring. An offspring is heterotic if its traits are enhanced as a result of mixing the genetic contributions of its parents. The heterotic offspring often has traits that are more than the simple addition of the parents' traits, and can be explained by Mendelian or non-Mendelian inheritance. Typical heterotic/hybrid traits of interest in agriculture are higher yield, quicker maturity, stability, and drought tolerance.

Definitions
In proposing the term heterosis to replace the older term heterozygosis, G.H. Shull aimed to avoid limiting the term to the effects that can be explained by heterozygosity in Mendelian inheritance. Heterosis is often discussed as the opposite of inbreeding depression, although differences between these two concepts can be seen in evolutionary considerations such as the role of genetic variation or the effects of genetic drift in small populations on these concepts. Inbreeding depression occurs when related parents have offspring with traits that negatively influence their fitness, largely due to homozygosity. In such instances, outcrossing should result in heterosis.
Not all outcrosses result in heterosis. For example, when a hybrid inherits traits from its parents that are not fully compatible, fitness can be reduced. This is a form of outbreeding depression, the effects of which are similar to inbreeding depression.

Genetic and epigenetic bases
Since the early 1900s, two competing genetic hypotheses, not necessarily mutually exclusive, have been developed to explain hybrid vigor. More recently, an epigenetic component of hybrid vigor has also been established.

Dominance and overdominance
When a population is small or inbred, it tends to lose genetic diversity. Inbreeding depression is the loss of fitness due to loss of genetic diversity. Inbred strains tend to be homozygous for recessive alleles that are mildly harmful (or produce a trait that is undesirable from the standpoint of the breeder). Heterosis or hybrid vigor, on the other hand, is the tendency of outbred strains to exceed both inbred parents in fitness. Selective breeding of plants and animals, including hybridization, began long before there was an understanding of underlying scientific principles. In the early 20th century, after Mendel's laws came to be understood and accepted, geneticists undertook to explain the superior vigor of many plant hybrids. Two competing hypotheses, which are not mutually exclusive, were developed:
Dominance hypothesis. The dominance hypothesis attributes the superiority of hybrids to the suppression of undesirable recessive alleles from one parent by dominant alleles from the other. It attributes the poor performance of inbred strains to loss of genetic diversity, with the strains becoming purely homozygous at many loci. The dominance hypothesis was first expressed in 1908 by the geneticist Charles Davenport. Under the dominance hypothesis, deleterious alleles are expected to be maintained in a random-mating population at a selection–mutation balance that would depend on the rate of mutation, the effect of the alleles and the degree to which alleles are expressed in heterozygotes.
Overdominance hypothesis. Certain combinations of alleles that can be obtained by crossing two inbred strains are advantageous in the heterozygote.
The overdominance hypothesis attributes the heterozygote advantage to the survival of many alleles that are recessive and harmful in homozygotes. It attributes the poor performance of inbred strains to a high percentage of these harmful recessives. The overdominance hypothesis was developed independently by Edward M. East (1908) and George Shull (1908). Genetic variation at an overdominant locus is expected to be maintained by balancing selection. The high fitness of heterozygous genotypes favours the persistence of an allelic polymorphism in the population. This hypothesis is commonly invoked to explain the persistence of some alleles (most famously the sickle cell trait allele) that are harmful in homozygotes. In normal circumstances, such harmful alleles would be removed from a population through the process of natural selection. Like the dominance hypothesis, it attributes the poor performance of inbred strains to expression of such harmful recessive alleles.
Dominance and overdominance have different consequences for the gene expression profile of the individuals. If overdominance is the main cause of the fitness advantages of heterosis, then there should be an over-expression of certain genes in the heterozygous offspring compared to the homozygous parents. On the other hand, if dominance is the cause, fewer genes should be under-expressed in the heterozygous offspring compared to the parents. Furthermore, for any given gene, the expression should be comparable to the one observed in the fitter of the two parents.
In any case, outcross matings provide the benefit of masking deleterious recessive alleles in progeny. This benefit has been proposed to be a major factor in the maintenance of sexual reproduction among eukaryotes, as summarized in the article Evolution of sexual reproduction.

Historical retrospective
Which of the two mechanisms is the "main" reason for heterosis has been a scientific controversy in the field of genetics. Population geneticist James Crow (1916–2012) believed, in his younger days, that overdominance was a major contributor to hybrid vigor. In 1998 he published a retrospective review of the developing science. According to Crow, the demonstration of several cases of heterozygote advantage in Drosophila and other organisms first caused great enthusiasm for the overdominance theory among scientists studying plant hybridization. But overdominance implies that yields of an inbred strain should decrease as inbred strains are selected for the performance of their hybrid crosses, as the proportion of harmful recessives in the inbred population rises. Over the years, experimentation in plant genetics has proven that the reverse occurs, that yields increase in both the inbred strains and the hybrids, suggesting that dominance alone may be adequate to explain the superior yield of hybrids. Only a few conclusive cases of overdominance have been reported in all of genetics. Since the 1980s, as experimental evidence has mounted, the dominance theory has made a comeback. Crow wrote:
The current view ... is that the dominance hypothesis is the major explanation of inbreeding decline and [of] the high yield of hybrids. There is little statistical evidence for contributions from overdominance and epistasis. But whether the best hybrids are getting an extra boost from overdominance or favorable epistatic contributions remains an open question.

Epigenetics
An epigenetic contribution to heterosis has been established in plants, and it has also been reported in animals.
MicroRNAs (miRNAs), discovered in 1993, are a class of non-coding small RNAs which repress the translation of messenger RNAs (mRNAs) or cause degradation of mRNAs. In hybrid plants, most miRNAs have non-additive expression (their levels may be higher or lower than the levels in the parents). This suggests that the small RNAs are involved in the growth, vigor and adaptation of hybrids.
'Heterosis without hybridity' effects on plant size have been demonstrated in genetically isogenic F1 triploid (autopolyploid) plants, where paternal genome excess F1 triploids display positive heterosis, whereas maternal genome excess F1s display negative heterosis effects. Such findings demonstrate that heterosis effects, with a genome dosage-dependent epigenetic basis, can be generated in F1 offspring that are genetically isogenic (i.e. harbour no heterozygosity).
It has been shown that hybrid vigor in an allopolyploid hybrid of two Arabidopsis species was due to epigenetic control in the upstream regions of two genes, which caused major downstream alteration in chlorophyll and starch accumulation. The mechanism involves acetylation or methylation of specific amino acids in histone H3, a protein closely associated with DNA, which can either activate or repress associated genes.

Specific mechanisms

Major histocompatibility complex in animals
One example of where particular genes may be important in vertebrate animals for heterosis is the major histocompatibility complex (MHC). Vertebrates inherit several copies of both MHC class I and MHC class II from each parent, which are used in antigen presentation as part of the adaptive immune system. Each different copy of the genes is able to bind and present a different set of potential peptides to T-lymphocytes. These genes are highly polymorphic throughout populations, but are more similar in smaller, more closely related populations. Breeding between more genetically distant individuals decreases the chance of inheriting two alleles that are the same or similar, allowing a more diverse range of peptides to be presented. This, therefore, increases the chance that any particular pathogen will be recognised, and means that more antigenic proteins on any pathogen are likely to be recognised, giving a greater range of T-cell activation, and so a greater response. This also means that the immunity acquired to the pathogen is against a greater range of antigens, meaning that the pathogen must mutate more before immunity is lost. Thus, hybrids are less likely to succumb to pathogenic disease and are more capable of fighting off infection. This may, though, be the cause of autoimmune diseases.

Plants
Crosses between inbreds from different heterotic groups result in vigorous F1 hybrids with significantly more heterosis than F1 hybrids from inbreds within the same heterotic group or pattern. Heterotic groups are created by plant breeders to classify inbred lines, and can be progressively improved by reciprocal recurrent selection. Heterosis is used to increase yields, uniformity, and vigor. Hybrid breeding methods are used in maize, sorghum, rice, sugar beet, onion, spinach, sunflowers and broccoli, and to create a more psychoactive cannabis.

Corn (maize)
Nearly all field corn (maize) grown in most developed nations exhibits heterosis. Modern corn hybrids substantially outyield conventional cultivars and respond better to fertilizer. Corn heterosis was famously demonstrated in the early 20th century by George H. Shull and Edward M. East after hybrid corn was invented by Dr.
William James Beal of Michigan State University, based on work begun in 1879 at the urging of Charles Darwin. Dr. Beal's work led to the first published account of a field experiment demonstrating hybrid vigor in corn, by Eugene Davenport and Perry Holden in 1881. These various pioneers of botany and related fields showed that crosses of inbred lines made from a Southern dent and a Northern flint, respectively, exhibited substantial heterosis and outyielded conventional cultivars of that era. However, at that time such hybrids could not be economically made on a large scale for use by farmers. Donald F. Jones at the Connecticut Agricultural Experiment Station, New Haven, invented the first practical method of producing a high-yielding hybrid maize in 1914–1917. Jones' method produced a double-cross hybrid, which requires two crossing steps working from four distinct original inbred lines. Later work by corn breeders produced inbred lines with sufficient vigor for practical production of a commercial hybrid in a single step, the single-cross hybrids. Single-cross hybrids are made from just two original parent inbreds. They are generally more vigorous and also more uniform than the earlier double-cross hybrids. The process of creating these hybrids often involves detasseling. Temperate maize hybrids are derived from two main heterotic groups: 'Iowa Stiff Stalk Synthetic' and non-stiff stalk.

Rice (Oryza sativa)
Hybrid rice is cultivated in many countries, including China, India, Vietnam, and the Philippines. Compared to inbred lines, hybrids produce approximately 20% greater yields and comprise 45% of the rice planting area in China. Rice production in China has risen enormously due to the heavy use of hybrid rice. In China, efforts have generated a super hybrid rice strain ('LYP9') with a production capability of around 15 tons per hectare. In India, too, several varieties have shown high vigor, including 'RH-10' and 'Suruchi 5401'.
Since rice is a self-pollinating species, it requires the use of male-sterile lines to generate hybrids from separate lineages. The most common way of achieving this is using lines with genetic male-sterility, as manual emasculation is not optimal for large-scale hybridization. The first generation of hybrid rice was developed in the 1970s. It relies on three lines: a cytoplasmic male sterile (CMS) line, a maintainer line, and a restorer line. The second generation was widely adopted in the 1990s. Instead of a CMS line, it uses an environment-sensitive genic male sterile (EGMS) line, which can have its sterility reversed based on light or temperature. This removes the need for a maintainer, making the hybridization and breeding process more efficient (albeit still high-maintenance). Second generation lines show a yield increase of 5–10% over first generation lines. The third and current generation uses a nuclear male sterile (NMS) line. Third generation lines have a recessive sterility gene, and their cultivation is more lenient towards maintainer lines and environmental conditions. Additionally, transgenes are only present in the maintainer, so hybrid plants can benefit from hybrid vigor without requiring special oversight.

Animals

Hybrid livestock
The concept of heterosis is also applied in the production of commercial livestock. In cattle, crosses between Black Angus and Hereford produce a cross known as a "Black Baldy". In swine, "blue butts" are produced by the cross of Hampshire and Yorkshire.
Other, more exotic hybrids (two different species, so genetically more dissimilar), such as "beefalo", which are hybrids of cattle and bison, are also used for specialty markets.

Poultry
Within poultry, sex-linked genes have been used to create hybrids in which males and females can be sorted at one day old by color. Specific genes used for this are genes for barring and wing feather growth. Crosses of this sort create what are sold as Black Sex-links, Red Sex-links, and various other crosses that are known by trade names. Commercial broilers are produced by crossing different strains of White Rocks and White Cornish, the Cornish providing a large frame and the Rocks providing the fast rate of gain. The hybrid vigor produced allows the production of uniform birds at a marketable carcass weight at 6–9 weeks of age. Likewise, hybrids between different strains of White Leghorn are used to produce laying flocks that provide the majority of white eggs for sale in the United States.

Dogs
In 2013, a study found that mixed breeds live on average 1.2 years longer than pure breeds. John Scott and John L. Fuller performed a detailed study of purebred Cocker Spaniels, purebred Basenjis, and hybrids between them. They found that hybrids ran faster than either parent, perhaps due to heterosis. Other characteristics, such as basal heart rate, did not show any heterosis: the dog's basal heart rate was close to the average of its parents, perhaps due to the additive effects of multiple genes. Sometimes people working on a dog-breeding program find no useful heterosis. All this said, studies do not provide definitive proof of hybrid vigor in dogs. This is largely due to the unknown heritage of most mixed-breed dogs used. Results vary wildly, with some studies showing benefit and others finding the mixed-breed dogs to be more prone to genetic conditions.

Birds
In 2014, a study undertaken by the Centre for Integrative Ecology at Deakin University in Geelong, Victoria, concluded that intraspecific hybrids between the subspecies Platycercus elegans flaveolus and P. e. elegans of the crimson rosella (P. elegans) were more likely to fight off diseases than their pure counterparts.

Humans
Human beings are all extremely genetically similar to one another. Michael Mingroni has proposed heterosis, in the form of hybrid vigor associated with historical reductions in the levels of inbreeding, as an explanation of the Flynn effect, the steady rise in IQ test scores around the world during the 20th century, though a review of nine studies found that there is no evidence to suggest inbreeding has an effect on IQ.

Controversy
The term heterosis often causes confusion and even controversy, particularly in the selective breeding of domestic animals, because it is sometimes (incorrectly) claimed that all crossbred plants and animals are "genetically superior" to their parents due to heterosis. Two problems exist with this claim. First, according to an article published in the journal Genome Biology, "genetic superiority" is an ill-defined term and not generally accepted terminology within the scientific field of genetics. A related term, fitness, is well defined, but it can rarely be directly measured. Instead, scientists use objective, measurable quantities, such as the number of seeds a plant produces, the germination rate of a seed, or the percentage of organisms that survive to reproductive age.
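In crop trials, this kind of trait-based comparison is often summarized as mid-parent or better-parent heterosis. The sketch below shows the arithmetic with invented yield figures; it illustrates the standard definitions only, and is not data from any study cited here.

```python
def midparent_heterosis(p1: float, p2: float, f1: float) -> float:
    """Percent by which the F1 exceeds the mid-parent value for a measured trait."""
    mp = (p1 + p2) / 2.0
    return 100.0 * (f1 - mp) / mp

def better_parent_heterosis(p1: float, p2: float, f1: float) -> float:
    """Percent by which the F1 exceeds the better of the two parents (heterobeltiosis)."""
    bp = max(p1, p2)
    return 100.0 * (f1 - bp) / bp

if __name__ == "__main__":
    # Invented yields (t/ha) for two inbred parents and their F1 hybrid.
    p1, p2, f1 = 6.0, 7.0, 8.4
    print(f"mid-parent heterosis:    {midparent_heterosis(p1, p2, f1):.1f}%")
    print(f"better-parent heterosis: {better_parent_heterosis(p1, p2, f1):.1f}%")
```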
From this perspective, crossbred plants and animals exhibiting heterosis may have "superior" traits, but this does not necessarily equate to any evidence of outright "genetic superiority". Use of the term "superiority" is commonplace, for example, in crop breeding, where it is well understood to mean a better-yielding, more robust plant for agriculture. Such a plant may yield better on a farm, but would likely struggle to survive in the wild, making this use open to misinterpretation. In human genetics, any question of "genetic superiority" is even more problematic due to the historical and political implications of any such claim. Some may even go as far as to describe it as a questionable value judgement in the realm of politics, not science. Second, not all hybrids exhibit heterosis (see outbreeding depression). An example of the ambiguous value judgements imposed on hybrids and hybrid vigor is the mule. While mules are almost always infertile, they are valued for a combination of hardiness and temperament that is different from either of their horse or donkey parents. While these qualities may make them "superior" for particular uses by humans, their infertility implies that these animals would most likely become extinct without the intervention of humans through animal husbandry, making them "inferior" in terms of natural selection. See also F1 hybrid Genetic admixture Heterozygote advantage Outbreeding depression References Further reading NOAA Tech Memo NMFS NWFSC-30: Genetic Effects of Straying of Non-Native Hatchery Fish into Natural Populations: Inbreeding Depression and Outbreeding Depression "Hybrids & Heirlooms"—an article from University of Illinois Extension's Home Hort Hints Roybal, J. (July 1, 1998). "Ranchstar". Beef (beefmagazine.com). "Sex-Links"—regarding poultry; at FeatherSite
Population bottleneck
A population bottleneck or genetic bottleneck is a sharp reduction in the size of a population due to environmental events such as famines, earthquakes, floods, fires, disease, and droughts; or human activities such as genocide, speciocide, widespread violence or intentional culling. Such events can reduce the variation in the gene pool of a population; thereafter, a smaller population, with a smaller genetic diversity, remains to pass on genes to future generations of offspring. Genetic diversity remains lower, increasing only when gene flow from another population occurs or very slowly increasing with time as random mutations occur. This results in a reduction in the robustness of the population and in its ability to adapt to and survive selecting environmental changes, such as climate change or a shift in available resources. Alternatively, if survivors of the bottleneck are the individuals with the greatest genetic fitness, the frequency of the fitter genes within the gene pool is increased, while the pool itself is reduced. The genetic drift caused by a population bottleneck can change the proportional random distribution of alleles and even lead to loss of alleles. The chances of inbreeding and genetic homogeneity can increase, possibly leading to inbreeding depression. Smaller population size can also cause deleterious mutations to accumulate. Population bottlenecks play an important role in conservation biology (see minimum viable population size) and in the context of agriculture (biological and pest control). Minimum viable population size In conservation biology, minimum viable population (MVP) size helps to determine the effective population size when a population is at risk for extinction. The effects of a population bottleneck often depend on the number of individuals remaining after the bottleneck and how that compares to the minimum viable population size. Founder effects A slightly different form of bottleneck can occur if a small group becomes reproductively (e.g., geographically) separated from the main population, such as through a founder event, e.g., if a few members of a species successfully colonize a new isolated island, or from small captive breeding programs such as animals at a zoo. Alternatively, invasive species can undergo population bottlenecks through founder events when introduced into their invaded range. Examples Humans According to a 1999 model, a severe population bottleneck, or more specifically a full-fledged speciation, occurred among a group of Australopithecina as they transitioned into the species known as Homo erectus two million years ago. It is believed that additional bottlenecks must have occurred since Homo erectus started walking the Earth, but current archaeological, paleontological, and genetic data are inadequate to give much reliable information about such conjectured bottlenecks. Nonetheless, a 2023 genetic analysis discerned such a human ancestor population bottleneck of a possible 100,000 to 1000 individuals "around 930,000 and 813,000 years ago [which] lasted for about 117,000 years and brought human ancestors close to extinction." A 2005 study from Rutgers University theorized that the pre-1492 native populations of the Americas are the descendants of only 70 individuals who crossed the land bridge between Asia and North America. 
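The loss of variation described above can be illustrated with a minimal Wright–Fisher-style simulation, in which allele copies are resampled at random each generation. The population sizes, bottleneck depth, and starting allele frequency below are illustrative assumptions, not values from this article.

```python
# Minimal sketch of genetic drift through a bottleneck: a Wright-Fisher model in which
# 2N allele copies are resampled binomially each generation. All numbers are illustrative.
import random

def simulate_allele_frequency(freq: float, sizes: list[int]) -> list[float]:
    """Track the frequency of one allele through generations of the given diploid sizes."""
    trajectory = [freq]
    for n in sizes:
        copies = 2 * n  # a diploid population of size N carries 2N gene copies
        drawn = sum(1 for _ in range(copies) if random.random() < freq)
        freq = drawn / copies
        trajectory.append(freq)
    return trajectory

if __name__ == "__main__":
    random.seed(1)
    # Ten generations at N = 500, a five-generation crash to N = 10, then recovery to N = 500.
    sizes = [500] * 10 + [10] * 5 + [500] * 10
    for run in range(3):
        trajectory = simulate_allele_frequency(0.5, sizes)
        final = trajectory[-1]
        note = " (allele lost or fixed)" if final in (0.0, 1.0) else ""
        print(f"run {run}: frequency before crash {trajectory[10]:.2f}, after recovery {final:.2f}{note}")
```

Most of the change in allele frequency happens during the few generations at small size, and an allele lost then stays lost after the population recovers, which is the signature of a bottleneck described above.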
The Neolithic Y-chromosome bottleneck refers to a period around 5000 BC where the diversity in the male y-chromosome dropped precipitously, to a level equivalent to reproduction occurring with a ratio between men and women of 1:17. Discovered in 2015 the research suggests that the reason for the bottleneck was not a reduction in the number of males, but a drastic decrease in the percentage of males with reproductive success. Toba catastrophe theory The controversial Toba catastrophe theory, presented in the late 1990s to early 2000s, suggested that a bottleneck of the human population occurred approximately 75,000 years ago, proposing that the human population was reduced to perhaps 10,000–30,000 individuals when the Toba supervolcano in Indonesia erupted and triggered a major environmental change. Parallel bottlenecks were proposed to exist among chimpanzees, gorillas, rhesus macaques, orangutans and tigers. The hypothesis was based on geological evidence of sudden climate change and on coalescence evidence of some genes (including mitochondrial DNA, Y-chromosome DNA and some nuclear genes) and the relatively low level of genetic variation in humans. However, subsequent research, especially in the 2010s, appeared to refute both the climate argument and the genetic argument. Recent research shows the extent of climate change was much smaller than believed by proponents of the theory. In 2000, a Molecular Biology and Evolution paper suggested a transplanting model or a 'long bottleneck' to account for the limited genetic variation, rather than a catastrophic environmental change. This would be consistent with suggestions that in sub-Saharan Africa numbers could have dropped at times as low as 2,000, for perhaps as long as 100,000 years, before numbers began to expand again in the Late Stone Age. Other animals European bison, also called wisent (Bison bonasus), faced extinction in the early 20th century. The animals living today are all descended from 12 individuals and they have extremely low genetic variation, which may be beginning to affect the reproductive ability of bulls. The population of American bison (Bison bison) fell due to overhunting, nearly leading to extinction around the year 1890, though it has since begun to recover (see table). A classic example of a population bottleneck is that of the northern elephant seal, whose population fell to about 30 in the 1890s. Although it now numbers in the hundreds of thousands, the potential for bottlenecks within colonies remains. Dominant bulls are able to mate with the largest number of females—sometimes as many as 100. With so much of a colony's offspring descended from just one dominant male, genetic diversity is limited, making the species more vulnerable to diseases and genetic mutations. The golden hamster is a similarly bottlenecked species, with the vast majority of domesticated hamsters descended from a single litter found in the Syrian desert around 1930, and very few wild golden hamsters remain. An extreme example of a population bottleneck is the New Zealand black robin, of which every specimen today is a descendant of a single female, called Old Blue. The Black Robin population is still recovering from its low point of only five individuals in 1980. The genome of the giant panda shows evidence of a severe bottleneck about 43,000 years ago. There is also evidence of at least one primate species, the golden snub-nosed monkey, that also suffered from a bottleneck around this time. 
An unknown environmental event is suspected to have caused the bottlenecks observed in both of these species. The bottlenecks likely caused the low genetic diversity observed in both species. Other facts can sometimes be inferred from an observed population bottleneck. Among the Galápagos Islands giant tortoises—themselves a prime example of a bottleneck—the comparatively large population on the slopes of the Alcedo volcano is significantly less diverse than four other tortoise populations on the same island. DNA analyses date the bottleneck to around 88,000 years before present (YBP). About 100,000 YBP the volcano erupted violently, deeply burying much of the tortoise habitat in pumice and ash. Another example can be seen in the greater prairie chickens, which were prevalent in North America until the 20th century. In Illinois alone, the number of greater prairie chickens plummeted from over 100 million in 1900 to about 46 in 1998. These declines in population were the result of hunting and habitat destruction, but the random consequences have also caused a great loss in species diversity. DNA analysis comparing the birds from 1990 and mid-century shows a steep genetic decline in recent decades. Management of the greater prairie chickens now includes genetic rescue efforts, including the translocation of prairie chickens between leks to increase each population's genetic diversity. Population bottlenecking poses a major threat to the stability of species populations as well. Papilio homerus is the largest butterfly in the Americas and is endangered according to the IUCN. The disappearance of a central population poses a major threat of a population bottleneck. The remaining two populations are now geographically isolated, and they face an unstable future with limited remaining opportunity for gene flow. Genetic bottlenecks also exist in cheetahs. Selective breeding Bottlenecks also exist among pure-bred animals (e.g., dogs and cats: pugs, Persians) because breeders limit their gene pools to a few (show-winning) individuals for their looks and behaviors. The extensive use of desirable individual animals at the exclusion of others can result in a popular sire effect. Selective breeding for dog breeds caused constricting breed-specific bottlenecks. These bottlenecks have led to dogs having an average of 2–3% more genetic loading than gray wolves. The strict breeding programs and population bottlenecks have led to the prevalence of diseases such as heart disease, blindness, cancers, hip dysplasia, and cataracts. Selective breeding to produce high-yielding crops has caused genetic bottlenecks in these crops and has led to genetic homogeneity. This reduced genetic diversity in many crops could lead to broader susceptibility to new diseases or pests, which threatens global food security. Plants Research has shown that there are incredibly low, nearly undetectable amounts of genetic diversity in the genome of the Wollemi pine (Wollemia nobilis). The IUCN found a population count of 80 mature individuals and about 300 seedlings and juveniles in 2011; previously, the Wollemi pine had fewer than 50 individuals in the wild. The low population size and low genetic diversity indicate that the Wollemi pine went through a severe population bottleneck. A population bottleneck was created in the 1970s through the conservation efforts of the endangered Mauna Kea silversword (Argyroxiphium sandwicense ssp. sandwicense).
The small natural population of silversword was augmented through the 1970s with outplanted individuals. All of the outplanted silversword plants were found to be first or subsequent generation offspring of just two maternal founders. The low amount of polymorphic loci in the outplanted individuals led to the population bottleneck, causing the loss of the marker allele at eight of the loci. See also Baby boom Population boom References External links Northern Elephant Seal History
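Two textbook population-genetics quantities summarize the effects running through the examples above: the effective population size under an unequal breeding sex ratio (relevant to harem breeders such as the northern elephant seal and to the Y-chromosome bottleneck) and the expected retention of heterozygosity after a number of generations at small size. The sketch below uses the standard formulas Ne = 4·Nm·Nf/(Nm + Nf) and H_t = H_0·(1 − 1/(2Ne))^t with hypothetical counts, not estimates for any particular species.

```python
# Minimal sketch of two standard population-genetics formulas (textbook results,
# not taken from the article). All counts below are hypothetical.

def effective_size(breeding_males: int, breeding_females: int) -> float:
    """Effective population size with unequal numbers of breeding males and females."""
    return 4 * breeding_males * breeding_females / (breeding_males + breeding_females)

def heterozygosity_retained(ne: float, generations: int) -> float:
    """Fraction of initial expected heterozygosity remaining after t generations at size Ne."""
    return (1 - 1 / (2 * ne)) ** generations

if __name__ == "__main__":
    females = 1700
    for males in (1700, 100):  # an even sex ratio vs. roughly one breeding male per 17 females
        ne = effective_size(males, females)
        print(f"{males} breeding males, {females} females -> Ne ~ {ne:.0f}")
    for ne in (10, 50, 500):
        kept = 100 * heterozygosity_retained(ne, 50)
        print(f"Ne = {ne:3d}: ~{kept:.0f}% of heterozygosity retained after 50 generations")
```

Skewed reproductive success shrinks Ne well below the census count, and at small Ne heterozygosity decays quickly, which is consistent with the low diversity reported for the bottlenecked populations described above.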
Degeneracy (biology)
Within biological systems, degeneracy occurs when structurally dissimilar components/pathways can perform similar functions (i.e. are effectively interchangeable) under certain conditions, but perform distinct functions in other conditions. Degeneracy is thus a relational property that requires comparing the behavior of two or more components. In particular, if degeneracy is present in a pair of components, then there will exist conditions where the pair will appear functionally redundant but other conditions where they will appear functionally distinct. Note that this use of the term has practically no relevance to the questionably meaningful concept of evolutionarily degenerate populations that have lost ancestral functions. Biological examples Examples of degeneracy are found in the genetic code, when many different nucleotide sequences encode the same polypeptide; in protein folding, when different polypeptides fold to be structurally and functionally equivalent; in protein functions, when overlapping binding functions and similar catalytic specificities are observed; in metabolism, when multiple, parallel biosynthetic and catabolic pathways may coexist. More generally, degeneracy is observed in proteins of every functional class (e.g. enzymatic, structural, or regulatory), protein complex assemblies, ontogenesis, the nervous system, cell signalling (crosstalk) and numerous other biological contexts reviewed in. Contribution to robustness Degeneracy contributes to the robustness of biological traits through several mechanisms. Degenerate components compensate for one another under conditions where they are functionally redundant, thus providing robustness against component or pathway failure. Because degenerate components are somewhat different, they tend to harbor unique sensitivities so that a targeted attack such as a specific inhibitor is less likely to present a risk to all components at once. There are numerous biological examples where degeneracy contributes to robustness in this way. For instance, gene families can encode for diverse proteins with many distinctive roles yet sometimes these proteins can compensate for each other during lost or suppressed gene expression, as seen in the developmental roles of the adhesins gene family in Saccharomyces. Nutrients can be metabolized by distinct metabolic pathways that are effectively interchangeable for certain metabolites even though the total effects of each pathway are not identical. In cancer, therapies targeting the EGF receptor are thwarted by the co-activation of alternate receptor tyrosine kinases (RTK) that have partial functional overlap with the EGF receptor (and are therefore degenerate), but are not targeted by the same specific EGF receptor inhibitor. Other examples from various levels of biological organization can be found in. Theory Several theoretical developments have outlined links between degeneracy and important biological measurements related to robustness, complexity, and evolvability. These include: Theoretical arguments supported by simulations have proposed that degeneracy can lead to distributed forms of robustness in protein interaction networks. Those authors suggest that similar phenomena is likely to arise in other biological networks and potentially may contribute to the resilience of ecosystems as well. Tononi et al. have found evidence that degeneracy is inseparable from the existence of hierarchical complexity in neural populations. 
They argue that the link between degeneracy and complexity is likely to be much more general. Fairly abstract simulations have supported the hypothesis that degeneracy fundamentally alters the propensity for a genetic system to access novel heritable phenotypes and that degeneracy could therefore be a precondition for open-ended evolution. The three hypotheses above have been integrated in where they propose that degeneracy plays a central role in the open-ended evolution of biological complexity. In the same article, it was argued that the absence of degeneracy within many designed (abiotic) complex systems may help to explain why robustness appears to be in conflict with flexibility and adaptability, as seen in software, systems engineering, and artificial life. See also Canalisation Equifinality References Further reading Because there are many distinct types of systems that undergo heritable variation and selection (see Universal Darwinism), degeneracy has become a highly interdisciplinary topic. The following provides a brief roadmap to the application and study of degeneracy within different disciplines. Animal Communication Cultural Variation Ecosystems Epigenetics History and philosophy of science Systems biology Evolution Immunology Cohen, I.R., U. Hershberg, and S. Solomon, 2004 Antigen-receptor degeneracy and immunological paradigms. Molecular Immunology, . 40(14–15) pp. 993–996. Tieri, P., G.C. Castellani, D. Remondini, S. Valensin, J. Loroni, S. Salvioli, and C. Franceschi, Capturing degeneracy of the immune system. In Silico Immunology. Springer, 2007. Artificial life, Computational intelligence Andrews, P.S. and J. Timmis, A Computational Model of Degeneracy in a Lymph Node. Lecture Notes in Computer Science, 2006. 4163: p. 164. Mendao, M., J. Timmis, P.S. Andrews, and M. Davies. The Immune System in Pieces: Computational Lessons from Degeneracy in the Immune System. in Foundations of Computational Intelligence (FOCI). 2007. Whitacre, J.M. and A. Bender. Degenerate neutrality creates evolvable fitness landscapes. in WorldComp-2009. 2009. Las Vegas, Nevada, USA. Whitacre, J.M., P. Rohlfshagen, X. Yao, and A. Bender. The role of degenerate robustness in the evolvability of multi-agent systems in dynamic environments. in PPSN XI. 2010. Kraków, Poland. Fernandez-Leon, J.A. (2011). Evolving cognitive-behavioural dependencies in situated agents for behavioural robustness. BioSystems 106, pp. 94–110. Fernandez-Leon, J.A. (2011). Behavioural robustness: a link between distributed mechanisms and coupled transient dynamics. BioSystems 105, Elsevier, pp. 49–61. Fernandez-Leon, J.A. (2010). Evolving experience-dependent robust behaviour in embodied agents. BioSystems 103:1, Elsevier, pp. 45–56. Brain Price, C. and K. Friston, Degeneracy and cognitive anatomy. Trends in Cognitive Sciences, 2002. 6(10) pp. 416–421. Tononi, G., O. Sporns, and G.M. Edelman, Measures of degeneracy and redundancy in biological networks. Proceedings of the National Academy of Sciences, USA, 1999. 96(6) pp. 3257–3262. Mason, P.H. (2014) What is normal? A historical survey and neuroanthropological perspective, in Jens Clausen and Neil Levy. (Eds.) Handbook of Neuroethics, Springer, pp. 343–363. Linguistics Oncology Tian, T., S. Olson, J.M. Whitacre, and A. Harding, The origins of cancer robustness and evolvability. Integrative Biology, 2011. 3: pp. 17–30. Peer Review Lehky, S., Peer Evaluation and Selection Systems: Adaptation and Maladaptation of Individuals and Groups through Peer Review. 
2011: BioBitField Press. Researchers Duarte Araujo Sergei Atamas Andrew Barron Keith Davids Gerald Edelman Ryszard Maleszka Paul Mason Ludovic Seifert Ricard Sole Giulio Tononi James Whitacre External links degeneracy research community
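Returning to the first example given in this entry, degeneracy of the genetic code can be made concrete by counting how many structurally different codons specify the same amino acid. The sketch below builds the standard codon table (established biology); the compact string encoding is only an implementation convenience.

```python
# Minimal sketch of degeneracy in the genetic code: several structurally different codons
# map to the same amino acid. Codons are ordered by first, second, third base over T, C, A, G;
# '*' marks a stop codon.
from collections import defaultdict

BASES = "TCAG"
AMINO_ACIDS = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"

CODON_TABLE = {
    b1 + b2 + b3: AMINO_ACIDS[16 * i + 4 * j + k]
    for i, b1 in enumerate(BASES)
    for j, b2 in enumerate(BASES)
    for k, b3 in enumerate(BASES)
}

if __name__ == "__main__":
    synonyms = defaultdict(list)
    for codon, amino_acid in CODON_TABLE.items():
        synonyms[amino_acid].append(codon)
    # Leucine (L), serine (S) and arginine (R) each have six codons, while
    # methionine (M) and tryptophan (W) have only one each.
    for amino_acid in ("L", "S", "R", "M", "W"):
        codons = synonyms[amino_acid]
        print(f"{amino_acid}: {len(codons)} codons -> {', '.join(codons)}")
```

Synonymous codons are interchangeable with respect to the encoded protein while remaining distinct in other respects (for instance, in which tRNAs read them), which is the relational property of degeneracy described at the start of this entry.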
Biological interaction
In ecology, a biological interaction is the effect that a pair of organisms living together in a community have on each other. They can be either of the same species (intraspecific interactions) or of different species (interspecific interactions). These effects may be short-term or long-term, and both often strongly influence the adaptation and evolution of the species involved. Biological interactions range from mutualism, beneficial to both partners, to competition, harmful to both partners. Interactions can be direct, when physical contact is established, or indirect, through intermediaries such as shared resources, territories, ecological services, metabolic waste, toxins or growth inhibitors. Such a relationship can be characterized by its net effect, based on the individual effects it has on each of the two organisms. Several recent studies have suggested that non-trophic species interactions such as habitat modification and mutualisms can be important determinants of food web structures. However, it remains unclear whether these findings generalize across ecosystems, and whether non-trophic interactions affect food webs randomly, or affect specific trophic levels or functional groups. History Although biological interactions had been studied more or less individually earlier, Edward Haskell (1949) gave an integrative approach to the topic, proposing a classification of "co-actions", later adopted by biologists as "interactions". Close and long-term interactions are described as symbiosis; symbioses that are mutually beneficial are called mutualistic. The term symbiosis was subject to a century-long debate about whether it should specifically denote mutualism, as in lichens or in parasites that benefit themselves. This debate created two different classifications for biotic interactions, one based on the duration of the interaction (long-term and short-term interactions), and the other based on the magnitude of the interaction force (competition/mutualism) or its effect on individual fitness, according to the stress gradient hypothesis and the mutualism–parasitism continuum. Evolutionary game theory, including the Red Queen, Red King, and Black Queen hypotheses, has demonstrated that a classification based on the force of interaction is important. Classification based on time of interaction Short-term interactions Short-term interactions, including predation and pollination, are extremely important in ecology and evolution. These are short-lived in terms of the duration of a single interaction: a predator kills and eats a prey animal; a pollinator transfers pollen from one flower to another; but they are extremely durable in terms of their influence on the evolution of both partners. As a result, the partners coevolve. Predation In predation, one organism, the predator, kills and eats another organism, its prey. Predators are adapted and often highly specialized for hunting, with acute senses such as vision, hearing, or smell. Many predatory animals, both vertebrate and invertebrate, have sharp claws or jaws to grip, kill, and cut up their prey. Other adaptations include stealth and aggressive mimicry that improve hunting efficiency. Predation has a powerful selective effect on prey, causing them to develop antipredator adaptations such as warning coloration, alarm calls and other signals, camouflage, and defensive spines and chemicals. Predation has been a major driver of evolution since at least the Cambrian period.
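The predation passage above is qualitative; the classic Lotka–Volterra equations are the standard quantitative description of a predator–prey interaction and are not presented in this article, so the sketch below is only an illustrative aside with textbook parameter values.

```python
# Minimal sketch of predator-prey dynamics with the classic Lotka-Volterra model:
#   dx/dt = a*x - b*x*y        (prey)
#   dy/dt = c*b*x*y - d*y      (predators)
# Forward Euler is crude and slowly inflates the oscillations, but it is enough to
# show the coupled cycles. All parameter values are standard illustrations.

def lotka_volterra(x, y, a, b, c, d, dt, steps):
    """Return a list of (prey, predator) densities over time."""
    history = [(x, y)]
    for _ in range(steps):
        dx = a * x - b * x * y
        dy = c * b * x * y - d * y
        x, y = x + dt * dx, y + dt * dy
        history.append((x, y))
    return history

if __name__ == "__main__":
    history = lotka_volterra(x=10.0, y=5.0, a=1.0, b=0.1, c=0.5, d=0.5, dt=0.01, steps=3000)
    for i in range(0, 3001, 750):
        prey, predators = history[i]
        print(f"t = {i * 0.01:5.1f}: prey ~ {prey:6.2f}, predators ~ {predators:6.2f}")
```

The out-of-phase cycles (prey rise, predators follow, prey crash) capture the reciprocal selective pressure described above, under which predator and prey coevolve.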
Over the last several decades, microbiologists have discovered a number of fascinating microbes that survive by their ability to prey upon others. Several of the best examples are members of the genera Daptobacter (Campylobacterota), Bdellovibrio, and Vampirococcus. Bdellovibrios are active hunters that are vigorously motile, swimming about looking for susceptible Gram-negative bacterial prey. Upon sensing such a cell, a bdellovibrio cell swims faster until it collides with the prey cell. It then bores a hole through the outer membrane of its prey and enters the periplasmic space. As it grows, it forms a long filament that eventually forms septae and produces progeny bacteria. Lysis of the prey cell releases new bdellovibrio cells. Bdellovibrios will not attack mammalian cells, and Gram-negative prey bacteria have never been observed to acquire resistance to bdellovibrios. This has raised interest in the use of these bacteria as a "probiotic" to treat infected wounds. Although this has not yet been tried, one can imagine that with the rise in antibiotic-resistant pathogens, such forms of treatments may be considered viable alternatives. Pollination In pollination, pollinators including insects (entomophily), some birds (ornithophily), and some bats, transfer pollen from a male flower part to a female flower part, enabling fertilisation, in return for a reward of pollen or nectar. The partners have coevolved through geological time; in the case of insects and flowering plants, the coevolution has continued for over 100 million years. Insect-pollinated flowers are adapted with shaped structures, bright colours, patterns, scent, nectar, and sticky pollen to attract insects, guide them to pick up and deposit pollen, and reward them for the service. Pollinator insects like bees are adapted to detect flowers by colour, pattern, and scent, to collect and transport pollen (such as with bristles shaped to form pollen baskets on their hind legs), and to collect and process nectar (in the case of honey bees, making and storing honey). The adaptations on each side of the interaction match the adaptations on the other side, and have been shaped by natural selection on their effectiveness of pollination. Seed dispersal Seed dispersal is the movement, spread or transport of seeds away from the parent plant. Plants have limited mobility and rely upon a variety of dispersal vectors to transport their propagules, including both abiotic vectors such as the wind and living (biotic) vectors like birds. Seeds can be dispersed away from the parent plant individually or collectively, as well as dispersed in both space and time. The patterns of seed dispersal are determined in large part by the dispersal mechanism and this has important implications for the demographic and genetic structure of plant populations, as well as migration patterns and species interactions. There are five main modes of seed dispersal: gravity, wind, ballistic, water, and by animals. Some plants are serotinous and only disperse their seeds in response to an environmental stimulus. Dispersal involves the letting go or detachment of a diaspore from the main parent plant. Long-term interactions (Symbiosis) The six possible types of symbiosis are mutualism, commensalism, parasitism, neutralism, amensalism, and competition. These are distinguished by the degree of benefit or harm they cause to each partner. 
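The six categories just listed are conventionally distinguished by the sign of the effect on each partner's fitness ('+' benefit, '-' harm, '0' no effect). A minimal lookup for that sign scheme is sketched below; it follows the fitness-based classification used later in this article, where predation and parasitism are grouped under antagonism, and is a summary aid rather than a formal model.

```python
# Minimal sketch: classify a pairwise interaction by its effect on each partner's fitness.
# '+' = benefit, '-' = harm, '0' = no effect. The grouping of predation and parasitism
# under "antagonism" follows the fitness-based classification later in this article.

EFFECT_TO_INTERACTION = {
    ("+", "+"): "mutualism",
    ("+", "0"): "commensalism",
    ("+", "-"): "antagonism (e.g. predation or parasitism)",
    ("0", "0"): "neutralism",
    ("-", "0"): "amensalism",
    ("-", "-"): "competition",
}

def classify(effect_on_a: str, effect_on_b: str) -> str:
    """Return the interaction type; the pair is unordered, so both orderings are tried."""
    key = (effect_on_a, effect_on_b)
    return EFFECT_TO_INTERACTION.get(key) or EFFECT_TO_INTERACTION.get(key[::-1], "unknown")

if __name__ == "__main__":
    print(classify("+", "-"))  # parasite and host
    print(classify("0", "+"))  # manatee and remora
    print(classify("-", "-"))  # two species exploiting the same limited resource
```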
Mutualism Mutualism is an interaction between two or more species, where species derive a mutual benefit, for example an increased carrying capacity. Similar interactions within a species are known as co-operation. Mutualism may be classified in terms of the closeness of association, the closest being symbiosis, which is often confused with mutualism. One or both species involved in the interaction may be obligate, meaning they cannot survive in the short or long term without the other species. Though mutualism has historically received less attention than other interactions such as predation, it is an important subject in ecology. Examples include cleaning symbiosis, gut flora, Müllerian mimicry, and nitrogen fixation by bacteria in the root nodules of legumes. Commensalism Commensalism benefits one organism and the other organism is neither benefited nor harmed. It occurs when one organism takes benefits by interacting with another organism by which the host organism is not affected. A good example is a remora living with a manatee. Remoras feed on the manatee's faeces. The manatee is not affected by this interaction, as the remora does not deplete the manatee's resources. Parasitism Parasitism is a relationship between species, where one organism, the parasite, lives on or in another organism, the host, causing it some harm, and is adapted structurally to this way of life. The parasite either feeds on the host, or, in the case of intestinal parasites, consumes some of its food. Neutralism Neutralism (a term introduced by Eugene Odum) describes the relationship between two species that interact but do not affect each other. Examples of true neutralism are virtually impossible to prove; the term is in practice used to describe situations where interactions are negligible or insignificant. Amensalism Amensalism (a term introduced by Haskell) is an interaction where an organism inflicts harm to another organism without any costs or benefits received by itself. Amensalism describes the adverse effect that one organism has on another organism (figure 32.1). This is a unidirectional process based on the release of a specific compound by one organism that has a negative effect on another. A classic example of amensalism is the microbial production of antibiotics that can inhibit or kill other, susceptible microorganisms. A clear case of amensalism is where sheep or cattle trample grass. Whilst the presence of the grass causes negligible detrimental effects to the animal's hoof, the grass suffers from being crushed. Amensalism is often used to describe strongly asymmetrical competitive interactions, such as has been observed between the Spanish ibex and weevils of the genus Timarcha which feed upon the same type of shrub. Whilst the presence of the weevil has almost no influence on food availability, the presence of ibex has an enormous detrimental effect on weevil numbers, as they consume significant quantities of plant matter and incidentally ingest the weevils upon it. Amensalisms can be quite complex. Attine ants (ants belonging to a New World tribe) are able to take advantage of an interaction between an actinomycete and a parasitic fungus in the genus Escovopsis. This amensalistic relationship enables the ant to maintain a mutualism with members of another fungal genus, Leucocoprinus. These ants cultivate a garden of Leucocoprinus fungi for their own nourishment. 
To prevent the parasitic fungus Escovopsis from decimating their fungal garden, the ants also promote the growth of an actinomycete of the genus Pseudonocardia, which produces an antimicrobial compound that inhibits the growth of the Escovopsis fungi. Competition Competition can be defined as an interaction between organisms or species, in which the fitness of one is lowered by the presence of another. Competition is often for a resource such as food, water, or territory in limited supply, or for access to females for reproduction. Competition among members of the same species is known as intraspecific competition, while competition between individuals of different species is known as interspecific competition. According to the competitive exclusion principle, species less suited to compete for resources should either adapt or die out. According to evolutionary theory, this competition within and between species for resources plays a critical role in natural selection. Classification based on effect on fitness Biotic interactions can vary in intensity (strength of interaction), and frequency (number of interactions in a given time). There are direct interactions when there is a physical contact between individuals or indirect interactions when there is no physical contact, that is, the interaction occurs with a resource, ecological service, toxine or growth inhibitor. The interactions can be directly determined by individuals (incidentally) or by stochastic processes (accidentally), for instance side effects that one individual have on other. They are divided into six major types: Competition, Antagonism, Amensalism, Neutralism, Commensalism and Mutualism. Competition It is when two organisms fight and both reduce their fitness. An incidental dysbiosis (determined by organisms) is observed. It could be direct competition, when two organisms fight physically and both end up affected. Include interference competition. Indirect competition is when two organisms fight indirectly for a resource or service and both end up affected. It includes exploitation competition, competitive exclusion and apparent exploitation competition. Competition is related to Red Queen Hypothesis. Antagonism It is when one organism takes advantage of another, one increases its fitness and the other decreases it. An incidental antibiosis (determined by chance) is observed. Direct antagonism is when an organism benefits by directly harming, partially or totally consuming another organism. Includes predation, grazing, browsing, and parasitism. Indirect antagonism is when one organism benefits by harming or consuming the resources or ecological services of another organism. Includes allelopathic antagonism, metabolic antagonism, resource exploitation. Amensalism It is when one organism maintains its fitness, but the fitness of another decreases. Accidental antibiosis (determined by chance) is observed. Direct amensalism is when one organism physically inhibits the presence of another, but the latter is neither benefited nor harmed. Includes accidental crushing. (e.g., crushing an ant does not increase or decrease fitness of the crusher). Indirect amensalism is when an organism accidentally inhibits the presence of another with chemical substances (inhibitors) or waste. Includes accidental antibiosis, accidental poisoning and accidental allelopathy. 
Neutralism It is when two organisms accidentally coexist, but they do not benefit or harm each other physically or through resources or services, there is no change in the fitness for both. Commensalism It is when one organism maintains its fitness, but the fitness of another increases. Accidental probiosis (determined by chance) is observed. Direct comensalism is when an organism physically benefits another organism without harming or benefiting it. Includes facilitation, epibiosis, and phoresis. Indirect comensalism is when an organism benefits from the resource or service of another without affecting or benefiting it. Includes tanatochresis, inquiliny, detrivory, scavenging, coprophagy. Mutualism When two organisms cooperate and both increase their fitness. Incidental probiosis (determined by organisms) is observed. It is subdivided into. Direct mutualism is when two organisms physically cooperate and both benefit, it includes obligate symbiosis. Indirect mutualism is when two organisms cooperate to obtain a resource or service and both benefit. It includes facultative symbiosis, protocooperation, niche construction, metabolic syntrophy, holobiosis, mutual aid, and metabolic coupling. Mutualism is related to the Red King and Black Queen hypotheses. Non-trophic interactions Some examples of non-trophic interactions are habitat modification, mutualism and competition for space. It has been suggested recently that non-trophic interactions can indirectly affect food web topology and trophic dynamics by affecting the species in the network and the strength of trophic links. A number of recent theoretical studies have emphasized the need to integrate trophic and non-trophic interactions in ecological network analyses. The few empirical studies that address this suggest food web structures (network topologies) can be strongly influenced by species interactions outside the trophic network. However these studies include only a limited number of coastal systems, and it remains unclear to what extent these findings can be generalized. Whether non-trophic interactions typically affect specific species, trophic levels, or functional groups within the food web, or, alternatively, indiscriminately mediate species and their trophic interactions throughout the network has yet to be resolved. Some studies suggest sessile species with generally low trophic levels seem to benefit more than others from non-trophic facilitation, while other studies suggest facilitation benefits higher trophic and more mobile species as well. A 2018 study by Borst et al.. tested the general hypothesis that foundation species – spatially dominant habitat-structuring organisms – modify food webs by enhancing their size as indicated by species number, and their complexity as indicated by link density, via facilitation of species, regardless of ecosystem type (see diagram). Additionally, they tested that any change in food web properties caused by foundation species occurs via random facilitation of species throughout the entire food web or via targeted facilitation of specific species that belong to certain trophic levels or functional groups. It was found that species at the base of the food web are less strongly, and carnivores are more strongly facilitated in foundation species' food webs than predicted based on random facilitation, resulting in a higher mean trophic level and a longer average chain length. 
This indicates foundation species strongly enhance food web complexity through non-trophic facilitation of species across the entire trophic network. Although foundation species are part of the food web like any other species (e.g. as prey or predator), numerous studies have shown that they strongly facilitate the associated community by creating new habitat and alleviating physical stress. This form of non-trophic facilitation by foundation species has been found to occur across a wide range of ecosystems and environmental conditions. In harsh coastal zones, corals, kelps, mussels, oysters, seagrasses, mangroves, and salt marsh plants facilitate organisms by attenuating currents and waves, providing aboveground structure for shelter and attachment, concentrating nutrients, and/or reducing desiccation stress during low tide exposure. In more benign systems, foundation species such as the trees in a forest, shrubs and grasses in savannahs, and macrophytes in freshwater systems, have also been found to play a major habitat-structuring role. Ultimately, all foundation species increase habitat complexity and availability, thereby partitioning and enhancing the niche space available to other species. See also Altruism (biology) Animal sexual behaviour Biological pump – interaction between marine animals and carbon forms Cheating (biology) Collective animal behavior Detritivory Epibiont Evolving digital ecological network Food chain Kin selection Microbial cooperation Microbial loop Quorum sensing Spite (game theory) Swarm behaviour Notes References Further reading Snow, B. K. & Snow, D. W. (1988). Birds and berries: a study of an ecological interaction. Poyser, London Ecology
Environmental resource management
Environmental resource management or environmental management is the management of the interaction and impact of human societies on the environment. It is not, as the phrase might suggest, the management of the environment itself. Environmental resources management aims to ensure that ecosystem services are protected and maintained for future human generations, and also maintain ecosystem integrity through considering ethical, economic, and scientific (ecological) variables. Environmental resource management tries to identify factors between meeting needs and protecting resources. It is thus linked to environmental protection, resource management, sustainability, integrated landscape management, natural resource management, fisheries management, forest management, wildlife management, environmental management systems, and others. Significance Environmental resource management is an issue of increasing concern, as reflected in its prevalence in several texts influencing global sociopolitical frameworks such as the Brundtland Commission's Our Common Future, which highlighted the integrated nature of the environment and international development, and the Worldwatch Institute's annual State of the World reports. The environment determines the nature of people, animals, plants, and places around the Earth, affecting behaviour, religion, culture and economic practices. Scope Environmental resource management can be viewed from a variety of perspectives. It involves the management of all components of the biophysical environment, both living (biotic) and non-living (abiotic), and the relationships among all living species and their habitats. The environment also involves the relationships of the human environment, such as the social, cultural, and economic environment, with the biophysical environment. The essential aspects of environmental resource management are ethical, economical, social, and technological. These underlie principles and help make decisions. The concept of environmental determinism, probabilism, and possibilism are significant in the concept of environmental resource management. Environmental resource management covers many areas in science, including geography, biology, social sciences, political sciences, public policy, ecology, physics, chemistry, sociology, psychology, and physiology. Environmental resource management as a practice and discourse (across these areas) is also the object of study in the social sciences. Aspects Ethical Environmental resource management strategies are intrinsically driven by conceptions of human-nature relationships. Ethical aspects involve the cultural and social issues relating to the environment, and dealing with changes to it. "All human activities take place in the context of certain types of relationships between society and the bio-physical world (the rest of nature)," and so, there is a great significance in understanding the ethical values of different groups around the world. Broadly speaking, two schools of thought exist in environmental ethics: Anthropocentrism and Ecocentrism, each influencing a broad spectrum of environmental resource management styles along a continuum. These styles perceive "...different evidence, imperatives, and problems, and prescribe different solutions, strategies, technologies, roles for economic sectors, culture, governments, and ethics, etc." 
Anthropocentrism Anthropocentrism, "an inclination to evaluate reality exclusively in terms of human values," is an ethic reflected in the major interpretations of Western religions and the dominant economic paradigms of the industrialised world. Anthropocentrism looks at nature as existing solely for the benefit of humans, and as a commodity to use for the good of humanity and to improve human quality of life. Anthropocentric environmental resource management is therefore not the conservation of the environment solely for the environment's sake, but rather the conservation of the environment, and ecosystem structure, for humans' sake. Ecocentrism Ecocentrists believe in the intrinsic value of nature while maintaining that human beings must use and even exploit nature to survive and live. It is this fine ethical line that ecocentrists navigate between fair use and abuse. At an extreme of the ethical scale, ecocentrism includes philosophies such as ecofeminism and deep ecology, which evolved as a reaction to dominant anthropocentric paradigms. "In its current form, it is an attempt to synthesize many old and some new philosophical attitudes about the relationship between nature and human activity, with particular emphasis on ethical, social, and spiritual aspects that have been downplayed in the dominant economic worldview." Economics Main article: Economics The economy functions within and is dependent upon goods and services provided by natural ecosystems. The role of the environment is recognized in both classical economics and neoclassical economics theories, yet the environment was a lower priority in economic policies from 1950 to 1980 due to emphasis from policy makers on economic growth. With the prevalence of environmental problems, many economists embraced the notion that, "If environmental sustainability must coexist for economic sustainability, then the overall system must [permit] identification of an equilibrium between the environment and the economy." As such, economic policy makers began to incorporate the functions of the natural environment – or natural capital – particularly as a sink for wastes and for the provision of raw materials and amenities. Debate continues among economists as to how to account for natural capital, specifically whether resources can be replaced through knowledge and technology, or whether the environment is a closed system that cannot be replenished and is finite. Economic models influence environmental resource management, in that management policies reflect beliefs about natural capital scarcity. For someone who believes natural capital is infinite and easily substituted, environmental management is irrelevant to the economy. For example, economic paradigms based on neoclassical models of closed economic systems are primarily concerned with resource scarcity and thus prescribe legalizing the environment as an economic externality for an environmental resource management strategy. This approach has often been termed 'Command-and-control'. Colby has identified trends in the development of economic paradigms, among them, a shift towards more ecological economics since the 1990s. Ecology There are many definitions of the field of science commonly called ecology. A typical one is "the branch of biology dealing with the relations and interactions between organisms and their environment, including other organisms." 
"The pairing of significant uncertainty about the behaviour and response of ecological systems with urgent calls for near-term action constitutes a difficult reality, and a common lament" for many environmental resource managers. Scientific analysis of the environment deals with several dimensions of ecological uncertainty. These include: structural uncertainty resulting from the misidentification, or lack of information pertaining to the relationships between ecological variables; parameter uncertainty referring to "uncertainty associated with parameter values that are not known precisely but can be assessed and reported in terms of the likelihood…of experiencing a defined range of outcomes"; and stochastic uncertainty stemming from chance or unrelated factors. Adaptive management is considered a useful framework for dealing with situations of high levels of uncertainty though it is not without its detractors. A common scientific concept and impetus behind environmental resource management is carrying capacity. Simply put, carrying capacity refers to the maximum number of organisms a particular resource can sustain. The concept of carrying capacity, whilst understood by many cultures over history, has its roots in Malthusian theory. An example is visible in the EU Water Framework Directive. However, "it is argued that Western scientific knowledge ... is often insufficient to deal with the full complexity of the interplay of variables in environmental resource management. These concerns have been recently addressed by a shift in environmental resource management approaches to incorporate different knowledge systems including traditional knowledge, reflected in approaches such as adaptive co-management community-based natural resource management and transitions management among others. Sustainability Sustainability in environmental resource management involves managing economic, social, and ecological systems both within and outside an organizational entity so it can sustain itself and the system it exists in. In context, sustainability implies that rather than competing for endless growth on a finite planet, development improves quality of life without necessarily consuming more resources. Sustainably managing environmental resources requires organizational change that instills sustainability values that portrays these values outwardly from all levels and reinforces them to surrounding stakeholders. The result should be a symbiotic relationship between the sustaining organization, community, and environment. Many drivers compel environmental resource management to take sustainability issues into account. Today's economic paradigms do not protect the natural environment, yet they deepen human dependency on biodiversity and ecosystem services. Ecologically, massive environmental degradation and climate change threaten the stability of ecological systems that humanity depends on. Socially, an increasing gap between rich and poor and the global North–South divide denies many access to basic human needs, rights, and education, leading to further environmental destruction. The planet's unstable condition is caused by many anthropogenic sources. As an exceptionally powerful contributing factor to social and environmental change, the modern organisation has the potential to apply environmental resource management with sustainability principles to achieve highly effective outcomes. 
To achieve sustainable development with environmental resource management, an organisation should work within sustainability principles, including social and environmental accountability; long-term planning; a strong, shared vision; a holistic focus; devolved and consensus decision making; broad stakeholder engagement and justice; transparency measures; trust; and flexibility. Current paradigm shifts To adjust to today's environment of quick social and ecological changes, some organizations have begun to experiment with new tools and concepts. Those that are more traditional and stick to hierarchical decision making have difficulty dealing with the demand for lateral decision making that supports effective participation. Whether it is a matter of ethics or just strategic advantage, organizations are internalizing sustainability principles. Some of the world's largest and most profitable corporations are shifting to sustainable environmental resource management: Ford, Toyota, BMW, Honda, Shell, DuPont, Statoil, Swiss Re, Hewlett-Packard, and Unilever, among others. An extensive study by the Boston Consulting Group, reaching 1,560 business leaders from diverse regions, job positions, levels of expertise in sustainability, industries, and sizes of organization, revealed the many benefits of sustainable practice as well as its viability. Although the sustainability of environmental resource management has improved, corporate sustainability, for one, has yet to reach the majority of global companies operating in the markets. The three major barriers preventing organizations from shifting towards sustainable practice with environmental resource management are not understanding what sustainability is; having difficulty modeling an economically viable case for the switch; and having a flawed execution plan, or a lack thereof. Therefore, the most important part of shifting an organization to adopt sustainability in environmental resource management is to create a shared vision and understanding of what sustainability is for that particular organization and to clarify the business case. Stakeholders Public sector The public sector comprises the general government sector plus all public corporations, including the central bank. In environmental resource management, the public sector is responsible for administering natural resource management and implementing environmental protection legislation. The traditional role of the public sector in environmental resource management is to provide professional judgement through skilled technicians on behalf of the public. With the increase of intractable environmental problems, the public sector has been led to examine alternative paradigms for managing environmental resources. This has resulted in the public sector working collaboratively with other sectors (including other governments, private and civil) to encourage sustainable natural resource management behaviours.
Environmental managers from the private sector also need skills to manage collaboration within a dynamic social and political environment. Civil society Civil society comprises associations in which societies voluntarily organise themselves and which represent a wide range of interests and ties. These can include community-based organisations, indigenous peoples' organisations and non-government organisations (NGOs). Functioning through strong public pressure, civil society can exercise their legal rights against the implementation of resource management plans, particularly land management plans. The aim of civil society in environmental resource management is to be included in the decision-making process by means of public participation. Public participation can be an effective strategy to invoke a sense of social responsibility of natural resources. Tools As with all management functions, effective management tools, standards, and systems are required. An environmental management standard or system or protocol attempts to reduce environmental impact as measured by some objective criteria. The ISO 14001 standard is the most widely used standard for environmental risk management and is closely aligned to the European Eco-Management and Audit Scheme (EMAS). As a common auditing standard, the ISO 19011 standard explains how to combine this with quality management. Other environmental management systems (EMS) tend to be based on the ISO 14001 standard and many extend it in various ways: The Green Dragon Environmental Management Standard is a five-level EMS designed for smaller organisations for whom ISO 14001 may be too onerous and for larger organisations who wish to implement ISO 14001 in a more manageable step-by-step approach, BS 8555 is a phased standard that can help smaller companies move to ISO 14001 in six manageable steps, The Natural Step focuses on basic sustainability criteria and helps focus engineering on reducing use of materials or energy use that is unsustainable in the long term, Natural Capitalism advises using accounting reform and a general biomimicry and industrial ecology approach to do the same thing, US Environmental Protection Agency has many further terms and standards that it defines as appropriate to large-scale EMS, The UN and World Bank has encouraged adopting a "natural capital" measurement and management framework. Other strategies exist that rely on making simple distinctions rather than building top-down management "systems" using performance audits and full cost accounting. For instance, Ecological Intelligent Design divides products into consumables, service products or durables and unsaleables – toxic products that no one should buy, or in many cases, do not realize they are buying. By eliminating the unsaleables from the comprehensive outcome of any purchase, better environmental resource management is achieved without systems. Another example that diverges from top-down management is the implementation of community based co-management systems of governance. An example of this is community based subsistence fishing areas, such as is implemented in Ha'ena, Hawaii. Community based systems of governance allow for the communities who most directly interact with the resource and who are most deeply impacted by the overexploitation of said resource to make the decisions regarding its management, thus empowering local communities and more effectively managing resources. Recent successful cases have put forward the notion of integrated management. 
It shares a wider approach and stresses out the importance of interdisciplinary assessment. It is an interesting notion that might not be adaptable to all cases. Case Study: Kissidougou, Guinea (Fairhead, Leach) Kissidougou, Guinea’s dry season brings about fires in the open grass fires which defoliate the few trees in the savanna. There are villages within this savanna surrounded by “islands” of forests, allowing for forts, hiding, rituals, protection from wind and fire, and shade for crops. According to scholars and researchers in the region during the late-19th and 20th centuries, there was a steady decline in tree cover. This led to colonial Guinea’s implementation of policies, including the switch of upland to swamp farming; bush-fire control; protection of certain species and land; and tree planting in villages. These policies were carried out in the form of permits, fines, and military repression. But, Kissidougou villagers claim their ancestors’ established these islands. Many maps and letters evidence France’s occupation of Guinea, as well as Kissidougou’s past landscape. During the 1780s to 1860s “the whole country [was] prairie.” James Fairhead and Melissa Leach, both environmental anthropologists at the University of Sussex, claim the state’s environmental analyses “casts into question the relationships between society, demography, and environment.” With this, they reformed the state’s narratives: Local land use can be both vegetation enriching and degrading; combined effect on resource management is greater than the sum of their parts; there is evidence of increased population correlating to an increase in forest cover. Fairhead and Leach support the enabling of policy and socioeconomic conditions in which local resource management conglomerates can act effectively. In Kissidougou, there is evidence that local powers and community efforts shaped the island forests that shape the savanna’s landscape. See also Citizen science, cleanup projects that people can take part in. Cleaner production Environmental impact assessment Environmental management scheme Environmental manager Integrated landscape management ISO 14000 Natural resource management Planetary management Political ecology Resource justice Stakeholder analysis Sustainable management References Further reading External links Economic Costs & Benefits of Environmental Management NOAA Economics business.gov – provides businesses with environmental management tips, as well as tips for green business owners (United States) Nonprofit research on managing the environment Resource economics Natural resource management Systems ecology Human-Environment interaction
Evolutionary anthropology
Evolutionary anthropology, the interdisciplinary study of the evolution of human physiology and human behaviour and of the relation between hominids and non-hominid primates, builds on natural science and on social science. Various fields and disciplines of evolutionary anthropology include:
human evolution and anthropogeny
paleoanthropology and paleontology of both human and non-human primates
primatology and primate ethology
the sociocultural evolution of human behavior, including phylogenetic approaches to historical linguistics
the cultural anthropology and sociology of humans
the archaeological study of human technology and of its changes over time and space
human evolutionary genetics and changes in the human genome over time
the neuroscience, endocrinology, and neuroanthropology of human and primate cognition, culture, actions and abilities
human behavioural ecology and the interaction between humans and the environment
studies of human anatomy, physiology, molecular biology, biochemistry, and differences and changes between species, variation between human groups, and relationships to cultural factors
Evolutionary anthropology studies both the biological and the cultural evolution of humans, past and present. Based on a scientific approach, it brings together fields such as archaeology, behavioral ecology, psychology, primatology, and genetics. As a dynamic and interdisciplinary field, it draws on many lines of evidence to understand the human experience, past and present. Studies of human biological evolution generally focus on the evolution of the human form. Cultural evolution involves the study of cultural change over time and space and frequently incorporates cultural-transmission models. Cultural evolution is not the same as biological evolution: human culture involves the transmission of cultural information (compare memetics), and such transmission can behave in ways quite distinct from human biology and genetics. The study of cultural change increasingly takes place through cladistics and genetic models.

See also

References

Anthropology
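The cultural-transmission models mentioned above are, at their simplest, frequency-update simulations. The following sketch is for illustration only: the population size, the number of role models per learner, and the conformity parameter are arbitrary assumptions, not values from any particular study. It shows how a conformist bias can amplify whichever cultural variant happens to be in the majority.

```python
import random

def transmit(freq, n, conformity=0.0):
    """One generation of cultural transmission in a population of n learners.

    Each learner samples 3 role models and adopts the variant with a
    conformist bias; conformity=0.0 reduces to unbiased (random-copying) transmission.
    """
    new_adopters = 0
    for _ in range(n):
        models = [random.random() < freq for _ in range(3)]
        k = sum(models)                      # how many of the 3 role models carry the variant
        p_adopt = k / 3                      # unbiased copying probability
        if k == 2:                           # majority carries it: conformist boost
            p_adopt += conformity
        elif k == 1:                         # minority carries it: conformist penalty
            p_adopt -= conformity
        new_adopters += random.random() < max(0.0, min(1.0, p_adopt))
    return new_adopters / n

freq = 0.3                                   # initial frequency of the cultural variant
for gen in range(50):
    freq = transmit(freq, n=1000, conformity=0.2)
print(f"frequency after 50 generations: {freq:.2f}")
```

With the conformity parameter set to 0 the update reduces to unbiased copying and the variant frequency simply drifts; with a positive conformity parameter, minority variants tend to be lost, which is one way cultural transmission can behave quite unlike genetic transmission.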
Evolutionary mismatch
Evolutionary mismatch (also "mismatch theory" or "evolutionary trap") is the evolutionary biology concept that a previously advantageous trait may become maladaptive due to change in the environment, especially when change is rapid. This can occur in humans as well as in other animals. Environmental change leading to evolutionary mismatch can be broken down into two major categories: temporal (change of the existing environment over time, e.g. climate change) or spatial (placing organisms into a new environment, e.g. a population migrating). Since environmental change occurs naturally and constantly, there will certainly be examples of evolutionary mismatch over time. However, because large-scale natural environmental change – like a natural disaster – is rare, it is less often observed. Another, more prevalent kind of environmental change is anthropogenic (human-caused). In recent times, humans have had a large, rapid, and trackable impact on the environment, creating scenarios in which evolutionary mismatch is easier to observe. Because of the mechanism of evolution by natural selection, the environment ("nature") determines ("selects") which traits will persist in a population. Disadvantageous traits are therefore gradually weeded out over several generations as the population becomes more adapted to its environment. Any significant change in a population's traits that cannot be attributed to other factors (such as genetic drift and mutation) will be a response to a change in that population's environment; in other words, natural selection is inherently reactive. Shortly following an environmental change, traits that evolved in the previous environment, whether they were advantageous or neutral, persist for several generations in the new environment. Because evolution is gradual and environmental changes often occur very quickly on a geological scale, there is always a period of "catching up" as the population evolves to become adapted to the environment. It is this temporary period of "disequilibrium" that is referred to as mismatch. Mismatched traits are ultimately resolved in one of several possible ways: the organism may evolve such that the maladaptive trait is no longer expressed, the organism may decline and/or become extinct as a result of the disadvantageous trait, or the environment may change such that the trait is no longer selected against.

History
As evolutionary thought became more prevalent, scientists studied and attempted to explain the existence of disadvantageous traits, known as maladaptations, that are the basis of evolutionary mismatch. The theory of evolutionary mismatch began under the term evolutionary trap as early as the 1940s. In his 1942 book, evolutionary biologist Ernst Mayr described evolutionary traps as the phenomenon that occurs when a genetically uniform population suited to a single set of environmental conditions becomes susceptible to extinction from sudden environmental changes. Since then, key scientists such as Warren J. Gross and Edward O. Wilson have studied and identified numerous examples of evolutionary traps. The first occurrence of the term "evolutionary mismatch" may have been in a paper by Jack E. Riggs published in the Journal of Clinical Epidemiology in 1993. In the years that followed, the term evolutionary mismatch became widely used to describe biological maladaptations in a wide range of disciplines.
A coalition of modern scientists and community organizers assembled to found the Evolution Institute in 2008, and in 2011 a more recent synthesis of evolutionary mismatch theory was published in an article by Elisabeth Lloyd, David Sloan Wilson, and Elliott Sober. In 2018 a popular science book by evolutionary psychologists appeared on evolutionary mismatch and its implications for humans.

Mismatch in human evolution
Neolithic Revolution: transitional context
The Neolithic Revolution brought about significant evolutionary changes in humans, namely the transition from a hunter-gatherer lifestyle, in which humans foraged for food, to an agricultural lifestyle. This change occurred approximately 10,000–12,000 years ago. Humans began to domesticate both plants and animals, allowing for the maintenance of constant food resources. This transition quickly and dramatically changed the way that humans interact with the environment, with societies taking up practices of farming and animal husbandry. However, human bodies had evolved to be adapted to their previous foraging lifestyle. The slow pace of evolution in comparison with the very fast pace of human advancement allowed these adaptations to persist in an environment where they are no longer necessary. In some human societies that now function in a vastly different way from the hunter-gatherer lifestyle, these outdated adaptations lead to the presence of maladaptive, or mismatched, traits.

Obesity and diabetes
Human bodies are predisposed to maintain homeostasis, especially when storing energy as fat. This trait serves as the main basis for the "thrifty gene hypothesis", the idea that "feast-or-famine conditions during human evolutionary development naturally selected for people whose bodies were efficient in their use of food calories". Hunter-gatherers, who lived under environmental stress, benefited from this trait; it was uncertain when the next meal would come, and they spent most of their time performing high levels of physical activity. Those who consumed many calories would therefore store the extra energy as fat, which they could draw upon in times of hunger. Modern humans, however, now live in a world of more sedentary lifestyles and convenience foods. People sit more throughout their days, whether in their cars during rush hour or in their cubicles during full-time jobs, and less physical activity in general means fewer calories burned throughout the day. Human diets have also changed considerably over the 10,000 years since the advent of agriculture, with more processed foods that lack nutritional value and lead people to consume more sodium, sugar, and fat. These high-calorie, nutrient-deficient foods cause people to consume more calories than they burn. Fast food combined with decreased physical activity means that the "thrifty gene" that once benefited human predecessors now works against them, causing their bodies to store more fat and leading to higher levels of obesity in the population. Obesity is one consequence of mismatched genes. It is associated with "metabolic syndrome", a condition that also involves other health concerns, including insulin resistance, in which the body no longer responds to insulin secretion, so blood glucose levels cannot be lowered, which can lead to type 2 diabetes.

Osteoporosis
Another human disorder that can be explained by mismatch theory is the rise in osteoporosis in modern humans.
In advanced societies, many people, especially women, are remarkably susceptible to osteoporosis during aging. Fossil evidence suggests that this was not always the case, with bones from elderly hunter-gatherer women often showing no evidence of osteoporosis. Evolutionary biologists have posited that the increase in osteoporosis in modern Western populations is likely due to considerably more sedentary lifestyles. Women in hunter-gatherer societies were physically active both from a young age and well into their late-adult lives. This constant physical activity likely led to peak bone mass being considerably higher in hunter-gatherers than in modern-day humans. While the pattern of bone mass degradation during aging is purportedly the same for both hunter-gatherers and modern humans, the higher peak bone mass associated with more physical activity may have made hunter-gatherers less prone to osteoporosis during aging.

Hygiene hypothesis
The hygiene hypothesis, a concept initially theorized by immunologists and epidemiologists, has been shown by recent studies to have a strong connection with evolutionary mismatch. The hygiene hypothesis states that the profound increase in allergies, autoimmune diseases, and some other chronic inflammatory diseases is related to the reduced exposure of the immune system to antigens. Such reduced exposure is more common in industrialized countries and especially urban areas, where chronic inflammatory diseases are also more frequently seen. Recent analyses and studies have tied the hygiene hypothesis and evolutionary mismatch together. Some researchers suggest that the overly sterilized urban environment changes or depletes the composition and diversity of the microbiota. Such environmental conditions favor the development of chronic inflammatory diseases because human bodies have been selected to adapt to a pathogen-rich environment over their evolutionary history. For example, studies have shown that changes in our symbiont community can lead to disrupted immune homeostasis, which helps explain why antibiotic use in early childhood can result in higher asthma risk. Because the change or depletion of the microbiome is often associated with the hygiene hypothesis, the hypothesis is sometimes also called "biome depletion theory".

Human behavior
Behavioral examples of evolutionary mismatch theory include the abuse of dopaminergic pathways and the reward system. An action or behavior that stimulates the release of dopamine, a neurotransmitter known for generating a sense of pleasure, will likely be repeated, since the brain is programmed to continually seek such pleasure. In hunter-gatherer societies, this reward system was beneficial for survival and reproductive success. Now that there are fewer challenges to survival and reproduction, certain activities in the present environment (gambling, drug use, eating) exploit this system, leading to addictive behaviors.

Anxiety
Anxiety is another modern manifestation of evolutionary mismatch in humans. An immediate-return environment is one in which decisions made in the present produce immediate results. Prehistoric human brains evolved to suit this particular environment, producing reactions such as anxiety to solve short-term problems. For example, fear of a stalking predator causes a human to run away, immediately ensuring safety as the distance from the predator increases.
However, humans now live in a different, delayed-return environment, in which current decisions do not produce immediate results. The advancement of society has reduced the threat of external factors such as predators and lack of food or shelter; human problems that once centered on immediate survival have therefore shifted to how present choices will affect the quality of future survival. In sum, traits like anxiety have become outdated as the advancement of society has freed humans from constant threat and left them to worry about the future instead.

Work stress
Examples of evolutionary mismatch also occur in the modern workplace. Unlike our hunter-gatherer ancestors, who lived in small egalitarian societies, the modern workplace is large, complex, and hierarchical. Humans spend significant amounts of time interacting with strangers in conditions that are very different from those of our ancestral past. Hunter-gatherers do not separate work from their private lives; they have no bosses to be accountable to and no deadlines to adhere to. Our stress system reacts to immediate threats and opportunities. The modern workplace exploits evolved psychological mechanisms that are aimed at immediate survival or longer-term reproduction. These basic instincts misfire in the modern workplace, causing conflicts at work, burnout, job alienation and poor management practices.

Gambling
Two aspects of gambling make it an addictive activity: chance and risk. Chance gives gambling its novelty. When humans had to forage and hunt for food, novelty-seeking was advantageous, particularly for the diet. With the development of casinos, however, this trait of pursuing novelty has become disadvantageous. Risk assessment, the other behavioral trait applicable to gambling, was also beneficial to hunter-gatherers in the face of danger, but the types of risks hunter-gatherers had to assess were significantly different from, and more life-threatening than, the risks people now face. The attraction to gambling stems from the attraction to risk- and reward-related activity.

Drug addiction
Herbivores have created selective pressure for plants to possess specific molecules that deter plant consumption, such as nicotine, morphine, and cocaine. Plant-based drugs, however, have reinforcing and rewarding effects on the human neurological system, suggesting a "paradox of drug reward" in humans. Human behavioral evolutionary mismatch explains the contradiction between plant evolution and human drug use. Over the last 10,000 years, humans found the dopaminergic system, or reward system, particularly useful in optimizing Darwinian fitness. While drug use has been a common characteristic of past human populations, drug use involving potent substances and diverse intake methods is a relatively recent feature of society. Human ancestors lived in an environment that lacked drug use of this nature, so the reward system was primarily used in maximizing survival and reproductive success. In contrast, present-day humans live in a world where the current nature of drugs renders the reward system maladaptive. These drugs falsely trigger a fitness benefit in the reward system, leaving people susceptible to drug addiction. The modern dopaminergic system is left vulnerable by the changed accessibility and social perception of drugs.
Eating In the era of foraging for food, hunter-gatherers rarely knew where their next meal would come from. This food scarcity rewarded consumption of high energy meals in order to save excess energy as fat. Now that food is readily available, the neurological system that once helped people recognize the survival advantages of essential eating has now become disadvantageous as it promotes overeating. This has become especially dangerous after the rise of processed foods, as the popularity of foods that have unnaturally high levels of sugar and fat has significantly increased. Non-human examples Evolutionary mismatch can occur any time an organism is exposed to an environment that does not resemble the typical environment the organism adapted in. Due to human influences, such as global warming and habitat destruction, the environment is changing very rapidly for many organisms, leading to numerous cases of evolutionary mismatch. Examples with human influence Sea turtles and light pollution Female sea turtles create nests to lay their eggs by digging a pit on the beach, typically between the high tide line and dune, using their rear flippers. Consequently, within the first seven days of hatching, hatchling sea turtles must make the journey from the nest back into the ocean. This trip occurs predominantly at night in order to avoid predators and overheating. In order to orient themselves towards the ocean, the hatchlings depend on their eyes to turn towards the brightest direction. This is because the open horizon of the ocean, illuminated by celestial light, tends to be much brighter in a natural undeveloped beach than the dunes and vegetation. Studies propose two mechanisms of the eye for this phenomenon. Referred to as the "raster system", the theory is that sea turtles' eyes contain numerous light sensors which take in the overall brightness information of a general area and make a "measurement" of where the light is most intense. If the light sensors detect the most intense light on a hatchling's left side, the sea turtle would turn left. A similar proposal called the complex phototropotaxis system theorizes that the eyes contain light intensity comparators that take in detailed information of the intensity of light from all directions. Sea turtles are able to "know" that they are facing the brightest direction when the light intensity is balanced between both eyes. This method of finding the ocean is successful in natural beaches, but in developed beaches, the intense artificial lights from buildings, light houses, and even abandoned fires overwhelm the sea turtles and cause them to head towards the artificial light instead of the ocean. Scientists call this misorientation. Sea turtles can also become disoriented and circle around in the same place. Numerous cases show that misoriented hatchling sea turtles either die from dehydration, get consumed by a predator, or even burn to death in an abandoned fire. The direct impact of light pollution on the number of sea turtles has been too difficult to measure. However, this problem is exacerbated because all species of sea turtles are endangered. Other animals, including migratory birds and insects, are also victims to light pollution because they also depend on light intensity at night to properly orient themselves. Dodo bird and hunting The Dodo bird lived on a remote Island, Mauritius, in the absence of predators. Here, the Dodo evolved to lose its instinct for fear and the ability to fly. 
This allowed them to be easily hunted by Dutch sailors who arrived on the island in the late 16th century. The Dutch sailors also brought foreign animals to the island, such as monkeys and pigs, that ate the Dodo's eggs, which was detrimental to the population growth of the slow-breeding bird. Their fearlessness made them easy targets and their inability to fly gave them no opportunity to evade danger. Thus, they were easily driven to extinction within a century of their discovery. The Dodo's inability to fly was once beneficial for the bird because it conserved energy. The Dodo conserved more energy than birds able to fly, owing to its smaller pectoral muscles; smaller muscle sizes are linked to lower rates of maintenance metabolism, which in turn conserved energy for the Dodo. Lacking an instinct for fear was another mechanism through which the Dodo conserved energy, because it never had to expend energy on a stress response. Both mechanisms of energy conservation were once advantageous because they enabled the Dodo to carry out its activities with minimal energy expenditure. However, they proved disadvantageous when the island was invaded, leaving the birds defenseless against the new dangers that humans brought.

Peppered moths during the English Industrial Revolution
Before the English Industrial Revolution of the late 18th and early 19th centuries, the most common phenotypic color of the peppered moth was white with black speckles. When higher air pollution in urban regions killed the lichens adhering to trees and exposed their darker bark, the light-colored moths stood out more to predators. Natural selection began favoring a previously rare darker variety of the peppered moth, referred to as "carbonaria", because the lighter phenotype had become mismatched to its environment. Carbonaria frequencies rose above 90% in some areas of England until efforts in the late 1900s to reduce air pollution caused a resurgence of epiphytes, including lichens, that again lightened the color of trees. Under these conditions the coloring of the carbonaria reverted from an advantage to a disadvantage, and that phenotype became mismatched to its environment.

Giant jewel beetle and beer bottles
Evolutionary mismatch can also be seen among insects, for example in the giant jewel beetle (Julodimorpha bakewelli). The male jewel beetle has evolved to be attracted to features of the female (size, color, and texture) that allow him to identify her as he flies across the desert. However, these physical traits are also manifested in some beer bottles, so males often find beer bottles more attractive than female jewel beetles because of the bottles' large size and attractive coloring. Beer bottles are often discarded by humans in the Australian desert in which the jewel beetle thrives, creating an environment where male jewel beetles prefer to mate with beer bottles instead of females. This situation is extremely disadvantageous because it reduces the reproductive output of the jewel beetle, as fewer beetles are mating. It can be considered an evolutionary mismatch, as a habit that evolved to aid reproduction has become disadvantageous due to the littering of beer bottles, an anthropogenic cause.
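The peppered moth reversal, and the "catching-up" lag described earlier in the article, can be made concrete with a toy one-locus selection model. The sketch below is illustrative only: the fitness values, the starting frequency, and the generation at which the environment flips (soot darkening the trees, then cleaner air lightening them again) are arbitrary assumptions, not estimates fitted to the peppered moth data.

```python
def next_freq(p, w_variant, w_other):
    """Standard one-locus selection update: p' = p * w_variant / mean fitness."""
    w_bar = p * w_variant + (1 - p) * w_other
    return p * w_variant / w_bar

p = 0.01                                  # initial frequency of the dark (carbonaria-like) variant
for gen in range(300):
    if gen < 150:
        w_dark, w_light = 1.05, 1.00      # sooty environment: dark variant favored
    else:
        w_dark, w_light = 0.95, 1.00      # cleaner environment: dark variant now mismatched
    p = next_freq(p, w_dark, w_light)
    if gen in (149, 175, 225, 299):
        print(f"generation {gen:3d}: dark-variant frequency = {p:.3f}")
```

Even with a constant 5% fitness cost after the flip, the formerly favored variant remains common for dozens of generations; that lag is the temporary "disequilibrium" period that mismatch theory highlights.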
Examples without human influence Information cascades between birds Normally, gaining information from watching other organisms allows the observer to make good decisions without spending effort. More specifically, birds often observe the behavior of other organisms to gain valuable information, such as the presence of predators, good breeding sites, and optimal feeding spots. Although this allows the observer to spend less effort gathering information, it can also lead to bad decisions if the information gained from observing is unreliable. In the case of the nutmeg mannikins, the observer can minimize the time spent looking for an optimal feeder and maximize its feeding time by watching where other nutmeg mannikins feed. However, this relies on the assumption that the observed mannikins also had reliable information that indicated the feeding spot was an ideal one. This behavior can become maladaptive when prioritizing information gained from watching others leads to information cascades, where birds follow the rest of the crowd even though prior experience may have suggested that the decision of the crowd is a poor one. For instance, if a nutmeg mannikin sees enough mannikins feeding at a feeder, nutmeg mannikins have been shown to choose that feeder even if their personal experience indicates that the feeder is a poor one. House finches and the introduction of the MG disease Evolutionary mismatch occurs in house finches when they are exposed to infectious individuals. Male house finches tend to feed in close proximity to other finches that are sick or diseased, because sick individuals are less competitive than usual, in turn making the healthy male more likely to win an aggressive interaction if it happens. To make it less likely to lose a social confrontation, healthy finches are inclined to forage near individuals that are lethargic or listless due to disease. However, this disposition has created an evolutionary trap for the finches after the introduction of the MG disease in 1994. Since this disease is infectious, healthy finches will be in danger of contraction if they are in the vicinity of individuals that have previously developed the disease. The relatively short duration of the disease's introduction has caused an inability for the finches to adapt quickly enough to avoid nearing sick individuals, which ultimately results in the mismatch between their behavior and the changing environment. Exploitation of earthworm's reaction to vibrations Worm charming is a practice used by people to attract earthworms out of the ground by driving in a wooden stake to vibrate the soil. This activity is commonly performed to collect fishing bait and as a competitive sport. Worms that sense the vibrations rise to the surface. Research shows that humans are actually taking advantage of a trait that worms adapted to avoid hungry burrowing moles which prey on the worms. This type of evolutionary trap, where an originally beneficial trait is exploited in order to catch prey, was coined the "rare enemy effect" by Richard Dawkins, an English evolutionary biologist. This trait of worms has been exploited not only by humans, but by other animals. Herring gulls and wood turtles have been observed to also stamp on the ground to drive the worms up to the surface and consume them. 
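The information cascade described above for nutmeg mannikins can be illustrated with a toy sequential-choice model. The sketch below is a simplification for illustration only; the signal accuracy, the majority-versus-signal rule, and the feeder labels are assumptions rather than parameters from the cited studies. Each bird combines its own noisy private information with the choices it has already observed, and once an early majority forms, later birds follow it even when their private signals disagree.

```python
import random

def cascade(n_birds, signal_accuracy=0.7, good_feeder="A"):
    """Sequential choice: each bird weighs the majority of earlier choices
    against its own private signal, following the crowd when the crowd is decisive."""
    choices = []
    for _ in range(n_birds):
        # private signal: points to the good feeder with probability signal_accuracy
        signal = good_feeder if random.random() < signal_accuracy else "B"
        a_votes = choices.count("A")
        b_votes = choices.count("B")
        if abs(a_votes - b_votes) > 1:        # crowd outweighs a single private signal
            choice = "A" if a_votes > b_votes else "B"
        else:                                  # otherwise trust the private signal
            choice = signal
        choices.append(choice)
    return choices

random.seed(2)
result = cascade(20)
print("".join(result), "-> fraction at the good feeder:", result.count("A") / len(result))
```

Runs that happen to begin with a couple of wrong choices can lock the whole flock onto the poor feeder, which is the maladaptive outcome the passage describes.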
See also Evolution Evolutionary biology Evolutionary trap Fisher's geometric model Human impact on the environment Natural environment Person–environment fit Rate of evolution Evolutionary anachronism References Evolutionary biology
Biomass
Biomass is a term used in several contexts: in the context of ecology it means living organisms, and in the context of bioenergy it means matter from recently living (but now dead) organisms. In the latter context, there are variations in how biomass is defined, e.g., only from plants, from plants and algae, from plants and animals. The vast majority of biomass used for bioenergy does come from plants. Bioenergy is a type of renewable energy that the bioenergy industry claims has the potential to assist with climate change mitigation. Uses in different contexts Ecology Biomass (ecology), the mass of living biological organisms in a given area or ecosystem at a given time. This can be the biomass of particular species or the biomass of a particular community or habitat. Energy Biomass (energy), biomass used for energy production or in other words: biological mass used as a renewable energy source (usually produced through agriculture, forestry or aquaculture methods) Bioenergy, energy sources derived from biological material Solid fuel, forms of bioenergy that are solid Biofuel Energy crops Biotechnology Biomass is also used as a term for the mass of microorganisms that are used to produce industrial products like enzymes and medicines. Bioproducts Examples of emerging bioproducts or biobased products include biofuels, bioenergy, biochar, starch-based and cellulose-based ethanol, bio-based adhesives, biochemicals, bioplastics, etc. Biological wastewater treatment In biological wastewater treatment processes, such as the activated sludge process, the term "biomass" is used to denote the mass of bacteria and other microorganisms that break down pollutants in wastewater. The biomass forms part of sewage sludge. Others Biomass (satellite) - an Earth observation satellite Waste biomass fibre - potential source for cleaner production of textile References
Nanobiotechnology
Nanobiotechnology, bionanotechnology, and nanobiology are terms that refer to the intersection of nanotechnology and biology. Given that the subject is one that has only emerged very recently, bionanotechnology and nanobiotechnology serve as blanket terms for various related technologies. This discipline helps to indicate the merger of biological research with various fields of nanotechnology. Concepts that are enhanced through nanobiology include: nanodevices (such as biological machines), nanoparticles, and nanoscale phenomena that occurs within the discipline of nanotechnology. This technical approach to biology allows scientists to imagine and create systems that can be used for biological research. Biologically inspired nanotechnology uses biological systems as the inspirations for technologies not yet created. However, as with nanotechnology and biotechnology, bionanotechnology does have many potential ethical issues associated with it. The most important objectives that are frequently found in nanobiology involve applying nanotools to relevant medical/biological problems and refining these applications. Developing new tools, such as peptoid nanosheets, for medical and biological purposes is another primary objective in nanotechnology. New nanotools are often made by refining the applications of the nanotools that are already being used. The imaging of native biomolecules, biological membranes, and tissues is also a major topic for nanobiology researchers. Other topics concerning nanobiology include the use of cantilever array sensors and the application of nanophotonics for manipulating molecular processes in living cells. Recently, the use of microorganisms to synthesize functional nanoparticles has been of great interest. Microorganisms can change the oxidation state of metals. These microbial processes have opened up new opportunities for us to explore novel applications, for example, the biosynthesis of metal nanomaterials. In contrast to chemical and physical methods, microbial processes for synthesizing nanomaterials can be achieved in aqueous phase under gentle and environmentally benign conditions. This approach has become an attractive focus in current green bionanotechnology research towards sustainable development. Terminology The terms are often used interchangeably. When a distinction is intended, though, it is based on whether the focus is on applying biological ideas or on studying biology with nanotechnology. Bionanotechnology generally refers to the study of how the goals of nanotechnology can be guided by studying how biological "machines" work and adapting these biological motifs into improving existing nanotechnologies or creating new ones. Nanobiotechnology, on the other hand, refers to the ways that nanotechnology is used to create devices to study biological systems. In other words, nanobiotechnology is essentially miniaturized biotechnology, whereas bionanotechnology is a specific application of nanotechnology. For example, DNA nanotechnology or cellular engineering would be classified as bionanotechnology because they involve working with biomolecules on the nanoscale. Conversely, many new medical technologies involving nanoparticles as delivery systems or as sensors would be examples of nanobiotechnology since they involve using nanotechnology to advance the goals of biology. The definitions enumerated above will be utilized whenever a distinction between nanobio and bionano is made in this article. 
However, given the overlapping usage of the terms in modern parlance, individual technologies may need to be evaluated to determine which term is more fitting. As such, they are best discussed in parallel.

Concepts
Most of the scientific concepts in bionanotechnology are derived from other fields. Biochemical principles that are used to understand the material properties of biological systems are central in bionanotechnology because those same principles are used to create new technologies. Material properties and applications studied in bionanoscience include mechanical properties (e.g. deformation, adhesion, failure), electrical/electronic (e.g. electromechanical stimulation, capacitors, energy storage/batteries), optical (e.g. absorption, luminescence, photochemistry), thermal (e.g. thermomutability, thermal management), biological (e.g. how cells interact with nanomaterials, molecular flaws/defects, biosensing, biological mechanisms such as mechanosensation), the nanoscience of disease (e.g. genetic disease, cancer, organ/tissue failure), as well as biological computing (e.g. DNA computing) and agriculture (targeted delivery of pesticides, hormones and fertilizers). The impact of bionanoscience, achieved through structural and mechanistic analyses of biological processes at the nanoscale, is their translation into synthetic and technological applications through nanotechnology. Nanobiotechnology takes most of its fundamentals from nanotechnology. Most of the devices designed for nano-biotechnological use are directly based on other existing nanotechnologies. Nanobiotechnology is often used to describe the overlapping multidisciplinary activities associated with biosensors, particularly where photonics, chemistry, biology, biophysics, nanomedicine, and engineering converge. Measurement in biology using waveguide techniques, such as dual-polarization interferometry, is another example.

Applications
Applications of bionanotechnology are extremely widespread. Insofar as the distinction holds, nanobiotechnology is much more commonplace in that it simply provides more tools for the study of biology. Bionanotechnology, on the other hand, promises to recreate biological mechanisms and pathways in a form that is useful in other ways.

Nanomedicine
Nanomedicine is a field of medical science whose applications are increasing.

Nanobots
The field includes nanorobots and biological machines, which constitute a very useful tool for developing this area of knowledge. In recent years, researchers have made many improvements in the devices and systems required to develop functional nanorobots, such as motion and magnetic guidance. This opens a new way of treating diseases such as cancer: thanks to nanorobots, the side effects of chemotherapy could be controlled, reduced, and even eliminated. Some years from now, cancer patients could be offered an alternative to chemotherapy, which causes secondary effects such as hair loss, fatigue and nausea by killing not only cancerous cells but also healthy ones. Nanobots could be used for various therapies, surgery, diagnosis, and medical imaging – such as via targeted drug delivery to the brain (similar to nanoparticles) and other sites. Programmability for combinations of features such as "tissue penetration, site-targeting, stimuli responsiveness, and cargo-loading" makes such nanobots promising candidates for "precision medicine".
At a clinical level, cancer treatment with nanomedicine would consist of supplying nanorobots to the patient through an injection; the nanorobots would seek out cancerous cells while leaving healthy ones untouched. Patients treated with nanomedicine would thereby not notice the presence of these nanomachines inside them; the only noticeable thing would be the progressive improvement of their health. Nanobiotechnology may also be useful for medicine formulation: "precision antibiotics" have been proposed that make use of bacteriocin mechanisms for targeted antibiotics.

Nanoparticles
Nanoparticles are already widely used in medicine. Their applications overlap with those of nanobots, and in some cases it may be difficult to distinguish between them. They can be used for diagnosis and targeted drug delivery, encapsulating medicine. Some can be manipulated using magnetic fields; for example, remote-controlled hormone release has been achieved experimentally this way. One example of an advanced application under development is "Trojan horse" designer nanoparticles that make blood cells eat away – from the inside out – portions of the atherosclerotic plaque that causes heart attacks, currently the most common cause of death globally.

Artificial cells
Artificial cells such as synthetic red blood cells, which have all or many of the natural cells' known broad natural properties and abilities, could be used to load functional cargos such as hemoglobin, drugs, magnetic nanoparticles, and ATP biosensors, which may enable additional non-native functionalities.

Other
Nanofibers that mimic the matrix around cells and contain molecules engineered to wiggle were shown to be a potential therapy for spinal cord injury in mice. Technically, gene therapy can also be considered a form of nanobiotechnology, or to be moving towards it. An example of genome-editing-related developments that is more clearly nanobiotechnology than conventional gene therapies is the synthetic fabrication of functional materials in tissues. Researchers made C. elegans worms synthesize, fabricate, and assemble bioelectronic materials in their brain cells. They enabled modulation of membrane properties in specific neuron populations and manipulation of behavior in the living animals, which might be useful in the study and treatment of diseases such as multiple sclerosis in particular, and which demonstrates the viability of such synthetic in vivo fabrication. Moreover, such genetically modified neurons may enable connecting external components, such as prosthetic limbs, to nerves. Nanosensors based on e.g. nanotubes, nanowires, cantilevers, or atomic force microscopy could be applied in diagnostic devices and sensors.

Nanobiotechnology
Nanobiotechnology (sometimes referred to as nanobiology) in medicine may be best described as helping modern medicine progress from treating symptoms to generating cures and regenerating biological tissues. Three American patients have received whole cultured bladders with the help of doctors who use nanobiology techniques in their practice. It has also been demonstrated in animal studies that a uterus can be grown outside the body and then placed in the body in order to produce a baby. Stem cell treatments have been used to fix diseases that are found in the human heart and are in clinical trials in the United States. There is also funding for research into allowing people to have new limbs without having to resort to prostheses.
Artificial proteins might also become available to manufacture without the need for harsh chemicals and expensive machines. It has even been surmised that by the year 2055, computers may be made out of biochemicals and organic salts.

In vivo biosensors
Another example of current nanobiotechnological research involves nanospheres coated with fluorescent polymers. Researchers are seeking to design polymers whose fluorescence is quenched when they encounter specific molecules. Different polymers would detect different metabolites. The polymer-coated spheres could become part of new biological assays, and the technology might someday lead to particles that could be introduced into the human body to track down metabolites associated with tumors and other health problems. Another example, from a different perspective, would be evaluation and therapy at the nanoscopic level, i.e. the treatment of nanobacteria (25-200 nm sized), as is done by NanoBiotech Pharma.

In vitro biosensors
"Nanoantennas" made out of DNA – a novel type of nano-scale optical antenna – can be attached to proteins and produce a signal via fluorescence when these perform their biological functions, in particular through their distinct conformational changes. This could be used for further nanobiotechnology such as various types of nanomachines, to develop new drugs, for bioresearch and for new avenues in biochemistry.

Energy
It may also be useful in sustainable energy: in 2022, researchers reported 3D-printed nano-"skyscraper" electrodes (nanotechnology; albeit micro-scale, the pillars had nano-scale porosity due to printed metal nanoparticle inks) that house cyanobacteria for extracting substantially more sustainable bioenergy from their photosynthesis (biotechnology) than in earlier studies.

Nanobiology
While nanobiology is in its infancy, there are many promising methods that may rely on nanobiology in the future. Biological systems are inherently nano in scale; nanoscience must merge with biology in order to deliver biomacromolecules and molecular machines that are similar to nature. Controlling and mimicking the devices and processes that are constructed from molecules is a tremendous challenge for the converging disciplines of nanobiotechnology. All living things, including humans, can be considered to be nanofoundries. Natural evolution has optimized the "natural" form of nanobiology over millions of years. In the 21st century, humans have developed the technology to artificially tap into nanobiology. This process is best described as "organic merging with synthetic". Colonies of live neurons can live together on a biochip device, according to research from Gunther Gross at the University of North Texas. Self-assembling nanotubes have the ability to be used as a structural system; composed together with rhodopsins, they would facilitate the optical computing process and help with the storage of biological materials. DNA (as the software for all living things) can be used as a structural proteomic system – a logical component for molecular computing. Ned Seeman, a researcher at New York University, along with other researchers, is currently researching similar concepts.

Bionanotechnology
Distinction from nanobiotechnology
Broadly, bionanotechnology can be distinguished from nanobiotechnology in that it refers to nanotechnology that makes use of biological materials or components, although it could in principle, and sometimes does, use abiotic components instead.
It plays a smaller role in medicine (which is concerned with biological organisms). It makes use of natural or biomimetic systems or elements to build unique nanoscale structures and various applications that need not be directly associated with biology, rather than mostly biological applications. In contrast, nanobiotechnology uses biotechnology miniaturized to nanometer size or incorporates nanomolecules into biological systems. In some future applications, both fields could be merged.

DNA
DNA nanotechnology is one important example of bionanotechnology. The utilization of the inherent properties of nucleic acids like DNA to create useful materials or devices – such as biosensors – is a promising area of modern research. DNA digital data storage refers mostly to the use of synthesized but otherwise conventional strands of DNA to store digital data, which could be useful e.g. for high-density long-term data storage that is not accessed or written to frequently, as an alternative to 5D optical data storage or for use in combination with other nanobiotechnology.

Membrane materials
Another important area of research involves taking advantage of membrane properties to generate synthetic membranes. Proteins that self-assemble to generate functional materials could be used as a novel approach for the large-scale production of programmable nanomaterials. One example is the development of amyloids found in bacterial biofilms as engineered nanomaterials that can be programmed genetically to have different properties.

Lipid nanotechnology
Lipid nanotechnology is another major area of research in bionanotechnology, in which physico-chemical properties of lipids, such as their antifouling and self-assembly, are exploited to build nanodevices with applications in medicine and engineering. Lipid nanotechnology approaches can also be used to develop next-generation emulsion methods to maximize both the absorption of fat-soluble nutrients and the ability to incorporate them into popular beverages.

Computing
"Memristors" fabricated from protein nanowires of the bacterium Geobacter sulfurreducens, which function at substantially lower voltages than previously described ones, may allow the construction of artificial neurons that function at the voltages of biological action potentials. The nanowires have a range of advantages over silicon nanowires, and the memristors may be used to directly process biosensing signals, for neuromorphic computing (see also: wetware computer) and/or direct communication with biological neurons.

Other
Protein folding studies provide a third important avenue of research, but one that has been largely inhibited by our inability to predict protein folding with a sufficiently high degree of accuracy. Given the myriad uses that biological systems have for proteins, though, research into understanding protein folding is of high importance and could prove fruitful for bionanotechnology in the future.

Agriculture
In the agriculture industry, engineered nanoparticles have been serving as nanocarriers containing herbicides, chemicals, or genes, which target particular plant parts to release their content. Nanocapsules containing herbicides have previously been reported to effectively penetrate cuticles and tissues, allowing the slow and constant release of the active substances. Likewise, other literature describes nano-encapsulated slow release of fertilizers, which has become a trend for reducing fertilizer consumption and minimizing environmental pollution through precision farming.
These are only a few examples from numerous research works that might open up exciting opportunities for nanobiotechnology applications in agriculture. However, the compatibility of this kind of engineered nanoparticle with plants should be assessed before it is employed in agricultural practice. A thorough literature survey shows that only limited reliable information is available to explain the biological consequences of engineered nanoparticles for treated plants. Certain reports underline the phytotoxicity of engineered nanoparticles of various origins to plants, depending on their concentrations and sizes. At the same time, however, an equal number of studies report positive outcomes, with nanoparticles promoting growth in treated plants. In particular, compared with other nanoparticles, silver- and gold-nanoparticle-based applications have elicited beneficial results in various plant species with little or no toxicity. Leaves of asparagus treated with silver nanoparticles (AgNPs) showed increased ascorbate and chlorophyll content. Similarly, AgNP-treated common bean and corn showed increased shoot and root length, leaf surface area, and chlorophyll, carbohydrate and protein contents, as reported earlier. Gold nanoparticles have been used to induce growth and seed yield in Brassica juncea. Nanobiotechnology is also used in tissue culture: the administration of micronutrients at the level of individual atoms and molecules allows for the stimulation of various stages of development, initiation of cell division, and differentiation in the production of plant material, which must be qualitatively uniform and genetically homogeneous. The use of nanoparticles of zinc (ZnO NPs) and silver (Ag NPs) compounds gives very good results in the micropropagation of chrysanthemums using the method of single-node shoot fragments.

Tools
This field relies on a variety of research methods, including experimental tools (e.g. imaging, characterization via AFM/optical tweezers, etc.), x-ray diffraction based tools, synthesis via self-assembly, characterization of self-assembly (using e.g. MP-SPR, DPI, recombinant DNA methods, etc.), theory (e.g. statistical mechanics, nanomechanics, etc.), as well as computational approaches (bottom-up multi-scale simulation, supercomputing).

Risk management
As of 2009, the risks of nanobiotechnologies are poorly understood, and in the U.S. there is no solid national consensus on what kind of regulatory policy principles should be followed. For example, nanobiotechnologies may have hard-to-control effects on the environment, ecosystems and human health. The metal-based nanoparticles used for biomedical purposes are extremely enticing in various applications due to their distinctive physicochemical characteristics, which allow them to influence cellular processes at the biological level. The fact that metal-based nanoparticles have high surface-to-volume ratios makes them reactive or catalytic. Due to their small size, they are also more likely to penetrate biological barriers such as cell membranes and cause cellular dysfunction in living organisms. Indeed, the high toxicity of some transition metals can make it challenging to use mixed-oxide NPs in biomedical applications. This reactivity can trigger adverse effects on organisms, causing oxidative stress, stimulating the formation of ROS, perturbing mitochondria, and modulating cellular functions, with fatal results in some cases.
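The surface-to-volume scaling mentioned above is easy to quantify. The short sketch below is purely illustrative (the particle radii are arbitrary example values): for a sphere the surface-to-volume ratio is 3/r, so the ratio grows in direct proportion as the particle shrinks, which is one reason nanoparticles are so much more reactive per unit mass than bulk material.

```python
import math

def surface_to_volume(radius_m):
    """Surface-to-volume ratio of a sphere: (4*pi*r^2) / ((4/3)*pi*r^3) = 3/r."""
    area = 4 * math.pi * radius_m**2
    volume = (4 / 3) * math.pi * radius_m**3
    return area / volume

# Example radii (assumed for illustration): a 1 micrometre particle vs a 10 nm nanoparticle
for label, r in [("1 um particle", 1e-6), ("10 nm nanoparticle", 10e-9)]:
    print(f"{label}: S/V = {surface_to_volume(r):.2e} per metre  (check 3/r = {3 / r:.2e})")
```

Shrinking the radius by a factor of 100, as in this example, raises the surface-to-volume ratio by the same factor of 100, putting far more of the particle's atoms at a surface where they can react with, or be taken up by, biological material.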
Bonin notes that "Nanotechnology is not a specific determinate homogenous entity, but a collection of diverse capabilities and applications" and that nanobiotechnology research and development is – as one of many fields – affected by dual-use problems. See also Biomimicry Colloidal gold Genome editing (bacteria, (micro-borgs)) Gold nanoparticle Nanobiomechanics Nanoparticle–biomolecule conjugate Nanosubmarine Nanozymes References External links What is Bionanotechnology?—a video introduction to the field Nanobiotechnology in Orthopaedic Nanotechnology Biotechnology
Structure
A structure is an arrangement and organization of interrelated elements in a material object or system, or the object or system so organized. Material structures include man-made objects such as buildings and machines and natural objects such as biological organisms, minerals and chemicals. Abstract structures include data structures in computer science and musical form. Types of structure include a hierarchy (a cascade of one-to-many relationships), a network featuring many-to-many links, or a lattice featuring connections between components that are neighbors in space. Load-bearing Buildings, aircraft, skeletons, anthills, beaver dams, bridges and salt domes are all examples of load-bearing structures. The results of construction are divided into buildings and non-building structures, and make up the infrastructure of a human society. Built structures are broadly divided by their varying design approaches and standards, into categories including building structures, architectural structures, civil engineering structures and mechanical structures. The effects of loads on physical structures are determined through structural analysis, which is one of the tasks of structural engineering. The structural elements can be classified as one-dimensional (ropes, struts, beams, arches), two-dimensional (membranes, plates, slab, shells, vaults), or three-dimensional (solid masses). Three-dimensional elements were the main option available to early structures such as Chichen Itza. A one-dimensional element has one dimension much larger than the other two, so the other dimensions can be neglected in calculations; however, the ratio of the smaller dimensions and the composition can determine the flexural and compressive stiffness of the element. Two-dimensional elements with a thin third dimension have little of either but can resist biaxial traction. The structure elements are combined in structural systems. The majority of everyday load-bearing structures are section-active structures like frames, which are primarily composed of one-dimensional (bending) structures. Other types are Vector-active structures such as trusses, surface-active structures such as shells and folded plates, form-active structures such as cable or membrane structures, and hybrid structures. Load-bearing biological structures such as bones, teeth, shells, and tendons derive their strength from a multilevel hierarchy of structures employing biominerals and proteins, at the bottom of which are collagen fibrils. Biological In biology, one of the properties of life is its highly ordered structure, which can be observed at multiple levels such as in cells, tissues, organs, and organisms. In another context, structure can also observed in macromolecules, particularly proteins and nucleic acids. The function of these molecules is determined by their shape as well as their composition, and their structure has multiple levels. Protein structure has a four-level hierarchy. The primary structure is the sequence of amino acids that make it up. It has a peptide backbone made up of a repeated sequence of a nitrogen and two carbon atoms. The secondary structure consists of repeated patterns determined by hydrogen bonding. The two basic types are the α-helix and the β-pleated sheet. The tertiary structure is a back and forth bending of the polypeptide chain, and the quaternary structure is the way that tertiary units come together and interact. Structural biology is concerned with biomolecular structure of macromolecules. 
Chemical
Chemical structure refers to both molecular geometry and electronic structure. The structure can be represented by a variety of diagrams called structural formulas. Lewis structures use a dot notation to represent the valence electrons of an atom; these are the electrons that determine the role of the atom in chemical reactions. Bonds between atoms can be represented by lines, with one line for each pair of electrons that is shared. In a simplified version of such a diagram, called a skeletal formula, only carbon-carbon bonds and functional groups are shown. Atoms in a crystal have a structure that involves repetition of a basic unit called a unit cell. The atoms can be modeled as points on a lattice, and one can explore the effect of symmetry operations that include rotations about a point, reflections about symmetry planes, and translations (movements of all the points by the same amount). Each crystal has a group of such operations that map it onto itself, called its space group; there are 230 possible space groups. By Neumann's law, the symmetry of a crystal determines what physical properties, including piezoelectricity and ferromagnetism, the crystal can have.

Mathematical

Musical
A large part of musical analysis involves identifying and interpreting the structure of musical works. Structure can be found at the level of part of a work, the entire work, or a group of works. Elements of music such as pitch, duration and timbre combine into small elements like motifs and phrases, and these in turn combine in larger structures. Not all music (for example, that of John Cage) has a hierarchical organization, but hierarchy makes it easier for a listener to understand and remember the music. In analogy to linguistic terminology, motifs and phrases can be combined into complete musical ideas such as sentences; a larger form is known as the period. One such form that was widely used between 1600 and 1900 has two phrases, an antecedent and a consequent, with a half cadence in the middle and a full cadence at the end providing punctuation. On a larger scale are single-movement forms such as the sonata form and the contrapuntal form, and multi-movement forms such as the symphony.

Social
A social structure is a pattern of relationships: the social organization of individuals in various life situations. Structure applies to people in the sense that a society is a system organized by a characteristic pattern of relationships, known as the social organization of the group. Sociologists have studied the changing structure of these groups. Structure and agency are two opposed perspectives on human behaviour, and the debate surrounding their influence on human thought is one of the central issues in sociology. In this context, agency refers to the individual human capacity to act independently and make free choices, while structure refers to factors such as social class, religion, gender, ethnicity, and customs that seem to limit or influence individual opportunities.

Data
In computer science, a data structure is a way of organizing information in a computer so that it can be used efficiently. Data structures are built out of two basic types: an array has an index that can be used for immediate access to any data item (some programming languages require the array size to be fixed when it is created), while a linked list can be reorganized, grown or shrunk, but its elements must be accessed through pointers that link them together in a particular order.
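A minimal illustration of the two basic types just described, sketched in Python (the built-in list stands in for an index-addressable array, and the linked list is hand-rolled; the Node class and variable names are invented for this example):

```python
# Array-like structure: the index gives immediate (O(1)) access to any element.
scores = [72, 95, 88]
print(scores[1])                      # direct access by index -> 95

# Linked list: each node points to the next; access requires following pointers.
class Node:
    def __init__(self, value, next_node=None):
        self.value = value
        self.next = next_node

head = Node(72, Node(95, Node(88)))   # build the list 72 -> 95 -> 88

# To reach the second element we must traverse from the head (O(n) in general).
second = head.next
print(second.value)                   # -> 95

# But growing the list at the front is cheap: just repoint the head, no shifting needed.
head = Node(60, head)                 # list is now 60 -> 72 -> 95 -> 88
```

The trade-off shown here, constant-time indexing versus cheap restructuring, is exactly what the more elaborate structures built from these two primitives negotiate.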
Out of these, any number of other data structures can be created, such as stacks, queues, trees and hash tables. In solving a problem, a data structure is generally an integral part of the algorithm. In modern programming style, algorithms and data structures are encapsulated together in an abstract data type. Software Software architecture consists of the specific choices made among the possible alternatives within a framework. For example, a framework might require a database, and the architecture would specify the type and manufacturer of the database. The structure of software is the way in which it is partitioned into interrelated components. A key structural issue is minimizing dependencies between these components, which makes it possible to change one component without requiring changes in others. The purpose of structure is to optimise for qualities such as brevity, readability, traceability, isolation and encapsulation, maintainability, extensibility, performance and efficiency; it is expressed in choices such as the programming language, the organization of code into functions and libraries, the build system, the way the system evolves, and diagrams for flow logic and design. Structural elements reflect the requirements of the application: for example, if the system requires high fault tolerance, then a redundant structure is needed so that if a component fails it has backups. High redundancy is an essential part of the design of several systems in the Space Shuttle. Logical As a branch of philosophy, logic is concerned with distinguishing good arguments from poor ones. A chief concern is with the structure of arguments. An argument consists of one or more premises from which a conclusion is inferred. The steps in this inference can be expressed in a formal way and their structure analyzed. Two basic types of inference are deduction and induction. In a valid deduction, the conclusion necessarily follows from the premises, regardless of whether they are true or not. An invalid deduction contains some error in the analysis. An inductive argument claims that if the premises are true, the conclusion is likely. See also Abstract structure Mathematical structure Structural geology Structure (mathematical logic) Structuralism (philosophy of science) References Further reading External links
Biosphere
The biosphere, also called the ecosphere, is the worldwide sum of all ecosystems. It can also be termed the zone of life on Earth. The biosphere (which is technically a spherical shell) is virtually a closed system with regard to matter, with minimal inputs and outputs. Regarding energy, it is an open system, with photosynthesis capturing solar energy at a rate of around 100 terawatts. By the most general biophysiological definition, the biosphere is the global ecological system integrating all living beings and their relationships, including their interaction with the elements of the lithosphere, cryosphere, hydrosphere, and atmosphere. The biosphere is postulated to have evolved, beginning with a process of biopoiesis (life created naturally from matter, such as simple organic compounds) or biogenesis (life created from living matter), at least some 3.5 billion years ago. In a general sense, biospheres are any closed, self-regulating systems containing ecosystems. This includes artificial biospheres such as Biosphere 2 and BIOS-3 (described below), and potentially ones on other planets or moons. Origin and use of the term The term "biosphere" was coined in 1875 by geologist Eduard Suess, who defined it as the place on Earth's surface where life dwells. While the concept has a geological origin, it is an indication of the effect of both Charles Darwin and Matthew F. Maury on the Earth sciences. The biosphere's ecological context comes from the 1920s (see Vladimir I. Vernadsky), preceding the 1935 introduction of the term "ecosystem" by Sir Arthur Tansley (see ecology history). Vernadsky defined ecology as the science of the biosphere. It is an interdisciplinary concept for integrating astronomy, geophysics, meteorology, biogeography, evolution, geology, geochemistry, hydrology and, generally speaking, all life and Earth sciences. Narrow definition Geochemists define the biosphere as the total sum of living organisms (the "biomass" or "biota" as referred to by biologists and ecologists). In this sense, the biosphere is but one of four separate components of the geochemical model, the other three being geosphere, hydrosphere, and atmosphere. When these four component spheres are combined into one system, it is known as the ecosphere. This term was coined during the 1960s and encompasses both biological and physical components of the planet. The Second International Conference on Closed Life Systems defined biospherics as the science and technology of analogs and models of Earth's biosphere; i.e., artificial Earth-like biospheres. Others may include the creation of artificial non-Earth biospheres—for example, human-centered biospheres or a native Martian biosphere—as part of the topic of biospherics. Earth's biosphere Overview Currently, the total number of living cells on the Earth is estimated at 10^30; the total number since the beginning of Earth at 10^40; and the total number over the entire time of a habitable planet Earth at 10^41. This is much larger than the estimated total number of stars (and Earth-like planets) in the observable universe, about 10^24, a number which is more than all the grains of beach sand on planet Earth; but less than the estimated total number of atoms in the observable universe, about 10^82, and the estimated total number of stars in an inflationary universe (observed and unobserved), about 10^100.
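For a sense of scale for the roughly 100 terawatts quoted above, a one-line unit conversion gives the energy captured per year; the 100 TW rate is the figure from the text, and everything else is ordinary arithmetic.

seconds_per_year = 365.25 * 24 * 3600          # about 3.16e7 seconds
capture_rate_watts = 100e12                    # ~100 TW captured by photosynthesis
energy_per_year = capture_rate_watts * seconds_per_year
print(f"{energy_per_year:.2e} J per year")     # roughly 3.2e21 J, i.e. a few thousand exajoules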
Age The earliest evidence for life on Earth includes biogenic graphite found in 3.7 billion-year-old metasedimentary rocks from Western Greenland and microbial mat fossils found in 3.48 billion-year-old sandstone from Western Australia. More recently, in 2015, "remains of biotic life" were found in 4.1 billion-year-old rocks in Western Australia. In 2017, putative fossilized microorganisms (or microfossils) were announced to have been discovered in hydrothermal vent precipitates in the Nuvvuagittuq Belt of Quebec, Canada that were as old as 4.28 billion years, the oldest record of life on Earth, suggesting "an almost instantaneous emergence of life" after ocean formation 4.4 billion years ago, and not long after the formation of the Earth 4.54 billion years ago. According to biologist Stephen Blair Hedges, "If life arose relatively quickly on Earth ... then it could be common in the universe." Extent Every part of the planet, from the polar ice caps to the equator, features life of some kind. Recent advances in microbiology have demonstrated that microbes live deep beneath the Earth's terrestrial surface and that the total mass of microbial life in so-called "uninhabitable zones" may, in biomass, exceed all animal and plant life on the surface. The actual thickness of the biosphere on Earth is difficult to measure. Birds typically fly at altitudes as high as and fish live as much as underwater in the Puerto Rico Trench. There are more extreme examples for life on the planet: Rüppell's vulture has been found at altitudes of ; bar-headed geese migrate at altitudes of at least ; yaks live at elevations as high as above sea level; mountain goats live up to . Herbivorous animals at these elevations depend on lichens, grasses, and herbs. Life forms live in every part of the Earth's biosphere, including soil, hot springs, inside rocks at least deep underground, and at least high in the atmosphere. Marine life under many forms has been found in the deepest reaches of the world ocean while much of the deep sea remains to be explored. Under certain test conditions, microorganisms have been observed to survive the vacuum of outer space. The total amount of soil and subsurface bacterial carbon is estimated as 5 × 10^17 g. The mass of prokaryote microorganisms—which includes bacteria and archaea, but not the nucleated eukaryote microorganisms—may be as much as 0.8 trillion tons of carbon (of the total biosphere mass, estimated at between 1 and 4 trillion tons). Barophilic marine microbes have been found at more than a depth of in the Mariana Trench, the deepest spot in the Earth's oceans. In fact, single-celled life forms have been found in the deepest part of the Mariana Trench, the Challenger Deep, at depths of . In related studies, other researchers reported that microorganisms thrive inside rocks up to below the sea floor under of ocean off the coast of the northwestern United States, as well as beneath the seabed off Japan. Culturable thermophilic microbes have been extracted from cores drilled more than into the Earth's crust in Sweden, from rocks between . Temperature increases with increasing depth into the Earth's crust. The rate at which the temperature increases depends on many factors, including the type of crust (continental vs. oceanic), rock type, geographic location, etc. The greatest known temperature at which microbial life can exist is (Methanopyrus kandleri Strain 116). It is likely that the limit of life in the "deep biosphere" is defined by temperature rather than absolute depth.
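A back-of-the-envelope sketch shows why temperature, rather than depth itself, caps the deep biosphere. The geothermal gradient and surface temperature below are illustrative assumptions (real gradients vary widely with crust type and location), while the 122 °C figure is the growth record reported for Methanopyrus kandleri Strain 116.

surface_temp_c = 15.0        # assumed mean surface temperature, in °C
gradient_c_per_km = 25.0     # assumed continental geothermal gradient, in °C per km
microbial_limit_c = 122.0    # highest reported growth temperature (M. kandleri Strain 116)

max_depth_km = (microbial_limit_c - surface_temp_c) / gradient_c_per_km
print(f"Temperature limit reached at roughly {max_depth_km:.1f} km depth")   # ~4.3 km

Where the crust is cooler the habitable layer extends deeper, and where it is hotter the limit is reached sooner, which is exactly the point made in the text.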
On 20 August 2014, scientists confirmed the existence of microorganisms living below the ice of Antarctica. Earth's biosphere is divided into several biomes, inhabited by fairly similar flora and fauna. On land, biomes are separated primarily by latitude. Terrestrial biomes lying within the Arctic and Antarctic Circles are relatively barren of plant and animal life. In contrast, most of the more populous biomes lie near the equator. Artificial biospheres Experimental biospheres, also called closed ecological systems, have been created to study ecosystems and the potential for supporting life outside the Earth. These include spacecraft and the following terrestrial laboratories: Biosphere 2 in Arizona, United States, 3.15 acres (13,000 m²). BIOS-1, BIOS-2 and BIOS-3 at the Institute of Biophysics in Krasnoyarsk, Siberia, in what was then the Soviet Union. Biosphere J (CEEF, Closed Ecology Experiment Facilities), an experiment in Japan. Micro-Ecological Life Support System Alternative (MELiSSA) at Universitat Autònoma de Barcelona Extraterrestrial biospheres No biospheres have been detected beyond the Earth; therefore, the existence of extraterrestrial biospheres remains hypothetical. The rare Earth hypothesis suggests they should be very rare, save ones composed of microbial life only. On the other hand, Earth analogs may be quite numerous, at least in the Milky Way galaxy, given the large number of planets. Three of the planets discovered orbiting TRAPPIST-1 could possibly contain biospheres. Given limited understanding of abiogenesis, it is currently unknown what percentage of these planets actually develop biospheres. Based on observations by the Kepler Space Telescope team, it has been calculated that, provided the probability of abiogenesis is higher than 1 in 1,000, the closest alien biosphere should be within 100 light-years of the Earth. It is also possible that artificial biospheres will be created in the future, for example with the terraforming of Mars. See also Climate system Cryosphere Thomas Gold Circumstellar habitable zone Homeostasis Life-support system Man and the Biosphere Programme Montreal Biosphere Noosphere Rare biosphere Shadow biosphere Simple biosphere model Soil biomantle Wardian case Winogradsky column References Further reading The Biosphere (A Scientific American Book), San Francisco, W.H. Freeman and Co., 1970. This book, originally the December 1970 Scientific American issue, covers virtually every major concern and concept since debated regarding materials and energy resources (including solar energy), population trends, and environmental degradation (including global warming). External links Article on the Biosphere at Encyclopedia of Earth GLOBIO.info, an ongoing programme to map the past, current and future impacts of human activities on the biosphere Paul Crutzen Interview, freeview video of Paul Crutzen Nobel Laureate for his work on decomposition of ozone talking to Harry Kroto Nobel Laureate by the Vega Science Trust. Atlas of the Biosphere Oceanography Superorganisms Biological systems
Environmental biotechnology
Environmental biotechnology is biotechnology that is applied to and used to study the natural environment. Environmental biotechnology could also imply that one tries to harness biological processes for commercial uses and exploitation. The International Society for Environmental Biotechnology defines environmental biotechnology as "the development, use and regulation of biological systems for remediation of contaminated environments (land, air, water), and for environment-friendly processes (green manufacturing technologies and sustainable development)". Environmental biotechnology can simply be described as "the optimal use of nature, in the form of plants, animals, bacteria, fungi and algae, to produce renewable energy, food and nutrients in a synergistic integrated cycle of profit making processes where the waste of each process becomes the feedstock for another process". Significance for agriculture, food security, climate change mitigation and adaptation and the MDGs The IAASTD has called for the advancement of small-scale agro-ecological farming systems and technology in order to achieve food security, climate change mitigation, climate change adaptation and the realisation of the Millennium Development Goals. Environmental biotechnology has been shown to play a significant role in agroecology in the form of zero waste agriculture and most significantly through the operation of over 15 million biogas digesters worldwide. Significance towards industrial biotechnology Consider the effluent of a starch plant that has mixed with a local water body such as a lake or pond. We find huge deposits of starch which are not easily taken up for degradation by microorganisms, with a few exceptions. Microorganisms from the polluted site are screened for genomic changes that allow them to degrade or utilize the starch better than other microbes of the same genus. The modified genes are then identified. The resultant genes are cloned into industrially significant microorganisms and used in economically important processes such as pharmaceutical production and fermentation. A similar situation is encountered with marine oil spills, which require cleanup: microbes isolated from oil-rich environments such as oil wells and oil transfer pipelines have been found to have the potential to degrade oil or use it as an energy source, and thus serve as a remedy for oil spills. Microbes isolated from pesticide-contaminated soils may be capable of utilizing pesticides as an energy source, and hence, when mixed with bio-fertilizers, could serve as insurance against increased pesticide-toxicity levels in agriculture. On the other hand, these newly introduced microorganisms could create an imbalance in the environment concerned: the mutual harmony in which the organisms of that environment existed may be altered, and care must be taken not to disturb the relationships already existing there. Weighing both the benefits and the disadvantages would pave the way for an improved practice of environmental biotechnology. Applications and Implications Humans have long been manipulating genetic material through breeding and modern genetic modification, for example to optimize crop yield. There can also be unexpected, negative health and environmental outcomes. Environmental biotechnology is about the balance between the applications that provide such benefits and the implications of manipulating genetic material.
Textbooks address both the applications and implications. Environmental engineering texts addressing sewage treatment and biological principles are often now considered to be environmental biotechnology texts. These generally address the applications of biotechnologies, whereas the implications of these technologies are addressed less often, usually in books concerned with potential impacts and even catastrophic events. See also Agricultural biotechnology Microbial ecology Molecular Biotechnology References External links International Society for Environmental Biotechnology Biotechnology Environmental science
Epistasis
Epistasis is a phenomenon in genetics in which the effect of a gene mutation is dependent on the presence or absence of mutations in one or more other genes, termed modifier genes. In other words, the effect of the mutation is dependent on the genetic background in which it appears. Epistatic mutations therefore have different effects on their own than when they occur together. Originally, the term epistasis specifically meant that the effect of a gene variant is masked by that of a different gene. The concept of epistasis originated in genetics in 1907 but is now used in biochemistry, computational biology and evolutionary biology. The phenomenon arises due to interactions, either between genes (such as mutations also being needed in regulators of gene expression) or within them (multiple mutations being needed before the gene loses function), leading to non-linear effects. Epistasis has a great influence on the shape of evolutionary landscapes, which leads to profound consequences for evolution and for the evolvability of phenotypic traits. History Understanding of epistasis has changed considerably through the history of genetics and so too has the use of the term. The term was first used by William Bateson and his collaborators Florence Durham and Muriel Wheldale Onslow. In early models of natural selection devised in the early 20th century, each gene was considered to make its own characteristic contribution to fitness, against an average background of other genes. Some introductory courses still teach population genetics this way. Because of the way that the science of population genetics was developed, evolutionary geneticists have tended to think of epistasis as the exception. However, in general, the expression of any one allele depends in a complicated way on many other alleles. In classical genetics, if genes A and B are mutated, and each mutation by itself produces a unique phenotype but the two mutations together show the same phenotype as the gene A mutation, then gene A is epistatic and gene B is hypostatic. For example, the gene for total baldness is epistatic to the gene for brown hair. In this sense, epistasis can be contrasted with genetic dominance, which is an interaction between alleles at the same gene locus. As the study of genetics developed, and with the advent of molecular biology, epistasis started to be studied in relation to quantitative trait loci (QTL) and polygenic inheritance. The effects of genes are now commonly quantifiable by assaying the magnitude of a phenotype (e.g. height, pigmentation or growth rate) or by biochemically assaying protein activity (e.g. binding or catalysis). Increasingly sophisticated computational and evolutionary biology models aim to describe the effects of epistasis on a genome-wide scale and the consequences of this for evolution. Since the identification of epistatic pairs is challenging both computationally and statistically, some studies try to prioritize epistatic pairs. Classification Terminology about epistasis can vary between scientific fields. Geneticists often refer to wild type and mutant alleles where the mutation is implicitly deleterious and may talk in terms of genetic enhancement, synthetic lethality and genetic suppressors. Conversely, a biochemist may more frequently focus on beneficial mutations and so explicitly state the effect of a mutation and use terms such as reciprocal sign epistasis and compensatory mutation.
Additionally, there are differences when looking at epistasis within a single gene (biochemistry) and epistasis within a haploid or diploid genome (genetics). In general, epistasis is used to denote the departure from 'independence' of the effects of different genetic loci. Confusion often arises due to the varied interpretation of 'independence' among different branches of biology. The classifications below attempt to cover the various terms and how they relate to one another. Additivity Two mutations are considered to be purely additive if the effect of the double mutation is the sum of the effects of the single mutations. This occurs when genes do not interact with each other, for example by acting through different metabolic pathways. Additive traits were studied early in the history of genetics; however, they are relatively rare, with most genes exhibiting at least some level of epistatic interaction. Magnitude epistasis When the double mutation has a fitter phenotype than expected from the effects of the two single mutations, it is referred to as positive epistasis. Positive epistasis between beneficial mutations generates greater improvements in function than expected. Positive epistasis between deleterious mutations protects against the negative effects to cause a less severe fitness drop. Conversely, when two mutations together lead to a less fit phenotype than expected from their effects when alone, it is called negative epistasis. Negative epistasis between beneficial mutations causes smaller than expected fitness improvements, whereas negative epistasis between deleterious mutations causes greater-than-additive fitness drops. Separately, when the effect on fitness of two mutations is more radical than expected from their effects when alone, it is referred to as synergistic epistasis. In the opposite situation, when the fitness difference of the double mutant from the wild type is smaller than expected from the effects of the two single mutations, it is called antagonistic epistasis. Therefore, for deleterious mutations, negative epistasis is also synergistic, while positive epistasis is antagonistic; conversely, for advantageous mutations, positive epistasis is synergistic, while negative epistasis is antagonistic. The term genetic enhancement is sometimes used when a double (deleterious) mutant has a more severe phenotype than the additive effects of the single mutants. Strong positive epistasis is sometimes referred to by creationists as irreducible complexity (although most examples are misidentified). Sign epistasis Sign epistasis occurs when one mutation has the opposite effect when in the presence of another mutation. This occurs when a mutation that is deleterious on its own can enhance the effect of a particular beneficial mutation. For example, a large and complex brain is a waste of energy without a range of sense organs, but sense organs are made more useful by a large and complex brain that can better process the information. If a fitness landscape has no sign epistasis then it is called smooth. At its most extreme, reciprocal sign epistasis occurs when two deleterious genes are beneficial when together. For example, producing a toxin alone can kill a bacterium, and producing a toxin exporter alone can waste energy, but producing both can improve fitness by killing competing organisms. If a fitness landscape has sign epistasis but no reciprocal sign epistasis then it is called semismooth.
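These categories can be made concrete with a small numerical sketch. The snippet below is a minimal illustration, not a standard analysis tool: the fitness values are hypothetical, and the additive null model (epistasis measured as the deviation of the double mutant's effect from the sum of the single-mutant effects) is one common convention among several.

def classify_epistasis(w_wt, w_a, w_b, w_ab):
    # Single- and double-mutant effects relative to the wild type.
    da, db, dab = w_a - w_wt, w_b - w_wt, w_ab - w_wt
    epsilon = dab - (da + db)          # deviation from the additive expectation
    # Sign epistasis: a mutation's effect changes sign on the other mutant's background.
    sign_a = (dab - db) * da < 0       # effect of A alone vs. on the B background
    sign_b = (dab - da) * db < 0       # effect of B alone vs. on the A background
    if sign_a and sign_b:
        kind = "reciprocal sign epistasis"
    elif sign_a or sign_b:
        kind = "sign epistasis"
    elif epsilon > 0:
        kind = "positive (magnitude) epistasis"
    elif epsilon < 0:
        kind = "negative (magnitude) epistasis"
    else:
        kind = "additive (no epistasis)"
    return epsilon, kind

# Hypothetical fitness values: both single mutants are beneficial,
# but together they gain less than the sum of their separate effects.
print(classify_epistasis(1.00, 1.10, 1.10, 1.15))

With these values each single mutation raises fitness by 0.10 but the double mutant gains only 0.15, so the deviation is negative: negative magnitude epistasis, which for beneficial mutations is antagonistic. Values in which a mutation flips from harmful to helpful on the other background would trigger the sign-epistasis branches instead.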
Reciprocal sign epistasis also leads to genetic suppression whereby two deleterious mutations are less harmful together than either one on its own, i.e. one compensates for the other. A clear example of genetic suppression was the demonstration that in the assembly of bacteriophage T4 two deleterious mutations, each causing a deficiency in the level of a different morphogenetic protein, could interact positively. If a mutation causes a reduction in a particular structural component, this can bring about an imbalance in morphogenesis and loss of viable virus progeny, but production of viable progeny can be restored by a second (suppressor) mutation in another morphogenetic component that restores the balance of protein components. The term genetic suppression can also apply to sign epistasis where the double mutant has a phenotype intermediate between those of the single mutants, in which case the more severe single mutant phenotype is suppressed by the other mutation or genetic condition. For example, in a diploid organism, a hypomorphic (or partial loss-of-function) mutant phenotype can be suppressed by knocking out one copy of a gene that acts oppositely in the same pathway. In this case, the second gene is described as a "dominant suppressor" of the hypomorphic mutant; "dominant" because the effect is seen when one wild-type copy of the suppressor gene is present (i.e. even in a heterozygote). For most genes, the phenotype of the heterozygous suppressor mutation by itself would be wild type (because most genes are not haplo-insufficient), so that the double mutant (suppressed) phenotype is intermediate between those of the single mutants. In non-reciprocal sign epistasis, the fitness of the mutant lies between the extreme effects seen in reciprocal sign epistasis. When two mutations are viable alone but lethal in combination, it is called synthetic lethality or unlinked non-complementation. Haploid organisms In a haploid organism with genotypes (at two loci) ab, Ab, aB or AB, we can think of different forms of epistasis as affecting the magnitude of a phenotype upon mutation individually (Ab and aB) or in combination (AB). Diploid organisms Epistasis in diploid organisms is further complicated by the presence of two copies of each gene. Epistasis can occur between loci, but additionally, interactions can occur between the two copies of each locus in heterozygotes. For a two-locus, two-allele system, there are eight independent types of gene interaction. Genetic and molecular causes Additivity This can be the case when multiple genes act in parallel to achieve the same effect. For example, when an organism is in need of phosphorus, multiple enzymes that break down different phosphorylated components from the environment may act additively to increase the amount of phosphorus available to the organism. However, there inevitably comes a point where phosphorus is no longer the limiting factor for growth and reproduction and so further improvements in phosphorus metabolism have smaller or no effect (negative epistasis). Some sets of mutations within genes have also been specifically found to be additive. It is now considered that strict additivity is the exception, rather than the rule, since most genes interact with hundreds or thousands of other genes. Epistasis between genes Epistasis within the genomes of organisms occurs due to interactions between the genes within the genome.
This interaction may be direct if the genes encode proteins that, for example, are separate components of a multi-component protein (such as the ribosome) or inhibit each other's activity, or if the protein encoded by one gene modifies the other (such as by phosphorylation). Alternatively the interaction may be indirect, where the genes encode components of a metabolic pathway or network, developmental pathway, signalling pathway or transcription factor network. For example, the gene encoding the enzyme that synthesizes penicillin is of no use to a fungus without the enzymes that synthesize the necessary precursors in the metabolic pathway. Epistasis within genes Just as mutations in two separate genes can be non-additive if those genes interact, mutations in two codons within a gene can be non-additive. In genetics this is sometimes called intragenic suppression when one deleterious mutation can be compensated for by a second mutation within that gene. Analysis of bacteriophage T4 mutants that were altered in the rIIB cistron (gene) revealed that certain pairwise combinations of mutations could mutually suppress each other; that is, the double mutants had a more nearly wild-type phenotype than either mutant alone. The linear map order of the mutants was established using genetic recombination data. From these sources of information, the triplet nature of the genetic code was logically deduced for the first time in 1961, and other key features of the code were also inferred. Intragenic suppression can also occur when the amino acids within a protein interact. Due to the complexity of protein folding and activity, additive mutations are rare. Proteins are held in their tertiary structure by a distributed, internal network of cooperative interactions (hydrophobic, polar and covalent). Epistatic interactions occur whenever one mutation alters the local environment of another residue (either by directly contacting it, or by inducing changes in the protein structure). For example, in a disulphide bridge, a single cysteine has no effect on protein stability until a second is present at the correct location, at which point the two cysteines form a chemical bond which enhances the stability of the protein. This would be observed as positive epistasis where the double-cysteine variant had a much higher stability than either of the single-cysteine variants. Conversely, when deleterious mutations are introduced, proteins often exhibit mutational robustness whereby as stabilising interactions are destroyed the protein still functions until it reaches some stability threshold, at which point further destabilising mutations have large, detrimental effects as the protein can no longer fold. This leads to negative epistasis whereby mutations that have little effect alone have a large, deleterious effect together. In enzymes, the protein structure orients a few, key amino acids into precise geometries to form an active site to perform chemistry. Since these active site networks frequently require the cooperation of multiple components, mutating any one of these components massively compromises activity, and so mutating a second component has a relatively minor effect on the already inactivated enzyme. For example, removing any member of the catalytic triad of many enzymes will reduce activity to levels low enough that the organism is no longer viable. Heterozygotic epistasis Diploid organisms contain two copies of each gene.
If these are different (heterozygous / heteroallelic), the two different copies of the allele may interact with each other to cause epistasis. This is sometimes called allelic complementation, or interallelic complementation. It may be caused by several mechanisms, for example transvection, where an enhancer from one allele acts in trans to activate transcription from the promoter of the second allele. Alternately, trans-splicing of two non-functional RNA molecules may produce a single, functional RNA. Similarly, at the protein level, proteins that function as dimers may form a heterodimer composed of one protein from each alternate gene and may display different properties to the homodimer of one or both variants. Two bacteriophage T4 mutants defective at different locations in the same gene can undergo allelic complementation during a mixed infection. That is, each mutant alone upon infection cannot produce viable progeny, but upon mixed infection with two complementing mutants, viable phage are formed. Intragenic complementation was demonstrated for several genes that encode structural proteins of the bacteriophage, indicating that such proteins function as dimers or even higher order multimers. Evolutionary consequences Fitness landscapes and evolvability In evolutionary genetics, the sign of epistasis is usually more significant than the magnitude of epistasis. This is because magnitude epistasis (positive and negative) simply affects how beneficial mutations are together; however, sign epistasis affects whether mutation combinations are beneficial or deleterious. A fitness landscape is a representation of fitness in which all genotypes are arranged in 2D space and the fitness of each genotype is represented by height on a surface. It is frequently used as a visual metaphor for understanding evolution as the process of moving uphill from one genotype to the next, nearby, fitter genotype. If all mutations are additive, they can be acquired in any order and still give a continuous uphill trajectory. The landscape is perfectly smooth, with only one peak (global maximum) and all sequences can evolve uphill to it by the accumulation of beneficial mutations in any order. Conversely, if mutations interact with one another by epistasis, the fitness landscape becomes rugged as the effect of a mutation depends on the genetic background of other mutations. At its most extreme, interactions are so complex that the fitness is 'uncorrelated' with gene sequence and the topology of the landscape is random. This is referred to as a rugged fitness landscape and has profound implications for the evolutionary optimisation of organisms. If mutations are deleterious in one combination but beneficial in another, the fittest genotypes can only be accessed by accumulating mutations in one specific order. This makes it more likely that organisms will get stuck at local maxima in the fitness landscape, having acquired mutations in the 'wrong' order. For example, a variant of TEM1 β-lactamase with 5 mutations is able to cleave cefotaxime (a third generation antibiotic). However, of the 120 possible pathways to this 5-mutant variant, only 7% are accessible to evolution, as the remainder pass through fitness valleys where the combination of mutations reduces activity. In contrast, changes in environment (and therefore the shape of the fitness landscape) have been shown to provide escape from local maxima.
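The pathway count in the TEM1 example above is simple combinatorics: five mutations can be accumulated one at a time in 5! = 120 different orders. A quick sketch of the arithmetic (the 7% accessibility figure is the one quoted in the text; only the counting is computed here):

from math import factorial

n_mutations = 5
orders = factorial(n_mutations)        # 5! = 120 possible orders of accumulation
accessible = round(0.07 * orders)      # share reported accessible to selection
print(orders, accessible)              # 120 orders, roughly 8 of them selectively accessible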
In one such example, selection in changing antibiotic environments resulted in a "gateway mutation" which epistatically interacted in a positive manner with other mutations along an evolutionary pathway, effectively crossing a fitness valley. This gateway mutation alleviated the negative epistatic interactions of other individually beneficial mutations, allowing them to better function in concert. Complex environments or selections may therefore bypass local maxima found in models assuming simple positive selection. High epistasis is usually considered a constraining factor on evolution, and improvements in a highly epistatic trait are considered to have lower evolvability. This is because, in any given genetic background, very few mutations will be beneficial, even though many mutations may need to occur to eventually improve the trait. The lack of a smooth landscape makes it harder for evolution to access fitness peaks. In highly rugged landscapes, fitness valleys block access to some genes, and even if ridges exist that allow access, these may be rare or prohibitively long. Moreover, adaptation can move proteins into more precarious or rugged regions of the fitness landscape. These shifting "fitness territories" may act to decelerate evolution and could represent tradeoffs for adaptive traits. The frustration of adaptive evolution by rugged fitness landscapes was recognized as a potential force for the evolution of evolvability. Michael Conrad in 1972 was the first to propose a mechanism for the evolution of evolvability by noting that a mutation which smoothed the fitness landscape at other loci could facilitate the production of advantageous mutations and hitchhike along with them. Rupert Riedl in 1975 proposed that new genes which produced the same phenotypic effects with a single mutation as other loci with reciprocal sign epistasis would be a new means to attain a phenotype otherwise too unlikely to occur by mutation. Rugged, epistatic fitness landscapes also affect the trajectories of evolution. When a mutation has a large number of epistatic effects, each accumulated mutation drastically changes the set of available beneficial mutations. Therefore, the evolutionary trajectory followed depends highly on which early mutations were accepted. Thus, repeats of evolution from the same starting point tend to diverge to different local maxima rather than converge on a single global maximum as they would in a smooth, additive landscape. Evolution of sex Negative epistasis and sex are thought to be intimately correlated. Experimentally, this idea has been tested using digital simulations of asexual and sexual populations. Over time, sexual populations move towards more negative epistasis, or the lowering of fitness by two interacting alleles. It is thought that negative epistasis allows individuals carrying the interacting deleterious mutations to be removed from the populations efficiently. This removes those alleles from the population, resulting in an overall more fit population. This hypothesis was proposed by Alexey Kondrashov, and is sometimes known as the deterministic mutation hypothesis; it has also been tested using artificial gene networks. However, the evidence for this hypothesis has not always been straightforward and the model proposed by Kondrashov has been criticized for assuming mutation parameters far from real world observations.
In addition, in those tests which used artificial gene networks, negative epistasis is only found in more densely connected networks, whereas empirical evidence indicates that natural gene networks are sparsely connected, and theory shows that selection for robustness will favor more sparsely connected and minimally complex networks. Methods and model systems Regression analysis Quantitative genetics focuses on genetic variance due to genetic interactions. Any two locus interactions at a particular gene frequency can be decomposed into eight independent genetic effects using a weighted regression. In this regression, the observed two locus genetic effects are treated as dependent variables and the "pure" genetic effects are used as the independent variables. Because the regression is weighted, the partitioning among the variance components will change as a function of gene frequency. By analogy it is possible to expand this system to three or more loci, or to cytonuclear interactions. Double mutant cycles When assaying epistasis within a gene, site-directed mutagenesis can be used to generate the different genes, and their protein products can be assayed (e.g. for stability or catalytic activity). This is sometimes called a double mutant cycle and involves producing and assaying the wild type protein, the two single mutants and the double mutant. Epistasis is measured as the difference between the effect of the mutations together and the sum of their individual effects. This can be expressed as a free energy of interaction. The same methodology can be used to investigate the interactions between larger sets of mutations, but all combinations have to be produced and assayed. For example, 5 mutations yield 2^5 = 32 possible combinations (and 5! = 120 possible orders in which they can be acquired), some or all of which may show epistasis. Computational prediction Numerous computational methods have been developed for the detection and characterization of epistasis. Many of these rely on machine learning to detect non-additive effects that might be missed by statistical approaches such as linear regression. For example, multifactor dimensionality reduction (MDR) was designed specifically for nonparametric and model-free detection of combinations of genetic variants that are predictive of a phenotype such as disease status in human populations. Several of these approaches have been broadly reviewed in the literature. Even more recently, methods that utilize insights from theoretical computer science (the Hadamard transform and compressed sensing) or maximum-likelihood inference were shown to distinguish epistatic effects from overall non-linearity in genotype–phenotype map structure, while others used patient survival analysis to identify non-linearity. See also Co-adaptation Epistasis and functional genomics Evolution of sexual reproduction Evolvability Fitness landscape Interactome (Genetic interaction network) Mutation Pleiotropy Quantitative trait locus Synthetic lethality Synthetic viability References External links INTERSNP - software for genome-wide interaction analysis (GWIA) of case-control and case-only SNP data, including analysis of quantitative traits. Science Aid: Epistasis High school (GCSE, A-level) resource. GeneticInteractions.org Epistasis.org Classical genetics Genetics concepts
Aquatic ecosystem
An aquatic ecosystem is an ecosystem found in and around a body of water, in contrast to land-based terrestrial ecosystems. Aquatic ecosystems contain communities of organisms—aquatic life—that are dependent on each other and on their environment. The two main types of aquatic ecosystems are marine ecosystems and freshwater ecosystems. Freshwater ecosystems may be lentic (slow moving water, including pools, ponds, and lakes); lotic (faster moving water, for example streams and rivers); and wetlands (areas where the soil is saturated or inundated for at least part of the time). Types Marine ecosystems Marine coastal ecosystem Marine surface ecosystem Freshwater ecosystems Lentic ecosystem (lakes) Lotic ecosystem (rivers) Wetlands Functions Aquatic ecosystems perform many important environmental functions. For example, they recycle nutrients, purify water, attenuate floods, recharge ground water and provide habitats for wildlife. The biota of an aquatic ecosystem contribute to its self-purification, most notably microorganisms, phytoplankton, higher plants, invertebrates, fish, bacteria, protists, aquatic fungi, and more. These organisms are actively involved in multiple self-purification processes, including organic matter destruction and water filtration. It is crucial that aquatic ecosystems are reliably self-maintained, as they also provide habitats for species that reside in them. In addition to environmental functions, aquatic ecosystems are also used for human recreation, and are very important to the tourism industry, especially in coastal regions. They are also used for religious purposes, such as the worshipping of the Jordan River by Christians, and educational purposes, such as the usage of lakes for ecological study. Biotic characteristics (living components) The biotic characteristics are mainly determined by the organisms that occur. For example, wetland plants may produce dense canopies that cover large areas of sediment—or snails or geese may graze the vegetation leaving large mud flats. Aquatic environments have relatively low oxygen levels, forcing adaptation by the organisms found there. For example, many wetland plants must produce aerenchyma to carry oxygen to roots. Other biotic characteristics are more subtle and difficult to measure, such as the relative importance of competition, mutualism or predation. There are a growing number of cases where predation by coastal herbivores including snails, geese and mammals appears to be a dominant biotic factor. Autotrophic organisms Autotrophic organisms are producers that generate organic compounds from inorganic material. Algae use solar energy to generate biomass from carbon dioxide and are possibly the most important autotrophic organisms in aquatic environments. The more shallow the water, the greater the biomass contribution from rooted and floating vascular plants. These two sources combine to produce the extraordinary production of estuaries and wetlands, as this autotrophic biomass is converted into fish, birds, amphibians and other aquatic species. Chemosynthetic bacteria are found in benthic marine ecosystems. These organisms are able to feed on hydrogen sulfide in water that comes from volcanic vents. Great concentrations of animals that feed on these bacteria are found around volcanic vents. For example, there are giant tube worms (Riftia pachyptila) 1.5 m in length and clams (Calyptogena magnifica) 30 cm long. 
Heterotrophic organisms Heterotrophic organisms consume autotrophic organisms and use the organic compounds in their bodies as energy sources and as raw materials to create their own biomass. Euryhaline organisms are salt tolerant and can survive in marine ecosystems, while stenohaline or salt intolerant species can only live in freshwater environments. Abiotic characteristics (non-living components) An ecosystem is composed of biotic communities that are structured by biological interactions and abiotic environmental factors. Some of the important abiotic environmental factors of aquatic ecosystems include substrate type, water depth, nutrient levels, temperature, salinity, and flow. It is often difficult to determine the relative importance of these factors without rather large experiments. There may be complicated feedback loops. For example, sediment may determine the presence of aquatic plants, but aquatic plants may also trap sediment, and add to the sediment through peat. The amount of dissolved oxygen in a water body is frequently the key substance in determining the extent and kinds of organic life in the water body. Fish need dissolved oxygen to survive, although their tolerance to low oxygen varies among species; in extreme cases of low oxygen, some fish even resort to air gulping. Plants often have to produce aerenchyma, while the shape and size of leaves may also be altered. Conversely, oxygen is fatal to many kinds of anaerobic bacteria. Nutrient levels are important in controlling the abundance of many species of algae. The relative abundance of nitrogen and phosphorus can in effect determine which species of algae come to dominate. Algae are a very important source of food for aquatic life, but at the same time, if they become over-abundant, they can cause declines in fish when they decay. Similar over-abundance of algae in coastal environments such as the Gulf of Mexico produces, upon decay, a hypoxic region of water known as a dead zone. The salinity of the water body is also a determining factor in the kinds of species found in the water body. Organisms in marine ecosystems tolerate salinity, while many freshwater organisms are intolerant of salt. The degree of salinity in an estuary or delta is an important control upon the type of wetland (fresh, intermediate, or brackish), and the associated animal species. Dams built upstream may reduce spring flooding, and reduce sediment accretion, and may therefore lead to saltwater intrusion in coastal wetlands. Freshwater used for irrigation purposes often absorbs levels of salt that are harmful to freshwater organisms. Threats The health of an aquatic ecosystem is degraded when the ecosystem's ability to absorb a stress has been exceeded. A stress on an aquatic ecosystem can be a result of physical, chemical or biological alterations to the environment. Physical alterations include changes in water temperature, water flow and light availability. Chemical alterations include changes in the loading rates of biostimulatory nutrients, oxygen-consuming materials, and toxins. Biological alterations include over-harvesting of commercial species and the introduction of exotic species. Human populations can impose excessive stresses on aquatic ecosystems. Climate change driven by anthropogenic activities can harm aquatic ecosystems by disrupting current distribution patterns of plants and animals. 
It has negatively impacted deep sea biodiversity, coastal fish diversity, crustaceans, coral reefs, and other biotic components of these ecosystems. Human-made aquatic ecosystems, such as ditches, aquaculture ponds, and irrigation channels, may also cause harm to naturally occurring ecosystems by trading off biodiversity with their intended purposes. For instance, ditches are primarily used for drainage, but their presence also negatively affects biodiversity. There are many examples of excessive stresses with negative consequences. The environmental history of the Great Lakes of North America illustrates this problem, particularly how multiple stresses, such as water pollution, over-harvesting and invasive species, can combine. The Norfolk Broadlands in England illustrate similar decline with pollution and invasive species. Lake Pontchartrain along the Gulf of Mexico illustrates the negative effects of different stresses including levee construction, logging of swamps, invasive species and salt water intrusion. See also Ocean References Aquatic ecology Ecosystems Aquatic plants Fisheries science Systems ecology Water
Naturalisation (biology)
Naturalisation (or naturalization) is the ecological phenomenon through which a species, taxon, or population of exotic (as opposed to native) origin integrates into a given ecosystem, becoming capable of reproducing and growing in it, and proceeds to disseminate spontaneously. In some instances, the presence of a species in a given ecosystem is so ancient that it cannot be presupposed whether it is native or introduced. Generally, any introduced species may (in the wild) either go extinct or naturalise in its new environment. Some populations do not sustain themselves reproductively, but exist because of continued influx from elsewhere. Such a non-sustaining population, or the individuals within it, are said to be adventive. Cultivated plants, sometimes called nativars, are a major source of adventive populations. Botany In botany, naturalisation is the situation in which an exogenous plant reproduces and disperses on its own in a new environment. For example, northern white cedar is naturalised in the United Kingdom, where it reproduces on its own, while it is not in France, where human intervention via cuttings or seeds is essential for its dissemination. Two categories of naturalisation are defined from two distinct parameters: the first, archaeonaturalised, refers to introduction before a given time (introduced over a hundred years ago), while the second, amphinaturalised or eurynaturalised, implies a notion of spatial extension (a taxon assimilated to the indigenous flora and present over a vast area, as opposed to stenonaturalised). Degrees of naturalisation The degrees of naturalisation are defined in relation to the status of nativity or introduction of taxa or species: Accidental taxon: non-native taxon growing spontaneously, which appears sporadically as a result of accidental introduction due to human activities (as opposed to intentional introductions) Subspontaneous taxon: taxon naturalised following an introduction of accidental or unknown origin (fortuitous introduction linked to human activities), and which, after acclimatization, can reproduce like native plants but is still poorly established Spontaneous taxon: native or non-native taxon growing and reproducing naturally, without intentional human intervention in the territory considered, and which is well established (mixes with local flora or fauna) Zoology Animal naturalisation is mainly carried out through breeding and by commensalism following human migrations. The species concerned are thus either introduced voluntarily into an ecosystem where they are not native, introduced accidentally (or become feral), or arrive naturally by following human migratory flows through commensalism (e.g. the arrival of the house sparrow in Western Europe following the Huns, and previously in Eastern Europe from Asia Minor in Antiquity). It sometimes happens that a naturalised species hybridizes with a native one. Introduction and origin areas The introduction site or introduction area is the place or, more broadly, the new environment where the candidate species for naturalisation takes root. It is generally opposed to the origin area, where this same species is native. There is also a more ambiguous notion, the "natural distribution area" or "natural distribution range", particularly when it comes to anthropophilic species or species benefiting from anthropogenic land settlements (canals, bridges, deforestation, etc.) that have connected two previously isolated areas (e.g. the Suez Canal, which causes Lessepsian migration).
Impact on the ecosystem Naturalisation is sometimes done with human help in order to replace another species that has suffered directly or indirectly from anthropogenic activities, or that is deemed less profitable for human use. Some naturalised species eventually become invasive. For example, the European rabbit, native to Europe, abounds in Australia, and the Japanese knotweed is invading Europe and America, where it is considered to be amongst the one hundred most invasive species of the 21st century. Apart from direct competition between native and introduced populations, genetic pollution by hybridization can add cumulatively to environmental effects that compromise the conservation of native populations. Some naturalised species, such as palms, can act as ecosystem engineers, by changing the habitat and creating new niches that can sometimes have positive effects on an ecosystem. Potential and/or perceived positive impacts of naturalised species are less studied than potential and/or perceived negative impacts. However, the impact on local species is not easy to assess in a short period. For instance, the African sacred ibis (Threskiornis aethiopicus), which escaped in 1990 from an animal park in Morbihan (France), gave rise to an eradication campaign in 2008. In 2013, however, the CNRS stated that this bird species is not a threat in France, and may even promote the Eurasian spoonbill and limit the development of the invasive Louisiana crayfish. Naturalised species may become invasive species if they become sufficiently abundant to have an adverse effect on native species (e.g. microbes affected by invasive plants) or on the biotope. See also Adventitious plant Adventive species Colonisation (biology) Cosmopolitan distribution Endemism Hemerochory Indigenous (ecology) References Ecological processes Ecology terminology
Superorganism
A superorganism, or supraorganism, is a group of synergetically-interacting organisms of the same species. A community of synergetically-interacting organisms of different species is called a holobiont. Concept The term superorganism is used most often to describe a social unit of eusocial animals in which division of labour is highly specialised and individuals cannot survive by themselves for extended periods. Ants are the best-known example of such a superorganism. A superorganism can be defined as "a collection of agents which can act in concert to produce phenomena governed by the collective", phenomena being any activity "the hive wants" such as ants collecting food and avoiding predators, or bees choosing a new nest site. In challenging environments, microorganisms collaborate and evolve together to process unlikely sources of nutrients such as methane. This process, called syntrophy ("eating together"), might be linked to the evolution of eukaryote cells and involved in the emergence or maintenance of life forms in challenging environments on Earth and possibly other planets. Superorganisms tend to exhibit homeostasis, power law scaling, persistent disequilibrium and emergent behaviours. The term was coined in 1789 by James Hutton, the "father of geology", to refer to Earth in the context of geophysiology. The Gaia hypothesis of James Lovelock and Lynn Margulis, as well as the work of Hutton, Vladimir Vernadsky and Guy Murchie, has suggested that the biosphere itself can be considered a superorganism, but this has been disputed. This view relates to systems theory and the dynamics of a complex system. The concept of a superorganism raises the question of what is to be considered an individual. Toby Tyrrell's critique of the Gaia hypothesis argues that Earth's climate system does not resemble an animal's physiological system. Planetary biospheres are not tightly regulated in the same way that animal bodies are: "planets, unlike animals, are not products of evolution. Therefore we are entitled to be highly skeptical (or even outright dismissive) about whether to expect something akin to a 'superorganism'". He concludes that "the superorganism analogy is unwarranted". Some scientists have suggested that individual human beings can be thought of as "superorganisms"; as a typical human digestive system contains 10^13 to 10^14 microorganisms whose collective genome, the microbiome studied by the Human Microbiome Project, contains at least 100 times as many genes as the human genome itself. Salvucci wrote that the superorganism is another level of integration that is observed in nature. These levels include the genomic, the organismal and the ecological levels. The genomic structure of organisms reveals the fundamental role of integration and gene shuffling along evolution. In social theory The 19th-century thinker Herbert Spencer coined the term super-organic to focus on social organization (the first chapter of his Principles of Sociology is entitled "Super-organic Evolution"), though this was apparently a distinction between the organic and the social, not an identity: Spencer explored the holistic nature of society as a social organism while distinguishing the ways in which society did not behave like an organism. For Spencer, the super-organic was an emergent property of interacting organisms, that is, human beings. And, as has been argued by D. C. Phillips, there is a "difference between emergence and reductionism".
The economist Carl Menger expanded upon the evolutionary nature of much social growth, but never abandoned methodological individualism. Many social institutions arose, Menger argued, not as "the result of socially teleological causes, but the unintended result of innumerable efforts of economic subjects pursuing 'individual' interests". Both Spencer and Menger argued that because individuals choose and act, any social whole should be considered less than an organism, though Menger emphasized this more strongly. Spencer used the idea to engage in extended analysis of social structure and conceded that it was primarily an analogy. For Spencer, the idea of the super-organic best designated a distinct level of social reality above that of biology and psychology, not a one-to-one identity with an organism. Nevertheless, Spencer maintained that "every organism of appreciable size is a society", which has suggested to some that the issue may be terminological.

The term superorganic was adopted by the anthropologist Alfred L. Kroeber in 1917. Social aspects of the superorganism concept are analysed by Alan Marshall in his 2002 book "The Unity of Nature". Finally, recent work in social psychology has offered the superorganism metaphor as a unifying framework to understand diverse aspects of human sociality, such as religion, conformity, and social identity processes.

In cybernetics
Superorganisms are important in cybernetics, particularly biocybernetics, since they are capable of so-called "distributed intelligence": a system composed of individual agents that have limited intelligence and information. Such agents can pool resources and so complete goals that are beyond reach of the individuals on their own. Existence of such behavior in organisms has many implications for military and management applications and is being actively researched (a toy illustration of this pooling effect is sketched at the end of this entry).

Superorganisms are also considered dependent upon cybernetic governance and processes. This is based on the idea that a biological system, in order to be effective, needs a sub-system of cybernetic communications and control. This is demonstrated in the way a mole rat colony uses functional synergy and cybernetic processes together. Joël de Rosnay also introduced a concept called "cybionte" to describe a cybernetic superorganism. The notion associates the superorganism with chaos theory, multimedia technology, and other new developments.

See also
Collective intelligence
Group mind (science fiction)
Holobiont
Organismic computing
Quorum sensing, collective behaviour of bacteria
Stigmergy
Siphonophorae
Gaia hypothesis
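The pooling effect mentioned in the cybernetics paragraph above can be illustrated with a toy model. The sketch below was written for this entry and is only an assumption-laden illustration (the quantity being estimated, the noise level, and the agent count are arbitrary choices, not values from the literature): many agents with individually unreliable information can, collectively, reach an estimate far better than any one of them.

```python
# Toy illustration (not from the sources summarized above) of "distributed
# intelligence": each agent holds only a noisy estimate of some environmental
# quantity, yet pooling the estimates gives the collective a far smaller error.
import random
import statistics

random.seed(42)

TRUE_VALUE = 100.0    # the quantity being estimated (arbitrary units, assumed)
AGENT_NOISE = 25.0    # each agent's individual estimate is unreliable
N_AGENTS = 500

estimates = [random.gauss(TRUE_VALUE, AGENT_NOISE) for _ in range(N_AGENTS)]

individual_error = statistics.mean(abs(e - TRUE_VALUE) for e in estimates)
collective_error = abs(statistics.mean(estimates) - TRUE_VALUE)

print(f"typical individual error:  {individual_error:.2f}")
print(f"pooled (collective) error: {collective_error:.2f}")
```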
Complex adaptive system
A complex adaptive system is a system that is complex in that it is a dynamic network of interactions, but the behavior of the ensemble may not be predictable according to the behavior of the components. It is adaptive in that the individual and collective behavior mutate and self-organize in response to the change-initiating micro-event or collection of events. It is a "complex macroscopic collection" of relatively "similar and partially connected micro-structures" formed in order to adapt to the changing environment and increase their survivability as a macro-structure. The Complex Adaptive Systems approach builds on replicator dynamics. The study of complex adaptive systems, a subset of nonlinear dynamical systems, is an interdisciplinary matter that attempts to blend insights from the natural and social sciences to develop system-level models and insights that allow for heterogeneous agents, phase transition, and emergent behavior.

Overview
The term complex adaptive systems, or complexity science, is often used to describe the loosely organized academic field that has grown up around the study of such systems. Complexity science is not a single theory—it encompasses more than one theoretical framework and is interdisciplinary, seeking the answers to some fundamental questions about living, adaptable, changeable systems. The study of complex adaptive systems may adopt hard or softer approaches. Hard theories use formal language that is precise, tend to see agents as having tangible properties, and usually see objects in a behavioral system that can be manipulated in some way. Softer theories use natural language and narratives that may be imprecise, and agents are subjects having both tangible and intangible properties. Examples of hard complexity theories include Complex Adaptive Systems (CAS) and Viability Theory, and a class of softer theory is Viable System Theory. Many of the propositional considerations made in hard theory are also of relevance to softer theory. From here on, interest will center on CAS.

The study of CAS focuses on complex, emergent and macroscopic properties of the system. John H. Holland said that CAS "are systems that have a large number of components, often called agents, that interact and adapt or learn." Typical examples of complex adaptive systems include: climate; cities; firms; markets; governments; industries; ecosystems; social networks; power grids; animal swarms; traffic flows; social insect (e.g. ant) colonies; the brain and the immune system; and the cell and the developing embryo. Human social group-based endeavors, such as political parties, communities, geopolitical organizations, war, and terrorist networks, are also considered CAS. The internet and cyberspace, composed, collaborated on, and managed by a complex mix of human–computer interactions, are also regarded as a complex adaptive system. CAS can be hierarchical, but more often exhibit aspects of "self-organization".

The term complex adaptive system was coined in 1968 by sociologist Walter F. Buckley, who proposed a model of cultural evolution which regards psychological and socio-cultural systems as analogous with biological species. In the modern context, complex adaptive system is sometimes linked to memetics, or proposed as a reformulation of memetics. Michael D.
Cohen and Robert Axelrod however argue the approach is not social Darwinism or sociobiology because, even though the concepts of variation, interaction and selection can be applied to modelling 'populations of business strategies', for example, the detailed evolutionary mechanisms are often distinctly unbiological. As such, complex adaptive system is more similar to Richard Dawkins's idea of replicators.

General properties
What distinguishes a CAS from a pure multi-agent system (MAS) is the focus on top-level properties and features like self-similarity, complexity, emergence and self-organization. A MAS is defined as a system composed of multiple interacting agents; whereas in CAS, the agents as well as the system are adaptive and the system is self-similar. A CAS is a complex, self-similar collectivity of interacting, adaptive agents. Complex adaptive systems are characterized by a high degree of adaptive capacity, giving them resilience in the face of perturbation. Other important properties are adaptation (or homeostasis), communication, cooperation, specialization, spatial and temporal organization, and reproduction. They can be found on all levels: cells specialize, adapt and reproduce themselves just like larger organisms do. Communication and cooperation take place on all levels, from the agent to the system level. The forces driving co-operation between agents in such a system, in some cases, can be analyzed with game theory.

Characteristics
Some of the most important characteristics of complex adaptive systems are: The number of elements is sufficiently large that conventional descriptions (e.g. a system of differential equations) are not only impractical, but cease to assist in understanding the system; moreover, the elements interact dynamically, and the interactions can be physical or involve the exchange of information. Such interactions are rich, i.e. any element or sub-system in the system is affected by and affects several other elements or sub-systems. The interactions are non-linear: small changes in inputs, physical interactions or stimuli can cause large effects or very significant changes in outputs. Interactions are primarily but not exclusively with immediate neighbours, and the nature of the influence is modulated. Any interaction can feed back onto itself directly or after a number of intervening stages; such feedback can vary in quality. This is known as recurrency. The overall behavior of the system of elements is not predicted by the behavior of the individual elements. Such systems may be open, and it may be difficult or impossible to define system boundaries. Complex systems operate under far-from-equilibrium conditions; there has to be a constant flow of energy to maintain the organization of the system. Agents in the system are adaptive: they update their strategies in response to input from other agents, and the system itself. Elements in the system may be ignorant of the behaviour of the system as a whole, responding only to the information or physical stimuli available to them locally.

Robert Axelrod & Michael D.
Cohen identify a series of key terms from a modeling perspective:
Strategy, a conditional action pattern that indicates what to do in which circumstances
Artifact, a material resource that has definite location and can respond to the action of agents
Agent, a collection of properties, strategies & capabilities for interacting with artifacts & other agents
Population, a collection of agents, or, in some situations, collections of strategies
System, a larger collection, including one or more populations of agents and possibly also artifacts
Type, all the agents (or strategies) in a population that have some characteristic in common
Variety, the diversity of types within a population or system
Interaction pattern, the recurring regularities of contact among types within a system
Space (physical), location in geographical space & time of agents and artifacts
Space (conceptual), "location" in a set of categories structured so that "nearby" agents will tend to interact
Selection, processes that lead to an increase or decrease in the frequency of various types of agent or strategies
Success criteria or performance measures, a "score" used by an agent or designer in attributing credit in the selection of relatively successful (or unsuccessful) strategies or agents

Turner and Baker synthesized the characteristics of complex adaptive systems from the literature and tested these characteristics in the context of creativity and innovation. Each of these eight characteristics had been shown to be present in the creativity and innovation processes:
Path dependent: systems tend to be sensitive to their initial conditions, and the same force might affect systems differently.
Systems have a history: the future behavior of a system depends on its initial starting point and subsequent history.
Non-linearity: systems react disproportionately to environmental perturbations, and outcomes differ from those of simple systems.
Emergence: each system's internal dynamics affect its ability to change in a manner that might be quite different from other systems.
Irreducible: irreversible process transformations cannot be reduced back to their original state.
Adaptive/Adaptability: systems that are simultaneously ordered and disordered are more adaptable and resilient.
Operates between order and chaos: adaptive tension emerges from the energy differential between the system and its environment.
Self-organizing: systems are composed of interdependency, interactions of their parts, and diversity in the system.

Modeling and simulation
CAS are occasionally modeled by means of agent-based models and complex network-based models. Agent-based models are developed using various methods and tools, primarily by first identifying the different agents inside the model. Another method of developing models for CAS involves developing complex network models using interaction data of various CAS components. In 2013 SpringerOpen/BioMed Central launched an online open-access journal on the topic of complex adaptive systems modeling (CASM). Publication of the journal ceased in 2020.

Evolution of complexity
Living organisms are complex adaptive systems. Although complexity is hard to quantify in biology, evolution has produced some remarkably complex organisms. This observation has led to the common misconception of evolution being progressive and leading towards what are viewed as "higher organisms". If this were generally true, evolution would possess an active trend towards complexity.
In such an active-trend process, the value of the most common amount of complexity would increase over time. Indeed, some artificial life simulations have suggested that the generation of CAS is an inescapable feature of evolution. However, the idea of a general trend towards complexity in evolution can also be explained through a passive process. This involves an increase in variance, but the most common value, the mode, does not change. Thus, the maximum level of complexity increases over time, but only as an indirect product of there being more organisms in total. This type of random process is also called a bounded random walk (a minimal simulation of this passive process is sketched at the end of this entry). In this hypothesis, the apparent trend towards more complex organisms is an illusion resulting from concentrating on the small number of large, very complex organisms that inhabit the right-hand tail of the complexity distribution and ignoring simpler and much more common organisms. This passive model emphasizes that the overwhelming majority of species are microscopic prokaryotes, which comprise about half the world's biomass and constitute the vast majority of Earth's biodiversity. Therefore, simple life remains dominant on Earth, and complex life appears more diverse only because of sampling bias. If there is a lack of an overall trend towards complexity in biology, this would not preclude the existence of forces driving systems towards complexity in a subset of cases. These minor trends would be balanced by other evolutionary pressures that drive systems towards less complex states.

See also
Artificial life
Chaos theory
Cognitive science
Command and Control Research Program
Complex system
Computational sociology
Dual-phase evolution
Econophysics
Enterprise systems engineering
Generative sciences
Mean-field game theory
Open system (systems theory)
Santa Fe Institute
Simulated reality
Sociology and complexity science
Super wicked problem
Swarm Development Group
Universal Darwinism
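The passive-process argument above can be illustrated with a minimal simulation, written for this entry rather than drawn from the sources it summarizes. Each lineage's "complexity" takes unbiased random steps but cannot fall below a minimum value; the most common value therefore stays near the lower bound while the maximum drifts upward, simply because more of the distribution's tail gets explored.

```python
# Hedged sketch of the "passive trend" argument: complexity as a random walk
# bounded below.  No step is biased toward greater complexity, yet the
# maximum grows over time while the mode stays near the lower bound.
import random
from statistics import mode

random.seed(1)

MIN_COMPLEXITY = 1
N_LINEAGES = 2000
N_STEPS = 500

complexity = [MIN_COMPLEXITY] * N_LINEAGES
for _ in range(N_STEPS):
    for i in range(N_LINEAGES):
        step = random.choice((-1, 1))                  # unbiased: the passive case
        complexity[i] = max(MIN_COMPLEXITY, complexity[i] + step)

print("most common complexity (mode):", mode(complexity))
print("maximum complexity reached:   ", max(complexity))
# Replacing the unbiased step with one biased upward would model the
# "active trend" alternative, in which the mode itself shifts over time.
```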
Evolutionary dynamics
Evolutionary dynamics is the study of the mathematical principles according to which biological organisms, as well as cultural ideas, evolve and have evolved. This is mostly achieved through the mathematical discipline of population genetics, along with evolutionary game theory. Most population genetics considers changes in the frequencies of alleles at a small number of gene loci. When infinitesimal effects at a large number of gene loci are considered, one derives quantitative genetics. Traditional population genetic models deal with alleles and genotypes, and are frequently stochastic. In evolutionary game theory, developed first by John Maynard Smith, evolutionary biology concepts may take a deterministic mathematical form, with selection acting directly on inherited phenotypes. These same models can be applied to studying the evolution of human preferences and ideologies. Many variants on these models have been developed, which incorporate weak selection, mutation, population structure, stochasticity, etc. These models have relevance also to the generation and maintenance of tissues in mammals, since an understanding of tissue cell kinetics, architecture, and development from adult stem cells has important implications for aging and cancer.
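As a concrete illustration of the deterministic, phenotype-level models described above, the sketch below iterates the standard two-strategy replicator equation for a Hawk–Dove game. It is a minimal sketch written for this entry; the payoff values (V = 2, C = 4) are assumptions chosen purely for illustration, not parameters from a cited source.

```python
# Hedged sketch: Euler integration of two-strategy replicator dynamics,
# dx/dt = x * (f_H - f_mean), for an illustrative Hawk-Dove game.
V, C = 2.0, 4.0   # resource value and fight cost (assumed, with C > V)
payoff = {
    ("H", "H"): (V - C) / 2,  # hawks split the value but pay the fight cost
    ("H", "D"): V,            # hawk takes everything from a dove
    ("D", "H"): 0.0,          # dove retreats, gets nothing
    ("D", "D"): V / 2,        # doves share peacefully
}

x = 0.1            # initial frequency of Hawk in the population
dt = 0.01
for _ in range(20000):
    f_hawk = x * payoff[("H", "H")] + (1 - x) * payoff[("H", "D")]
    f_dove = x * payoff[("D", "H")] + (1 - x) * payoff[("D", "D")]
    f_mean = x * f_hawk + (1 - x) * f_dove
    x += dt * x * (f_hawk - f_mean)   # replicator equation, Euler step

print(f"equilibrium Hawk frequency ~ {x:.3f} (analytic prediction V/C = {V / C:.3f})")
```

The trajectory converges to the mixed equilibrium at frequency V/C, which is the kind of frequency-dependent outcome these models are designed to capture.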
Behavioral ecology
Behavioral ecology, also spelled behavioural ecology, is the study of the evolutionary basis for animal behavior due to ecological pressures. Behavioral ecology emerged from ethology after Niko Tinbergen outlined four questions to address when studying animal behaviors: What are the proximate causes, ontogeny, survival value, and phylogeny of a behavior? If an organism has a trait that provides a selective advantage (i.e., has adaptive significance) in its environment, then natural selection favors it. Adaptive significance refers to the expression of a trait that affects fitness, measured by an individual's reproductive success. Adaptive traits are those that produce more copies of the individual's genes in future generations. Maladaptive traits are those that leave fewer. For example, if a bird that can call more loudly attracts more mates, then a loud call is an adaptive trait for that species because a louder bird mates more frequently than less loud birds—thus sending more loud-calling genes into future generations. Conversely, loud-calling birds may attract the attention of predators more often, decreasing their presence in the gene pool. Individuals are always in competition with others for limited resources, including food, territories, and mates. Conflict occurs between predators and prey, between rivals for mates, between siblings, mates, and even between parents and offspring.

Competing for resources
The value of a social behavior depends in part on the social behavior of an animal's neighbors. For example, the more likely a rival male is to back down from a threat, the more value a male gets out of making the threat. The more likely, however, that a rival will attack if threatened, the less useful it is to threaten other males. When a population exhibits a number of interacting social behaviors such as this, it can evolve a stable pattern of behaviors known as an evolutionarily stable strategy (or ESS). This term, derived from economic game theory, became prominent after John Maynard Smith (1982) recognized the possible application of the concept of a Nash equilibrium to model the evolution of behavioral strategies.

Evolutionarily stable strategy
In short, evolutionary game theory asserts that only a strategy that, when common in the population, cannot be "invaded" by any alternative (mutant) strategy is an ESS, and is thus maintained in the population. In other words, at equilibrium every player should play the best strategic response to each other. When the game is two-player and symmetric, each player should play the strategy that provides the best response for it. Therefore, the ESS is considered the evolutionary end point subsequent to the interactions. As the fitness conveyed by a strategy is influenced by what other individuals are doing (the relative frequency of each strategy in the population), behavior can be governed not only by optimality but also by the frequencies of strategies adopted by others, and is therefore frequency dependent. Behavioral evolution is therefore influenced by both the physical environment and interactions between other individuals. An example of how changes in geography can make a strategy susceptible to alternative strategies is the parasitization of the African honey bee, A. m. scutellata.
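The invasion criterion above can be made concrete with a small sketch. The payoff numbers below (V = 2 for the contested resource, C = 4 for an escalated fight) are illustrative assumptions, not values from any study cited in this entry; the function simply applies Maynard Smith's two ESS conditions to the pure strategies of a Hawk–Dove game.

```python
# Hedged sketch of Maynard Smith's ESS conditions for pure strategies.
# S is an ESS if, for every alternative strategy M, either E(S,S) > E(M,S),
# or E(S,S) == E(M,S) and E(S,M) > E(M,M).  Payoffs are illustrative only.
V, C = 2.0, 4.0   # resource value and fight cost (assumed numbers, C > V)
E = {
    ("H", "H"): (V - C) / 2,  # hawks escalate: split the value, pay the fight cost
    ("H", "D"): V,            # hawk takes everything from a dove
    ("D", "H"): 0.0,          # dove retreats against a hawk
    ("D", "D"): V / 2,        # doves share peacefully
}
STRATEGIES = ("H", "D")

def is_ess(s: str) -> bool:
    for m in STRATEGIES:
        if m == s:
            continue
        if E[(s, s)] > E[(m, s)]:
            continue                      # the rare mutant does worse against S
        if E[(s, s)] == E[(m, s)] and E[(s, m)] > E[(m, m)]:
            continue                      # tie against S, but S beats M among mutants
        return False
    return True

for s in STRATEGIES:
    print(f"{s} is an ESS: {is_ess(s)}")
# With C > V neither pure strategy is an ESS; the ESS is mixed, playing Hawk
# with probability V/C -- the frequency-dependent outcome described above.
```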
Resource defense
The term economic defendability was first introduced by Jerram Brown in 1964. Economic defendability states that defense of a resource has costs, such as energy expenditure or risk of injury, as well as benefits of priority access to the resource. Territorial behavior arises when the benefits are greater than the costs. Studies of the golden-winged sunbird have validated the concept of economic defendability. Comparing the energetic costs a sunbird expends in a day to the extra nectar gained by defending a territory, researchers showed that birds only became territorial when they were making a net energetic profit. When resources are at low density, the gains from excluding others may not be sufficient to pay for the cost of territorial defense. In contrast, when resource availability is high, there may be so many intruders that the defender would have no time to make use of the resources made available by defense. Sometimes the economics of resource competition favors shared defense. An example is the feeding territories of the white wagtail. The white wagtails feed on insects washed up by the river onto the bank, which acts as a renewing food supply. If any intruders harvested their territory then the prey would quickly become depleted, but sometimes territory owners tolerate a second bird, known as a satellite. The two sharers would then move out of phase with one another, resulting in decreased feeding rate but also increased defense, illustrating advantages of group living.

Ideal free distribution
One of the major models used to predict the distribution of competing individuals amongst resource patches is the ideal free distribution model. Within this model, resource patches can be of variable quality, and there is no limit to the number of individuals that can occupy and extract resources from a particular patch. Competition within a particular patch means that the benefit each individual receives from exploiting a patch decreases logarithmically with increasing number of competitors sharing that resource patch. The model predicts that individuals will initially flock to higher-quality patches until the costs of crowding bring the benefits of exploiting them in line with the benefits of being the only individual on the lesser-quality resource patch. After this point has been reached, individuals will alternate between exploiting the higher-quality patches and the lower-quality patches in such a way that the average benefit for all individuals in both patches is the same. This model is ideal in that individuals have complete information about the quality of a resource patch and the number of individuals currently exploiting it, and free in that individuals are freely able to choose which resource patch to exploit. An experiment by Manfred Milinski in 1979 demonstrated that feeding behavior in three-spined sticklebacks follows an ideal free distribution. Six fish were placed in a tank, and food items were dropped into opposite ends of the tank at different rates. The rate of food deposition at one end was set at twice that of the other end, and the fish distributed themselves with four individuals at the faster-depositing end and two individuals at the slower-depositing end. In this way, the average feeding rate was the same for all of the fish in the tank.
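The equal-intake logic of the ideal free distribution can be checked with simple arithmetic. The sketch below was written for this entry; the 2:1 input ratio and six fish mirror the stickleback experiment just described, but the calculation is the generic textbook prediction, not the original analysis.

```python
# Hedged sketch of the ideal free distribution's equal-intake prediction:
# competitors split between patches in proportion to patch input rates,
# so that per-individual intake is the same everywhere.
input_rates = {"fast end": 2.0, "slow end": 1.0}   # food items per unit time (illustrative)
n_fish = 6

total_rate = sum(input_rates.values())
for patch, rate in input_rates.items():
    n_here = n_fish * rate / total_rate            # predicted number of fish on this patch
    intake = rate / n_here                         # per-fish intake at equilibrium
    print(f"{patch}: {n_here:.0f} fish, per-fish intake {intake:.2f}")
# Prediction: 4 fish at the fast end, 2 at the slow end, equal intake for all,
# matching the distribution observed in the experiment described above.
```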
Mating strategies and tactics
As with any competition for resources, species across the animal kingdom may also engage in competitions for mating. If one considers mates or potential mates as a resource, these sexual partners can be randomly distributed amongst resource pools within a given environment. Following the ideal free distribution model, suitors distribute themselves amongst the potential mates in an effort to maximize their chances of mating or the number of potential matings. For all competitors, males of a species in most cases, there are variations in both the strategies and tactics used to obtain matings. Strategies generally refer to the genetically determined behaviors that can be described as conditional. Tactics refer to the subset of behaviors within a given genetic strategy. Thus it is not difficult for a great many variations in mating strategies to exist in a given environment or species. In an experiment conducted by Anthony Arak, where playback of synthetic calls from male natterjack toads was used to manipulate the behavior of males in a chorus, the difference between strategies and tactics is clear. While small and immature, male natterjack toads adopted a satellite tactic to parasitize larger males. Though large males on average still retained greater reproductive success, smaller males were able to intercept matings. When the large males of the chorus were removed, smaller males adopted a calling behavior, no longer competing against the loud calls of larger males. When smaller males grew larger and their calls became more competitive, they started calling and competing directly for mates.

Sexual selection

Mate choice by resources
In many sexually reproducing species, such as mammals, birds, and amphibians, females are able to bear offspring for a certain time period, during which the males are free to mate with other available females, and therefore can father many more offspring to pass on their genes. The fundamental difference between male and female reproduction mechanisms determines the different strategies each sex employs to maximize their reproductive success. For males, reproductive success is limited by access to females, while females are limited by their access to resources. In this sense, females can be much choosier than males because they have to bet on the resources provided by the males to ensure reproductive success. Resources usually include nest sites, food and protection. In some cases, the males provide all of them (e.g. sedge warblers). The females dwell in their chosen males' territories for access to these resources. The males gain ownership of the territories through male–male competition that often involves physical aggression. Only the largest and strongest males manage to defend the best quality nest sites. Females choose males by inspecting the quality of different territories or by looking at some male traits that can indicate the quality of resources. One example of this is the grayling butterfly (Hipparchia semele), where males engage in complex flight patterns to decide who defends a particular territory. The female grayling butterfly chooses a male based on the most optimal location for oviposition. Sometimes, males leave after mating. The only resource that a male provides is a nuptial gift, such as protection or food, as seen in Drosophila subobscura. The female can evaluate the quality of the protection or food provided by the male so as to decide whether to mate and how long she is willing to copulate.

Mate choice by genes
When males' only contribution to offspring is their sperm, females are particularly choosy. With this high level of female choice, sexual ornaments are seen in males, where the ornaments reflect the male's social status.
Two hypotheses have been proposed to conceptualize the genetic benefits from female mate choice. First, the good genes hypothesis suggests that female choice is for higher genetic quality and that this preference is favored because it increases fitness of the offspring. This includes Zahavi's handicap hypothesis and Hamilton and Zuk's host–parasite arms race. Zahavi's handicap hypothesis was proposed within the context of looking at elaborate male sexual displays. He suggested that females favor ornamented traits because they are handicaps and are indicators of the male's genetic quality. Since these ornamented traits are hazards, the male's survival must be indicative of his high genetic quality in other areas. In this way, the degree to which a male expresses his sexual display indicates to the female his genetic quality. Zuk and Hamilton proposed a hypothesis after observing disease as a powerful selective pressure on a rabbit population. They suggested that sexual displays were indicators of resistance to disease on a genetic level. Such "choosiness" from the female individuals can be seen in wasp species too, especially among Polistes dominula wasps. The females tend to prefer males with smaller, more elliptically shaped spots over those with larger and more irregularly shaped spots. Those males would have reproductive superiority over males with irregular spots. In marbled newts, females show a preference for mates with larger crests. This, however, is not considered a handicap as it does not negatively affect males' chances of survival. It is simply a trait females show preference for when choosing their mate, as it is an indication of health and fitness. Fisher's hypothesis of runaway sexual selection suggests that female preference is genetically correlated with male traits and that the preference co-evolves with the evolution of that trait, thus the preference is under indirect selection. Fisher suggests that female preference began because the trait indicated the male's quality. The female preference spread, so that the females' offspring now benefited not only from the higher quality conferred by the specific trait but also from greater attractiveness to mates. Eventually, the trait only represents attractiveness to mates, and no longer represents increased survival. An example of mate choice by genes is seen in the cichlid fish Tropheus moorii, where males provide no parental care. An experiment found that a female T. moorii is more likely to choose a mate with the same color morph as her own. In another experiment, females have been shown to share preferences for the same males when given two to choose from, meaning some males get to reproduce more often than others.

Sensory bias
The sensory bias hypothesis states that the preference for a trait evolves in a non-mating context, and is then exploited by one sex to obtain more mating opportunities. The competitive sex evolves traits that exploit a pre-existing bias that the choosy sex already possesses. This mechanism is thought to explain remarkable trait differences in closely related species because it produces a divergence in signaling systems, which leads to reproductive isolation. Sensory bias has been demonstrated in guppies, freshwater fish from Trinidad and Tobago. In this mating system, female guppies prefer to mate with males with more orange body coloration. However, outside of a mating context, both sexes prefer animate orange objects, which suggests that the preference originally evolved in another context, like foraging.
Orange fruits are a rare treat that fall into streams where the guppies live. The ability to find these fruits quickly is an adaptive quality that has evolved outside of a mating context. Sometime after the affinity for orange objects arose, male guppies exploited this preference by incorporating large orange spots to attract females. Another example of sensory exploitation is in the water mite Neumania papillator, an ambush predator that hunts copepods (small crustaceans) passing by in the water column. When hunting, N. papillator adopts a characteristic stance termed the 'net stance' - their first four legs are held out into the water column, with their four hind legs resting on aquatic vegetation; this allows them to detect vibrational stimuli produced by swimming prey and use this to orient towards and clutch at prey. During courtship, males actively search for females - if a male finds a female, he slowly circles around the female whilst trembling his first and second leg near her. Male leg trembling causes females (who were in the 'net stance') to orient towards and often clutch the male. This did not damage the male or deter further courtship; the male then deposited spermatophores and began to vigorously fan and jerk his fourth pair of legs over the spermatophore, generating a current of water that passed over the spermatophores and towards the female. Sperm packet uptake by the female would sometimes follow. Heather Proctor hypothesised that the vibrations made by the males' trembling legs mimic the vibrations that females detect from swimming prey - this would trigger the female prey-detection responses, causing females to orient and then clutch at males, mediating courtship. If this were true and males were exploiting female predation responses, then hungry females should be more receptive to male trembling – Proctor found that unfed captive females did orient and clutch at males significantly more than fed captive females did, consistent with the sensory exploitation hypothesis. Other examples of the sensory bias mechanism include traits in auklets, wolf spiders, and manakins. Further experimental work is required to reach a fuller understanding of the prevalence and mechanisms of sensory bias.

Sexual conflict
Sexual conflict, in some form or another, may very well be inherent in the ways most animals reproduce. Females invest more in offspring prior to mating, due to the differences in gametes in species that exhibit anisogamy, and often invest more in offspring after mating. This unequal investment leads, on one hand, to intense competition between males for mates and, on the other hand, to females choosing among males for better access to resources and good genes. Because of differences in mating goals, males and females may have very different preferred outcomes to mating. Sexual conflict occurs whenever the preferred outcome of mating is different for the male and female. This difference, in theory, should lead to each sex evolving adaptations that bias the outcome of reproduction towards its own interests. This sexual competition leads to sexually antagonistic coevolution between males and females, resulting in what has been described as an evolutionary arms race between males and females.

Conflict over mating
Males' reproductive successes are often limited by access to mates, whereas females' reproductive successes are more often limited by access to resources.
Thus, for a given sexual encounter, it benefits the male to mate, but benefits the female to be choosy and resist. For example, male small tortoiseshell butterflies compete to gain the best territory to mate. Another example of this conflict can be found in the Eastern carpenter bee, Xylocopa virginica. Males of this species are limited in reproduction primarily by access to mates, so they claim a territory and wait for a female to pass through. Big males are, therefore, more successful in mating because they claim territories near the female nesting sites that are more sought after. Smaller males, on the other hand, monopolize less competitive sites in foraging areas so that they may mate with reduced conflict. Another example of this is Sepsis cynipsea, where males of the species mount females to guard them from other males and remain on the female, attempting to copulate, until the female either shakes them off or consents to mating. Similarly, the neriid fly Derocephalus angusticollis demonstrates mate guarding, the male using its long limbs to hold onto the female as well as to push other males away during copulation. Extreme manifestations of this conflict are seen throughout nature. For example, male Panorpa scorpionflies attempt to force copulation. Male scorpionflies usually acquire mates by presenting them with edible nuptial gifts in the forms of salivary secretions or dead insects. However, some males attempt to force copulation by grabbing females with a specialized abdominal organ without offering a gift. Forced copulation is costly to the female, as she does not receive the food from the male and has to search for food herself (costing time and energy), while it is beneficial for the male, as he does not need to find a nuptial gift. In other cases, however, it pays for the female to gain more matings and for her social mate to prevent these so as to guard paternity. For example, in many socially monogamous birds, males follow females closely during their fertile periods and attempt to chase away any other males to prevent extra-pair matings. The female may attempt to sneak off to achieve these extra matings. In species where males are incapable of constant guarding, the social male may frequently copulate with the female so as to swamp rival males' sperm. Sexual conflict after mating has also been shown to occur in both males and females. Males employ a diverse array of tactics to increase their success in sperm competition. These can include removing other males' sperm from females, displacing other males' sperm by flushing out prior inseminations with large amounts of their own sperm, creating copulatory plugs in females' reproductive tracts to prevent future matings with other males, spraying females with anti-aphrodisiacs to discourage other males from mating with the female, and producing sterile parasperm to protect fertile eusperm in the female's reproductive tract. For example, the male spruce bud moth (Zeiraphera canadensis) secretes an accessory gland protein during mating that makes females unattractive to other males and thus prevents them from future copulation. The Rocky Mountain parnassian also exhibits this type of sexual conflict when the male butterflies deposit a waxy genital plug onto the tip of the female's abdomen that physically prevents the female from mating again. Males can also prevent future mating by transferring an anti-aphrodisiac to the female during mating.
This behavior is seen in butterfly species such as Heliconius melpomene, where males transfer a compound that causes the female to smell like a male butterfly and thus deters any future potential mates. Furthermore, males may control the strategic allocation of sperm, producing more sperm when females are more promiscuous. All these methods are meant to ensure that females are more likely to produce offspring belonging to the males who use the method. Females also control the outcomes of matings, and there exists the possibility that females choose sperm (cryptic female choice). A dramatic example of this is the feral fowl Gallus gallus. In this species, females prefer to copulate with dominant males, but subordinate males can force matings. In these cases, the female is able to eject the subordinate male's sperm using cloacal contractions.

Parental care and family conflicts
Parental care is the investment a parent puts into their offspring, which includes protecting and feeding the young, preparing burrows or nests, and providing eggs with yolk. There is great variation in parental care in the animal kingdom. In some species, the parents may not care for their offspring at all, while in others the parents exhibit single-parental or even bi-parental care. As with other topics in behavioral ecology, interactions within a family involve conflicts. These conflicts can be broken down into three general types: sexual (male–female) conflict, parent–offspring conflict, and sibling conflict.

Types of parental care
There are many different patterns of parental care in the animal kingdom. The patterns can be explained by physiological constraints or ecological conditions, such as mating opportunities. In invertebrates, there is no parental care in most species because it is more favorable for parents to produce a large number of eggs whose fate is left to chance than to protect a few individual young. In other cases, parental care is indirect, manifested via actions taken before the offspring are produced but nonetheless essential for their survival; for example, female Lasioglossum figueresi sweat bees excavate a nest, construct brood cells, and stock the cells with pollen and nectar before they lay their eggs, so when the larvae hatch they are sheltered and fed, but the females die without ever interacting with their brood. In birds, biparental care is the most common, because reproductive success directly depends on the parents' ability to feed their chicks. Two parents can feed twice as many young, so it is more favorable for birds to have both parents delivering food. In mammals, female-only care is the most common. This is most likely because females are internally fertilized and so hold the young inside for a prolonged period of gestation, which provides males with the opportunity to desert. Females also feed the young through lactation after birth, so males are not required for feeding. Male parental care is only observed in species where they contribute to feeding or carrying of the young, such as in marmosets. Among fish, there is no parental care in 79% of bony fish species. In fish with parental care, it is usually limited to selecting, preparing, and defending a nest, as seen in sockeye salmon, for example. Also, parental care in fish, if any, is primarily done by males, as seen in gobies and redlip blennies. The cichlid fish V. moorii exhibits biparental care. In species with internal fertilization, the female is usually the one to take care of the young.
In cases where fertilization is external, the male becomes the main caretaker.

Familial conflict
Familial conflict is a result of trade-offs as a function of lifetime parental investment. Parental investment was defined by Robert Trivers in 1972 as "any investment by the parent in an individual offspring that increases the offspring's chance of surviving at the cost of the parent's ability to invest in other offspring". Parental investment includes behaviors like guarding and feeding. Each parent has a limited amount of parental investment over the course of its lifetime. Investment trade-offs in offspring quality and quantity within a brood, and trade-offs between current and future broods, lead to conflict over how much parental investment to provide and in whom parents should invest. There are three major types of familial conflict: sexual, parent–offspring, and sibling–sibling conflict.

Sexual conflict
There is conflict among parents as to who should provide the care as well as how much care to provide. Each parent must decide whether or not to stay and care for their offspring, or to desert them. This decision is best modeled by game-theoretic approaches to evolutionarily stable strategies (ESS), where the best strategy for one parent depends on the strategy adopted by the other parent. Recent research has found response matching in parents who determine how much care to invest in their offspring. Studies found that parent great tits match their partner's increased care-giving efforts with increased provisioning rates of their own. This cued parental response is a type of behavioral negotiation between parents that leads to stabilized compensation. Sexual conflicts can give rise to antagonistic co-evolution between the sexes as each tries to get the other sex to care more for the offspring. For example, in the waltzing fly Prochyliza xanthostoma, ejaculate feeding maximizes female reproductive success and minimizes the female's chance of mating multiple times. Evidence suggests that the sperm evolved to prevent female waltzing flies from mating multiple times in order to ensure the male's paternity.

Parent–offspring conflict
According to Robert Trivers's theory on relatedness, each offspring is related to itself by 1, but is only 0.5 related to its parents and siblings. Genetically, offspring are predisposed to behave in their own self-interest while parents are predisposed to behave equally to all their offspring, including both current and future ones. Offspring selfishly try to take more than their fair shares of parental investment, while parents try to spread out their parental investment equally amongst their present young and future young. There are many examples of parent–offspring conflict in nature. One manifestation of this is asynchronous hatching in birds. A behavioral ecology hypothesis is known as Lack's brood reduction hypothesis (named after David Lack). Lack's hypothesis posits an evolutionary and ecological explanation as to why birds lay a series of eggs with an asynchronous delay leading to nestlings of mixed age and weights. According to Lack, this brood behavior is an ecological insurance that allows the larger birds to survive in poor years and all birds to survive when food is plentiful. We also see sex-ratio conflict between the queen and her workers in social hymenoptera. Because of haplodiploidy, the workers (offspring) prefer a 3:1 female to male sex allocation while the queen prefers a 1:1 sex ratio.
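The 3:1 versus 1:1 preference follows from standard haplodiploid relatedness coefficients. The sketch below is the usual textbook calculation (it assumes a single, once-mated queen, so that workers are full sisters); it is included here for illustration and is not drawn from the studies this entry cites.

```python
# Hedged sketch of the textbook sex-allocation argument under haplodiploidy,
# assuming a singly mated queen (so workers are full sisters).
worker_to_sister = 0.75    # sisters share all paternal genes plus half the maternal genes
worker_to_brother = 0.25   # brothers carry only maternal genes
queen_to_daughter = 0.5
queen_to_son = 0.5

# Preferred female:male investment tracks relative relatedness to each sex.
print("workers' preferred female:male ratio:",
      worker_to_sister / worker_to_brother)   # 3.0  -> roughly 3:1
print("queen's preferred female:male ratio:",
      queen_to_daughter / queen_to_son)       # 1.0  -> 1:1
```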
Both the queen and the workers try to bias the sex ratio in their favor. In some species, the workers gain control of the sex ratio, while in other species, like B. terrestris, the queen has a considerable amount of control over the colony sex ratio. Lastly, there has been recent evidence regarding genomic imprinting that is a result of parent–offspring conflict. Paternal genes in offspring demand more maternal resources than maternal genes in the same offspring, and vice versa. This has been shown in imprinted genes like insulin-like growth factor-II.

Parent–offspring conflict resolution
Parents need an honest signal from their offspring that indicates their level of hunger or need, so that the parents can distribute resources accordingly. Offspring want more than their fair share of resources, so they exaggerate their signals to wheedle more parental investment. However, this conflict is countered by the cost of excessive begging. Not only does excessive begging attract predators, but it also retards chick growth if begging goes unrewarded. Thus, the cost of increased begging enforces offspring honesty. Another resolution for parent–offspring conflict is that parental provisioning and offspring demand have actually coevolved, so that there is no obvious underlying conflict. Cross-fostering experiments in great tits (Parus major) have shown that offspring beg more when their biological mothers are more generous. Therefore, it seems that the willingness to invest in offspring is co-adapted to offspring demand.

Sibling–sibling conflict
The lifetime parental investment is the fixed amount of parental resources available for all of a parent's young, and an offspring wants as much of it as possible. Siblings in a brood often compete for parental resources by trying to gain more than their fair share of what their parents can offer. Nature provides numerous examples in which sibling rivalry escalates to such an extreme that one sibling tries to kill off broodmates to maximize parental investment (see siblicide). In the Galápagos fur seal, the second pup of a female is usually born when the first pup is still suckling. This competition for the mother's milk is especially fierce during periods of food shortage such as an El Niño year, and it usually results in the older pup directly attacking and killing the younger one. In some bird species, sibling rivalry is also abetted by the asynchronous hatching of eggs. In the blue-footed booby, for example, the first egg in a nest is hatched four days before the second one, resulting in the elder chick having a four-day head start in growth. When the elder chick falls 20–25% below its expected weight threshold, it attacks its younger sibling and drives it from the nest. Sibling relatedness in a brood also influences the level of sibling–sibling conflict. In a study on passerine birds, it was found that chicks begged more loudly in species with higher levels of extra-pair paternity.

Brood parasitism
Some animals deceive other species into providing all parental care. These brood parasites selfishly exploit their host parents and the hosts' offspring. The common cuckoo is a well-known example of a brood parasite. Female cuckoos lay a single egg in the nest of the host species and when the cuckoo chick hatches, it ejects all the host eggs and young. Other examples of brood parasites include honeyguides, cowbirds, and the large blue butterfly. Brood parasite offspring have many strategies to induce their host parents to invest parental care.
Studies show that the common cuckoo uses vocal mimicry to reproduce the sound of multiple hungry host young to solicit more food. Other cuckoos use visual deception with their wings to exaggerate the begging display. False gapes from brood parasite offspring cause host parents to collect more food. Another example of brood parasitism is seen in Phengaris butterflies such as Phengaris rebeli and Phengaris arion, which differ from the cuckoo in that they do not oviposit directly in the nest of the host, an ant species, Myrmica schencki. Rather, the butterfly larvae release chemicals that deceive the ants into believing that they are ant larvae, causing the ants to bring the butterfly larvae back to their own nests to feed them. Other examples of brood parasites are Polistes sulcifer, a paper wasp that has lost the ability to build its own nests, so females lay their eggs in the nest of a host species, Polistes dominula, and rely on the host workers to take care of their brood, as well as Bombus bohemicus, a bumblebee that relies on host workers of various other Bombus species. Similarly, in Eulaema meriana, some Leucospidae wasps exploit the brood cells and nest for shelter and food from the bees. Vespula austriaca is another wasp in which the females force the host workers to feed and take care of the brood. Bombus hyperboreus, an Arctic bee species, is also classified as a brood parasite in that it attacks and enslaves other species within its subgenus, Alpinobombus, to propagate its population.

Mating systems
Various types of mating systems include monogamy, polygyny, polyandry, and promiscuity. Each is differentiated by the sexual behavior between mates, such as which males mate with certain females. An influential paper by Stephen Emlen and Lewis Oring (1977) argued that two main factors of animal behavior influence the diversity of mating systems: the relative accessibility that each sex has to mates, and the parental desertion by either sex.

Mating systems with no male parental care
In a system that does not have male parental care, resource dispersion, predation, and the effects of social living primarily influence female dispersion, which in turn influences male dispersion. Since males' primary concern is female acquisition, the males either indirectly or directly compete for the females. In direct competition, the males are directly focused on the females. Blue-headed wrasse demonstrate the behavior in which females follow resources—such as good nest sites—and males follow the females. Conversely, species with males that exemplify indirectly competitive behavior tend towards the males' anticipation of the resources desired by females and their subsequent effort to control or acquire these resources, which helps them to achieve success with females. Grey-sided voles demonstrate indirect male competition for females. The males were experimentally observed to home in on the sites with the best food in anticipation of females settling in these areas. Males of Euglossa imperialis, a non-social bee species, also demonstrate indirect competitive behavior by forming aggregations of territories, which can be considered leks, to defend fragrant-rich primary territories. These aggregations are largely facultative, since the more suitable fragrant-rich sites there are, the more habitable territories there are to inhabit, giving females of this species a large selection of males with whom to potentially mate.
Leks and choruses are another behavior among the phenomena of male competition for females. Due to the resource-poor nature of the territories that lekking males often defend, it is difficult to categorize them as indirect competitors. For example, male ghost moths display in leks to attract a female mate. Additionally, it is difficult to classify them as direct competitors, seeing as they put a great deal of effort into the defense of their territories before females arrive, and upon female arrival they put forth great mating displays to attract the females to their individual sites. These observations make it difficult to determine whether female or resource dispersion primarily influences male aggregation, especially in light of the apparent difficulty that males may have defending resources and females in such densely populated areas. Because the reason for male aggregation into leks is unclear, five hypotheses have been proposed. These postulates propose the following as reasons for male lekking: hotspot, predation reduction, increased female attraction, hotshot males, and facilitation of female choice. With all of the mating behaviors discussed, the primary factors influencing differences within and between species are ecology, social conflicts, and life history differences. In some other instances, neither direct nor indirect competition is seen. Instead, in species like the Edith's checkerspot butterfly, males' efforts are directed at acquisition of females and they exhibit indiscriminate mate location behavior, where, given the low cost of mistakes, they blindly attempt to mate both correctly with females and incorrectly with other objects.

Mating systems with male parental care

Monogamy
Monogamy is the mating system in 90% of birds, possibly because each male and female has a greater number of offspring if they share in raising a brood. In obligate monogamy, males feed females on the nest, or share in incubation and chick-feeding. In some species, males and females form lifelong pair bonds. Monogamy may also arise from limited opportunities for polygamy, due to strong competition among males for mates, females suffering from loss of male help, and female–female aggression.

Polygyny
In birds, polygyny occurs when males indirectly monopolize females by controlling resources. In species where males normally do not contribute much to parental care, females suffer relatively little or not at all. In other species, however, females suffer through the loss of male contribution, and the cost of having to share resources that the male controls, such as nest sites or food. In some cases, a polygynous male may control a high-quality territory, so that for the female the benefits of polygyny may outweigh the costs.

Polyandry threshold
There also seems to be a "polyandry threshold" where males may do better by agreeing to share a female instead of maintaining a monogamous mating system. Situations that may lead to cooperation among males include when food is scarce and when there is intense competition for territories or females. For example, male lions sometimes form coalitions to gain control of a pride of females. In some populations of Galapagos hawks, groups of males would cooperate to defend one breeding territory. The males would share matings with the female and share paternity with the offspring.

Female desertion and sex role reversal
In birds, desertion often happens when food is abundant, so the remaining partner is better able to raise the young unaided.
Desertion also occurs if there is a good chance for a parent to gain another mate, which depends on environmental and populational factors. Some birds, such as the phalaropes, have reversed sex roles, where females are larger and more brightly colored and compete for males to incubate their clutches. In jacanas, the female is larger than the male and her territory could overlap the multiple territories of up to four males. In the frog species P. bibronii, the female fertilizes multiple nests, and the male is left to tend to each nest while the female moves on.

Social behaviors
Animals cooperate with each other to increase their own fitness. These altruistic, and sometimes spiteful, behaviors can be explained by Hamilton's rule, which states that a behavior is favored when rB - C > 0, where r = relatedness, B = benefits, and C = costs.

Kin selection
Kin selection refers to evolutionary strategies where an individual acts to favor the reproductive success of relatives, or kin, even if the action incurs some cost to the organism's own survival and ability to procreate. John Maynard Smith coined the term in 1964, although the concept was referred to by Charles Darwin, who suggested that helping relatives would be favored by group selection. Mathematical descriptions of kin selection were initially offered by R. A. Fisher in 1930 and J. B. S. Haldane in 1932 and 1955. W. D. Hamilton popularized the concept later, including the mathematical treatment by George Price in 1963 and 1964. Kin selection predicts that individuals will incur personal costs in favor of one or more individuals because this can maximize their genetic contribution to future generations. For example, an organism may be inclined to expend great time and energy in parental investment to rear offspring since this future generation may be better suited for propagating genes that are highly shared between the parent and offspring. Ultimately, the initial actor performs apparently altruistic actions for kin to enhance its own reproductive fitness. In particular, organisms are hypothesized to act in favor of kin depending on their genetic relatedness. So, individuals are inclined to act altruistically for siblings, grandparents, cousins, and other relatives, but to differing degrees.

Inclusive fitness
Inclusive fitness describes the component of reproductive success in both a focal individual and their relatives. Importantly, the measure embodies the sum of direct and indirect fitness and the change in their reproductive success based on the actor's behavior. That is, it captures the effect an individual's behaviors have on being personally better-suited to reproduce offspring, and on aiding descendant and non-descendant relatives in their reproductive efforts. Natural selection is predicted to push individuals to behave in ways that maximize their inclusive fitness. Studying inclusive fitness is often done using predictions from Hamilton's rule.

Kin recognition

Genetic cues
One possible method of kin selection is based on genetic cues that can be recognized phenotypically. Genetic recognition has been exemplified in a species that is usually not thought of as a social creature: amoebae. Social amoebae form fruiting bodies when starved for food. These amoebae preferentially form slugs and fruiting bodies with members of their own lineage, which are clonally related. The genetic cue comes from variable lag genes, which are involved in signaling and adhesion between cells.
Kin can also be recognized by a genetically determined odor, as studied in the primitively social sweat bee, Lasioglossum zephyrus. These bees can even recognize relatives they have never met and roughly determine relatedness. The Brazilian stingless bee Schwarziana quadripunctata uses a distinct combination of chemical hydrocarbons to recognize and locate kin. Each chemical odor, emitted from the organism's epicuticle, is unique and varies according to age, sex, location, and hierarchical position. Similarly, individuals of the stingless bee species Trigona fulviventris can distinguish kin from non-kin through recognition of a number of compounds, including hydrocarbons and fatty acids that are present in their wax and in floral oils from plants used to construct their nests. In the species Osmia rufa, kin selection has also been associated with mate selection: females preferentially select mates to whom they are more closely genetically related. Environmental cues There are two simple rules that animals follow to determine who is kin. These rules can be exploited, but they exist because they are generally successful. The first rule is 'treat anyone in my home as kin.' This rule is readily seen in the reed warbler, a bird species that only attends to chicks in its own nest. If one of its own chicks is placed outside the nest, the parent bird ignores it. This rule can sometimes lead to odd results, especially if a parasitic bird lays eggs in the reed warbler nest. For example, an adult cuckoo may sneak its egg into the nest, and once the cuckoo hatches, the reed warbler parent feeds the invading bird like its own chick. Even with the risk of exploitation, the rule generally proves successful. The second rule, named by Konrad Lorenz as 'imprinting,' states that those you grow up with are kin. Several species exhibit this behavior, including, but not limited to, Belding's ground squirrel. Experimentation with these squirrels showed that, regardless of true genetic relatedness, those that were reared together rarely fought. Further research suggests that some genetic recognition is also at work, as siblings that were raised apart were less aggressive toward one another than non-relatives reared apart. Another way animals may recognize their kin is through the exchange of unique signals. While song is often considered a sexual trait between males and females, male–male singing also occurs. For example, male vinegar flies Zaprionus tuberculatus can recognize each other by song. Cooperation Cooperation is broadly defined as behavior that provides a benefit to another individual and that specifically evolved for that benefit. This excludes behavior that has not been expressly selected for providing a benefit to another individual, because there are many commensal and parasitic relationships in which the behavior of one individual (which has evolved to benefit that individual and no others) is taken advantage of by other organisms. Stable cooperative behavior requires that it provide a benefit to both the actor and the recipient, though the benefit to the actor can take many different forms. Within species Within-species cooperation occurs among members of the same species. Examples of intraspecific cooperation include cooperative breeding (such as in weeper capuchins) and cooperative foraging (such as in wolves).
There are also forms of cooperative defense mechanisms, such as the "fighting swarm" behavior used by the stingless bee Tetragonula carbonaria. Much of this behavior occurs due to kin selection. Kin selection allows cooperative behavior to evolve where the actor receives no direct benefits from the cooperation. Cooperation (without kin selection) must evolve to provide benefits to both the actor and recipient of the behavior. This includes reciprocity, where the recipient of the cooperative behavior repays the actor at a later time. This may occur in vampire bats but it is uncommon in non-human animals. Cooperation can occur willingly between individuals when both benefit directly as well. Cooperative breeding, where one individual cares for the offspring of another, occurs in several species, including wedge-capped capuchin monkeys. Cooperative behavior may also be enforced, where their failure to cooperate results in negative consequences. One of the best examples of this is worker policing, which occurs in social insect colonies. The cooperative pulling paradigm is a popular experimental design used to assess if and under which conditions animals cooperate. It involves two or more animals pulling rewards towards themselves via an apparatus they can not successfully operate alone. Between species Cooperation can occur between members of different species. For interspecific cooperation to be evolutionarily stable, it must benefit individuals in both species. Examples include pistol shrimp and goby fish, nitrogen fixing microbes and legumes, ants and aphids. In ants and aphids, aphids secrete a sugary liquid called honeydew, which ants eat. The ants provide protection to the aphids against predators, and, in some instances, raise the aphid eggs and larvae inside the ant colony. This behavior is analogous to human domestication. The genus of goby fish, Elacatinus also demonstrate cooperation by removing and feeding on ectoparasites of their clients. The species of wasp Polybia rejecta and ants Azteca chartifex show a cooperative behavior protecting one another's nests from predators. Market economics often govern the details of the cooperation: e.g. the amount exchanged between individual animals follow the rules of supply and demand. Spite Hamilton's rule can also predict spiteful behaviors between non-relatives. A spiteful behavior is one that is harmful to both the actor and to the recipient. Spiteful behavior is favored if the actor is less related to the recipient than to the average member of the population making r negative and if rB-C is still greater than zero. Spite can also be thought of as a type of altruism because harming a non-relative, by taking his resources for example, could also benefit a relative, by allowing him access to those resources. Furthermore, certain spiteful behaviors may provide harmful short term consequences to the actor but also give long term reproductive benefits. Many behaviors that are commonly thought of as spiteful are actually better explained as being selfish, that is benefiting the actor and harming the recipient, and true spiteful behaviors are rare in the animal kingdom. An example of spite is the sterile soldiers of the polyembryonic parasitoid wasp. A female wasp lays a male and a female egg in a caterpillar. The eggs divide asexually, creating many genetically identical male and female larvae. 
Sterile soldier wasps also develop and attack the relatively unrelated brother larvae so that the genetically identical sisters have more access to food. Another example is bacteria that release bacteriocins. The bacteria that releases the bacteriocin may have to die to do so, but most of the harm is to unrelated individuals who are killed by the bacteriocin. This is because the ability to produce and release the bacteriocin is linked to an immunity to it. Therefore, close relatives to the releasing cell are less likely to die than non-relatives. Altruism and conflict in social insects Many insect species of the order Hymenoptera (bees, ants, wasps) are eusocial. Within the nests or hives of social insects, individuals engage in specialized tasks to ensure the survival of the colony. Dramatic examples of these specializations include changes in body morphology or unique behaviors, such as the engorged bodies of the honeypot ant Myrmecocystus mexicanus or the waggle dance of honey bees and a wasp species, Vespula vulgaris. In many, but not all social insects, reproduction is monopolized by the queen of the colony. Due to the effects of a haplodiploid mating system, in which unfertilized eggs become male drones and fertilized eggs become worker females, average relatedness values between sister workers can be higher than those seen in humans or other eutherian mammals. This has led to the suggestion that kin selection may be a driving force in the evolution of eusociality, as individuals could provide cooperative care that establishes a favorable benefit to cost ratio (rB-c > 0). However, not all social insects follow this rule. In the social wasp Polistes dominula, 35% of the nest mates are unrelated. In many other species, unrelated individuals only help the queen when no other options are present. In this case, subordinates work for unrelated queens even when other options may be present. No other social insect submits to unrelated queens in this way. This seemingly unfavorable behavior parallels some vertebrate systems. It is thought that this unrelated assistance is evidence of altruism in P. dominula. Cooperation in social organisms has numerous ecological factors that can determine the benefits and costs associated with this form of organization. One suggested benefit is a type of "life insurance" for individuals who participate in the care of the young. In this instance, individuals may have a greater likelihood of transmitting genes to the next generation when helping in a group compared to individual reproduction. Another suggested benefit is the possibility of "fortress defense", where soldier castes threaten or attack intruders, thus protecting related individuals inside the territory. Such behaviors are seen in the snapping shrimp Synalpheus regalis and gall-forming aphid Pemphigus spyrothecae. A third ecological factor that is posited to promote eusociality is the distribution of resources: when food is sparse and concentrated in patches, eusociality is favored. Evidence supporting this third factor comes from studies of naked mole-rats and Damaraland mole-rats, which have communities containing a single pair of reproductive individuals. Conflicts in social insects Although eusociality has been shown to offer many benefits to the colony, there is also potential for conflict. 
Examples include the sex-ratio conflict and worker policing seen in certain species of social Hymenoptera such as Dolichovespula media, Dolichovespula sylvestris, Dolichovespula norwegica and Vespula vulgaris. The queen and the worker wasps either indirectly kill the laying workers' offspring by neglecting them or directly condemn them by cannibalizing and scavenging them. The sex-ratio conflict arises from a relatedness asymmetry, which is caused by the haplodiploid nature of Hymenoptera. For instance, workers are most related to each other because they share half of the genes from the queen and inherit all of the father's genes. Their total relatedness to each other would be 0.5 + (0.5 × 0.5) = 0.75, so sisters are three-fourths related to each other. On the other hand, males arise from unfertilized eggs, meaning they inherit only half of the queen's genes and none from the father. As a result, a female is related to her brother by 0.25, because the 50% of her genes that come from her father have no chance of being shared with a brother; her relatedness to her brother is therefore 0.5 × 0.5 = 0.25. According to Trivers and Hare's population-level sex-investment ratio theory, the ratio of relatedness between the sexes determines the sex-investment ratios. As a result, a tug-of-war between the queen and the workers has been observed: the queen would prefer a 1:1 female-to-male ratio because she is equally related to her sons and daughters (r = 0.5 in each case), whereas the workers would prefer a 3:1 female-to-male ratio because they are related to each other by 0.75 and to their brothers by only 0.25. Allozyme data of a colony may indicate who wins this conflict. Conflict can also arise between workers in colonies of social insects. In some species, worker females retain their ability to mate and lay eggs. The colony's queen is related to her sons by half of her genes and to the sons of her worker daughters by a quarter. Workers, however, are related to their own sons by half of their genes and to their brothers by a quarter. Thus, the queen and her worker daughters compete for reproduction to maximize their own reproductive fitness. Worker reproduction is limited by other workers, who are more closely related to the queen's sons than to the sons of their fellow workers, a situation occurring in many polyandrous hymenopteran species. Workers police the egg-laying females by engaging in oophagy or directed acts of aggression. The monogamy hypothesis The monogamy hypothesis states that the presence of monogamy in insects is crucial for eusociality to occur. This is thought to be true because of Hamilton's rule, which states that rB - C > 0. With a monogamous mating system, all of the offspring have high relatedness to each other, which means that it is equally beneficial to help a sibling as it is to help an offspring; if there were many fathers, the relatedness of the colony would be lowered. This monogamous mating system has been observed in insects such as termites, ants, bees and wasps. In termites the queen commits to a single male when founding a nest. In ants, bees and wasps the queens have a functional equivalent to lifetime monogamy; the male can even die before the founding of the colony, because the queen can store and use the sperm from a single male throughout her lifetime, sometimes for up to 30 years. In an experiment looking at the mating of 267 hymenopteran species, the results were mapped onto a phylogeny.
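The relatedness arithmetic worked through above can be condensed into a short Python sketch; it simply recomputes the haplodiploid coefficients for a colony headed by a singly mated queen and the resulting queen and worker sex-investment preferences under the Trivers and Hare argument (illustrative only).

# Relatedness under haplodiploidy with a singly mated queen, as derived above.
r_sister_sister = 0.5 + 0.5 * 0.5      # shared paternal genome plus half the maternal half = 0.75
r_sister_brother = 0.5 * 0.5           # only maternal genes can be shared = 0.25
r_queen_offspring = 0.5                # queen to sons and to daughters alike

# Trivers-Hare: preferred female:male investment tracks relative relatedness.
print("queen prefers  %.0f:1 female:male investment" % (r_queen_offspring / r_queen_offspring))
print("workers prefer %.0f:1 female:male investment" % (r_sister_sister / r_sister_brother))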
It was found that monogamy was the ancestral state in all the independent transitions to eusociality. This indicates that monogamy is the ancestral, and likely crucial, state for the development of eusociality. In species where queens mate with multiple males, it was found that multiple mating developed in lineages where sterile castes had already evolved, so the multiple mating was secondary. In these cases, multiple mating is likely to be advantageous for reasons other than those important at the origin of eusociality; the most likely reasons are that a diverse worker pool attained by multiple mating by the queen increases disease resistance and may facilitate a division of labor among workers. Communication and signaling Communication is varied at all scales of life, from interactions between microscopic organisms to those of large groups of people. Nevertheless, the signals used in communication abide by a fundamental property: they must be a quality of the sender that can transfer information to a receiver that is capable of interpreting the signal and modifying its behavior accordingly. Signals are distinct from cues in that evolution has selected for signalling between both parties, whereas cues are merely informative to the observer and may not have originally been used for the intended purpose. The natural world is replete with examples of signals, from the luminescent flashes of light from fireflies, to chemical signaling in red harvester ants, to the prominent mating displays of birds such as the Guianan cock-of-the-rock, which gather in leks, the pheromones released by the corn earworm moth, the dancing patterns of the blue-footed booby, or the alarm sound Synoeca cyanea makes by rubbing its mandibles against its nest. Yet other examples are the cases of the grizzled skipper and Spodoptera littoralis, where pheromones are released as a sexual recognition mechanism that drives evolution. In a type of mating signal, male orb-weaving spiders of the species Zygiella x-notata pluck the signal thread of a female's web with their forelegs. This performance conveys vibratory signals informing the female spider of the male's presence. The nature of communication poses evolutionary concerns, such as the potential for deceit or manipulation on the part of the sender. In this situation, the receiver must be able to anticipate the interests of the sender and respond appropriately to a given signal. Should either side gain an advantage in the short term, evolution would select against the signal or the response. The conflict of interests between the sender and the receiver results in an evolutionarily stable state only if both sides can derive an overall benefit. Although the potential benefits of deceit could be great in terms of mating success, there are several possibilities for how dishonesty is controlled, which include indices, handicaps, and common interests. Indices are reliable indicators of a desirable quality, such as the overall health, fertility, or fighting ability of the organism. Handicaps, as the term suggests, place a restrictive cost on the organisms that own them, and thus lower-quality competitors experience a greater relative cost compared to their higher-quality counterparts. In the common-interest situation, it is beneficial to both sender and receiver to communicate honestly, such that the benefit of the interaction is maximized. Signals are often honest, but there are exceptions.
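As a toy illustration of the handicap mechanism described above, the sketch below compares the net payoff of producing a costly signal for high- and low-quality senders; all payoff numbers are hypothetical.

# Handicap toy model: the signal brings the same benefit to any sender,
# but producing it costs low-quality individuals more (hypothetical numbers).
benefit_of_signaling = 3.0
signal_cost = {"high-quality sender": 1.0, "low-quality sender": 4.0}

for sender, cost in signal_cost.items():
    net = benefit_of_signaling - cost
    print(f"{sender}: net payoff {net:+.1f} -> {'signal' if net > 0 else 'stay silent'}")

Because only high-quality senders come out ahead by signaling under these assumptions, the signal remains a reliable indicator of quality.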
Prime examples of dishonest signals include the luminescent lure of the anglerfish, which is used to attract prey, or the mimicry of non-poisonous butterfly species, like the Batesian mimic Papilio polyxenes of the poisonous model Battus philenor. Although evolution should normally favor selection against the dishonest signal, in these cases it appears that the receiver would benefit more on average by accepting the signal. See also Autonomous foraging Behavioral plasticity Evolutionary models of food sharing Gene-centered view of evolution Human behavioral ecology Life history theory Marginal value theorem Optimization Mating effort Parental effort Phylogenetic comparative methods Selection Balancing selection Directional selection Disruptive selection Stabilizing selection r/K selection theory Somatic effort References Further reading Alcock, J. (2009). Animal Behavior: An Evolutionary Approach (9th edition). Sinauer Associates Inc. Sunderland, MA. Bateson, P. (2017) Behaviour, Development and Evolution. Open Book Publishers, Danchin, É., Girladeau, L.-A. and Cézilly, F. (2008). Behavioural Ecology: An Evolutionary Perspective on Behaviour. Oxford University Press, Oxford. Krebs, J.R. and Davies, N. An Introduction to Behavioural Ecology, Krebs, J.R. and Davies, N. Behavioural Ecology: An Evolutionary Approach, Wajnberg, E., Bernstein E. and van Alphen, E. (2008). Behavioral Ecology of Insect Parasitoids – From Theoretical Approaches to Field Applications, Blackwell Publishing. External links
Evolution of cells
Evolution of cells refers to the evolutionary origin and subsequent evolutionary development of cells. Cells first emerged at least 3.8 billion years ago approximately 750 million years after Earth was formed. The first cells The initial development of the cell marked the passage from prebiotic chemistry to partitioned units resembling modern cells. The final transition to living entities that fulfill all the definitions of modern cells depended on the ability to evolve effectively by natural selection. This transition has been called the Darwinian transition. If life is viewed from the point of view of replicator molecules, cells satisfy two fundamental conditions: protection from the outside environment and confinement of biochemical activity. The former condition is needed to keep complex molecules stable in a varying and sometimes aggressive environment; the latter is fundamental for the evolution of biocomplexity. If freely floating molecules that code for enzymes are not enclosed in cells, the enzymes will automatically benefit neighboring replicator molecules as well. Thus, the consequences of diffusion in non-partitioned lifeforms would result in "parasitism by default." Therefore, the selection pressure on replicator molecules will be lower, as the 'lucky' molecule that produces the better enzyme does not fully leverage its advantage over its close neighbors. In contrast, if the molecule is enclosed in a cell membrane, the enzymes coded will be available only to itself. That molecule will uniquely benefit from the enzymes it codes for, increasing individuality and thus accelerating natural selection. Partitioning may have begun from cell-like spheroids formed by proteinoids, which are observed by heating amino acids with phosphoric acid as a catalyst. They bear much of the basic features provided by cell membranes. Proteinoid-based protocells enclosing RNA molecules could have been the first cellular life forms on Earth. Another possibility is that the shores of the ancient coastal waters may have been a suitable environment for the initial development of cells. Waves breaking on the shore create a delicate foam composed of bubbles. Shallow coastal waters also tend to be warmer, further concentrating the molecules through evaporation. While bubbles made mostly of water tend to burst quickly, oily bubbles are much more stable. The phospholipid, the primary material of cell membranes, is an example of a common oily compound prevalent in the prebiotic seas. Both of these options require the presence of massive amounts of chemicals and organic material in order to form cells. A large gathering of organic molecules most likely came from what scientists now call the prebiotic soup. The prebiotic soup refers to the collection of every organic compound that appeared on Earth after it was formed. This soup would have most likely contained the compounds necessary to form early cells. Phospholipids are composed of a hydrophilic head on one end and a hydrophobic tail on the other. They can come together to form a bilayer membrane. A lipid monolayer bubble can only contain oil and is not conducive to harboring water-soluble organic molecules. On the other hand, a lipid bilayer bubble can contain water and was a likely precursor to the modern cell membrane. If a protein was introduced that increased the integrity of its parent bubble, then that bubble had an advantage. 
Primitive reproduction may have occurred when the bubbles burst, releasing the results of the experiment into the surrounding medium. Once enough of the right compounds were released into the medium, the development of the first prokaryotes, eukaryotes, and multi-cellular organisms could be achieved. However, the first cell membrane could not have been composed of phospholipids, due to their low permeability: ions would not have been able to pass through the membrane. Rather, it is suggested that the first membranes were composed of fatty acids, which can freely exchange ions, allowing geochemically sustained proton gradients at alkaline hydrothermal vents that might lead to prebiotic chemical reactions via CO2 fixation. Community metabolism The common ancestor of the now existing cellular lineages (eukaryotes, bacteria, and archaea) may have been a community of organisms that readily exchanged components and genes. It would have contained: autotrophs that produced organic compounds from CO2, either photosynthetically or by inorganic chemical reactions; heterotrophs that obtained organics from the leakage of other organisms; saprotrophs that absorbed nutrients from decaying organisms; and phagotrophs that were sufficiently complex to envelop and digest particulate nutrients, including other organisms. The eukaryotic cell seems to have evolved from a symbiotic community of prokaryotic cells. DNA-bearing organelles like mitochondria and chloroplasts are remnants of ancient symbiotic oxygen-breathing bacteria and cyanobacteria, respectively, while at least part of the rest of the cell may have been derived from an ancestral archaeal prokaryotic cell. This concept is often termed the endosymbiotic theory. There is still debate about whether organelles like the hydrogenosome predated the origin of mitochondria, or vice versa: see the hydrogen hypothesis for the origin of eukaryotic cells. How the current lineages of microbes evolved from this postulated community is currently unsolved, but subject to intense research by biologists, stimulated by the great flow of new discoveries in genome science. Genetic code and the RNA world Modern evidence suggests that early cellular evolution occurred in a biological realm radically distinct from modern biology. It is thought that in this ancient realm, the current genetic role of DNA was largely filled by RNA, and catalysis was also largely mediated by RNA (that is, by ribozyme counterparts of enzymes). This concept is known as the RNA world hypothesis. According to this hypothesis, the ancient RNA world transitioned into the modern cellular world via the evolution of protein synthesis, followed by the replacement of many cellular ribozyme catalysts by protein-based enzymes. Proteins are much more flexible in catalysis than RNA due to the existence of diverse amino acid side chains with distinct chemical characteristics. The RNA record in existing cells appears to preserve some 'molecular fossils' from this RNA world. These RNA fossils include the ribosome itself (in which RNA catalyzes peptide-bond formation), the modern ribozyme catalyst RNase P, and RNAs. The nearly universal genetic code preserves some evidence for the RNA world.
For instance, recent studies of transfer RNAs, of the enzymes that charge them with amino acids (the first step in protein synthesis), and of the way these components recognize and exploit the genetic code have been used to suggest that the universal genetic code emerged before the evolution of the modern amino acid activation method for protein synthesis. The first RNA polymers probably emerged prior to 4.17 Gya if life originated in freshwater environments similar to Darwin's warm little pond. Sexual reproduction The evolution of sexual reproduction may be a primordial and fundamental characteristic of eukaryotes, including single-celled eukaryotes. Based on a phylogenetic analysis, Dacks and Roger proposed that facultative sex was present in the common ancestor of all eukaryotes. Hofstatter and Lehr reviewed evidence supporting the hypothesis that all eukaryotes can be regarded as sexual, unless proven otherwise. Sexual reproduction may have arisen in early protocells with RNA genomes (RNA world). Initially, each protocell would likely have contained one RNA genome (rather than multiple), since this maximizes the growth rate. However, the occurrence of damage to the RNA that blocks RNA replication or interferes with ribozyme function would make it advantageous to fuse periodically with another protocell to restore reproductive ability. This early, simple form of genetic recovery is similar to that occurring in extant segmented single-stranded RNA viruses (see influenza A virus). As duplex DNA became the predominant form of the genetic material, the mechanism of genetic recovery evolved into the more complex process of meiotic recombination, found today in most species. It thus appears likely that sexual reproduction arose early in the evolution of cells and has had a continuous evolutionary history. Horizontal gene transfer Horizontal gene transfer (HGT) is the movement of genetic information between organisms by routes other than transmission from parent to offspring; it is most prominent among bacteria. In contrast to animals, which reproduce and evolve through sexual reproduction, bacteria evolve by sharing DNA with other bacteria or by taking it up from their environment. There are three common mechanisms of transferring genetic material by HGT: transformation, in which a bacterium assimilates DNA from the environment into its own genome; conjugation, in which bacteria directly transfer genes from one cell to another; and transduction, in which bacteriophages (viruses) move genes from one bacterial cell to another. Once one of these mechanisms has occurred, the recipient bacteria continue to multiply, and the acquired traits, such as antibiotic resistance, can spread through natural selection. HGT is the main route by which bacteria acquire new genetic material and pass on antibiotic resistance genes (ARGs). Canonical patterns Although the evolutionary origins of the major lineages of modern cells are disputed, the primary distinctions between the three major lineages of cellular life (called domains) are firmly established. In each of these three domains, DNA replication, transcription, and translation all display distinctive features. There are three versions of ribosomal RNAs, and generally three versions of each ribosomal protein, one for each domain of life.
These three versions of the protein synthesis apparatus are called the canonical patterns, and the existence of these canonical patterns provides the basis for a definition of the three domains - Bacteria, Archaea, and Eukarya (or Eukaryota) - of currently existing cells. Using genomics to infer early lines of evolution Instead of relying on a single gene such as the small-subunit ribosomal RNA (SSU rRNA) gene to reconstruct early evolution, or a few genes, scientific effort has shifted to analyzing complete genome sequences. Evolutionary trees based only on SSU rRNA alone do not capture the events of early eukaryote evolution accurately, and the progenitors of the first nucleated cells are still uncertain. For instance, analysis of the complete genome of the eukaryote yeast shows that many of its genes are more closely related to bacterial genes than they are to archaea, and it is now clear that archaea were not the simple progenitors of the eukaryotes, in contradiction to earlier findings based on SSU rRNA and limited samples of other genes. One hypothesis is that the first nucleated cell arose from two distinctly different ancient prokaryotic (non-nucleated) species that had formed a symbiotic relationship with one another to carry out different aspects of metabolism. One partner of this symbiosis is proposed to be a bacterial cell, and the other an archaeal cell. It is postulated that this symbiotic partnership progressed via the cellular fusion of the partners to generate a chimeric or hybrid cell with a membrane bound internal structure that was the forerunner of the nucleus. The next stage in this scheme was transfer of both partner genomes into the nucleus and their fusion with one another. Several variations of this hypothesis for the origin of nucleated cells have been suggested. Other biologists dispute this conception and emphasize the community metabolism theme, the idea that early living communities would comprise many different entities to extant cells, and would have shared their genetic material more extensively than current microbes. Quotes "The First Cell arose in the previously prebiotic world with the coming together of several entities that gave a single vesicle the unique chance to carry out three essential and quite different life processes. These were: (a) to copy informational macromolecules, (b) to carry out specific catalytic functions, and (c) to couple energy from the environment into usable chemical forms. These would foster subsequent cellular evolution and metabolism. Each of these three essential processes probably originated and was lost many times prior to The First Cell, but only when these three occurred together was life jump-started and Darwinian evolution of organisms began." (Koch and Silver, 2005) "The evolution of modern cells is arguably the most challenging and important problem the field of Biology has ever faced. In Darwin's day the problem could hardly be imagined. For much of the 20th century it was intractable. In any case, the problem lay buried in the catch-all rubric "origin of life"--where, because it is a biological not a (bio)chemical problem, it was effectively ignored. Scientific interest in cellular evolution started to pick up once the universal phylogenetic tree, the framework within which the problem had to be addressed, was determined. But it was not until microbial genomics arrived on the scene that biologists could actually do much about the problem of cellular evolution." 
(Carl Woese, 2002) References Further reading External links Life on Earth The universal nature of biochemistry Endosymbiosis and The Origin of Eukaryotes Origins of the Eukarya.
CMA-ES
Covariance matrix adaptation evolution strategy (CMA-ES) is a particular kind of strategy for numerical optimization. Evolution strategies (ES) are stochastic, derivative-free methods for numerical optimization of non-linear or non-convex continuous optimization problems. They belong to the class of evolutionary algorithms and evolutionary computation. An evolutionary algorithm is broadly based on the principle of biological evolution, namely the repeated interplay of variation (via recombination and mutation) and selection: in each generation (iteration) new individuals (candidate solutions, denoted as ) are generated by variation of the current parental individuals, usually in a stochastic way. Then, some individuals are selected to become the parents in the next generation based on their fitness or objective function value . Like this, individuals with better and better -values are generated over the generation sequence. In an evolution strategy, new candidate solutions are usually sampled according to a multivariate normal distribution in . Recombination amounts to selecting a new mean value for the distribution. Mutation amounts to adding a random vector, a perturbation with zero mean. Pairwise dependencies between the variables in the distribution are represented by a covariance matrix. The covariance matrix adaptation (CMA) is a method to update the covariance matrix of this distribution. This is particularly useful if the function is ill-conditioned. Adaptation of the covariance matrix amounts to learning a second order model of the underlying objective function similar to the approximation of the inverse Hessian matrix in the quasi-Newton method in classical optimization. In contrast to most classical methods, fewer assumptions on the underlying objective function are made. Because only a ranking (or, equivalently, sorting) of candidate solutions is exploited, neither derivatives nor even an (explicit) objective function is required by the method. For example, the ranking could come about from pairwise competitions between the candidate solutions in a Swiss-system tournament. Principles Two main principles for the adaptation of parameters of the search distribution are exploited in the CMA-ES algorithm. First, a maximum-likelihood principle, based on the idea to increase the probability of successful candidate solutions and search steps. The mean of the distribution is updated such that the likelihood of previously successful candidate solutions is maximized. The covariance matrix of the distribution is updated (incrementally) such that the likelihood of previously successful search steps is increased. Both updates can be interpreted as a natural gradient descent. Also, in consequence, the CMA conducts an iterated principal components analysis of successful search steps while retaining all principal axes. Estimation of distribution algorithms and the Cross-Entropy Method are based on very similar ideas, but estimate (non-incrementally) the covariance matrix by maximizing the likelihood of successful solution points instead of successful search steps. Second, two paths of the time evolution of the distribution mean of the strategy are recorded, called search or evolution paths. These paths contain significant information about the correlation between consecutive steps. Specifically, if consecutive steps are taken in a similar direction, the evolution paths become long. The evolution paths are exploited in two ways. 
One path is used for the covariance matrix adaptation procedure in place of single successful search steps and facilitates a possibly much faster variance increase of favorable directions. The other path is used to conduct an additional step-size control. This step-size control aims to make consecutive movements of the distribution mean orthogonal in expectation. The step-size control effectively prevents premature convergence yet allowing fast convergence to an optimum. Algorithm In the following the most commonly used (μ/μw, λ)-CMA-ES is outlined, where in each iteration step a weighted combination of the μ best out of λ new candidate solutions is used to update the distribution parameters. The main loop consists of three main parts: 1) sampling of new solutions, 2) re-ordering of the sampled solutions based on their fitness, 3) update of the internal state variables based on the re-ordered samples. A pseudocode of the algorithm looks as follows. set // number of samples per iteration, at least two, generally > 4 initialize , , , , // initialize state variables while not terminate do // iterate for in do // sample new solutions and evaluate them sample_multivariate_normal(mean, covariance_matrix) ← with // sort solutions // we need later and ← update_m // move mean to better solutions ← update_ps // update isotropic evolution path ← update_pc // update anisotropic evolution path ← update_C // update covariance matrix ← update_sigma // update step-size using isotropic path length return or The order of the five update assignments is relevant: must be updated first, and must be updated before , and must be updated last. The update equations for the five state variables are specified in the following. Given are the search space dimension and the iteration step . The five state variables are , the distribution mean and current favorite solution to the optimization problem, , the step-size, , a symmetric and positive-definite covariance matrix with and , two evolution paths, initially set to the zero vector. The iteration starts with sampling candidate solutions from a multivariate normal distribution , i.e. for The second line suggests the interpretation as unbiased perturbation (mutation) of the current favorite solution vector (the distribution mean vector). The candidate solutions are evaluated on the objective function to be minimized. Denoting the -sorted candidate solutions as the new mean value is computed as where the positive (recombination) weights sum to one. Typically, and the weights are chosen such that . The only feedback used from the objective function here and in the following is an ordering of the sampled candidate solutions due to the indices . The step-size is updated using cumulative step-size adaptation (CSA), sometimes also denoted as path length control. The evolution path (or search path) is updated first. where is the backward time horizon for the evolution path and larger than one ( is reminiscent of an exponential decay constant as where is the associated lifetime and the half-life), is the variance effective selection mass and by definition of , is the unique symmetric square root of the inverse of , and is the damping parameter usually close to one. For or the step-size remains unchanged. The step-size is increased if and only if is larger than the expected value and decreased if it is smaller. For this reason, the step-size update tends to make consecutive steps -conjugate, in that after the adaptation has been successful . 
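The sampling and weighted-recombination steps described above can be sketched compactly in Python with NumPy. The fragment below is a deliberately simplified illustration that keeps the covariance matrix and step-size fixed (omitting the evolution-path, covariance, and step-size updates), not a full CMA-ES implementation.

import numpy as np

def sample_and_recombine(f, m, sigma, C, lam, rng):
    # Draw lam candidates from N(m, sigma^2 C), rank them by f, and return the
    # weighted mean of the mu best; only the ranking of f-values is used.
    mu = lam // 2
    weights = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))
    weights /= weights.sum()                   # positive weights summing to one
    A = np.linalg.cholesky(C)                  # C = A A^T
    x = m + sigma * rng.standard_normal((lam, len(m))) @ A.T
    order = np.argsort([f(xi) for xi in x])
    return weights @ x[order[:mu]]             # new distribution mean

rng = np.random.default_rng(0)
sphere = lambda v: float(np.sum(v ** 2))
m = np.array([3.0, -2.0])
for _ in range(50):
    m = sample_and_recombine(sphere, m, sigma=0.3, C=np.eye(2), lam=12, rng=rng)
print(m)                                       # drifts toward the optimum at the origin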
Finally, the covariance matrix is updated, where again the respective evolution path is updated first. where denotes the transpose and is the backward time horizon for the evolution path and larger than one, and the indicator function evaluates to one iff or, in other words, , which is usually the case, makes partly up for the small variance loss in case the indicator is zero, is the learning rate for the rank-one update of the covariance matrix and is the learning rate for the rank- update of the covariance matrix and must not exceed . The covariance matrix update tends to increase the likelihood for and for to be sampled from . This completes the iteration step. The number of candidate samples per iteration, , is not determined a priori and can vary in a wide range. Smaller values, for example , lead to more local search behavior. Larger values, for example with default value , render the search more global. Sometimes the algorithm is repeatedly restarted with increasing by a factor of two for each restart. Besides of setting (or possibly instead, if for example is predetermined by the number of available processors), the above introduced parameters are not specific to the given objective function and therefore not meant to be modified by the user. Example code in MATLAB/Octave function xmin=purecmaes % (mu/mu_w, lambda)-CMA-ES % -------------------- Initialization -------------------------------- % User defined input parameters (need to be edited) strfitnessfct = 'frosenbrock'; % name of objective/fitness function N = 20; % number of objective variables/problem dimension xmean = rand(N,1); % objective variables initial point sigma = 0.3; % coordinate wise standard deviation (step size) stopfitness = 1e-10; % stop if fitness < stopfitness (minimization) stopeval = 1e3*N^2; % stop after stopeval number of function evaluations % Strategy parameter setting: Selection lambda = 4+floor(3*log(N)); % population size, offspring number mu = lambda/2; % number of parents/points for recombination weights = log(mu+1/2)-log(1:mu)'; % muXone array for weighted recombination mu = floor(mu); weights = weights/sum(weights); % normalize recombination weights array mueff=sum(weights)^2/sum(weights.^2); % variance-effectiveness of sum w_i x_i % Strategy parameter setting: Adaptation cc = (4+mueff/N) / (N+4 + 2*mueff/N); % time constant for cumulation for C cs = (mueff+2) / (N+mueff+5); % t-const for cumulation for sigma control c1 = 2 / ((N+1.3)^2+mueff); % learning rate for rank-one update of C cmu = min(1-c1, 2 * (mueff-2+1/mueff) / ((N+2)^2+mueff)); % and for rank-mu update damps = 1 + 2*max(0, sqrt((mueff-1)/(N+1))-1) + cs; % damping for sigma % usually close to 1 % Initialize dynamic (internal) strategy parameters and constants pc = zeros(N,1); ps = zeros(N,1); % evolution paths for C and sigma B = eye(N,N); % B defines the coordinate system D = ones(N,1); % diagonal D defines the scaling C = B * diag(D.^2) * B'; % covariance matrix C invsqrtC = B * diag(D.^-1) * B'; % C^-1/2 eigeneval = 0; % track update of B and D chiN=N^0.5*(1-1/(4*N)+1/(21*N^2)); % expectation of % ||N(0,I)|| == norm(randn(N,1)) % -------------------- Generation Loop -------------------------------- counteval = 0; % the next 40 lines contain the 20 lines of interesting code while counteval < stopeval % Generate and evaluate lambda offspring for k=1:lambda arx(:,k) = xmean + sigma * B * (D .* randn(N,1)); % m + sig * Normal(0,C) arfitness(k) = feval(strfitnessfct, arx(:,k)); % objective function call counteval = counteval+1; end % 
Sort by fitness and compute weighted mean into xmean [arfitness, arindex] = sort(arfitness); % minimization xold = xmean; xmean = arx(:,arindex(1:mu))*weights; % recombination, new mean value % Cumulation: Update evolution paths ps = (1-cs)*ps ... + sqrt(cs*(2-cs)*mueff) * invsqrtC * (xmean-xold) / sigma; hsig = norm(ps)/sqrt(1-(1-cs)^(2*counteval/lambda))/chiN < 1.4 + 2/(N+1); pc = (1-cc)*pc ... + hsig * sqrt(cc*(2-cc)*mueff) * (xmean-xold) / sigma; % Adapt covariance matrix C artmp = (1/sigma) * (arx(:,arindex(1:mu))-repmat(xold,1,mu)); C = (1-c1-cmu) * C ... % regard old matrix + c1 * (pc*pc' ... % plus rank one update + (1-hsig) * cc*(2-cc) * C) ... % minor correction if hsig==0 + cmu * artmp * diag(weights) * artmp'; % plus rank mu update % Adapt step size sigma sigma = sigma * exp((cs/damps)*(norm(ps)/chiN - 1)); % Decomposition of C into B*diag(D.^2)*B' (diagonalization) if counteval - eigeneval > lambda/(c1+cmu)/N/10 % to achieve O(N^2) eigeneval = counteval; C = triu(C) + triu(C,1)'; % enforce symmetry [B,D] = eig(C); % eigen decomposition, B==normalized eigenvectors D = sqrt(diag(D)); % D is a vector of standard deviations now invsqrtC = B * diag(D.^-1) * B'; end % Break, if fitness is good enough or condition exceeds 1e14, better termination methods are advisable if arfitness(1) <= stopfitness || max(D) > 1e7 * min(D) break; end end % while, end generation loop xmin = arx(:, arindex(1)); % Return best point of last iteration. % Notice that xmean is expected to be even % better. end % --------------------------------------------------------------- function f=frosenbrock(x) if size(x,1) < 2 error('dimension must be greater one'); end f = 100*sum((x(1:end-1).^2 - x(2:end)).^2) + sum((x(1:end-1)-1).^2); end Theoretical foundations Given the distribution parameters—mean, variances and covariances—the normal probability distribution for sampling new candidate solutions is the maximum entropy probability distribution over , that is, the sample distribution with the minimal amount of prior information built into the distribution. More considerations on the update equations of CMA-ES are made in the following. Variable metric The CMA-ES implements a stochastic variable-metric method. In the very particular case of a convex-quadratic objective function the covariance matrix adapts to the inverse of the Hessian matrix , up to a scalar factor and small random fluctuations. More general, also on the function , where is strictly increasing and therefore order preserving, the covariance matrix adapts to , up to a scalar factor and small random fluctuations. For selection ratio (and hence population size ), the selected solutions yield an empirical covariance matrix reflective of the inverse-Hessian even in evolution strategies without adaptation of the covariance matrix. This result has been proven for on a static model, relying on the quadratic approximation. Maximum-likelihood updates The update equations for mean and covariance matrix maximize a likelihood while resembling an expectation–maximization algorithm. The update of the mean vector maximizes a log-likelihood, such that where denotes the log-likelihood of from a multivariate normal distribution with mean and any positive definite covariance matrix . To see that is independent of remark first that this is the case for any diagonal matrix , because the coordinate-wise maximizer is independent of a scaling factor. Then, rotation of the data points or choosing non-diagonal are equivalent. 
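The claim above that the new mean maximizes a weighted log-likelihood, independently of the covariance matrix used, can be checked numerically. The following sketch (under the stated assumptions of random points, positive weights summing to one, and arbitrary positive-definite covariance matrices) verifies that perturbing the mean away from the weighted average always lowers the weighted log-likelihood.

import numpy as np

rng = np.random.default_rng(1)
n, mu = 3, 5
x = rng.standard_normal((mu, n))               # "selected" sample points
w = rng.random(mu)
w /= w.sum()                                   # positive weights summing to one
m_star = w @ x                                 # weighted recombination mean

def weighted_loglik(m, C):
    # sum_i w_i * log N(x_i | m, C)
    Cinv = np.linalg.inv(C)
    _, logdet = np.linalg.slogdet(C)
    return sum(wi * -0.5 * ((xi - m) @ Cinv @ (xi - m) + logdet + n * np.log(2 * np.pi))
               for wi, xi in zip(w, x))

A = rng.standard_normal((n, n))
for C in (np.eye(n), A @ A.T + n * np.eye(n)): # two different positive-definite choices
    best = weighted_loglik(m_star, C)
    for _ in range(5):
        assert weighted_loglik(m_star + 0.1 * rng.standard_normal(n), C) < best

The same maximizer is obtained for both covariance matrices, consistent with the independence noted above.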
The rank- update of the covariance matrix, that is, the right most summand in the update equation of , maximizes a log-likelihood in that for (otherwise is singular, but substantially the same result holds for ). Here, denotes the likelihood of from a multivariate normal distribution with zero mean and covariance matrix . Therefore, for and , is the above maximum-likelihood estimator. See estimation of covariance matrices for details on the derivation. Natural gradient descent in the space of sample distributions Akimoto et al. and Glasmachers et al. discovered independently that the update of the distribution parameters resembles the descent in direction of a sampled natural gradient of the expected objective function value (to be minimized), where the expectation is taken under the sample distribution. With the parameter setting of and , i.e. without step-size control and rank-one update, CMA-ES can thus be viewed as an instantiation of Natural Evolution Strategies (NES). The natural gradient is independent of the parameterization of the distribution. Taken with respect to the parameters of the sample distribution , the gradient of can be expressed as where depends on the parameter vector . The so-called score function, , indicates the relative sensitivity of w.r.t. , and the expectation is taken with respect to the distribution . The natural gradient of , complying with the Fisher information metric (an informational distance measure between probability distributions and the curvature of the relative entropy), now reads where the Fisher information matrix is the expectation of the Hessian of and renders the expression independent of the chosen parameterization. Combining the previous equalities we get A Monte Carlo approximation of the latter expectation takes the average over samples from where the notation from above is used and therefore are monotonically decreasing in . Ollivier et al. finally found a rigorous derivation for the weights, , as they are defined in the CMA-ES. The weights are an asymptotically consistent estimator of the CDF of at the points of the th order statistic , as defined above, where , composed with a fixed monotonically decreasing transformation , that is, . These weights make the algorithm insensitive to the specific -values. More concisely, using the CDF estimator of instead of itself let the algorithm only depend on the ranking of -values but not on their underlying distribution. This renders the algorithm invariant to strictly increasing -transformations. Now we define such that is the density of the multivariate normal distribution . Then, we have an explicit expression for the inverse of the Fisher information matrix where is fixed and for and, after some calculations, the updates in the CMA-ES turn out as and where mat forms the proper matrix from the respective natural gradient sub-vector. That means, setting , the CMA-ES updates descend in direction of the approximation of the natural gradient while using different step-sizes (learning rates 1 and ) for the orthogonal parameters and respectively. More recent versions allow a different learning rate for the mean as well. The most recent version of CMA-ES also use a different function for and with negative values only for the latter (so-called active CMA). Stationarity or unbiasedness It is comparatively easy to see that the update equations of CMA-ES satisfy some stationarity conditions, in that they are essentially unbiased. 
Under neutral selection, where , we find that and under some mild additional assumptions on the initial conditions and with an additional minor correction in the covariance matrix update for the case where the indicator function evaluates to zero, we find Invariance Invariance properties imply uniform performance on a class of objective functions. They have been argued to be an advantage, because they allow to generalize and predict the behavior of the algorithm and therefore strengthen the meaning of empirical results obtained on single functions. The following invariance properties have been established for CMA-ES. Invariance under order-preserving transformations of the objective function value , in that for any the behavior is identical on for all strictly increasing . This invariance is easy to verify, because only the -ranking is used in the algorithm, which is invariant under the choice of . Scale-invariance, in that for any the behavior is independent of for the objective function given and . Invariance under rotation of the search space in that for any and any the behavior on is independent of the orthogonal matrix , given . More general, the algorithm is also invariant under general linear transformations when additionally the initial covariance matrix is chosen as . Any serious parameter optimization method should be translation invariant, but most methods do not exhibit all the above described invariance properties. A prominent example with the same invariance properties is the Nelder–Mead method, where the initial simplex must be chosen respectively. Convergence Conceptual considerations like the scale-invariance property of the algorithm, the analysis of simpler evolution strategies, and overwhelming empirical evidence suggest that the algorithm converges on a large class of functions fast to the global optimum, denoted as . On some functions, convergence occurs independently of the initial conditions with probability one. On some functions the probability is smaller than one and typically depends on the initial and . Empirically, the fastest possible convergence rate in for rank-based direct search methods can often be observed (depending on the context denoted as linear convergence or log-linear or exponential convergence). Informally, we can write for some , and more rigorously or similarly, This means that on average the distance to the optimum decreases in each iteration by a "constant" factor, namely by . The convergence rate is roughly , given is not much larger than the dimension . Even with optimal and , the convergence rate cannot largely exceed , given the above recombination weights are all non-negative. The actual linear dependencies in and are remarkable and they are in both cases the best one can hope for in this kind of algorithm. Yet, a rigorous proof of convergence is missing. Interpretation as coordinate-system transformation Using a non-identity covariance matrix for the multivariate normal distribution in evolution strategies is equivalent to a coordinate system transformation of the solution vectors, mainly because the sampling equation can be equivalently expressed in an "encoded space" as The covariance matrix defines a bijective transformation (encoding) for all solution vectors into a space, where the sampling takes place with identity covariance matrix. 
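This equivalence can be illustrated numerically: drawing samples with identity covariance in the encoded space and decoding them through a matrix square root of C reproduces the covariance sigma^2 C. The sketch below uses an eigendecomposition-based square root, mirroring the B and D factors of the MATLAB listing above.

import numpy as np

rng = np.random.default_rng(2)
m, sigma = np.array([1.0, -1.0]), 0.5
C = np.array([[4.0, 1.5],
              [1.5, 1.0]])                     # symmetric positive-definite covariance

eigval, B = np.linalg.eigh(C)                  # C = B diag(eigval) B^T
D = np.sqrt(eigval)
z = rng.standard_normal((100_000, 2))          # identity-covariance samples (encoded space)
x = m + sigma * (z * D) @ B.T                  # decoded samples, distributed as N(m, sigma^2 C)

print(np.cov(x, rowvar=False))                 # close to sigma^2 * C
print(sigma ** 2 * C)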
Because the update equations in the CMA-ES are invariant under linear coordinate system transformations, the CMA-ES can be re-written as an adaptive encoding procedure applied to a simple evolution strategy with identity covariance matrix. This adaptive encoding procedure is not confined to algorithms that sample from a multivariate normal distribution (like evolution strategies), but can in principle be applied to any iterative search method. Performance in practice In contrast to most other evolutionary algorithms, the CMA-ES is, from the user's perspective, quasi-parameter-free. The user has to choose an initial solution point, , and the initial step-size, . Optionally, the number of candidate samples λ (population size) can be modified by the user in order to change the characteristic search behavior (see above) and termination conditions can or should be adjusted to the problem at hand. The CMA-ES has been empirically successful in hundreds of applications and is considered to be useful in particular on non-convex, non-separable, ill-conditioned, multi-modal or noisy objective functions. One survey of Black-Box optimizations found it outranked 31 other optimization algorithms, performing especially strongly on "difficult functions" or larger-dimensional search spaces. The search space dimension ranges typically between two and a few hundred. Assuming a black-box optimization scenario, where gradients are not available (or not useful) and function evaluations are the only considered cost of search, the CMA-ES method is likely to be outperformed by other methods in the following conditions: on low-dimensional functions, say , for example by the downhill simplex method or surrogate-based methods (like kriging with expected improvement); on separable functions without or with only negligible dependencies between the design variables in particular in the case of multi-modality or large dimension, for example by differential evolution; on (nearly) convex-quadratic functions with low or moderate condition number of the Hessian matrix, where BFGS or NEWUOA or SLSQP are typically at least ten times faster; on functions that can already be solved with a comparatively small number of function evaluations, say no more than , where CMA-ES is often slower than, for example, NEWUOA or Multilevel Coordinate Search (MCS). On separable functions, the performance disadvantage is likely to be most significant in that CMA-ES might not be able to find at all comparable solutions. On the other hand, on non-separable functions that are ill-conditioned or rugged or can only be solved with more than function evaluations, the CMA-ES shows most often superior performance. Variations and extensions The (1+1)-CMA-ES generates only one candidate solution per iteration step which becomes the new distribution mean if it is better than the current mean. For the (1+1)-CMA-ES is a close variant of Gaussian adaptation. Some Natural Evolution Strategies are close variants of the CMA-ES with specific parameter settings. Natural Evolution Strategies do not utilize evolution paths (that means in CMA-ES setting ) and they formalize the update of variances and covariances on a Cholesky factor instead of a covariance matrix. The CMA-ES has also been extended to multiobjective optimization as MO-CMA-ES. Another remarkable extension has been the addition of a negative update of the covariance matrix with the so-called active CMA. Using the additional active CMA update is considered as the default variant nowadays. 
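For practical use, an off-the-shelf implementation is normally preferable to the didactic listing above. The fragment below is a usage sketch assuming the third-party pycma Python package (import name cma) and its documented ask-and-tell interface; exact names and defaults should be checked against the package documentation, and sphere() is just a stand-in objective.

import cma

def sphere(x):                                  # stand-in objective to be minimized
    return sum(xi * xi for xi in x)

es = cma.CMAEvolutionStrategy(8 * [0.5], 0.3)   # initial point x0 and step-size sigma0
while not es.stop():
    solutions = es.ask()                        # sample lambda candidate solutions
    es.tell(solutions, [sphere(s) for s in solutions])  # rank-based distribution update
print(es.result.xbest)                          # best solution found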
See also References Bibliography Hansen N, Ostermeier A (2001). Completely derandomized self-adaptation in evolution strategies. Evolutionary Computation, 9(2) pp. 159–195. Hansen N, Müller SD, Koumoutsakos P (2003). Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (CMA-ES). Evolutionary Computation, 11(1) pp. 1–18. Hansen N, Kern S (2004). Evaluating the CMA evolution strategy on multimodal test functions. In Xin Yao et al., editors, Parallel Problem Solving from Nature – PPSN VIII, pp. 282–291, Springer. Igel C, Hansen N, Roth S (2007). Covariance Matrix Adaptation for Multi-objective Optimization. Evolutionary Computation, 15(1) pp. 1–28. External links A short introduction to CMA-ES by N. Hansen The CMA Evolution Strategy: A Tutorial CMA-ES source code page
Biostasis
Biostasis is the ability of an organism to tolerate environmental changes without having to actively adapt to them. Biostasis is found in organisms that live in habitats that are likely to encounter unfavorable living conditions, such as drought, freezing temperatures, or changes in pH, pressure, or temperature. Insects undergo a type of dormancy to survive these conditions, called diapause. Diapause may be obligatory for these insects to survive. The insect may also be able to undergo this change prior to the arrival of the initiating event. Microorganisms Biostasis in this context is also synonymous with the viable but nonculturable (VBNC) state. In the past, when bacteria no longer grew on culture media, it was assumed that they were dead. It is now understood that there are many instances in which bacterial cells may go into biostasis or suspended animation, fail to grow on media, and on resuscitation become culturable again. The VBNC state differs from the 'starvation survival state', in which a cell merely reduces its metabolism significantly. Bacterial cells may enter the VBNC state as a result of some outside stressor such as "starvation, incubation outside the temperature range of growth, elevated osmotic concentrations (seawater), oxygen concentrations, or exposure to white light". Any of these stressors could easily mean death for bacteria unable to enter this state of dormancy. There have also been many instances in which bacteria thought to have been destroyed (for example by pasteurization of milk) later caused spoilage or harmful effects to consumers because they had in fact entered the VBNC state. Effects on cells entering the VBNC state include "dwarfing, changes in metabolic activity, reduced nutrient transport, respiration rates and macromolecular synthesis". Yet biosynthesis continues, and shock proteins are made. Most importantly, it has been observed that ATP levels and ATP generation remain high, in complete contrast to dying cells, which show rapid decreases in both generation and retention. Changes to the cell walls of bacteria in the VBNC state have also been observed. In Escherichia coli, a large amount of cross-linking was observed in the peptidoglycan. The autolytic capability was also observed to be much higher in VBNC cells than in cells in the growth state. It is far easier to induce bacteria into the VBNC state than to reverse it; once cells have entered the VBNC state, it is very hard to return them to a culturable state. One review of the VBNC state notes: "They examined nonculturability and resuscitation in Legionella pneumophila and while entry into this state was easily induced by nutrient starvation, resuscitation could only be demonstrated following co-incubation of the VBNC cells with the amoeba, Acanthamoeba castellanii". Fungistasis or mycostasis is a naturally occurring VBNC (viable but nonculturable) state found in fungi in soil. Watson and Ford defined fungistasis as "when viable fungal propagules, which are not subject to endogenous or constitutive dormancy do not germinate in soil at their favorable temperature or moisture conditions or growth of fungal hyphae is retarded or terminated by conditions of the soil environment other than temperature or moisture". Essentially, several types of fungi (mostly observed naturally occurring in soil) have been found to enter the VBNC state as a result of outside stressors (temperature, available nutrients, oxygen availability, etc.) or with no observable stressor at all.
Current research On March 1, 2018, the Defense Advanced Research Projects Agency (DARPA) announced their new Biostasis program under the direction of Dr. Tristan McClure-Begley. The aim of the Biostasis program is to develop new possibilities for extending the golden hour in patients who suffered a traumatic injury by slowing down the human body at the cellular level, addressing the need for additional time in continuously operating biological systems faced with catastrophic, life-threatening events. By leveraging molecular biology, the program aims to control the speed at which living systems operate and figure out a way to "slow life to save life." On March 20, 2018, the Biostasis team held a Webinar which, along with a Broad Agency Announcement (BAA), solicited five-year research proposals from outside organizations. The full proposals were due on May 22, 2018. Possible approaches In their Webinar, DARPA outlined a number of possible research approaches for the Biostasis project. These approaches are based on research into diapause in tardigrades and wood frogs which suggests that selective stabilization of intracellular machinery occurs at the protein level. Protein chaperoning In molecular biology, molecular chaperones are proteins that assist in the folding, unfolding, assembly, or disassembly of other macromolecular structures. Under typical conditions, molecular chaperones facilitate changes in shape (conformational change) of macromolecules in response to changes in environmental factors like temperature, pH, and voltage. By reducing conformational flexibility, scientists can constrain the function of certain proteins. Recent research has shown that proteins are promiscuous, or able to do jobs in addition to the ones they evolved to carry out. Additionally, protein promiscuity plays a key role in the adaptation of species to new environments. It is possible that finding a way to control conformational change in promiscuous proteins could allow scientists to induce biostasis in living organisms. Intracellular crowding The crowdedness of cells is a critical aspect of biological systems. Intracellular crowding refers to the fact that protein function and interaction with water is constrained when the interior of the cell is overcrowded. Intracellular organelles are either membrane-bound vesicles or membrane-less compartments that compartmentalize the cell and enable spatiotemporal control of biological reactions. By introducing these intracellular polymers to a biological system and manipulating the crowdedness of a cell, scientists may be able to slow down the rate of biological reactions in the system. Tardigrade-disordered proteins Tardigrades are microscopic animals that are able to enter a state of diapause and survive a remarkable array of environmental stressors, including freezing and desiccation. Research has shown that intrinsically disordered proteins in these organisms may work to stabilize cell function and protect against these extreme environmental stressors. By using peptide engineering, it is possible that scientists may be able to introduce intrinsically disordered proteins to the biological systems of larger animal organisms. This could allow larger animals to enter a state of biostasis similar to that of tardigrades under extreme biological stress. References Oliver, James D. "The viable but nonculturable state in bacteria." The Journal of Microbiology 43.1 (2005): 93-100. Fungistasis and general soil biostasis A new synthesis Paolina Garbeva, W.H. 
Gera Hol, Aad J. Termorshuizen, George A. Kowalchuk, Wietse de Boer. Watson, A.G., Ford, E.J. (1972). "Soil Fungistasis—a reappraisal". Annual Review of Phytopathology 10, 327.
Nature-based solutions
Nature-based solutions (or nature-based systems, and abbreviated as NBS or NbS) describe the development and use of nature (biodiversity) and natural processes to address diverse socio-environmental issues. These issues include climate change mitigation and adaptation, human security issues such as water security and food security, and disaster risk reduction. The aim is that resilient ecosystems (whether natural, managed, or newly created) provide solutions for the benefit of both societies and biodiversity. The 2019 UN Climate Action Summit highlighted nature-based solutions as an effective method to combat climate change. For example, nature-based systems for climate change adaptation can include natural flood management, restoring natural coastal defences, and providing local cooling. The concept of NBS is related to the concept of ecological engineering and ecosystem-based adaptation. NBS are also related, conceptually to the practice of ecological restoration. The sustainable management approach is a key aspect of NBS development and implementation. Mangrove restoration efforts along coastlines provide an example of a nature-based solution that can achieve multiple goals. Mangroves moderate the impact of waves and wind on coastal settlements or cities, and they sequester carbon. They also provide nursery zones for marine life which is important for sustaining fisheries. Additionally, mangrove forests can help to control coastal erosion resulting from sea level rise. Green roofs, blue roofs and green walls (as part of green infrastructure) are also nature-based solutions that can be implemented in urban areas. They can reduce the effects of urban heat islands, capture stormwater, abate pollution, and act as carbon sinks. At the same time, they can enhance local biodiversity. NBS systems and solutions are forming an increasing part of national and international policies on climate change. They are included in climate change policy, infrastructure investment, and climate finance mechanisms. The European Commission has paid increasing attention to NBS since 2013. This is reflected in the majority of global NBS case studies reviewed by Debele et al (2023) being located in Europe. While there is much scope for scaling-up nature-based systems and solutions globally, they frequently encounter numerous challenges during planning and implementation. The IPCC pointed out that the term is "the subject of ongoing debate, with concerns that it may lead to the misunderstanding that NbS on its own can provide a global solution to climate change". To clarify this point further, the IPCC also stated that "nature-based systems cannot be regarded as an alternative to, or a reason to delay, deep cuts in GHG emissions". Definition The International Union for Conservation of Nature (IUCN) defines NBS as "actions to protect, sustainably manage, and restore natural or modified ecosystems, that address societal challenges effectively and adaptively, simultaneously providing human well-being and biodiversity benefits". Societal challenges of relevance here include climate change, food security, disaster risk reduction, water security. In other words: "Nature-based solutions are interventions that use the natural functions of healthy ecosystems to protect the environment but also provide numerous economic and social benefits." They are used both in the context of climate change mitigation as well as adaptation. 
The European Commission's definition of NBS states that these solutions are "inspired and supported by nature, which are cost-effective, simultaneously provide environmental, social and economic benefits and help build resilience. Such solutions bring more, and more diverse, nature and natural features and processes into cities, landscapes, and seascapes, through locally adapted, resource-efficient and systemic interventions". In 2020, the EC definition was updated to further emphasise that "Nature-based solutions must benefit biodiversity and support the delivery of a range of ecosystem services." The IPCC Sixth Assessment Report pointed out that the term nature-based solutions is "widely but not universally used in the scientific literature". As of 2017, the term NBS was still regarded as "poorly defined and vague". The term ecosystem-based adaptation (EbA) is a subset of nature-based solutions and "aims to maintain and increase the resilience and reduce the vulnerability of ecosystems and people in the face of the adverse effects of climate change". History of the term The term nature-based solutions was put forward by practitioners in the late 2000s. At that time it was used by international organisations such as the International Union for Conservation of Nature and the World Bank in the context of finding new solutions to mitigate and adapt to climate change effects by working with natural ecosystems rather than relying purely on engineering interventions. Many indigenous peoples have recognised the natural environment as playing an important role in human well-being as part of their traditional knowledge systems, but this idea did not enter into modern scientific literature until the 1970's with the concept of ecosystem services. The IUCN referred to NBS in a position paper for the United Nations Framework Convention on Climate Change. The term was also adopted by European policymakers, in particular by the European Commission, in a report stressing that NBS can offer innovative means to create jobs and growth as part of a green economy. The term started to make appearances in the mainstream media around the time of the Global Climate Action Summit in California in September 2018. Objectives and framing Nature-based solutions stress the sustainable use of nature in solving coupled environmental-social-economic challenges. NBS go beyond traditional biodiversity conservation and management principles by "re-focusing" the debate on humans and specifically integrating societal factors such as human well-being and poverty reduction, socio-economic development, and governance principles. The general objective of NBS is clear, namely the sustainable management and use of Nature for tackling societal challenges. However, different stakeholders view NBS from a variety of perspectives. For instance, the IUCN puts the need for well-managed and restored ecosystems at the heart of NBS, with the overarching goal of "Supporting the achievement of society's development goals and safeguard human well-being in ways that reflect cultural and societal values and enhance the resilience of ecosystems, their capacity for renewal and the provision of services". The European Commission underlines that NBS can transform environmental and societal challenges into innovation opportunities, by turning natural capital into a source for green growth and sustainable development. 
Within this viewpoint, nature-based solutions to societal challenges "bring more, and more diverse, nature and natural features and processes into cities, landscapes and seascapes, through locally adapted, resource-efficient and systemic interventions". As a result, NBS has been suggested as a means of implementing the nature-positive goal to halt and reverse nature loss by 2030, and achieve full nature recovery by 2050. Categories The IUCN proposes to consider NBS as an umbrella concept. Categories and examples of NBS approaches according to the IUCN include: Types Scientists have proposed a typology to characterise NBS along two gradients: "How much engineering of biodiversity and ecosystems is involved in NBS", and "How many ecosystem services and stakeholder groups are targeted by a given NBS". The typology highlights that NBS can involve very different actions on ecosystems (from protection, to management, or even the creation of new ecosystems) and is based on the assumption that the higher the number of services and stakeholder groups targeted, the lower the capacity to maximise the delivery of each service and simultaneously fulfil the specific needs of all stakeholder groups. As such, three types of NBS are distinguished (hybrid solutions exist along this gradient both in space and time. For instance, at a landscape scale, mixing protected and managed areas could be required to fulfill multi-functionality and sustainability goals): Type 1 – Minimal intervention in ecosystems Type 1 consists of no or minimal intervention in ecosystems, with the objectives of maintaining or improving the delivery of a range of ecosystem services both inside and outside of these conserved ecosystems. Examples include the protection of mangroves in coastal areas to limit risks associated with extreme weather conditions; and the establishment of marine protected areas to conserve biodiversity within these areas while exporting fish and other biomass into fishing grounds. This type of NBS is connected to, for example, the concept of biosphere reserves. Type 2 – Some interventions in ecosystems and landscapes Type 2 corresponds to management approaches that develop sustainable and multifunctional ecosystems and landscapes (extensively or intensively managed). These types improve the delivery of selected ecosystem services compared to what would be obtained through a more conventional intervention. Examples include innovative planning of agricultural landscapes to increase their multi-functionality; using existing agrobiodiversity to increase biodiversity, connectivity, and resilience in landscapes; and approaches for enhancing tree species and genetic diversity to increase forest resilience to extreme events. This type of NBS is strongly connected to concepts like agroforestry. Type 3 – Managing ecosystems in extensive ways Type 3 consists of managing ecosystems in very extensive ways or even creating new ecosystems (e.g., artificial ecosystems with new assemblages of organisms for green roofs and walls to mitigate city warming and clean polluted air). Type 3 is linked to concepts like green and blue infrastructures and objectives like restoration of heavily degraded or polluted areas and greening cities. Constructed wetlands are one example for a Type 3 NBS. Applications Climate change mitigation and adaptation The 2019 UN Climate Action Summit highlighted nature-based solutions as an effective method to combat climate change. 
For example, NBS in the context of climate action can include natural flood management, restoring natural coastal defences, providing local cooling, restoring natural fire regimes. The Paris Agreement calls on all Parties to recognise the role of natural ecosystems in providing services such as that of carbon sinks. Article 5.2 encourages Parties to adopt conservation and management as a tool for increasing carbon stocks and Article 7.1 encourages Parties to build the resilience of socioeconomic and ecological systems through economic diversification and sustainable management of natural resources. The Agreement refers to nature (ecosystems, natural resources, forests) in 13 distinct places. An in-depth analysis of all Nationally Determined Contributions submitted to UNFCCC, revealed that around 130 NDCs or 65% of signatories commit to nature-based solutions in their climate pledges. This suggests a broad consensus for the role of nature in helping to meet climate change goals. However, high-level commitments rarely translate into robust, measurable actions on-the-ground. A global systemic map of evidence was produced to determine and illustrate the effectiveness of NBS for climate change adaptation. After sorting through 386 case studies with computer programs, the study found that NBS were just as, if not more, effective than traditional or alternative flood management strategies. 66% of cases evaluated reported positive ecological outcomes, 24% did not identify a change in ecological conditions and less than 1% reported negative impacts. Furthermore, NBS always had better social and climate change mitigation impacts. In the 2019 UN Climate Action Summit, nature-based solutions were one of the main topics covered, and were discussed as an effective method to combat climate change. A "Nature-Based Solution Coalition" was created, including dozens of countries, led by China and New Zealand. Urban areas Since around 2017, many studies have proposed ways of planning and implementing nature-based solutions in urban areas. It is crucial that grey infrastructures continue to be used with green infrastructure. Multiple studies recognise that while NBS is very effective and improves flood resilience, it is unable to act alone and must be in coordination with grey infrastructure. Using green infrastructure alone or grey infrastructure alone are less effective than when the two are used together. When NBS is used alongside grey infrastructure the benefits transcend flood management and improve social conditions, increase carbon sequestration and prepare cities for planning for resilience. In the 1970s a popular approach in the U.S. was that of Best Management Practices (BMP) for using nature as a model for infrastructure and development while the UK had a model for flood management called "sustainable drainage systems". Another framework called "Water Sensitive Urban Design" (WSUD) came out of Australia in the 1990s while Low Impact Development (LID) came out of the U.S.  Eventually New Zealand reframed LID to create "Low Impact Urban Design and Development" (LIUDD) with a focus on using diverse stakeholders as a foundation. Then in the 2000s the western hemisphere largely adopted "Green Infrastructure" for stormwater management as well as enhancing social, economic and environmental conditions for sustainability. 
In a Chinese National Government program, the Sponge Cities Program, planners are using green-grey infrastructure in 30 Chinese cities as a way to manage pluvial flooding and climate change risk after rapid urbanization. Water management aspects With respect to water issues, NBS can achieve the following: use natural processes to enhance water availability (e.g., soil moisture retention, groundwater recharge); improve water quality (e.g., natural wetlands and constructed wetlands to treat wastewater; riparian buffer strips); and reduce risks associated with water-related disasters and climate change (e.g., floodplain restoration, green roofs). The UN has also tried to promote a shift in perspective towards NBS: the theme for World Water Day 2018 was "Nature for Water", while UN-Water's accompanying UN World Water Development Report was titled "Nature-based Solutions for Water". For example, the Lancaster Environment Centre has implemented catchments at different scales on flood basins in conjunction with modelling software that allows observers to calculate the factor by which the floodplain expanded during two storm events. The idea is to divert higher flood flows into expandable areas of storage in the landscape. Forest restoration for multiple benefits Forest restoration can benefit both biodiversity and human livelihoods (e.g., providing food, timber and medicinal products). Diverse, native tree species are also more likely to be resilient to climate change than plantation forests. Agricultural expansion has been the main driver of deforestation globally. Forest loss has been estimated at around 4.7 million ha per year in 2010–2020. Over the same period, Asia had the highest net gain of forest area, followed by Oceania and Europe. Forest restoration, as part of national development strategies, can help countries achieve sustainable development goals. For example, in Rwanda, the Rwanda Natural Resources Authority, the World Resources Institute and the IUCN began a program in 2015 for forest landscape restoration as a national priority. The NBS approaches used were ecological restoration and ecosystem-based mitigation, and the program was meant to address the following societal issues: food security, water security, and disaster risk reduction. The Great Green Wall, a joint campaign among African countries to combat desertification, was launched in 2007. Implementation Guidance for effective implementation A number of studies and reports have proposed principles and frameworks to guide effective and appropriate implementation. One primary principle, for example, is that NBS seek to embrace, rather than replace, nature conservation norms. NBS can be implemented alone or in an integrated manner along with other solutions to societal challenges (e.g. technological and engineering solutions) and are applied at the landscape scale. Researchers have pointed out that "instead of framing NBS as an alternative to engineered approaches, we should focus on finding synergies among different solutions". The concept of NBS is gaining acceptance outside the conservation community (e.g. urban planning) and is now on its way to being mainstreamed into policies and programmes (climate change policy, law, infrastructure investment, and financing mechanisms), although NBS still face many implementation barriers and challenges. Multiple case studies have demonstrated that NBS can be more economically viable than traditional technological infrastructures.
Implementation of NBS requires measures like adaptation of economic subsidy schemes, and the creation of opportunities for conservation finance, to name a few. Using geographic information systems (GIS) NBS are also determined by site-specific natural and cultural contexts that include traditional, local and scientific knowledge. Geographic information systems (GIS) can be used as an analysis tool to determine sites that may succeed as NBS. GIS can function in such a way that site conditions including slope gradients, water bodies, land use and soils are taken into account in analyzing for suitability. The resulting maps are often used in conjunction with historic flood maps to determine the potential of floodwater storage capacity on specific sites using 3D modeling tools. Projects supported by the European Union Since 2016, the EU has supported a multi-stakeholder dialogue platform (ThinkNature) to promote the co-design, testing, and deployment of improved and innovative NBS in an integrated way. The creation of such science-policy-business-society interfaces could promote market uptake of NBS. The project was part of the EU’s Horizon 2020 Research and Innovation programme, and ran for 3 years. In 2017, as part of the Presidency of the Estonian Republic of the Council of the European Union, a conference called "Nature-based Solutions: From Innovation to Common-use" was organised by the Ministry of the Environment of Estonia and the University of Tallinn. This conference aimed to strengthen synergies among various recent initiatives and programs related to NBS, focusing on policy and governance of NBS, research, and innovation. Concerns The Indigenous Environmental Network has stated that "Nature-based solutions (NBS) is a greenwashing tool that does not address the root causes of climate change." and "The legacy of colonial power continues through nature-based solutions." For example, NBS activities can involve converting non-forest land into forest plantations (for climate change mitigation) but this carries risks of climate injustice through taking land away from smallholders and pastoralists. However, the IPCC pointed out that the term is "the subject of ongoing debate, with concerns that it may lead to the misunderstanding that NbS on its own can provide a global solution to climate change". To clarify this point further, the IPCC also stated that "nature-based systems cannot be regarded as an alternative to, or a reason to delay, deep cuts in GHG emissions". The majority of case studies and examples of NBS are from the Global North, resulting in a lack of data for many medium- and low-income nations. Consequently, many ecosystems and climates are excluded from existing studies as well as cost analyses in these locations. Further research needs to be conducted in the Global South to determine the efficacy of NBS on climate, social and ecological standards. Related concepts NBS is closely related to concepts like ecosystem approaches and ecological engineering. This includes concepts such as ecosystem-based adaptation and green infrastructure. For instance, ecosystem-based approaches are increasingly promoted for climate change adaptation and mitigation by organisations like the United Nations Environment Programme and non-governmental organisations such as The Nature Conservancy. 
These organisations refer to "policies and measures that take into account the role of ecosystem services in reducing the vulnerability of society to climate change, in a multi-sectoral and multi-scale approach".
External links
Nature-based solutions in the context of climate change:
Nature-based Solutions Initiative – interdisciplinary programme of research, education and policy advice based in the Departments of Biology and Geography at the University of Oxford
An Introduction to Nature-based Solutions (by weADAPT)
Short film by Greta Thunberg and George Monbiot: Nature Now (2020)
Q&A: Can 'nature-based solutions' help address climate change? by CarbonBrief, 2021
Nature-based solutions in other contexts:
Sustainable cities: Nature-based solutions in urban design (The Nature Conservancy): https://vimeo.com/155849692
Video: Think Nature: A guide to using nature-based solutions (IUCN)
On the Origin of Species
On the Origin of Species (or, more completely, On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life) is a work of scientific literature by Charles Darwin that is considered to be the foundation of evolutionary biology. It was published on 24 November 1859. Darwin's book introduced the scientific theory that populations evolve over the course of generations through a process of natural selection although Lamarckism was also included as a mechanism of lesser importance. The book presented a body of evidence that the diversity of life arose by common descent through a branching pattern of evolution. Darwin included evidence that he had collected on the Beagle expedition in the 1830s and his subsequent findings from research, correspondence, and experimentation. Various evolutionary ideas had already been proposed to explain new findings in biology. There was growing support for such ideas among dissident anatomists and the general public, but during the first half of the 19th century the English scientific establishment was closely tied to the Church of England, while science was part of natural theology. Ideas about the transmutation of species were controversial as they conflicted with the beliefs that species were unchanging parts of a designed hierarchy and that humans were unique, unrelated to other animals. The political and theological implications were intensely debated, but transmutation was not accepted by the scientific mainstream. The book was written for non-specialist readers and attracted widespread interest upon its publication. Darwin was already highly regarded as a scientist, so his findings were taken seriously and the evidence he presented generated scientific, philosophical, and religious discussion. The debate over the book contributed to the campaign by T. H. Huxley and his fellow members of the X Club to secularise science by promoting scientific naturalism. Within two decades, there was widespread scientific agreement that evolution, with a branching pattern of common descent, had occurred, but scientists were slow to give natural selection the significance that Darwin thought appropriate. During "the eclipse of Darwinism" from the 1880s to the 1930s, various other mechanisms of evolution were given more credit. With the development of the modern evolutionary synthesis in the 1930s and 1940s, Darwin's concept of evolutionary adaptation through natural selection became central to modern evolutionary theory, and it has now become the unifying concept of the life sciences. Summary of Darwin's theory Darwin's theory of evolution is based on key facts and the inferences drawn from them, which biologist Ernst Mayr summarised as follows: Every species is fertile enough that if all offspring survived to reproduce, the population would grow (fact). Despite periodic fluctuations, populations remain roughly the same size (fact). Resources such as food are limited and are relatively stable over time (fact). A struggle for survival ensues (inference). Individuals in a population vary significantly from one another (fact). Much of this variation is heritable (fact). Individuals less suited to the environment are less likely to survive and less likely to reproduce; individuals more suited to the environment are more likely to survive and more likely to reproduce and leave their heritable traits to future generations, which produces the process of natural selection (fact). 
This slowly effected process results in populations changing to adapt to their environments, and ultimately, these variations accumulate over time to form new species (inference). Background Developments before Darwin's theory In later editions of the book, Darwin traced evolutionary ideas as far back as Aristotle; the text he cites is a summary by Aristotle of the ideas of the earlier Greek philosopher Empedocles. Early Christian Church Fathers and Medieval European scholars interpreted the Genesis creation narrative allegorically rather than as a literal historical account; organisms were described by their mythological and heraldic significance as well as by their physical form. Nature was widely believed to be unstable and capricious, with monstrous births from union between species, and spontaneous generation of life. The Protestant Reformation inspired a literal interpretation of the Bible, with concepts of creation that conflicted with the findings of an emerging science seeking explanations congruent with the mechanical philosophy of René Descartes and the empiricism of the Baconian method. After the turmoil of the English Civil War, the Royal Society wanted to show that science did not threaten religious and political stability. John Ray developed an influential natural theology of rational order; in his taxonomy, species were static and fixed, their adaptation and complexity designed by God, and varieties showed minor differences caused by local conditions. In God's benevolent design, carnivores caused mercifully swift death, but the suffering caused by parasitism was a puzzling problem. The biological classification introduced by Carl Linnaeus in 1735 also viewed species as fixed according to the divine plan, but did recognize the hierarchical nature of different taxa. In 1766, Georges Buffon suggested that some similar species, such as horses and asses, or lions, tigers, and leopards, might be varieties descended from a common ancestor. The Ussher chronology of the 1650s had calculated creation at 4004 BC, but by the 1780s geologists assumed a much older world. Wernerians thought strata were deposits from shrinking seas, but James Hutton proposed a self-maintaining infinite cycle, anticipating uniformitarianism. Charles Darwin's grandfather Erasmus Darwin outlined a hypothesis of transmutation of species in the 1790s, and French naturalist Jean-Baptiste Lamarck published a more developed theory in 1809. Both envisaged that spontaneous generation produced simple forms of life that progressively developed greater complexity, adapting to the environment by inheriting changes in adults caused by use or disuse. This process was later called Lamarckism. Lamarck thought there was an inherent progressive tendency driving organisms continuously towards greater complexity, in parallel but separate lineages with no perceptible extinction. Geoffroy contended that embryonic development recapitulated transformations of organisms in past eras when the environment acted on embryos, and that animal structures were determined by a constant plan as demonstrated by homologies. Georges Cuvier strongly disputed such ideas, holding that unrelated, fixed species showed similarities that reflected a design for functional needs. His palæontological work in the 1790s had established the reality of extinction, which he explained by local catastrophes, followed by repopulation of the affected areas by other species. 
In Britain, William Paley's Natural Theology saw adaptation as evidence of beneficial "design" by the Creator acting through natural laws. All naturalists in the two English universities (Oxford and Cambridge) were Church of England clergymen, and science became a search for these laws. Geologists adapted catastrophism to show repeated worldwide annihilation and creation of new fixed species adapted to a changed environment, initially identifying the most recent catastrophe as the biblical flood. Some anatomists such as Robert Grant were influenced by Lamarck and Geoffroy, but most naturalists regarded their ideas of transmutation as a threat to divinely appointed social order. Inception of Darwin's theory Darwin went to Edinburgh University in 1825 to study medicine. In his second year he neglected his medical studies for natural history and spent four months assisting Robert Grant's research into marine invertebrates. Grant revealed his enthusiasm for the transmutation of species, but Darwin rejected it. Starting in 1827, at Cambridge University, Darwin learnt science as natural theology from botanist John Stevens Henslow, and read Paley, John Herschel and Alexander von Humboldt. Filled with zeal for science, he studied catastrophist geology with Adam Sedgwick. In December 1831, he joined the Beagle expedition as a gentleman naturalist and geologist. He read Charles Lyell's Principles of Geology and from the first stop ashore, at St. Jago, found Lyell's uniformitarianism a key to the geological history of landscapes. Darwin discovered fossils resembling huge armadillos, and noted the geographical distribution of modern species in hope of finding their "centre of creation". The three Fuegian missionaries the expedition returned to Tierra del Fuego were friendly and civilised, yet to Darwin their relatives on the island seemed "miserable, degraded savages", and he no longer saw an unbridgeable gap between humans and animals. As the Beagle neared England in 1836, he noted that species might not be fixed. Richard Owen showed that fossils of extinct species Darwin found in South America were allied to living species on the same continent. In March 1837, ornithologist John Gould announced that Darwin's rhea was a separate species from the previously described rhea (though their territories overlapped), that mockingbirds collected on the Galápagos Islands represented three separate species each unique to a particular island, and that several distinct birds from those islands were all classified as finches. Darwin began speculating, in a series of notebooks, on the possibility that "one species does change into another" to explain these findings, and around July sketched a genealogical branching of a single evolutionary tree, discarding Lamarck's independent lineages progressing to higher forms. Unconventionally, Darwin asked questions of fancy pigeon and animal breeders as well as established scientists. At the zoo he had his first sight of an ape, and was profoundly impressed by how human the orangutan seemed. In late September 1838, he started reading Thomas Malthus's An Essay on the Principle of Population with its statistical argument that human populations, if unrestrained, breed beyond their means and struggle to survive. 
Darwin related this to the struggle for existence among wildlife and botanist de Candolle's "warring of the species" in plants; he immediately envisioned "a force like a hundred thousand wedges" pushing well-adapted variations into "gaps in the economy of nature", so that the survivors would pass on their form and abilities, and unfavourable variations would be destroyed. By December 1838, he had noted a similarity between the act of breeders selecting traits and a Malthusian Nature selecting among variants thrown up by "chance" so that "every part of newly acquired structure is fully practical and perfected". Darwin now had the basic framework of his theory of natural selection, but he was fully occupied with his career as a geologist and held back from compiling it until his book on The Structure and Distribution of Coral Reefs was completed. As he recalled in his autobiography, he had "at last got a theory by which to work", but it was only in June 1842 that he allowed himself "the satisfaction of writing a very brief abstract of my theory in pencil". Further development Darwin continued to research and extensively revise his theory while focusing on his main work of publishing the scientific results of the Beagle voyage. He tentatively wrote of his ideas to Lyell in January 1842; then in June he roughed out a 35-page "Pencil Sketch" of his theory. Darwin began correspondence about his theorising with the botanist Joseph Dalton Hooker in January 1844, and by July had rounded out his "sketch" into a 230-page "Essay", to be expanded with his research results and published if he died prematurely. In November 1844, the anonymously published popular science book Vestiges of the Natural History of Creation, written by Scottish journalist Robert Chambers, widened public interest in the concept of transmutation of species. Vestiges used evidence from the fossil record and embryology to support the claim that living things had progressed from the simple to the more complex over time. But it proposed a linear progression rather than the branching common descent theory behind Darwin's work in progress, and it ignored adaptation. Darwin read it soon after publication, and scorned its amateurish geology and zoology, but he carefully reviewed his own arguments after leading scientists, including Adam Sedgwick, attacked its morality and scientific errors. Vestiges had significant influence on public opinion, and the intense debate helped to pave the way for the acceptance of the more scientifically sophisticated Origin by moving evolutionary speculation into the mainstream. While few naturalists were willing to consider transmutation, Herbert Spencer became an active proponent of Lamarckism and progressive development in the 1850s. Hooker was persuaded to take away a copy of the "Essay" in January 1847, and eventually sent a page of notes giving Darwin much-needed feedback. Reminded of his lack of expertise in taxonomy, Darwin began an eight-year study of barnacles, becoming the leading expert on their classification. Using his theory, he discovered homologies showing that slightly changed body parts served different functions to meet new conditions, and he found an intermediate stage in the evolution of distinct sexes. Darwin's barnacle studies convinced him that variation arose constantly and not just in response to changed circumstances. In 1854, he completed the last part of his Beagle-related writing and began working full-time on evolution. 
He now realised that the branching pattern of evolutionary divergence was explained by natural selection working constantly to improve adaptation. His thinking changed from the view that species formed in isolated populations only, as on islands, to an emphasis on speciation without isolation; that is, he saw increasing specialisation within large stable populations as continuously exploiting new ecological niches. He conducted empirical research focusing on difficulties with his theory. He studied the developmental and anatomical differences between different breeds of many domestic animals, became actively involved in fancy pigeon breeding, and experimented (with the help of his young son Francis) on ways that plant seeds and animals might disperse across oceans to colonise distant islands. By 1856, his theory was much more sophisticated, with a mass of supporting evidence. Publication Time taken to publish In his autobiography, Darwin said he had "gained much by my delay in publishing from about 1839, when the theory was clearly conceived, to 1859; and I lost nothing by it". On the first page of his 1859 book he noted that, having begun work on the topic in 1837, he had drawn up "some short notes" after five years, had enlarged these into a sketch in 1844, and "from that period to the present day I have steadily pursued the same object." Various biographers have proposed that Darwin avoided or delayed making his ideas public for personal reasons. Reasons suggested have included fear of religious persecution or social disgrace if his views were revealed, and concern about upsetting his clergymen naturalist friends or his pious wife Emma. Charles Darwin's illness caused repeated delays. His paper on Glen Roy had proved embarrassingly wrong, and he may have wanted to be sure he was correct. David Quammen has suggested all these factors may have contributed, and notes Darwin's large output of books and busy family life during that time. A more recent study by science historian John van Wyhe has determined that the idea that Darwin delayed publication only dates back to the 1940s, and Darwin's contemporaries thought the time he took was reasonable. Darwin always finished one book before starting another. While he was researching, he told many people about his interest in transmutation without causing outrage. He firmly intended to publish, but it was not until September 1854 that he could work on it full-time. His 1846 estimate that writing his "big book" would take five years proved optimistic. Events leading to publication: "big book" manuscript An 1855 paper on the "introduction" of species, written by Alfred Russel Wallace, claimed that patterns in the geographical distribution of living and fossil species could be explained if every new species always came into existence near an already existing, closely related species. Charles Lyell recognised the implications of Wallace's paper and its possible connection to Darwin's work, although Darwin did not, and in a letter written on 1–2 May 1856 Lyell urged Darwin to publish his theory to establish priority. Darwin was torn between the desire to set out a full and convincing account and the pressure to quickly produce a short paper. He met Lyell, and in correspondence with Joseph Dalton Hooker affirmed that he did not want to expose his ideas to review by an editor as would have been required to publish in an academic journal. 
He began a "sketch" account on 14 May 1856, and by July had decided to produce a full technical treatise on species as his "big book" on Natural Selection. His theory including the principle of divergence was complete by 5 September 1857 when he sent Asa Gray a brief but detailed abstract of his ideas. Joint publication of papers by Wallace and Darwin Darwin was hard at work on the manuscript for his "big book" on Natural Selection, when on 18 June 1858 he received a parcel from Wallace, who stayed on the Maluku Islands (Ternate and Gilolo). It enclosed twenty pages describing an evolutionary mechanism, a response to Darwin's recent encouragement, with a request to send it on to Lyell if Darwin thought it worthwhile. The mechanism was similar to Darwin's own theory. Darwin wrote to Lyell that "your words have come true with a vengeance, ... forestalled" and he would "of course, at once write and offer to send [it] to any journal" that Wallace chose, adding that "all my originality, whatever it may amount to, will be smashed". Lyell and Hooker agreed that a joint publication putting together Wallace's pages with extracts from Darwin's 1844 Essay and his 1857 letter to Gray should be presented at the Linnean Society, and on 1 July 1858, the papers entitled On the Tendency of Species to form Varieties; and on the Perpetuation of Varieties and Species by Natural Means of Selection, by Wallace and Darwin respectively, were read out but drew little reaction. While Darwin considered Wallace's idea to be identical to his concept of natural selection, historians have pointed out differences. Darwin described natural selection as being analogous to the artificial selection practised by animal breeders, and emphasised competition between individuals; Wallace drew no comparison to selective breeding, and focused on ecological pressures that kept different varieties adapted to local conditions. Some historians have suggested that Wallace was actually discussing group selection rather than selection acting on individual variation. Abstract of Species book Soon after the meeting, Darwin decided to write "an abstract of my whole work" in the form of one or more papers to be published by the Linnean Society, but was concerned about "how it can be made scientific for a Journal, without giving facts, which would be impossible." He asked Hooker how many pages would be available, but "If the Referees were to reject it as not strictly scientific I would, perhaps publish it as pamphlet." He began his "abstract of Species book" on 20 July 1858, while on holiday at Sandown, and wrote parts of it from memory, while sending the manuscripts to his friends for checking. By early October, he began to "expect my abstract will run into a small volume, which will have to be published separately." Over the same period, he continued to collect information and write large fully detailed sections of the manuscript for his "big book" on Species, Natural Selection. Murray as publisher; choice of title By mid-March 1859 Darwin's abstract had reached the stage where he was thinking of early publication; Lyell suggested the publisher John Murray, and met with him to find if he would be willing to publish. On 28 March Darwin wrote to Lyell asking about progress, and offering to give Murray assurances "that my Book is not more un-orthodox, than the subject makes inevitable." 
He enclosed a draft title sheet proposing An abstract of an Essay on the Origin of Species and Varieties Through natural selection, with the year shown as "1859". Murray's response was favourable, and a very pleased Darwin told Lyell on 30 March that he would "send shortly a large bundle of M.S. but unfortunately I cannot for a week, as the three first chapters are in three copyists' hands". He bowed to Murray's objection to "abstract" in the title, though he felt it excused the lack of references, but wanted to keep "natural selection" which was "constantly used in all works on Breeding", and hoped "to retain it with Explanation, somewhat as thus",— Through Natural Selection or the preservation of favoured races. On 31 March Darwin wrote to Murray in confirmation, and listed headings of the 12 chapters in progress: he had drafted all except "XII. Recapitulation & Conclusion". Murray responded immediately with an agreement to publish the book on the same terms as he published Lyell, without even seeing the manuscript: he offered Darwin ⅔ of the profits. Darwin promptly accepted with pleasure, insisting that Murray would be free to withdraw the offer if, having read the chapter manuscripts, he felt the book would not sell well (eventually Murray paid £180 to Darwin for the first edition and by Darwin's death in 1882 the book was in its sixth edition, earning Darwin nearly £3000). On 5 April, Darwin sent Murray the first three chapters, and a proposal for the book's title. An early draft title page suggests On the Mutability of Species. Murray cautiously asked Whitwell Elwin to review the chapters. At Lyell's suggestion, Elwin recommended that, rather than "put forth the theory without the evidence", the book should focus on observations upon pigeons, briefly stating how these illustrated Darwin's general principles and preparing the way for the larger work expected shortly: "Every body is interested in pigeons." Darwin responded that this was impractical: he had only the last chapter still to write. In September the main title still included "An essay on the origin of species and varieties", but Darwin now proposed dropping "varieties". With Murray's persuasion, the title was eventually agreed as On the Origin of Species, with the title page adding by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life. In this extended title (and elsewhere in the book) Darwin used the biological term "races" interchangeably with "varieties", meaning varieties within a species. He used the term broadly, and as well as discussions of "the several races, for instance, of the cabbage" and "the hereditary varieties or races of our domestic animals and plants", there are three instances in the book where the phrase "races of man" is used, referring to races of humans. Publication and subsequent editions On the Origin of Species was first published on Thursday 24 November 1859, priced at fifteen shillings with a first printing of 1250 copies. The book had been offered to booksellers at Murray's autumn sale on Tuesday 22 November, and all available copies had been taken up immediately. In total, 1,250 copies were printed but after deducting presentation and review copies, and five for Stationers' Hall copyright, around 1,170 copies were available for sale. Significantly, 500 were taken by Mudie's Library, ensuring that the book promptly reached a large number of subscribers to the library. 
The second edition of 3,000 copies was quickly brought out on 7 January 1860, and incorporated numerous corrections as well as a response to religious objections by the addition of a new epigraph on page ii, a quotation from Charles Kingsley, and the phrase "by the Creator" added to the closing sentence. During Darwin's lifetime the book went through six editions, with cumulative changes and revisions to deal with counter-arguments raised. The third edition came out in 1861, with a number of sentences rewritten or added and an introductory appendix, An Historical Sketch of the Recent Progress of Opinion on the Origin of Species. In response to objections that the origin of life was unexplained, Darwin pointed to acceptance of Newton's law even though the cause of gravity was unknown, and Leibnitz had accused Newton of introducing "occult qualities & miracles". The fourth edition in 1866 had further revisions. The fifth edition, published on 10 February 1869, incorporated more changes and for the first time included the phrase "survival of the fittest", which had been coined by the philosopher Herbert Spencer in his Principles of Biology (1864). In January 1871, George Jackson Mivart's On the Genesis of Species listed detailed arguments against natural selection, and claimed it included false metaphysics. Darwin made extensive revisions to the sixth edition of the Origin (this was the first edition in which he used the word "evolution" which had commonly been associated with embryological development, though all editions concluded with the word "evolved"), and added a new chapter VII, Miscellaneous objections, to address Mivart's arguments. The sixth edition was published by Murray on 19 February 1872 as The Origin of Species, with "On" dropped from the title. Darwin had told Murray of working men in Lancashire clubbing together to buy the fifth edition at 15 shillings and wanted it made more widely available; the price was halved to 7s 6d by printing in a smaller font. It includes a glossary compiled by W.S. Dallas. Book sales increased from 60 to 250 per month. Publication outside Great Britain In the United States, botanist Asa Gray, an American colleague of Darwin, negotiated with a Boston publisher for publication of an authorised American version, but learnt that two New York publishing firms were already planning to exploit the absence of international copyright to print Origin. Darwin was delighted by the popularity of the book, and asked Gray to keep any profits. Gray managed to negotiate a 5% royalty with Appleton's of New York, who got their edition out in mid-January 1860, and the other two withdrew. In a May letter, Darwin mentioned a print run of 2,500 copies, but it is not clear if this referred to the first printing only, as there were four that year. The book was widely translated in Darwin's lifetime, but problems arose with translating concepts and metaphors, and some translations were biased by the translator's own agenda. Darwin distributed presentation copies in France and Germany, hoping that suitable applicants would come forward, as translators were expected to make their own arrangements with a local publisher. He welcomed the distinguished elderly naturalist and geologist Heinrich Georg Bronn, but the German translation published in 1860 imposed Bronn's own ideas, adding controversial themes that Darwin had deliberately omitted. 
Bronn translated "favoured races" as "perfected races", and added essays on issues including the origin of life, as well as a final chapter on religious implications partly inspired by Bronn's adherence to Naturphilosophie. In 1862, Bronn produced a second edition based on the third English edition and Darwin's suggested additions, but then died of a heart attack. Darwin corresponded closely with Julius Victor Carus, who published an improved translation in 1867. Darwin's attempts to find a translator in France fell through, and the translation by Clémence Royer published in 1862 added an introduction praising Darwin's ideas as an alternative to religious revelation and promoting ideas anticipating social Darwinism and eugenics, as well as numerous explanatory notes giving her own answers to doubts that Darwin expressed. Darwin corresponded with Royer about a second edition published in 1866 and a third in 1870, but he had difficulty getting her to remove her notes and was troubled by these editions. He remained unsatisfied until a translation by Edmond Barbier was published in 1876. A Dutch translation by Tiberius Cornelis Winkler was published in 1860. By 1864, additional translations had appeared in Italian and Russian. In Darwin's lifetime, Origin was published in Swedish in 1871, Danish in 1872, Polish in 1873, Hungarian in 1873–1874, Spanish in 1877 and Serbian in 1878. By 1977, Origin had appeared in an additional 18 languages, including Chinese by Ma Chün-wu who added non-Darwinian ideas; he published the preliminaries and chapters 1–5 in 1902–1904, and his complete translation in 1920. Content Title pages and introduction Page ii contains quotations by William Whewell and Francis Bacon on the theology of natural laws, harmonising science and religion in accordance with Isaac Newton's belief in a rational God who established a law-abiding cosmos. In the second edition, Darwin added an epigraph from Joseph Butler affirming that God could work through scientific laws as much as through miracles, in a nod to the religious concerns of his oldest friends. The Introduction establishes Darwin's credentials as a naturalist and author, then refers to John Herschel's letter suggesting that the origin of species "would be found to be a natural in contradistinction to a miraculous process": WHEN on board HMS Beagle, as naturalist, I was much struck with certain facts in the distribution of the inhabitants of South America, and in the geological relations of the present to the past inhabitants of that continent. These facts seemed to me to throw some light on the origin of species—that mystery of mysteries, as it has been called by one of our greatest philosophers. Darwin refers specifically to the distribution of the species rheas, and to that of the Galápagos tortoises and mockingbirds. He mentions his years of work on his theory, and the arrival of Wallace at the same conclusion, which led him to "publish this Abstract" of his incomplete work. He outlines his ideas, and sets out the essence of his theory: As many more individuals of each species are born than can possibly survive; and as, consequently, there is a frequently recurring struggle for existence, it follows that any being, if it vary however slightly in any manner profitable to itself, under the complex and sometimes varying conditions of life, will have a better chance of surviving, and thus be naturally selected. From the strong principle of inheritance, any selected variety will tend to propagate its new and modified form. 
Starting with the third edition, Darwin prefaced the introduction with a sketch of the historical development of evolutionary ideas. In that sketch he acknowledged that Patrick Matthew had, unknown to Wallace or himself, anticipated the concept of natural selection in an appendix to a book published in 1831; in the fourth edition he mentioned that William Charles Wells had done so as early as 1813. Variation under domestication and under nature Chapter I covers animal husbandry and plant breeding, going back to ancient Egypt. Darwin discusses contemporary opinions on the origins of different breeds under cultivation to argue that many have been produced from common ancestors by selective breeding. As an illustration of artificial selection, he describes fancy pigeon breeding, noting that "[t]he diversity of the breeds is something astonishing", yet all were descended from one species, the rock dove. Darwin saw two distinct kinds of variation: (1) rare abrupt changes he called "sports" or "monstrosities" (example: Ancon sheep with short legs), and (2) ubiquitous small differences (example: slightly shorter or longer bills of pigeons). Both types of hereditary change can be used by breeders, but for Darwin the small changes were the most important in evolution. In this chapter Darwin expresses his erroneous belief that environmental change is necessary to generate variation. The opening two sentences of On the Origin demonstrate this point, and also show that Darwin did not recognize the stabilizing role that natural selection plays in nature: When we look to the individuals of the same variety or sub-variety of our older cultivated plants and animals, one of the first points which strikes us, is, that they generally differ much more from each other, than do the individuals of any one species or variety in a state of nature. When we reflect on the vast diversity of the plants and animals which have been cultivated, and which have varied during all ages under the most different climates and treatment, I think we are driven to conclude that this greater variability is simply due to our domestic productions having been raised under conditions of life not so uniform as, and somewhat different from, those to which the parent-species have been exposed under nature. Here Darwin attributes the greater variation among individuals of domestic varieties, compared with their progenitor populations in nature, to their "conditions of life" (environment) being "not so uniform as", and "somewhat different from", those of the parent species. He later erroneously elaborates that changed "conditions of life" act on the reproductive organs to generate greater variability in the progeny. Even after 1860, when Darwin read the correct explanation for the greater variation in domestic varieties in Patrick Matthew's book On Naval Timber and Arboriculture (1831), later editions of On the Origin of Species retained these two opening sentences unchanged and continued to omit the explanation based on stabilizing selection. In Chapter II, Darwin specifies that the distinction between species and varieties is arbitrary, with experts disagreeing and changing their decisions when new forms were found. He concludes that "a well-marked variety may be justly called an incipient species" and that "species are only strongly marked and permanent varieties". He argues for the ubiquity of variation in nature. 
Historians have noted that naturalists had long been aware that the individuals of a species differed from one another, but had generally considered such variations to be limited and unimportant deviations from the archetype of each species, that archetype being a fixed ideal in the mind of God. Darwin and Wallace made variation among individuals of the same species central to understanding the natural world. Struggle for existence, natural selection, and divergence In Chapter III, Darwin asks how varieties "which I have called incipient species" become distinct species, and in answer introduces the key concept he calls "natural selection"; in the fifth edition he adds, "But the expression often used by Mr. Herbert Spencer, of the Survival of the Fittest, is more accurate, and is sometimes equally convenient." Owing to this struggle for life, any variation, however slight and from whatever cause proceeding, if it be in any degree profitable to an individual of any species, in its infinitely complex relations to other organic beings and to external nature, will tend to the preservation of that individual, and will generally be inherited by its offspring ... I have called this principle, by which each slight variation, if useful, is preserved, by the term of Natural Selection, in order to mark its relation to man's power of selection. He notes that both A. P. de Candolle and Charles Lyell had stated that all organisms are exposed to severe competition. Darwin emphasizes that he used the phrase "struggle for existence" in "a large and metaphorical sense, including dependence of one being on another"; he gives examples ranging from plants struggling against drought to plants competing for birds to eat their fruit and disseminate their seeds. He describes the struggle resulting from population growth: "It is the doctrine of Malthus applied with manifold force to the whole animal and vegetable kingdoms." He discusses checks to such increase including complex ecological interdependencies, and notes that competition is most severe between closely related forms "which fill nearly the same place in the economy of nature". Chapter IV details natural selection under the "infinitely complex and close-fitting ... mutual relations of all organic beings to each other and to their physical conditions of life". Darwin takes as an example a country where a change in conditions led to extinction of some species, immigration of others and, where suitable variations occurred, descendants of some species became adapted to new conditions. He remarks that the artificial selection practised by animal breeders frequently produced sharp divergence in character between breeds, and suggests that natural selection might do the same, saying: But how, it may be asked, can any analogous principle apply in nature? I believe it can and does apply most efficiently, from the simple circumstance that the more diversified the descendants from any one species become in structure, constitution, and habits, by so much will they be better enabled to seize on many and widely diversified places in the polity of nature, and so be enabled to increase in numbers. Historians have remarked that here Darwin anticipated the modern concept of an ecological niche. He did not suggest that every favourable variation must be selected, nor that the favoured animals were better or higher, but merely more adapted to their surroundings. 
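The selective logic Darwin lays out in these chapters, overproduction of offspring, small heritable variations, and differential survival, can be illustrated with a minimal modern simulation. The sketch below is only an illustrative toy model, not anything found in the Origin: the population size, the number of offspring per parent, the size of the variations and the survival-weight function are all arbitrary assumptions chosen for the example.

```python
import random

# Toy model of selection acting on small heritable variations.
# Assumed, illustrative parameters (not from Darwin): a fixed carrying
# capacity, threefold overproduction, and a survival chance that rises
# gently with the value of a single heritable trait.

random.seed(1)
POP_SIZE = 200            # places available each generation ("struggle for existence")
OFFSPRING_PER_PARENT = 3  # Malthusian overproduction
VARIATION_SD = 0.02       # ubiquitous small heritable differences

def survival_weight(trait):
    # Slightly larger trait values give a slightly better chance of surviving.
    return max(0.0, 1.0 + trait)

population = [0.0] * POP_SIZE  # start with no variation in the trait

for generation in range(100):
    # Each parent leaves several offspring, each varying slightly from it.
    offspring = [parent + random.gauss(0.0, VARIATION_SD)
                 for parent in population
                 for _ in range(OFFSPRING_PER_PARENT)]
    # Only POP_SIZE survive, sampled in proportion to their survival weight.
    population = random.choices(offspring,
                                weights=[survival_weight(t) for t in offspring],
                                k=POP_SIZE)

print("mean trait value after 100 generations:",
      round(sum(population) / len(population), 3))
```

Run repeatedly, the mean trait value tends to creep upward even though each individual variation is tiny, which is the point of Darwin's analogy with the cumulative effect of artificial selection.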
Darwin proposes sexual selection, driven by competition between males for mates, to explain sexually dimorphic features such as lion manes, deer antlers, peacock tails, bird songs, and the bright plumage of some male birds. He analysed sexual selection more fully in The Descent of Man, and Selection in Relation to Sex (1871). Natural selection was expected to work very slowly in forming new species, but given the effectiveness of artificial selection, he could "see no limit to the amount of change, to the beauty and infinite complexity of the coadaptations between all organic beings, one with another and with their physical conditions of life, which may be effected in the long course of time by nature's power of selection". Using a tree diagram and calculations, he indicates the "divergence of character" from original species into new species and genera. He describes branches falling off as extinction occurred, while new branches formed in "the great Tree of life ... with its ever branching and beautiful ramifications". Variation and heredity In Darwin's time there was no agreed-upon model of heredity; in Chapter I Darwin admitted, "The laws governing inheritance are quite unknown." He accepted a version of the inheritance of acquired characteristics (which after Darwin's death came to be called Lamarckism), and Chapter V discusses what he called the effects of use and disuse; he wrote that he thought "there can be little doubt that use in our domestic animals strengthens and enlarges certain parts, and disuse diminishes them; and that such modifications are inherited", and that this also applied in nature. Darwin stated that some changes that were commonly attributed to use and disuse, such as the loss of functional wings in some island-dwelling insects, might be produced by natural selection. In later editions of Origin, Darwin expanded the role attributed to the inheritance of acquired characteristics. Darwin also admitted ignorance of the source of inheritable variations, but speculated they might be produced by environmental factors. However, one thing was clear: whatever the exact nature and causes of new variations, Darwin knew from observation and experiment that breeders were able to select such variations and produce huge differences in many generations of selection. The observation that selection works in domestic animals is not destroyed by lack of understanding of the underlying hereditary mechanism. Breeding of animals and plants showed related varieties varying in similar ways, or tending to revert to an ancestral form, and similar patterns of variation in distinct species were explained by Darwin as demonstrating common descent. He recounted how Lord Morton's mare apparently demonstrated telegony, offspring inheriting characteristics of a previous mate of the female parent, and accepted this process as increasing the variation available for natural selection. More detail was given in Darwin's 1868 book on The Variation of Animals and Plants Under Domestication, which tried to explain heredity through his hypothesis of pangenesis. Although Darwin had privately questioned blending inheritance, he struggled with the theoretical difficulty that novel individual variations would tend to blend into a population. However, inherited variation could be seen, and Darwin's concept of selection working on a population with a range of small variations was workable. 
It was not until the modern evolutionary synthesis in the 1930s and 1940s that a model of heredity became completely integrated with a model of variation. The modern synthesis is often dubbed neo-Darwinian evolution because it combines Charles Darwin's theory of evolution by natural selection with Gregor Mendel's theory of genetic inheritance. Difficulties for the theory Chapter VI begins by saying the next three chapters will address possible objections to the theory, the first being that often no intermediate forms between closely related species are found, though the theory implies such forms must have existed. As Darwin noted, "Firstly, why, if species have descended from other species by insensibly fine gradations, do we not everywhere see innumerable transitional forms? Why is not all nature in confusion, instead of the species being, as we see them, well defined?" Darwin attributed this to the competition between different forms, combined with the small numbers of intermediate forms, which often led to the extinction of such forms. Another difficulty, related to the first one, is the absence or rarity of transitional varieties in time. Darwin commented that by the theory of natural selection "innumerable transitional forms must have existed," and wondered "why do we not find them embedded in countless numbers in the crust of the earth?" (These difficulties are discussed further in the literature on speciation, for example by Bernstein et al. and by Michod.) The chapter then deals with whether natural selection could produce complex specialised structures, and the behaviours to use them, when it would be difficult to imagine how intermediate forms could be functional. Darwin said: Secondly, is it possible that an animal having, for instance, the structure and habits of a bat, could have been formed by the modification of some animal with wholly different habits? Can we believe that natural selection could produce, on the one hand, organs of trifling importance, such as the tail of a giraffe, which serves as a fly-flapper, and, on the other hand, organs of such wonderful structure, as the eye, of which we hardly as yet fully understand the inimitable perfection? His answer was that in many cases animals exist with intermediate structures that are functional. He presented flying squirrels and flying lemurs as examples of how bats might have evolved from non-flying ancestors. He discussed various simple eyes found in invertebrates, starting with nothing more than an optic nerve coated with pigment, as examples of how the vertebrate eye could have evolved. Darwin concludes: "If it could be demonstrated that any complex organ existed, which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down. But I can find out no such case." In a section on "organs of little apparent importance", Darwin discusses the difficulty of explaining various seemingly trivial traits with no evident adaptive function, and outlines some possibilities such as correlation with useful features. He accepts that we "are profoundly ignorant of the causes producing slight and unimportant variations" which distinguish domesticated breeds of animals and human races. 
He suggests that sexual selection might explain these variations: I might have adduced for this same purpose the differences between the races of man, which are so strongly marked; I may add that some little light can apparently be thrown on the origin of these differences, chiefly through sexual selection of a particular kind, but without here entering on copious details my reasoning would appear frivolous. Chapter VII (of the first edition) addresses the evolution of instincts. His examples included two he had investigated experimentally: slave-making ants and the construction of hexagonal cells by honey bees. Darwin noted that some species of slave-making ants were more dependent on slaves than others, and he observed that many ant species will collect and store the pupae of other species as food. He thought it reasonable that species with an extreme dependency on slave workers had evolved in incremental steps. He suggested that bees that make hexagonal cells evolved in steps from bees that made round cells, under pressure from natural selection to economise wax. Darwin concluded: Finally, it may not be a logical deduction, but to my imagination it is far more satisfactory to look at such instincts as the young cuckoo ejecting its foster-brothers, —ants making slaves, —the larvæ of ichneumonidæ feeding within the live bodies of caterpillars, —not as specially endowed or created instincts, but as small consequences of one general law, leading to the advancement of all organic beings, namely, multiply, vary, let the strongest live and the weakest die. Chapter VIII addresses the idea that species had special characteristics that prevented hybrids from being fertile in order to preserve separately created species. Darwin said that, far from being constant, the difficulty in producing hybrids of related species, and the viability and fertility of the hybrids, varied greatly, especially among plants. Sometimes what were widely considered to be separate species produced fertile hybrid offspring freely, and in other cases what were considered to be mere varieties of the same species could only be crossed with difficulty. Darwin concluded: "Finally, then, the facts briefly given in this chapter do not seem to me opposed to, but even rather to support the view, that there is no fundamental distinction between species and varieties." In the sixth edition Darwin inserted a new chapter VII (renumbering the subsequent chapters) to respond to criticisms of earlier editions, including the objection that many features of organisms were not adaptive and could not have been produced by natural selection. He said some such features could have been by-products of adaptive changes to other features, and that often features seemed non-adaptive because their function was unknown, as shown by his book on Fertilisation of Orchids that explained how their elaborate structures facilitated pollination by insects. Much of the chapter responds to George Jackson Mivart's criticisms, including his claim that features such as baleen filters in whales, flatfish with both eyes on one side and the camouflage of stick insects could not have evolved through natural selection because intermediate stages would not have been adaptive. Darwin proposed scenarios for the incremental evolution of each feature. Geological record Chapter IX deals with the fact that the geological record appears to show forms of life suddenly arising, without the innumerable transitional fossils expected from gradual changes. 
Darwin borrowed Charles Lyell's argument in Principles of Geology that the record is extremely imperfect as fossilisation is a very rare occurrence, spread over vast periods of time; since few areas had been geologically explored, there could only be fragmentary knowledge of geological formations, and fossil collections were very poor. Evolved local varieties which migrated into a wider area would seem to be the sudden appearance of a new species. Darwin did not expect to be able to reconstruct evolutionary history, but continuing discoveries gave him well-founded hope that new finds would occasionally reveal transitional forms. To show that there had been enough time for natural selection to work slowly, he cited the example of The Weald as discussed in Principles of Geology together with other observations from Hugh Miller, James Smith of Jordanhill and Andrew Ramsay. Combining this with an estimate of recent rates of sedimentation and erosion, Darwin calculated that erosion of The Weald had taken around 300 million years. The initial appearance of entire groups of well-developed organisms in the oldest fossil-bearing layers, now known as the Cambrian explosion, posed a problem. Darwin had no doubt that earlier seas had swarmed with living creatures, but stated that he had no satisfactory explanation for the lack of fossils. Fossil evidence of pre-Cambrian life has since been found, extending the history of life back for billions of years. Chapter X examines whether patterns in the fossil record are better explained by common descent and branching evolution through natural selection, than by the individual creation of fixed species. Darwin expected species to change slowly, but not at the same rate – some organisms such as Lingula were unchanged since the earliest fossils. The pace of natural selection would depend on variability and change in the environment. This distanced his theory from Lamarckian laws of inevitable progress. It has been argued that this anticipated the punctuated equilibrium hypothesis, but other scholars have preferred to emphasise Darwin's commitment to gradualism. He cited Richard Owen's findings that the earliest members of a class were a few simple and generalised species with characteristics intermediate between modern forms, and were followed by increasingly diverse and specialised forms, matching the branching of common descent from an ancestor. Patterns of extinction matched his theory, with related groups of species having a continued existence until extinction, then not reappearing. Recently extinct species were more similar to living species than those from earlier eras, and as he had seen in South America, and William Clift had shown in Australia, fossils from recent geological periods resembled species still living in the same area. Geographic distribution Chapter XI deals with evidence from biogeography, starting with the observation that differences in flora and fauna from separate regions cannot be explained by environmental differences alone; South America, Africa, and Australia all have regions with similar climates at similar latitudes, but those regions have very different plants and animals. The species found in one area of a continent are more closely allied with species found in other regions of that same continent than to species found on other continents. Darwin noted that barriers to migration played an important role in the differences between the species of different regions. 
The coastal sea life of the Atlantic and Pacific sides of Central America had almost no species in common even though the Isthmus of Panama was only a few miles wide. His explanation was a combination of migration and descent with modification. He went on to say: "On this principle of inheritance with modification, we can understand how it is that sections of genera, whole genera, and even families are confined to the same areas, as is so commonly and notoriously the case." Darwin explained how a volcanic island formed a few hundred miles from a continent might be colonised by a few species from that continent. These species would become modified over time, but would still be related to species found on the continent, and Darwin observed that this was a common pattern. Darwin discussed ways that species could be dispersed across oceans to colonise islands, many of which he had investigated experimentally. Chapter XII continues the discussion of biogeography. After a brief discussion of freshwater species, it returns to oceanic islands and their peculiarities; for example on some islands roles played by mammals on continents were played by other animals such as flightless birds or reptiles. The summary of both chapters says: ... I think all the grand leading facts of geographical distribution are explicable on the theory of migration (generally of the more dominant forms of life), together with subsequent modification and the multiplication of new forms. We can thus understand the high importance of barriers, whether of land or water, which separate our several zoological and botanical provinces. We can thus understand the localisation of sub-genera, genera, and families; and how it is that under different latitudes, for instance in South America, the inhabitants of the plains and mountains, of the forests, marshes, and deserts, are in so mysterious a manner linked together by affinity, and are likewise linked to the extinct beings which formerly inhabited the same continent ... On these same principles, we can understand, as I have endeavoured to show, why oceanic islands should have few inhabitants, but of these a great number should be endemic or peculiar; ... Classification, morphology, embryology, rudimentary organs Chapter XIII starts by observing that classification depends on species being grouped together in a Taxonomy, a multilevel system of groups and sub-groups based on varying degrees of resemblance. After discussing classification issues, Darwin concludes: All the foregoing rules and aids and difficulties in classification are explained, if I do not greatly deceive myself, on the view that the natural system is founded on descent with modification; that the characters which naturalists consider as showing true affinity between any two or more species, are those which have been inherited from a common parent, and, in so far, all true classification is genealogical; that community of descent is the hidden bond which naturalists have been unconsciously seeking, ... Darwin discusses morphology, including the importance of homologous structures. He says, "What can be more curious than that the hand of a man, formed for grasping, that of a mole for digging, the leg of the horse, the paddle of the porpoise, and the wing of the bat, should all be constructed on the same pattern, and should include the same bones, in the same relative positions?" 
This made no sense under doctrines of independent creation of species, as even Richard Owen had admitted, but the "explanation is manifest on the theory of the natural selection of successive slight modifications" showing common descent. He notes that animals of the same class often have extremely similar embryos. Darwin discusses rudimentary organs, such as the wings of flightless birds and the rudiments of pelvis and leg bones found in some snakes. He remarks that some rudimentary organs, such as teeth in baleen whales, are found only in embryonic stages. These factors also supported his theory of descent with modification. Concluding remarks The final chapter, "Recapitulation and Conclusion", reviews points from earlier chapters, and Darwin concludes by hoping that his theory might produce revolutionary changes in many fields of natural history. He suggests that psychology will be put on a new foundation and implies the relevance of his theory to the first appearance of humanity with the sentence that "Light will be thrown on the origin of man and his history." Darwin ends with a passage that became well known and much quoted: It is interesting to contemplate an entangled bank, clothed with many plants of many kinds, with birds singing on the bushes, with various insects flitting about, and with worms crawling through the damp earth, and to reflect that these elaborately constructed forms, so different from each other, and dependent on each other in so complex a manner, have all been produced by laws acting around us ... Thus, from the war of nature, from famine and death, the most exalted object which we are capable of conceiving, namely, the production of the higher animals, directly follows. There is grandeur in this view of life, with its several powers, having been originally breathed into a few forms or into one; and that, whilst this planet has gone cycling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been, and are being, evolved. Darwin added the phrase "by the Creator" from the 1860 second edition onwards, so that the ultimate sentence begins "There is grandeur in this view of life, with its several powers, having been originally breathed by the Creator into a few forms or into one". Structure, style, and themes Nature and structure of Darwin's argument Darwin's aims were twofold: to show that species had not been separately created, and to show that natural selection had been the chief agent of change. He knew that his readers were already familiar with the concept of transmutation of species from Vestiges, and his introduction ridicules that work as failing to provide a viable mechanism. Therefore, the first four chapters lay out his case that selection in nature, caused by the struggle for existence, is analogous to the selection of variations under domestication, and that the accumulation of adaptive variations provides a scientifically testable mechanism for evolutionary speciation. Later chapters provide evidence that evolution has occurred, supporting the idea of branching, adaptive evolution without directly proving that selection is the mechanism. Darwin presents supporting facts drawn from many disciplines, showing that his theory could explain a myriad of observations from many fields of natural history that were inexplicable under the alternative concept that species had been individually created. 
The structure of Darwin's argument showed the influence of John Herschel, whose philosophy of science maintained that a mechanism could be called a vera causa (true cause) if three things could be demonstrated: its existence in nature, its ability to produce the effects of interest, and its ability to explain a wide range of observations. This reflected the influence of William Whewell's idea of a consilience of inductions, as explained in his work Philosophy of the Inductive Sciences, in which the ability of a proposed mechanism to explain many different phenomena is itself taken as evidence for that mechanism. Literary style The Examiner review of 3 December 1859 commented, "Much of Mr. Darwin's volume is what ordinary readers would call 'tough reading;' that is, writing which to comprehend requires concentrated attention and some preparation for the task. All, however, is by no means of this description, and many parts of the book abound in information, easy to comprehend and both instructive and entertaining." While the book was readable enough to sell, its dryness ensured that it was seen as aimed at specialist scientists and could not be dismissed as mere journalism or imaginative fiction; Richard Owen did, however, complain in his Edinburgh Review article that the style was too easy for a serious work of science. Unlike the still-popular Vestiges, it avoided the narrative style of the historical novel and cosmological speculation, though the closing sentence clearly hinted at cosmic progression. Darwin had long been immersed in the literary forms and practices of specialist science, and made effective use of his skills in structuring arguments. David Quammen has described the book as written in everyday language for a wide audience, but noted that Darwin's literary style was uneven: in some places he used convoluted sentences that are difficult to read, while in other places his writing was beautiful. Quammen argued that later editions were weakened by Darwin making concessions and adding details to address his critics, and recommended the first edition. James T. Costa said that because the book was an abstract produced in haste in response to Wallace's essay, it was more approachable than the big book on natural selection Darwin had been working on, which would have been encumbered by scholarly footnotes and much more technical detail. He added that some parts of Origin are dense, but other parts are almost lyrical, and the case studies and observations are presented in a narrative style unusual in serious scientific books, which broadened its audience. Human evolution From his early transmutation notebooks in the late 1830s onwards, Darwin considered human evolution as part of the natural processes he was investigating, and rejected divine intervention. In 1856, his "big book on species" titled Natural Selection was to include a "note on Man", but when Wallace enquired in December 1857, Darwin replied, "You ask whether I shall discuss 'man';—I think I shall avoid whole subject, as so surrounded with prejudices, though I fully admit that it is the highest & most interesting problem for the naturalist." On 28 March 1859, with his manuscript for the book well under way, Darwin wrote to Lyell offering the suggested publisher, John Murray, assurances "That I do not discuss origin of man". 
In the final chapter of On the Origin of Species, "Recapitulation and Conclusion", Darwin briefly highlights the human implications of his theory: "In the distant future I see open fields for far more important researches. Psychology will be based on a new foundation, that of the necessary acquirement of each mental power and capacity by gradation. Light will be thrown on the origin of man and his history." Discussing this in January 1860, Darwin assured Lyell that "by the sentence [Light will be thrown on the origin of man and his history] I show that I believe man is in same predicament with other animals." Many modern writers have seen this sentence as Darwin's only reference to humans in the book; Janet Browne describes it as his only discussion there of human origins, while noting that the book makes other references to humanity. Some other statements in the book are quietly effective at pointing out the implication that humans are simply another species, evolving through the same processes and principles affecting other organisms. For example, in Chapter III: "Struggle for Existence" Darwin includes "slow-breeding man" among other examples of Malthusian population growth. In his discussions on morphology, Darwin compares and comments on bone structures that are homologous between humans and other mammals. Darwin's early notebooks discussed how non-adaptive characteristics could be selected when animals or humans chose mates, with races of humans differing over ideas of beauty. In his 1856 notes responding to Robert Knox's The Races of Man: A Fragment, he called this effect sexual selection. He added notes on sexual selection to his "big book on species", and in mid-1857 he added a section heading "Theory applied to Races of Man", but did not add text on this topic. In On the Origin of Species, Chapter VI: "Difficulties on Theory", Darwin mentions this in the context of "slight and unimportant variations": I might have adduced for this same purpose the differences between the races of man, which are so strongly marked; I may add that some little light can apparently be thrown on the origin of these differences, chiefly through sexual selection of a particular kind, but without here entering on copious details my reasoning would appear frivolous. When Darwin published The Descent of Man, and Selection in Relation to Sex twelve years later, he said that he had not gone into detail on human evolution in the Origin as he thought that would "only add to the prejudices against my views". He had not completely avoided the topic: It seemed to me sufficient to indicate, in the first edition of my 'Origin of Species,' that by this work 'light would be thrown on the origin of man and his history;' and this implies that man must be included with other organic beings in any general conclusion respecting his manner of appearance on this earth. Darwin later recalled: "My Descent of Man was published in Feb. 1871. As soon as I had become, in the year 1837 or 1838, convinced that species were mutable productions, I could not avoid the belief that man must come under the same law. Accordingly I collected notes on the subject for my own satisfaction, and not for a long time with any intention of publishing. Although in the Origin of Species, the derivation of any particular species is never discussed, yet I thought it best, in order that no honourable man should accuse me of concealing my views, to add that by the work in question 'light would be thrown on the origin of man and his history.' 
It would have been useless and injurious to the success of the book to have paraded without giving any evidence my conviction with respect to his origin." He also said that he had "merely alluded" in that book to sexual selection differentiating human races. Reception The book aroused international interest and a widespread debate, with no sharp line between scientific issues and ideological, social and religious implications. Much of the initial reaction was hostile, in a large part because very few reviewers actually understood his theory, but Darwin had to be taken seriously as a prominent and respected name in science. Bishop Samuel Wilberforce wrote a review in Quarterly Review in 1860 where he disagreed with Darwin's 'argument'. There was much less controversy than had greeted the 1844 publication Vestiges of Creation, which had been rejected by scientists, but had influenced a wide public readership into believing that nature and human society were governed by natural laws. The Origin of Species as a book of wide general interest became associated with ideas of social reform. Its proponents made full use of a surge in the publication of review journals, and it was given more popular attention than almost any other scientific work, though it failed to match the continuing sales of Vestiges. Darwin's book legitimised scientific discussion of evolutionary mechanisms, and the newly coined term 'Darwinism' was used to cover the whole range of evolutionism, not just his own ideas. By the mid-1870s, evolutionism was triumphant. While Darwin had been somewhat coy about human origins, not identifying any explicit conclusion on the matter in his book, he had dropped enough hints about human's animal ancestry for the inference to be made, and the first review claimed it made a creed of the "men from monkeys" idea from Vestiges. Human evolution became central to the debate and was strongly argued by Huxley who featured it in his popular "working-men's lectures". Darwin did not publish his own views on this until 1871. The naturalism of natural selection conflicted with presumptions of purpose in nature and while this could be reconciled by theistic evolution, other mechanisms implying more progress or purpose were more acceptable. Herbert Spencer had already incorporated Lamarckism into his popular philosophy of progressive free market human society. He popularised the terms 'evolution' and 'survival of the fittest', and many thought Spencer was central to evolutionary thinking. Impact on the scientific community Scientific readers were already aware of arguments that species changed through processes that were subject to laws of nature, but the transmutational ideas of Lamarck and the vague "law of development" of Vestiges had not found scientific favour. Darwin presented natural selection as a scientifically testable mechanism while accepting that other mechanisms such as inheritance of acquired characters were possible. His strategy established that evolution through natural laws was worthy of scientific study, and by 1875, most scientists accepted that evolution occurred but few thought natural selection was significant. Darwin's scientific method was also disputed, with his proponents favouring the empiricism of John Stuart Mill's A System of Logic, while opponents held to the idealist school of William Whewell's Philosophy of the Inductive Sciences, in which investigation could begin with the intuitive idea that species were fixed objects created by design. 
Early support for Darwin's ideas came from the findings of field naturalists studying biogeography and ecology, including Joseph Dalton Hooker in 1860, and Asa Gray in 1862. Henry Walter Bates presented research in 1861 that explained insect mimicry using natural selection. Alfred Russel Wallace discussed evidence from his Malay Archipelago research, including an 1864 paper with an evolutionary explanation for the Wallace line. Evolution had less obvious applications to anatomy and morphology, and at first had little impact on the research of the anatomist Thomas Henry Huxley. Despite this, Huxley strongly supported Darwin on evolution; though he called for experiments to show whether natural selection could form new species, and questioned if Darwin's gradualism was sufficient without sudden leaps to cause speciation. Huxley wanted science to be secular, without religious interference, and his article in the April 1860 Westminster Review promoted scientific naturalism over natural theology, praising Darwin for "extending the domination of Science over regions of thought into which she has, as yet, hardly penetrated" and coining the term "Darwinism" as part of his efforts to secularise and professionalise science. Huxley gained influence, and initiated the X Club, which used the journal Nature to promote evolution and naturalism, shaping much of late-Victorian science. Later, the German morphologist Ernst Haeckel would convince Huxley that comparative anatomy and palaeontology could be used to reconstruct evolutionary genealogies. The leading naturalist in Britain was the anatomist Richard Owen, an idealist who had shifted to the view in the 1850s that the history of life was the gradual unfolding of a divine plan. Owen's review of the Origin in the April 1860 Edinburgh Review bitterly attacked Huxley, Hooker and Darwin, but also signalled acceptance of a kind of evolution as a teleological plan in a continuous "ordained becoming", with new species appearing by natural birth. Others that rejected natural selection, but supported "creation by birth", included the Duke of Argyll who explained beauty in plumage by design. Since 1858, Huxley had emphasised anatomical similarities between apes and humans, contesting Owen's view that humans were a separate sub-class. Their disagreement over human origins came to the fore at the British Association for the Advancement of Science meeting featuring the legendary 1860 Oxford evolution debate. In two years of acrimonious public dispute that Charles Kingsley satirised as the "Great Hippocampus Question" and parodied in The Water-Babies as the "great hippopotamus test", Huxley showed that Owen was incorrect in asserting that ape brains lacked a structure present in human brains. Others, including Charles Lyell and Alfred Russel Wallace, thought that humans shared a common ancestor with apes, but higher mental faculties could not have evolved through a purely material process. Darwin published his own explanation in the Descent of Man (1871). Impact outside Great Britain The German physiologist Emil du Bois-Reymond converted to Darwinism after reading an English copy of On the Origin of Species in the spring of 1860. Du Bois-Reymond was a committed supporter, securing Darwin an honorary degree from the University of Breslau, teaching his theory to students at the University of Berlin, and defending his name to paying audiences across Germany and The Netherlands. 
Du Bois-Reymond's exposition resembled Darwin's: he endorsed natural selection, rejected the inheritance of acquired characters, remained silent on the origin of variation, and identified "the altruism of bees, the regeneration of tissue, the effects of exercise, and the inheritance of disadvantageous traits" as puzzles presented by the theory. Evolutionary ideas, although not natural selection, were accepted by other German biologists accustomed to ideas of homology in morphology from Goethe's Metamorphosis of Plants and from their long tradition of comparative anatomy. Bronn's alterations in his German translation added to the misgivings of conservatives but encouraged political radicals. Ernst Haeckel was particularly ardent, aiming to synthesise Darwin's ideas with those of Lamarck and Goethe while still reflecting the spirit of Naturphilosophie. His ambitious programme to reconstruct the evolutionary history of life was joined by Huxley and supported by discoveries in palaeontology. Haeckel used embryology extensively in his recapitulation theory, which embodied a progressive, almost linear model of evolution. Darwin was cautious about such histories, and had already noted that von Baer's laws of embryology supported his idea of complex branching. Asa Gray promoted and defended Origin against those American naturalists with an idealist approach, notably Louis Agassiz, who viewed every species as a distinct fixed unit in the mind of the Creator, classifying as species what others considered merely varieties. Edward Drinker Cope and Alpheus Hyatt reconciled this view with evolutionism in a form of neo-Lamarckism involving recapitulation theory. French-speaking naturalists in several countries showed appreciation of the much-modified French translation by Clémence Royer, but Darwin's ideas had little impact in France, where any scientists supporting evolutionary ideas opted for a form of Lamarckism. The intelligentsia in Russia had accepted the general phenomenon of evolution for several years before Darwin had published his theory, and scientists were quick to take it into account, although the Malthusian aspects were felt to be relatively unimportant. The political economy of struggle was criticised as a British stereotype by Karl Marx and by Leo Tolstoy, who had the character Levin in his novel Anna Karenina voice sharp criticism of the morality of Darwin's views. Challenges to natural selection There were serious scientific objections to the process of natural selection as the key mechanism of evolution, including Carl Nägeli's insistence that a trivial characteristic with no adaptive advantage could not be developed by selection. Darwin conceded that these could be linked to adaptive characteristics. His estimate that the age of the Earth allowed gradual evolution was disputed by William Thomson (later awarded the title Lord Kelvin), who calculated that the Sun, and therefore life on Earth, was only about 100 million years old. Darwin accepted blending inheritance, but Fleeming Jenkin calculated that as it mixed traits, natural selection could not accumulate useful traits. Darwin tried to meet these objections in the fifth edition. Mivart supported directed evolution, and compiled scientific and religious objections to natural selection. In response, Darwin made considerable changes to the sixth edition. The problems of the age of the Earth and heredity were only resolved in the 20th century. 
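Jenkin's objection about blending can be made concrete with a back-of-envelope sketch. The numbers below are purely illustrative assumptions, not Jenkin's own figures: they simply show that if inheritance worked by averaging the parents, a single novel variant mating into an otherwise uniform population would have its deviation halved every generation, vanishing long before selection could spread it.

```python
# Illustrative sketch of the blending-inheritance difficulty (assumed numbers).
# A lone individual deviates from the population mean by +1.0; under strict
# blending, each offspring inherits the average of its parents, so crossing
# with average partners halves the remaining deviation every generation.

deviation = 1.0
for generation in range(1, 11):
    deviation /= 2.0
    print(f"generation {generation:2d}: surviving deviation = {deviation:.5f}")

# After ten generations less than 0.1% of the original deviation remains.
# Particulate Mendelian inheritance, rediscovered in 1900, removes this
# difficulty because the underlying factors are passed on undiluted.
```

This is the difficulty that made Darwin lean so heavily on ubiquitous small variations, and it was only fully resolved once heredity was understood in Mendelian terms.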
By the mid-1870s, most scientists accepted evolution, but relegated natural selection to a minor role as they believed evolution was purposeful and progressive. The range of evolutionary theories during "the eclipse of Darwinism" included forms of "saltationism" in which new species were thought to arise through "jumps" rather than gradual adaptation, forms of orthogenesis claiming that species had an inherent tendency to change in a particular direction, and forms of neo-Lamarckism in which inheritance of acquired characteristics led to progress. The minority view of August Weismann, that natural selection was the only mechanism, was called neo-Darwinism. It was thought that the rediscovery of Mendelian inheritance invalidated Darwin's views. Impact on economic and political debates While some, like Spencer, used analogy from natural selection as an argument against government intervention in the economy to benefit the poor, others, including Alfred Russel Wallace, argued that action was needed to correct social and economic inequities to level the playing field before natural selection could improve humanity further. Some political commentaries, including Walter Bagehot's Physics and Politics (1872), attempted to extend the idea of natural selection to competition between nations and between human races. Such ideas were incorporated into what was already an ongoing effort by some working in anthropology to provide scientific evidence for the superiority of Caucasians over non-white races and justify European imperialism. Historians write that most such political and economic commentators had only a superficial understanding of Darwin's scientific theory, and were as strongly influenced by other concepts about social progress and evolution, such as the Lamarckian ideas of Spencer and Haeckel, as they were by Darwin's work. Darwin objected to his ideas being used to justify military aggression and unethical business practices as he believed morality was part of fitness in humans, and he opposed polygenism, the idea that human races were fundamentally distinct and did not share a recent common ancestry. Religious attitudes The book produced a wide range of religious responses at a time of changing ideas and increasing secularisation. The issues raised were complex and there was a large middle ground. Developments in geology meant that there was little opposition based on a literal reading of Genesis, but defence of the argument from design and natural theology was central to debates over the book in the English-speaking world. Natural theology was not a unified doctrine, and while some such as Louis Agassiz were strongly opposed to the ideas in the book, others sought a reconciliation in which evolution was seen as purposeful. In the Church of England, some liberal clergymen interpreted natural selection as an instrument of God's design, with the cleric Charles Kingsley seeing it as "just as noble a conception of Deity". In the second edition of January 1860, Darwin quoted Kingsley as "a celebrated cleric", and added the phrase "by the Creator" to the closing sentence, which from then on read "life, with its several powers, having been originally breathed by the Creator into a few forms or into one". While some commentators have taken this as a concession to religion that Darwin later regretted, Darwin's view at the time was of God creating life through the laws of nature, and even in the first edition there are several references to "creation". 
Baden Powell praised "Mr Darwin's masterly volume [supporting] the grand principle of the self-evolving powers of nature". In America, Asa Gray argued that evolution is the secondary effect, or modus operandi, of the first cause, design, and published a pamphlet defending the book in terms of theistic evolution, Natural Selection is not inconsistent with Natural Theology. Theistic evolution became a popular compromise, and St. George Jackson Mivart was among those accepting evolution but attacking Darwin's naturalistic mechanism. Eventually it was realised that supernatural intervention could not be a scientific explanation, and naturalistic mechanisms such as neo-Lamarckism were favoured over natural selection as being more compatible with purpose. Even though the book did not explicitly spell out Darwin's beliefs about human origins, it had dropped a number of hints about human's animal ancestry and quickly became central to the debate, as mental and moral qualities were seen as spiritual aspects of the immaterial soul, and it was believed that animals did not have spiritual qualities. This conflict could be reconciled by supposing there was some supernatural intervention on the path leading to humans, or viewing evolution as a purposeful and progressive ascent to mankind's position at the head of nature. While many conservative theologians accepted evolution, Charles Hodge argued in his 1874 critique "What is Darwinism?" that "Darwinism", defined narrowly as including rejection of design, was atheism though he accepted that Asa Gray did not reject design. Asa Gray responded that this charge misrepresented Darwin's text. By the early 20th century, four noted authors of The Fundamentals were explicitly open to the possibility that God created through evolution, but fundamentalism inspired the American creation–evolution controversy that began in the 1920s. Some conservative Roman Catholic writers and influential Jesuits opposed evolution in the late 19th and early 20th century, but other Catholic writers, starting with Mivart, pointed out that early Church Fathers had not interpreted Genesis literally in this area. The Vatican stated its official position in a 1950 papal encyclical, which held that evolution was not inconsistent with Catholic teaching. Modern influence Various alternative evolutionary mechanisms favoured during "the eclipse of Darwinism" became untenable as more was learned about inheritance and mutation. The full significance of natural selection was at last accepted in the 1930s and 1940s as part of the modern evolutionary synthesis. During that synthesis biologists and statisticians, including R. A. Fisher, Sewall Wright and J. B. S. Haldane, merged Darwinian selection with a statistical understanding of Mendelian genetics. Modern evolutionary theory continues to develop. Darwin's theory of evolution by natural selection, with its tree-like model of branching common descent, has become the unifying theory of the life sciences. The theory explains the diversity of living organisms and their adaptation to the environment. It makes sense of the geological record, biogeography, parallels in embryonic development, biological homologies, vestigiality, cladistics, phylogenetics and other fields, with unrivalled explanatory power; it has also become essential to applied sciences such as medicine and agriculture. Despite the scientific consensus, a religion-based political controversy has developed over how evolution is taught in schools, especially in the United States. 
Interest in Darwin's writings continues, and scholars have generated an extensive literature, the Darwin Industry, about his life and work. The text of Origin itself has been subject to much analysis, including a variorum detailing the changes made in every edition, first published in 1959, and a concordance, an exhaustive external index, published in 1981. Worldwide commemorations of the 150th anniversary of the publication of On the Origin of Species and the bicentenary of Darwin's birth were scheduled for 2009. They celebrated the ideas which "over the last 150 years have revolutionised our understanding of nature and our place within it". In a survey conducted by a group of academic booksellers, publishers and librarians in advance of Academic Book Week in the United Kingdom, On the Origin of Species was voted the most influential academic book ever written. It was hailed as "the supreme demonstration of why academic books matter" and "a book which has changed the way we think about everything".
See also
On the Origin of Species – full text at Wikisource of the first edition, 1859
The Origin of Species – full text at Wikisource of the 6th edition, 1872
Charles Darwin bibliography
History of biology
History of evolutionary thought
History of speciation
Modern evolutionary synthesis
The Complete Works of Charles Darwin Online
The Descent of Man, and Selection in Relation to Sex, published in 1871; his second major book on evolutionary theory
Transmutation of species
External links
The Complete Works of Charles Darwin Online: table of contents and bibliography of On the Origin of Species – links to text and images of all six British editions, the 6th edition with additions and corrections (final text), the first American edition, and translations into Danish, Dutch, French, German, Polish, Russian and Spanish
Online Variorum, showing every change between the six British editions
On the Origin of Species eBook provided by Project Gutenberg
On the Origin of Species, full text with embedded audio
A collection of Victorian Science Texts
Darwin Correspondence Project Home Page, University Library, Cambridge
View online at the Biodiversity Heritage Library
On the Origin of Species 1860 American edition, D. Appleton and Company, New York, with front insert by H. E. Barker, Lincolniana
Darwin's notes on the creation of On the Origin of Species, digitised in Cambridge Digital Library
Zoology
Zoology is the scientific study of animals. Its studies include the structure, embryology, classification, habits, and distribution of all animals, both living and extinct, and how they interact with their ecosystems. Zoology is one of the primary branches of biology. The term is derived from Ancient Greek ζῷον (zōion, 'animal') and λόγος (logos, 'knowledge', 'study'). Although humans have always been interested in the natural history of the animals they saw around them, and used this knowledge to domesticate certain species, the formal study of zoology can be said to have originated with Aristotle. He viewed animals as living organisms, studied their structure and development, and considered their adaptations to their surroundings and the function of their parts. Modern zoology has its origins during the Renaissance and early modern period, with Carl Linnaeus, Antonie van Leeuwenhoek, Robert Hooke, Charles Darwin, Gregor Mendel and many others. The study of animals has largely moved on to deal with form and function, adaptations, relationships between groups, behaviour and ecology. Zoology has increasingly been subdivided into disciplines such as classification, physiology, biochemistry and evolution. With the discovery of the structure of DNA by Francis Crick and James Watson in 1953, the realm of molecular biology opened up, leading to advances in cell biology, developmental biology and molecular genetics. History The history of zoology traces the study of the animal kingdom from ancient to modern times. Prehistoric people needed to study the animals and plants in their environment to exploit them and survive. Cave paintings, engravings and sculptures in France dating back 15,000 years show bison, horses, and deer in carefully rendered detail. Similar images from other parts of the world illustrated mostly the animals hunted for food and dangerous wild animals. The Neolithic Revolution, which is characterized by the domestication of animals, continued throughout Antiquity. Ancient knowledge of wildlife is illustrated by the realistic depictions of wild and domestic animals in the Near East, Mesopotamia, and Egypt, including husbandry practices and techniques, hunting and fishing. The invention of writing is reflected in zoology by the presence of animals in Egyptian hieroglyphics. Although the concept of zoology as a single coherent field arose much later, the zoological sciences emerged from natural history reaching back to the biological works of Aristotle and Galen in the ancient Greco-Roman world. In the fourth century BC, Aristotle looked at animals as living organisms, studying their structure, development and vital phenomena. He divided them into two groups: animals with blood, equivalent to our concept of vertebrates, and animals without blood, invertebrates. He spent two years on Lesbos, observing and describing the animals and plants, considering the adaptations of different organisms and the function of their parts. Four hundred years later, the Roman physician Galen dissected animals to study their anatomy and the function of the different parts, because the dissection of human cadavers was prohibited at the time. This resulted in some of his conclusions being false, but for many centuries it was considered heretical to challenge any of his views, so the study of anatomy stagnated. 
During the post-classical era, Middle Eastern science and medicine was the most advanced in the world, integrating concepts from Ancient Greece, Rome, Mesopotamia and Persia as well as the ancient Indian tradition of Ayurveda, while making numerous advances and innovations. In the 13th century, Albertus Magnus produced commentaries and paraphrases of all Aristotle's works; his books on topics like botany, zoology, and minerals included information from ancient sources, but also the results of his own investigations. His general approach was surprisingly modern, and he wrote, "For it is [the task] of natural science not simply to accept what we are told but to inquire into the causes of natural things." An early pioneer was Conrad Gessner, whose monumental 4,500-page encyclopedia of animals, Historia animalium, was published in four volumes between 1551 and 1558. In Europe, Galen's work on anatomy remained largely unsurpassed and unchallenged up until the 16th century. During the Renaissance and early modern period, zoological thought was revolutionized in Europe by a renewed interest in empiricism and the discovery of many novel organisms. Prominent in this movement were Andreas Vesalius and William Harvey, who used experimentation and careful observation in physiology, and naturalists such as Carl Linnaeus, Jean-Baptiste Lamarck, and Buffon, who began to classify the diversity of life and the fossil record, as well as studying the development and behavior of organisms. Antonie van Leeuwenhoek did pioneering work in microscopy and revealed the previously unknown world of microorganisms, laying the groundwork for cell theory. Van Leeuwenhoek's observations were endorsed by Robert Hooke and supported the emerging cell theory: that all living organisms are composed of one or more cells and cannot arise by spontaneous generation. Cell theory provided a new perspective on the fundamental basis of life.

Having previously been the realm of gentlemen naturalists, over the 18th, 19th and 20th centuries, zoology became an increasingly professional scientific discipline. Explorer-naturalists such as Alexander von Humboldt investigated the interaction between organisms and their environment, and the ways this relationship depends on geography, laying the foundations for biogeography, ecology and ethology. Naturalists began to reject essentialism and consider the importance of extinction and the mutability of species. These developments, as well as the results from embryology and paleontology, were synthesized in the 1859 publication of Charles Darwin's theory of evolution by natural selection; in this Darwin placed the theory of organic evolution on a new footing, by explaining the processes by which it can occur, and providing observational evidence that it had done so. Darwin's theory was rapidly accepted by the scientific community and soon became a central axiom of the rapidly developing science of biology. The basis for modern genetics began with the work of Gregor Mendel on peas in 1865, although the significance of his work was not realized at the time. Darwin gave a new direction to morphology and physiology, by uniting them in a common biological theory: the theory of organic evolution. The result was a reconstruction of the classification of animals upon a genealogical basis, fresh investigation of the development of animals, and early attempts to determine their genetic relationships. The end of the 19th century saw the fall of spontaneous generation and the rise of the germ theory of disease, though the mechanism of inheritance remained a mystery.
In the early 20th century, the rediscovery of Mendel's work led to the rapid development of genetics, and by the 1930s the combination of population genetics and natural selection in the modern synthesis created evolutionary biology. Research in cell biology is interconnected to other fields such as genetics, biochemistry, medical microbiology, immunology, and cytochemistry. With the determination of the double helical structure of the DNA molecule by Francis Crick and James Watson in 1953, the realm of molecular biology opened up, leading to advances in cell biology, developmental biology and molecular genetics. The study of systematics was transformed as DNA sequencing elucidated the degrees of affinity between different organisms. Scope Zoology is the branch of science dealing with animals. A species can be defined as the largest group of organisms in which any two individuals of the appropriate sex can produce fertile offspring; about 1.5 million species of animal have been described and it has been estimated that as many as 8 million animal species may exist. An early necessity was to identify the organisms and group them according to their characteristics, differences and relationships, and this is the field of the taxonomist. Originally it was thought that species were immutable, but with the arrival of Darwin's theory of evolution, the field of cladistics came into being, studying the relationships between the different groups or clades. Systematics is the study of the diversification of living forms, the evolutionary history of a group is known as its phylogeny, and the relationship between the clades can be shown diagrammatically in a cladogram. Although someone who made a scientific study of animals would historically have described themselves as a zoologist, the term has come to refer to those who deal with individual animals, with others describing themselves more specifically as physiologists, ethologists, evolutionary biologists, ecologists, pharmacologists, endocrinologists or parasitologists. Branches of zoology Although the study of animal life is ancient, its scientific incarnation is relatively modern. This mirrors the transition from natural history to biology at the start of the 19th century. Since Hunter and Cuvier, comparative anatomical study has been associated with morphography, shaping the modern areas of zoological investigation: anatomy, physiology, histology, embryology, teratology and ethology. Modern zoology first arose in German and British universities. In Britain, Thomas Henry Huxley was a prominent figure. His ideas were centered on the morphology of animals. Many consider him the greatest comparative anatomist of the latter half of the 19th century. Similar to Hunter, his courses were composed of lectures and laboratory practical classes in contrast to the previous format of lectures only. Classification Scientific classification in zoology, is a method by which zoologists group and categorize organisms by biological type, such as genus or species. Biological classification is a form of scientific taxonomy. Modern biological classification has its root in the work of Carl Linnaeus, who grouped species according to shared physical characteristics. These groupings have since been revised to improve consistency with the Darwinian principle of common descent. Molecular phylogenetics, which uses nucleic acid sequence as data, has driven many recent revisions and is likely to continue to do so. 
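To make the idea of sequence-based affinity concrete, the following toy sketch compares short DNA strings by simple percent identity; the sequences, species labels, and the identity measure itself are invented for illustration only, since real molecular phylogenetics works on aligned genes or genomes under explicit models of evolution.

# Toy sketch only: percent identity between short, pre-aligned DNA strings.
# The sequences and species labels are invented; real phylogenetic analyses
# use aligned genes or genomes and explicit evolutionary models.

sequences = {
    "species_A": "ATGGCGTACGTTAGC",
    "species_B": "ATGGCGTACGATAGC",
    "species_C": "ATGACGTTCGTAAGT",
}

def percent_identity(a: str, b: str) -> float:
    """Fraction of matching positions between two equal-length sequences."""
    if len(a) != len(b):
        raise ValueError("this toy example assumes the sequences are aligned")
    return sum(x == y for x, y in zip(a, b)) / len(a)

names = sorted(sequences)
for i, first in enumerate(names):
    for second in names[i + 1:]:
        score = percent_identity(sequences[first], sequences[second])
        print(f"{first} vs {second}: {score:.0%}")

In this made-up example a higher identity score stands in for closer affinity, which is the intuition, though not the method, behind modern sequence-based systematics.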
Biological classification belongs to the science of zoological systematics. Many scientists now consider the five-kingdom system outdated. Modern alternative classification systems generally start with the three-domain system: Archaea (originally Archaebacteria); Bacteria (originally Eubacteria); Eukaryota (including protists, fungi, plants, and animals) These domains reflect whether the cells have nuclei or not, as well as differences in the chemical composition of the cell exteriors. Further, each kingdom is broken down recursively until each species is separately classified. The order is: Domain; kingdom; phylum; class; order; family; genus; species. The scientific name of an organism is generated from its genus and species. For example, humans are listed as Homo sapiens. Homo is the genus, and sapiens the specific epithet, both of them combined make up the species name. When writing the scientific name of an organism, it is proper to capitalize the first letter in the genus and put all of the specific epithet in lowercase. Additionally, the entire term may be italicized or underlined. The dominant classification system is called the Linnaean taxonomy. It includes ranks and binomial nomenclature. The classification, taxonomy, and nomenclature of zoological organisms is administered by the International Code of Zoological Nomenclature. A merging draft, BioCode, was published in 1997 in an attempt to standardize nomenclature, but has yet to be formally adopted. Vertebrate and invertebrate zoology Vertebrate zoology is the biological discipline that consists of the study of vertebrate animals, that is animals with a backbone, such as fish, amphibians, reptiles, birds and mammals. The various taxonomically oriented disciplines i.e. mammalogy, biological anthropology, herpetology, ornithology, and ichthyology seek to identify and classify species and study the structures and mechanisms specific to those groups. The rest of the animal kingdom is dealt with by invertebrate zoology, a vast and very diverse group of animals that includes sponges, echinoderms, tunicates, worms, molluscs, arthropods and many other phyla, but single-celled organisms or protists are not usually included. Structural zoology Cell biology studies the structural and physiological properties of cells, including their behavior, interactions, and environment. This is done on both the microscopic and molecular levels for single-celled organisms such as bacteria as well as the specialized cells in multicellular organisms such as humans. Understanding the structure and function of cells is fundamental to all of the biological sciences. The similarities and differences between cell types are particularly relevant to molecular biology. Anatomy considers the forms of macroscopic structures such as organs and organ systems. It focuses on how organs and organ systems work together in the bodies of humans and other animals, in addition to how they work independently. Anatomy and cell biology are two studies that are closely related, and can be categorized under "structural" studies. Comparative anatomy is the study of similarities and differences in the anatomy of different groups. It is closely related to evolutionary biology and phylogeny (the evolution of species). Physiology Physiology studies the mechanical, physical, and biochemical processes of living organisms by attempting to understand how all of the structures function as a whole. The theme of "structure to function" is central to biology. 
Physiological studies have traditionally been divided into plant physiology and animal physiology, but some principles of physiology are universal, no matter what particular organism is being studied. For example, what is learned about the physiology of yeast cells can also apply to human cells. The field of animal physiology extends the tools and methods of human physiology to non-human species. Physiology studies how, for example, the nervous, immune, endocrine, respiratory, and circulatory systems function and interact. Developmental biology Developmental biology is the study of the processes by which animals and plants reproduce and grow. The discipline includes the study of embryonic development, cellular differentiation, regeneration, asexual and sexual reproduction, metamorphosis, and the growth and differentiation of stem cells in the adult organism. Development of both animals and plants is further considered in the articles on evolution, population genetics, heredity, genetic variability, Mendelian inheritance, and reproduction. Evolutionary biology Evolutionary biology is the subfield of biology that studies the evolutionary processes (natural selection, common descent, speciation) that produced the diversity of life on Earth. Evolutionary research is concerned with the origin and descent of species, as well as their change over time, and includes scientists from many taxonomically oriented disciplines. For example, it generally involves scientists who have special training in particular organisms such as mammalogy, ornithology, herpetology, or entomology, but use those organisms as systems to answer general questions about evolution. Evolutionary biology is partly based on paleontology, which uses the fossil record to answer questions about the mode and tempo of evolution, and partly on the developments in areas such as population genetics and evolutionary theory. Following the development of DNA fingerprinting techniques in the late 20th century, the application of these techniques in zoology has increased the understanding of animal populations. In the 1980s, developmental biology re-entered evolutionary biology from its initial exclusion from the modern synthesis through the study of evolutionary developmental biology. Related fields often considered part of evolutionary biology are phylogenetics, systematics, and taxonomy. Ethology Ethology is the scientific and objective study of animal behavior under natural conditions, as opposed to behaviorism, which focuses on behavioral response studies in a laboratory setting. Ethologists have been particularly concerned with the evolution of behavior and the understanding of behavior in terms of the theory of natural selection. In one sense, the first modern ethologist was Charles Darwin, whose book, The Expression of the Emotions in Man and Animals, influenced many future ethologists. A subfield of ethology is behavioral ecology which attempts to answer Nikolaas Tinbergen's four questions with regard to animal behavior: what are the proximate causes of the behavior, the developmental history of the organism, the survival value and phylogeny of the behavior? Another area of study is animal cognition, which uses laboratory experiments and carefully controlled field studies to investigate an animal's intelligence and learning. Biogeography Biogeography studies the spatial distribution of organisms on the Earth, focusing on topics like dispersal and migration, plate tectonics, climate change, and cladistics. 
It is an integrative field of study, uniting concepts and information from evolutionary biology, taxonomy, ecology, physical geography, geology, paleontology and climatology. The origin of this field of study is widely credited to Alfred Russel Wallace, a British biologist who had some of his work jointly published with Charles Darwin.

Molecular biology
Molecular biology studies the common genetic and developmental mechanisms of animals and plants, attempting to answer the questions regarding the mechanisms of genetic inheritance and the structure of the gene. In 1953, James Watson and Francis Crick described the structure of DNA and the interactions within the molecule, and this publication jump-started research into molecular biology and increased interest in the subject. While researchers practice techniques specific to molecular biology, it is common to combine these with methods from genetics and biochemistry. Much of molecular biology is quantitative, and recently a significant amount of work has been done using computer science techniques such as bioinformatics and computational biology. Molecular genetics, the study of gene structure and function, has been among the most prominent sub-fields of molecular biology since the early 2000s. Other branches of biology are informed by molecular biology, either directly, by studying the interactions of molecules in their own right, as in cell biology and developmental biology, or indirectly, where molecular techniques are used to infer historical attributes of populations or species, as in fields of evolutionary biology such as population genetics and phylogenetics. There is also a long tradition of studying biomolecules "from the ground up", or molecularly, in biophysics.

Reproduction
Animals generally reproduce by sexual reproduction, a process involving the union of a male and a female haploid gamete, each gamete formed by meiosis. Ordinarily, gametes produced by separate individuals unite by a process of fertilization to form a diploid zygote that can then develop into a genetically unique individual progeny. However, some animals are also capable of reproducing parthenogenetically as an alternative reproductive process. Parthenogenesis has been described in snakes and lizards (see Parthenogenesis in squamates), in amphibians (see Parthenogenesis in amphibians) and in numerous other species (see Parthenogenesis). Generally, meiosis in parthenogenetically reproducing animals occurs by a similar process to that in sexually reproducing animals, but the diploid zygote nucleus is generated by the union of two haploid genomes from the same individual rather than from different individuals.

See also
Animal science, the biology of domesticated animals
Astrobiology
Cognitive zoology
Evolutionary biology
List of zoologists
Outline of zoology
Palaeontology
Timeline of zoology
Zoological distribution

External links
Books on Zoology at Project Gutenberg
Online Dictionary of Invertebrate Zoology
Mycology
Mycology is the branch of biology concerned with the study of fungi, including their taxonomy, genetics, biochemical properties, and use by humans. Fungi can be a source of tinder, food, traditional medicine, as well as entheogens, poison, and infection. Mycology branches into the field of phytopathology, the study of plant diseases. The two disciplines are closely related, because the vast majority of plant pathogens are fungi. A biologist specializing in mycology is called a mycologist. Overview Although mycology was historically considered a branch of botany, the 1969 discovery of fungi's close evolutionary relationship to animals resulted in the study's reclassification as an independent field. Pioneer mycologists included Elias Magnus Fries, Christiaan Hendrik Persoon, Heinrich Anton de Bary, Elizabeth Eaton Morse, and Lewis David de Schweinitz. Beatrix Potter, author of The Tale of Peter Rabbit, also made significant contributions to the field. Pier Andrea Saccardo developed a system for classifying the imperfect fungi by spore color and form, which became the primary system used before classification by DNA analysis. He is most famous for his Sylloge Fungorum, which was a comprehensive list of all of the names that had been used for mushrooms. Sylloge is still the only work of this kind that was both comprehensive for the botanical kingdom Fungi and reasonably modern. Many fungi produce toxins, antibiotics, and other secondary metabolites. For example, the cosmopolitan genus Fusarium and their toxins associated with fatal outbreaks of alimentary toxic aleukia in humans were extensively studied by Abraham Z. Joffe. Fungi are fundamental for life on earth in their roles as symbionts, e.g. in the form of mycorrhizae, insect symbionts, and lichens. Many fungi are able to break down complex organic biomolecules such as lignin, the more durable component of wood, and pollutants such as xenobiotics, petroleum, and polycyclic aromatic hydrocarbons. By decomposing these molecules, fungi play a critical role in the global carbon cycle. Fungi and other organisms traditionally recognized as fungi, such as oomycetes and myxomycetes (slime molds), often are economically and socially important, as some cause diseases of animals (including humans) and of plants. Apart from pathogenic fungi, many fungal species are very important in controlling the plant diseases caused by different pathogens. For example, species of the filamentous fungal genus Trichoderma are considered one of the most important biological control agents as an alternative to chemical-based products for effective crop diseases management. Field meetings to find interesting species of fungi are known as 'forays', after the first such meeting organized by the Woolhope Naturalists' Field Club in 1868 and entitled "A foray among the funguses". Some fungi can cause disease in humans and other animals; the study of pathogenic fungi that infect animals is referred to as medical mycology. History It is believed that humans started collecting mushrooms as food in prehistoric times. Mushrooms were first written about in the works of Euripides (480–406 BC). The Greek philosopher Theophrastos of Eresos (371–288 BC) was perhaps the first to try to systematically classify plants; mushrooms were considered to be plants missing certain organs. It was later Pliny the Elder (23–79 AD), who wrote about truffles in his encyclopedia Natural History. 
The word mycology comes from the Ancient Greek: μύκης (mukēs), meaning "fungus" and the suffix (-logia), meaning "study". The Middle Ages saw little advancement in the body of knowledge about fungi. However, the invention of the printing press allowed authors to dispel superstitions and misconceptions about the fungi that had been perpetuated by the classical authors. The start of the modern age of mycology begins with Pier Antonio Micheli's 1737 publication of Nova plantarum genera. Published in Florence, this seminal work laid the foundations for the systematic classification of grasses, mosses and fungi. He originated the still current genus names Polyporus and Tuber, both dated 1729 (though the descriptions were later amended as invalid by modern rules). The founding nomenclaturist Carl Linnaeus included fungi in his binomial naming system in 1753, where each type of organism has a two-word name consisting of a genus and species (whereas up to then organisms were often designated with Latin phrases containing many words). He originated the scientific names of numerous well-known mushroom taxa, such as Boletus and Agaricus, which are still in use today. During this period, fungi were still considered to belong to the plant kingdom, so they were categorized in his Species Plantarum. Linnaeus' fungal taxa were not nearly as comprehensive as his plant taxa, however, grouping together all gilled mushrooms with a stem in genus Agaricus. Thousands of gilled species exist, which were later divided into dozens of diverse genera; in its modern usage, Agaricus only refers to mushrooms closely related to the common shop mushroom, Agaricus bisporus. For example, Linnaeus gave the name Agaricus deliciosus to the saffron milk-cap, but its current name is Lactarius deliciosus. On the other hand, the field mushroom Agaricus campestris has kept the same name ever since Linnaeus's publication. The English word "agaric" is still used for any gilled mushroom, which corresponds to Linnaeus's use of the word. The term mycology and the complementary term mycologist are traditionally attributed to M.J. Berkeley in 1836. However, mycologist appeared in writings by English botanist Robert Kaye Greville as early as 1823 in reference to Schweinitz. Mycology and drug discovery For centuries, certain mushrooms have been documented as a folk medicine in China, Japan, and Russia. Although the use of mushrooms in folk medicine is centered largely on the Asian continent, people in other parts of the world like the Middle East, Poland, and Belarus have been documented using mushrooms for medicinal purposes. Mushrooms produce large amounts of vitamin D when exposed to ultraviolet (UV) light. Penicillin, ciclosporin, griseofulvin, cephalosporin and psilocybin are examples of drugs that have been isolated from molds or other fungi. 
See also
Ethnomycology
Glossary of mycology
Fungal biochemical test
List of mycologists
List of mycology journals
Marine fungi
Mushroom hunting
Mycotoxicology
Pathogenic fungi
Protistology

External links
Professional organizations
BMS: British Mycological Society (United Kingdom)
MSA: Mycological Society of America (North America)
Amateur organizations
MSSF: Mycological Society of San Francisco
North American Mycological Association (list of amateur organizations in North America)
Puget Sound Mycological Society
Oregon Mycological Society
IMA Illinois Mycological Association
Miscellaneous links
Online lectures in mycology, University of South Carolina
The WWW Virtual Library: Mycology
MykoWeb links page
Mycological Glossary at the Illinois Mycological Association
FUNGI Magazine for professionals and amateurs – largest circulating U.S. publication concerning all things mycological
Fungal Cell Biology Group at University of Edinburgh, UK
Mycological Marvels, Cornell University, Mann Library
Sociobiology: The New Synthesis
Sociobiology: The New Synthesis (1975; 25th anniversary edition 2000) is a book by the biologist E. O. Wilson. It helped start the sociobiology debate, one of the great scientific controversies in biology of the 20th century and part of the wider debate about evolutionary psychology and the modern synthesis of evolutionary biology. Wilson popularized the term "sociobiology" as an attempt to explain the evolutionary mechanics behind social behaviour such as altruism, aggression, and the nurturing of the young. It formed a position within the long-running nature versus nurture debate. The fundamental principle guiding sociobiology is that an organism's evolutionary success is measured by the extent to which its genes are represented in the next generation. The book was generally well-reviewed in biological journals. It received a much more mixed reaction among sociologists, mainly triggered by the brief coverage of the implications of sociobiology for human society in the first and last chapters of the book; the body of the text was largely welcomed. Such was the level of interest in the debate that a review reached the front page of the New York Times. The sociologist Gerhard Lenski, admitting that sociologists needed to look further into non-human societies, agreed that human society was founded on biology but denied both biological reductionism and determinism. Lenski observed that since the nature-nurture dichotomy was false, there was no reason for sociologists and biologists to disagree. Other sociologists objected in particular to the final chapter, on "Man": Devra G. Kleiman called Wilson's attempt to extend his thesis to humans weak and premature, and noted that he had largely overlooked the importance of co-operative behaviour and females in mammalian societies. Context E. O. Wilson was an American biologist, specialising in the study of ants, social insects on which he was the world's leading expert. He is known also for his pioneering work on island biogeography, which relates species richness to island size, an important consideration in nature conservation. Wilson however favoured group selection over the Neo-Darwinian kin selection as an explanation of co-operation in social animals. Book Publication The book was first published in 1975. It has been reprinted at least 14 times up to 2014. It has been translated into languages including Chinese, Japanese, and Spanish. An abridged edition was published in 1980. Illustrations The book is illustrated with 31 halftone figures, 209 line drawings by Sarah Landry, and 43 tables. The drawings of animal societies were considered "informing and attractive". Contents Part I. Social Evolution The section summarizes the concepts of population genetics, a branch of evolutionary theory combining Mendelian genetics and natural selection in mathematical form to explain the pressures on animal societies. In particular, altruism, self-sacrificing behaviour, would die out unless something such as kin or group selection maintains it. 1. The Morality of the Gene 2. Elementary Concepts of Sociobiology 3. The Prime Movers of Social Evolution 4. The Relevant Principles of Population Biology 5. Group Selection and Altruism Part II. Social Mechanisms This section describes the types of social behaviour in animals, including the principles of animal communication, aggression, dominance systems, and insect castes. 6. Group Size, Reproduction, and Time-Energy Budgets 7. The Development and Modification of Social Behavior 8. 
Communication: Basic Principles 9. Communication: Functions and Complex Systems 10. Communication: Origins and Evolution 11. Aggression 12. Social Spacing, Including Territory 13. Dominance Systems 14. Roles and Castes 15. Sex and Society 16. Parental Care 17. Social Symbioses Part III. The Social Species The section describes the distribution of social behaviour in different taxa. The theme is that evolution is progressive, with four pinnacles of social evolution, namely the colonial invertebrates such as corals, the social insects, mammals other than humans, and finally humans. The last chapter argues that natural selection has made humans far more flexible in social organisation than any other species. 18. The Four Pinnacles of Social Evolution 19. The Colonial Microorganisms and Invertebrates 20. The Social Insects 21. The Cold-Blooded Vertebrates 22. The Birds 23. Evolutionary Trends within the Mammals 24. The Ungulates and Elephants 25. The Carnivores 26. The Nonhuman Primates 27. Man: From Sociobiology to Sociology Reception Contemporary Sociobiology attracted a large number of critical reviews, not only by biologists, but by social scientists who objected especially to Wilson's application of Darwinian thinking to humans, asserting that Wilson was implying a form of biological determinism. It was, unusually, reviewed on the front page of the New York Times in May 1975, and again in November that year as the controversy grew. The paper described the effect as "a period of ferment", naming the "monumental" book as the "yeast" [which caused the brew to bubble]. The Times noted that the debate was an updated version of the nature or nurture argument that had simmered ever since Darwin's time: "The assertion that man's body is a biological machine, subject to biological rules, has never completely shaken the conviction that the human intellect and human behavior are unique, the subject of free will." The paper reported that Wilson's colleague at Harvard, Richard Lewontin, had issued a 5,000 word attack on the book, and that the "meticulous" Wilson had said "I've tried to be extremely cautious in all this". The paper noted that Wilson had nowhere actually said that human behaviour was totally determined by genes, and reported him as saying that a rough figure was 10 percent genetic. By biologists The theoretical biologist Mary Jane West-Eberhard reviewed the book in detail for The Quarterly Review of Biology as a work "of special significance". She began it with a fable of a "small community of modest scholars called natural historians" who all practised their own sciences, until one day a man who "had been called Entomologist, Ecologist, and even Biochemist" arose among them and pronounced "there shall be a new science". She wrote that Wilson had "assumed god-like powers with this book", attempting to reformulate the foundations of the social sciences, making ethology and comparative psychology obsolete, and restructuring behavioural biology. She marvelled at the "sustained enthusiasm and authoritativeness" across a wide range of fields not Wilson's own, and the usefulness of many of the chapters. "In this book sociobiology is a patchwork neatly stitched from relevant pieces of other fields, without a bold new theoretical pattern of its own". 
She objected strongly to what she considered Wilson's "confused and misleading" discussion of altruism and group selection, arguing that kin selection provided an alternative (fully Darwinian) explanation and that Wilson was wrong to make it seem that group selection was necessary. Charles D. Michener, an entomologist, reviewed the book for BioScience. He observed that its scope was far wider than the social insects of Wilson's previous book The Insect Societies, dealing with "social phenomena from the slime molds to man". He found the review of population biology (Part I) excellent. He noted Wilson's statement that altruism is the central problem of sociobiology, and remarks that Wilson's account in fact indicates the solution, kin selection. He describes the chapter on Man as being "from the viewpoint of a very knowledgeable extraterrestrial visitor recording man's social natural history". The ornithologist Herbert Friedmann, reviewing the book for The Journal of Wildlife Management, called the book very important for its coverage of topics including of humans, and its "interpretive attitude". It would be a convenient summary of any of the groups it covers for the student, and the question of bio-ethics of interest to every "intelligent biologist". Friedmann noted that Wilson has "the courage of his convictions" to suggest in the chapter on Man that "human ethics and morality should be expressed biologically rather than philosophically", something that "need not deter the zoologist" since in Friedmann's view ethics does not exist in the human sense "in the nonhuman world". David Barash, a psychologist, thought it "about time" students of behaviour were finally becoming Darwinian, starting to turn the "ramshackle" science into something with firmer intellectual foundations. He defended sociobiology, arguing that it does not claim that genes somehow control behaviour, but that they along with experience and culture contribute to it. He speculated that it might be possible to make valid predictions about human behaviour by studying "cross-cultural universals in human behaviour", combining anthropology and evolutionary biology's theorem of fitness maximization. By sociologists The sociologist Eileen Barker reviewed the book for The British Journal of Sociology. She called it an "impressive tome (it weighs 5 lb)" and "a comprehensive, beautifully laid out and illustrated reference book covering the amazing variety of animal social behaviour". She noted that the final section on "Man" contained "several surprises for most sociologists", and that the book should counter "many of the naive inferences that have recently been made about man's evolutionary heritage." Marion Blute, in Contemporary Sociology, noted that it was rare for any book to be reviewed on the front page of the New York Times, or to receive "the extremes of reaction" seen for Sociobiology. She found that "the clarity, breadth and richness of accurately rendered detail in this monograph is really quite breath-taking." However, she objected to the claim that the book covered the biological basis of all social behaviour, as it did not cover what she called the "epigenetic disciplines", the effects of the environment on the embryonic and later development of the individual including learning (nurture, not just nature). She called the gap "unfortunate" and pointed out that "the development problem" and the functioning of the human brain were the frontiers of research. 
She observed, citing Dobzhansky, that "an evolutionary minded sociology which really appreciated the significance of sociocultural transmission along nongenetic lines would likely see society and culture in a very different way". Despite Wilson's neglect of "epigenetic" and social sciences, she urged sociologists to read "this exceptionally fine book", noting that despite its length it should have been twice as long. She looked forward to seeing sociology coming to terms with the neo-Darwinian synthesis, something that was already under way, which (she argued) would enrich social theory, a much better result than the alternative possibility, a renewed waste of time on the nature-versus-nurture debate. Gerhard Lenski, in Social Forces, admitted that sociologists had too often ignored non-human societies, and thought the book should be required reading. Human societies were plainly founded on biology, but this did not imply either biological reductionism or determinism. Comparison with other species would be productive, as nonhuman societies often had traditions handed down from one generation to the next, such as "the flyways of migratory birds or dietary patterns among primates". Issues of conflict and cooperation were similarly illuminated. But in his view the book raised "uncomfortable issues". The first chapter could sound, he argued, like "intellectual imperialism" as Wilson called sociology "an essentially nontheoretical, descriptive science, not unlike taxonomy and ecology forty years ago, before they were 'reshaped entirely ... [by] neo-Darwinian evolutionary theory'". Lenski however took Wilson more openly than that, noting Wilson's precursors, Julian Huxley, George Gaylord Simpson, Dobzhansky and others of the modern synthesis. They had tried repeatedly to talk to sociologists, and in Lenski's view that remained necessary. Further, he suggested, the nature-nurture dichotomy was evidently false, so there was no reason for sociologists and biologists to disagree. In his view, continued rejection of biology by sociologists only invited "a reductionist response on the part of biologists." Lenski found the final chapter on Man "disappointing", as Wilson had been unable to penetrate the "barriers" put up by social science against the modern synthesis, and Wilson's overestimation of the influence of genetics compared to culture and technology on human society. All the same, Lenski thought these "flaws" could be mended by dialogue between sociology and biology. Allan Mazur reviewed the book for the American Journal of Sociology. He called it an excellent and comprehensive survey, and said he found very few errors, though for instance squirrel monkeys did have dominance hierarchies. But he found the chapter on Man disappointing: it was trite, value-loaded, or wrong; used data uncritically, and seemed to be based on "Gerhard and Jean Lenski's introductory textbook". Further, he agreed with Wilson that scientific theories must be falsifiable, and stated "I claim that the bulk of Wilson's theorizing is not falsifiable and therefore is of little value." This was because Wilson's "theorizing" was sometimes tautologous, sometimes hopelessly vague, and sometimes based on unobservable past events. For instance, Mazur argued that Wilson's claim that altruism has evolved in most social species is untestable: Mazur denied that a mother's action to save her baby is altruistic, as (by kin selection) it increases her own fitness. 
However, Mazur was glad that Wilson has "legitimate[d] the biological approach to sociology", even if other books like Robert Hinde's 1974 Biological Bases of Human Social Behaviour were of more use to sociologists. Devra G. Kleiman reviewed the work for Signs. She called it "a remarkable attempt to explain the evolution of social behavior and social systems in animals by a synthesis of several disciplines within biology", but noted that it had been severely criticised by some biologists and social scientists. She observed that "it gives less attention to the environmental control of behavior" than to genetics. But "Wilson's ultimate sin" was to include the final chapter, "unfortunately titled 'Man'", attracting "the wrath of those who would deny the influence of biology on human behavior because of its political and social connotations." She called this a pity, since while his attempt to include humans in his analysis was "admittedly weak and premature", the general principles were correct – for instance, she argued, it was useful to know the genetic relatedness of individuals when assessing social interactions. She considered Wilson "nonrigorous and biased in his application of theory in certain areas". His biases included over-representation of insects, genetics, and the dominance of male mammals over females: Wilson had further exaggerated a bias from an ethology literature written mainly by males. Conversely, he had undervalued co-operative behaviour among mammals, except where it concerned males, ignoring the fact that, Kleiman argued, genetically related females were the core of most mammal societies. Wilson's book was in her view valuable as a framework for future research, but premature as a "Synthesis". By other disciplines The philosopher of politics Roger D. Masters reviewed the book for the American Political Science Review, stating that it was impossible both to review the book and not to do so, given the "attention" it had received. In his view, the book "has the indisputable merit of showing that the existence of complex societies is a biological phenomenon. By emphasizing the relationships between animal behavior and population genetics, Wilson compels us to recognize the evolutionary significance of events which social scientists often treat without reference to Darwinian biology." But there was "a large gap" between that and the work of most political scientists, and it was too early to attempt to apply sociobiology directly to human social issues in practice. He concluded that the book was fascinating, provocative, and the start of a return to the tradition "as old as Aristotle" where man is seen as "a 'political animal'", since social behaviour had natural origins. Philip L. Wagner, a geographer reviewing the book in Annals of the Association of American Geographers, argued that the book proposes a "fundamental thesis" for explaining the size, structure, and spatial arrangements of animal populations, all aspects of geography, and noted that Wilson and MacArthur's 1967 Theory of Island Biogeography had already set out some of these ideas. In his view, the most impressive aspect of the book was its mission to extend "rational deterministic explanation" far more widely. 
However, he thought the last chapter, extending the ideas to humans, far too brief and premature, as it failed to cover technology or tradition in general, while Wilson's speculations about "tradition drift" elsewhere in the book reinvented the study of diffusion of innovations and appeared unaware of "the now classical Hägerstrand diffusion models." The biology teacher Lotte R. Geller, reviewing the book in The American Biology Teacher, thought the book meticulously researched; no one would take exception to its thesis, but for the inclusion of man. "[Wilson] is well aware of the difficulties this presents." Geller called the last chapter, relating biology to sociology, a "step from scientific study to speculation". In her view, the most controversial and disturbing thing was the call for scientist and humanists to "temporarily" remove ethics "from the hands of the philosophers and biologize" it. She called it "dangerous to say that biologists should have a monopoly on truth and ethics." The anthropologist Frances L. Stewart, writing in the Bulletin of the Canadian Archaeological Association, noted that "An anthropologist reading this book is confronted by statements which contradict anthropological theory. The main argument that all social behavior has a biological basis would be questioned." Human biological determinism controversy The application of sociobiology to humans (discussed only in the first and last chapters of the book) was immediately controversial. Some researchers, led by Stephen Jay Gould and Richard Lewontin, contended that sociobiology embodied biological determinism. They argued that it would be used, as similar ideas had been in the past, to justify the status quo, entrench ruling elites, and legitimize authoritarian political programmes. They referred to social Darwinism and eugenics of the early 20th century, and other more recent developments, such as the IQ controversy of the early 1970s, as cautionary tales in the use of evolutionary principles as applied to human society. They believed that Wilson was committing the naturalistic fallacy, attempting to define moral principles using natural concepts. Academics opposed to Wilson's sociobiology, including Gould, Lewontin, Jon Beckwith, Ruth Hubbard, and Anthony Leeds created the Sociobiology Study Group of Science for the People to counter his ideas. Other critics believed that Wilson's theories, as well as the works of subsequent admirers, were not supported scientifically. Objections were raised to many of the ethnocentric assumptions of early sociobiology (like ignoring female gatherers in favour of male hunters in hunter-gatherer societies) and to the sampling and mathematical methods used in informing conclusions. Many of Wilson's less well supported conclusions were attacked (for example, Wilson's mathematical treatment of inheritance as involving a single gene per trait, even though he admitted that traits could be polygenic). Sociobiologists were accused of being "super" adaptationists, or panadaptationist, believing that every aspect of morphology and behaviour must necessarily be an evolutionarily beneficial adaptation. Philosophical debates about the nature of scientific truth and the applicability of any human reason to a subject so complex as human behaviour, considering past failures, raged. 
Describing the controversy, Eric Holtzmans noted that "Given the baleful history of misuse of biology in justifying or designing social policies and practices, authors who attempt to consider human sociobiology have special responsibilities that are not adequately discharged by the usual academic caveats." Wilson and his admirers countered these criticisms by saying that Wilson had no political agenda, and if he had one it was certainly not authoritarian, citing Wilson's environmentalism in particular. They argued that they as scientists had a duty to uncover the truth whether that was politically correct or not. Wilson called the claim that sociobiology is biological determinism "academic vigilantism" and the Sociobiology Study Group response "a largely ideological argument". Noam Chomsky, a linguist and political scientist, surprised many by coming to the defense of sociobiology on the grounds that political radicals needed to postulate a relatively fixed idea of human nature in order to be able to struggle for a better society, claiming that leaders should know what human needs were in order to build a better society. Retrospective With the publication of the 25th anniversary edition in 2000, the historians of biology Michael Yudell and Rob Desalle reviewed the nature-nurture controversy around the book. "Once again", they wrote, "biological reductionism and genetic determinism became the focus of rancorous debates, discussions and diatribes within both academia and popular culture." They pointed out that the quest for a "sociobiologization" of biology was not new, mentioning Darwin's The Descent of Man, R.A. Fisher, and Julian Huxley, all touching on the biological basis of human society, followed by Konrad Lorenz, Desmond Morris and Robert Ardrey in the 1960s, and Richard Dawkins and David Barash in the 1970s. Wilson's choice of title echoed the modern synthesis (named by Huxley in 1942) and, the reviewers argued, meant to build upon and extend it. 25 years on, they noted, most of the discord had gone, and the discipline had been renamed as evolutionary psychology; they were surprised to find that Wilson was happy with that, and they called the new discipline pop psychology for people "who like telling just-so stories". Concerning the anniversary edition, Yudell and Desalle thought it strange that nothing worth adding had happened in 25 years: the book remained a primary text, and Wilson's failure to develop it weakened the edition's impact. The early chapters still seemed a "lucid and engaging" introduction to population biology, but much of the rest seemed after 25 years to lack "methodological breadth", given that it did not cover the new fields that had emerged; while barely mentioning the growing importance of phylogenetic systematics seemed "curious". They pointed out that comparing human and "animal" social evolution "is tantamount to making homology" claims, but Wilson had said nothing about the need for a methodology to test behavioural homology. The reviewers were also troubled by Wilson's attitude to the debate, remaining "contemptuous of his anti-sociobiological opposition" and "opprobrium towards Marxism" (especially Gould and Lewontin). Yudell and Desalle noted the irony that Wilson despised Marxism but advocated an "aggressive paradigm ... seeking to blaze an historical path towards the future" (as Marxism did). They argued that by demonising his opponents in this way, Wilson created support for Sociobiology "not necessarily sustainable by his data and methodologies." 
He was still doing that 25 years on, stated the reviewers. An extensive account of the controversy around the book was published at the same time as the new edition, largely supporting Wilson's views. Looking back at Sociobiology 35 years later, the philosopher of biology Michael Ruse called the book "a pretty remarkable achievement" of huge scope, "firmly in the Darwinian paradigm of evolution through natural selection". He found one aspect of the book "very peculiar" in its "metaphysical underpinning", namely that Wilson was committed to the idea of progress in biology, "the idea that organic life has proceeded from the very simple to the very complex, from the value-free to the value-laden, from (as they used to say in the 19th century) the monad to the man." Ruse observed that while producing humans might look like progress, evolution had "also produced smallpox and syphilis and potato blight," raising "serious doubts about whether evolution is progressive." Ruse noted that Gould's 1989 book Wonderful Life was entirely an attack on this idea of progress. References Bibliography External links Sociobiology: The New Synthesis 1975, Harvard University Press, (Twenty-fifth Anniversary Edition, 2000 ) 1975 non-fiction books American non-fiction books Books about evolution Books about sociobiology Cognitive science literature English-language books Harvard University Press books Sociology books Works by E. O. Wilson
Closed ecological system
Closed ecological systems or contained ecological systems (CES) are ecosystems that do not rely on matter exchange with any part outside the system. The term is most often used to describe small, man-made ecosystems. Such systems can potentially serve as life-support systems during space flights, in space stations or space habitats. In a closed ecological system, any waste products produced by one species must be used by at least one other species. If the purpose is to maintain a life form, such as a mouse or a human, waste products such as carbon dioxide, feces and urine must eventually be converted into oxygen, food, and water. A closed ecological system must contain at least one autotrophic organism. While both chemotrophic and phototrophic organisms are plausible, almost all closed ecological systems to date are based on an autotroph such as green algae.

Examples
A closed ecological system for an entire planet is called an ecosphere. Man-made closed ecological systems created to sustain human life include Biosphere 2, MELiSSA, and the BIOS-1, BIOS-2, and BIOS-3 projects. Bottle gardens and aquarium ecospheres are partially or fully enclosed glass containers that form self-sustaining closed ecosystems; they can be made or purchased, and can include tiny shrimp, algae, gravel, decorative shells, and Gorgonia.

In fiction
Closed ecological systems are commonly featured in fiction, particularly in science fiction. These include domed cities, space stations and habitats on foreign planets or asteroids, cylindrical habitats (e.g. O'Neill cylinders), Dyson spheres and so on.
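To make the matter-recycling constraint described above concrete, the following sketch checks whether every product of each member of a candidate system is consumed by at least one other member; the species names and product lists are invented placeholders, not a design for a real system.

# Rough sketch of the closure constraint described above: every product of a
# member must be consumed by at least one other member. Species names and
# product lists are invented placeholders, not a real system design.

system = {
    "human":       {"produces": {"carbon dioxide", "urine", "feces"},
                    "consumes": {"oxygen", "food", "water"}},
    "green_algae": {"produces": {"oxygen", "food", "water"},
                    "consumes": {"carbon dioxide", "nutrients"}},
    "decomposer":  {"produces": {"nutrients"},
                    "consumes": {"feces", "urine"}},
}

def unrecycled_products(system: dict) -> set:
    """Return products that no other member of the system consumes."""
    leftovers = set()
    for name, member in system.items():
        consumed_by_others = {p for other, m in system.items()
                              if other != name for p in m["consumes"]}
        leftovers |= member["produces"] - consumed_by_others
    return leftovers

print(unrecycled_products(system) or "all products recycled")

Running this on the made-up three-member system prints "all products recycled"; removing the decomposer would leave feces and urine unconsumed, so the system would no longer be closed in the sense described above.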
Domestication syndrome
Domestication syndrome refers to two sets of phenotypic traits that are common to either domesticated plants or domesticated animals. Domesticated animals tend to be smaller and less aggressive than their wild counterparts; they may also have floppy ears, variations in coat color, a smaller brain, and a shorter muzzle. Other traits may include changes in the endocrine system and an extended breeding cycle. These animal traits have been claimed to emerge across the different species in response to selection for tameness, which was purportedly demonstrated in a famous Russian fox breeding experiment, though this claim has been disputed. Other research suggested that pleiotropic change in neural crest cell regulating genes was the common cause of shared traits seen in many domesticated animal species. However, several recent publications have either questioned this neural crest cell explanation or cast doubt on the existence of domestication syndrome itself. One recent publication points out that shared selective regime changes following the transition from wild to domestic environments are a more likely cause of any convergent traits. In addition, the sheer number, diversity, and phenotypic importance of neural crest cell-derived vertebrate features means that changes in genes associated with them are almost inevitable in response to any significant selective change.

The process of plant domestication has produced changes in shattering/fruit abscission, shorter height, larger grain or fruit size, easier threshing, synchronous flowering, and increased yield, as well as changes in color, taste, and texture.

Origin
Charles Darwin's study of The Variation of Animals and Plants Under Domestication in 1868 identified various behavioral, morphological, and physiological traits that are shared by domestic animals, but not by their wild ancestors. These shared traits became known as "the domestication syndrome", a term originally used to describe common changes in domesticated grains. In animals, these traits include tameness, docility, floppy ears, altered tails, novel coat colors and patterns, reduced brain size, reduced body mass and smaller teeth. Other traits include changes in craniofacial morphology, alterations to the endocrine system, and changes to the female estrous cycle, including the ability to breed all year round. A recent hypothesis suggests that neural crest cell behaviour may be modified by domestication, which then leads to those traits that are common across many domesticated animal species. This hypothesis has claimed support from many gene-based studies. However, recent publications have disputed this support, pointing out that observed change in neural crest related genes only reveals change in neural crest-derived features. In effect, it is not evidence of linked trait changes in different species due to pleiotropic neural crest mechanisms, as claimed by the neural crest cell hypothesis. For example, all of the craniofacial skeleton is derived from the neural crest, so any animal population that experiences evolutionary change in craniofacial features will show changes in genes associated with the neural crest. The number and importance of neural crest cell features in all vertebrates means change in these features is almost inevitable under the major selective regime shifts experienced by animals making the wild to domestic transition.
Cause Many similar traits – both in animals and plants – are produced by orthologs; however, whether this is true for domestication traits or merely for wild forms is less clear. Especially in the case of plant crops, doubt has been cast because some domestication traits have been found to result from unrelated loci. In 2018, a study identified 429 genes that differed between modern dogs and modern wolves. As the differences in these genes could also be found in ancient dog fossils, these were regarded as being the result of the initial domestication and not from recent breed formation. These genes are linked to neural crest and central nervous system development. These genes affect embryogenesis and can confer tameness, smaller jaws, floppy ears, and diminished craniofacial development, which distinguish domesticated dogs from wolves and are considered to reflect domestication syndrome. The study concluded that during early dog domestication, the initial selection was for behavior. This trait is influenced by those genes which act in the neural crest, which led to the phenotypes observed in modern dogs. The 2023 parasite-mediated domestication hypothesis suggests that endoparasites such as helminths and protozoa could have mediated the domestication of mammals. Domestication involves taming, which has an endocrine component; and parasites can modify endocrine activity and microRNAs. Genes for resistance to parasites might be linked to those for the domestication syndrome; it is predicted that domestic animals are less resistant to parasites than their wild relatives. In animals A dog's cranium is 15% smaller than an equally heavy wolf's, and the dog is less aggressive and more playful. Other species pairs show similar differences. Bonobos, like chimpanzees, are a close genetic cousin to humans, but unlike the chimpanzees, bonobos are not aggressive and do not participate in lethal inter-group aggression or kill within their own group. The most distinctive features of a bonobo are its cranium, which is 15% smaller than a chimpanzee's, and its less aggressive and more playful behavior. These, and other, features led to the proposal that bonobos are a 'self-domesticated' ape. In other examples, the guinea pig's cranium is 13% smaller than its wild cousin the cavy, and domestic fowl show a similar reduction to their wild cousins. In a famous Russian farm fox experiment, foxes selectively bred for reduced aggression appeared to show other traits associated with domestication syndrome. This prompted the claim that domestication syndrome was caused by selection for tameness. The foxes were not selectively bred for smaller craniums and teeth, floppy ears, or skills at using human gestures, but these traits were demonstrated in the friendly foxes. Natural selection favors those that are the most successful at reproducing, not the most aggressive. Selection against aggression made possible the ability to cooperate and communicate among foxes, dogs and bonobos. The more docile animals have been found to have less testosterone than their more aggressive counterparts, and testosterone controls aggression and brain size. The further away a dog breed is genetically from wolves, the larger the relative brain size is. Challenge The domestication syndrome was reported to have appeared in the domesticated silver fox cultivated by Dmitry Belyayev's breeding experiment. 
However, in 2015 canine researcher Raymond Coppinger found historical evidence that Belyayev's foxes originated in fox farms on Prince Edward Island, where they had been bred for fur farming since the 1800s, and that the traits demonstrated by Belyayev had occurred in the foxes prior to the breeding experiment. A 2019 opinion paper by Lord and colleagues argued that the results of the "Russian farm fox experiment" were overstated, although the pre-domesticated origins of these Russian foxes were already a matter of scientific record. In 2020, Wright et al. argued that Lord et al.'s critique refuted only a narrow and unrealistic definition of domestication syndrome, because their criteria assumed it must be caused by genetic pleiotropy and must arise in response to 'selection for tameness', as claimed by Belyayev, Trut, and the proposers of the neural crest hypothesis. In the same year, Zeder pointed out that it makes no sense to deny the existence of domestication syndrome on the basis that domestication syndrome traits were present in the pre-domesticated founding foxes. The hypothesis that neural crest genes underlie some of the phenotypic differences between domestic and wild horses and dogs is supported by the functional enrichment of candidate genes under selection. However, the observation of changed neural crest cell genes between wild and domestic populations need only reveal changes to features derived from the neural crest; it does not support the claim of a common underlying genetic architecture that causes all of the domestication syndrome traits in all of the different animal species. Gleeson and Wilson synthesized this debate and argued that animal domestication syndrome is not caused by selection for tameness, or by neural crest cell genetic pleiotropy. Instead, it could result from shared selective regime changes (which they termed 'reproductive disruption') leading to similarly shared trait changes across different species, in effect a series of partial trait convergences. They proposed four primary selective pathways that are commonly altered by the shift to a domestic selective context and would often lead to similar shifts in different populations. These pathways are:

Disrupted inter-sexual selection in males (reduced/altered female choice).
Disrupted intra-sexual selection in males (reduced/altered male-male competition).
Changed resource availability and predation pressure affecting female fertility and offspring survival.
Intensified potential for maternal stress, selecting for altered reproductive physiology in females.

Because the 'Reproductive Disruption' hypothesis explains domestication syndrome as a result of changed selective regimes, it can encompass multiple genetic or physiological ways in which similar traits might emerge in the different domesticated species. For example, tamer behavior might be caused by reduced adrenal reactivity, by increased oxytocin production, or by a combination of these or other mechanisms, across the different populations and species.

In plants

Syndrome traits

The same concept appears in the plant domestication process which produces crops, but with its own set of syndrome traits.
In cereals, these include little to no shattering/fruit abscission, shorter height (thus decreased lodging), larger grain or fruit size, easier threshing, synchronous flowering, altered timing of flowering, increased grain weight, glutinousness (stickiness, not gluten protein content), increased fruit/grain number, altered color compounds, taste, and texture, daylength independence, determinate growth, lesser/no vernalization, and less seed dormancy.

Cereal genes by trait

Control of the syndrome traits in cereals is by:

Shattering: SH1 in sorghum, rice, and maize/corn; sh4 in the rachis of rice; qPDH1 in soybean; Q in wheat; LG1 in rice
Plant height: Rht-B1/Rht-D1 (two orthologous versions of Rht-1 on different subgenomes, Rht standing for reduced height) in wheat; GA20ox-2 in rice and barley; KO2 in one Japanese cultivar of rice; either dw3 or d2 in sorghum and pearl millet; Ghd7 in rice; Q in wheat
Grain size: GS3 in maize/corn and rice; GS5 in rice; An-1 in rice; GAD1/RAE2 (smaller) in rice
Yield: SPL14/LOC4345998 in rice; pyl1, pyl4, pyl6 in the PYL gene family in rice
Threshability: Q and Nud; An-1 (by reducing or eliminating awns) in rice; An-2/LABA1 (small awn reduction/barbless awns) in rice; GAD1/RAE2 (awn elimination) in rice; tga1 (naked kernels) in maize
Flowering time: VRN1 in barley, wheat, and ryegrass
Grain weight: GW2 in rice, wheat, and maize/corn; GW5 in rice; GLW2 in rice; GASR7 in wheat; TGW6 in rice
Glutinousness: GBSSI or Waxy in rice (especially glutinous rice), wheat, corn, barley, sorghum, and foxtail millet; SBEIIb in rice
Determinate growth: TERMINAL FLOWER 1/TFL1 in Arabidopsis thaliana and its orthologs; specifically, four orthologs in Glycine max and eight in Phaseolus vulgaris
Standability: PROSTRATE GROWTH/Prog1/PROG1 in rice; teosinte branched1/tb1 (apical dominance) in maize/corn
Grain/fruit number: An-1 in rice; GAD1/RAE2 in rice; PROG1 (by increasing tiller number) in rice; Gn1a in rice; AAP3 (by increasing tiller number) in rice
Panicle size: DEP1 in rice and wheat
Spike number: vrs1 in barley
Fragrance: BADH2, which produces 2-acetyl-1-pyrroline when defective, in rice; it can be artificially disrupted to produce the same compound
Delayed sprouting: pyl1, pyl4, pyl6 in the PYL gene family (reduced preharvest sprouting) in rice
Altered color: Rc (white pericarp) in rice
Unspecified trait: Teosinte glume architecture/tga in maize/corn

Many of these are mutations in regulatory genes, especially transcription factors, which is likely why they work so well in domestication: they are not new, and are relatively ready to have their magnitudes altered. In annual grains, loss of function and altered expression are by far the most common changes, and thus are the most interesting goals of mutation breeding, while copy number variation and chromosomal rearrangements are far less common.

See also: Agricultural weed syndrome
Geology
Geology is a branch of natural science concerned with the Earth and other astronomical objects, the rocks of which they are composed, and the processes by which they change over time. Modern geology significantly overlaps all other Earth sciences, including hydrology, and is integrated with Earth system science and planetary science. Geology describes the structure of the Earth on and beneath its surface and the processes that have shaped that structure. Geologists study the mineralogical composition of rocks in order to get insight into their history of formation. Geology determines the relative ages of rocks found at a given location; geochemistry (a branch of geology) determines their absolute ages. By combining various petrological, crystallographic, and paleontological tools, geologists are able to chronicle the geological history of the Earth as a whole, one aspect of which is demonstrating the age of the Earth. Geology provides evidence for plate tectonics, the evolutionary history of life, and the Earth's past climates. Geologists broadly study the properties and processes of Earth and other terrestrial planets. They use a wide variety of methods to understand the Earth's structure and evolution, including fieldwork, rock description, geophysical techniques, chemical analysis, physical experiments, and numerical modelling. In practical terms, geology is important for mineral and hydrocarbon exploration and exploitation, evaluating water resources, understanding natural hazards, remediating environmental problems, and providing insights into past climate change. Geology is a major academic discipline; it is central to geological engineering and plays an important role in geotechnical engineering.

Geological material

The majority of geological data comes from research on solid Earth materials. Meteorites and other extraterrestrial natural materials are also studied by geological methods.

Minerals

Minerals are naturally occurring elements and compounds with a definite homogeneous chemical composition and an ordered atomic arrangement. Each mineral has distinct physical properties, and there are many tests to determine each of them; minerals are often identified through these tests. A specimen can be tested for:

Color: Minerals are grouped by their color. Mostly diagnostic, but impurities can change a mineral's color.
Streak: Performed by scratching the sample on a porcelain plate. The color of the streak can help identify the mineral.
Hardness: The resistance of a mineral to scratching or indentation.
Breakage pattern: A mineral can show either fracture or cleavage, the former being breakage along uneven surfaces, the latter breakage along closely spaced parallel planes.
Luster: The quality of light reflected from the surface of a mineral. Examples are metallic, pearly, waxy, and dull.
Specific gravity: The density of a mineral relative to that of water.
Effervescence: Involves dripping hydrochloric acid on the mineral to test for fizzing.
Magnetism: Involves using a magnet to test for magnetism.
Taste: Minerals can have a distinctive taste, such as halite (which tastes like table salt).

Rock

A rock is any naturally occurring solid mass or aggregate of minerals or mineraloids. Most research in geology is associated with the study of rocks, as they provide the primary record of the majority of the geological history of the Earth. There are three major types of rock: igneous, sedimentary, and metamorphic. The rock cycle illustrates the relationships among them, as sketched below.
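Since the rock-cycle diagram itself is not reproduced in this text, a minimal sketch of the main transitions, limited to the processes named in the surrounding paragraphs, can stand in for it. This is an illustrative summary, not an exhaustive model of the cycle.

```python
# Minimal sketch of the rock-cycle transitions described in the text.
# Each entry maps a starting material to the processes (as named in the
# surrounding prose) that turn it into another material.
ROCK_CYCLE = {
    "magma/lava": [("solidification/crystallization", "igneous rock")],
    "igneous rock": [
        ("weathering, erosion, redeposition, lithification", "sedimentary rock"),
        ("heat and pressure (metamorphism)", "metamorphic rock"),
        ("melting", "magma/lava"),
    ],
    "sedimentary rock": [
        ("heat and pressure (metamorphism)", "metamorphic rock"),
        ("melting", "magma/lava"),
    ],
    "metamorphic rock": [
        ("weathering, erosion, redeposition, lithification", "sedimentary rock"),
        ("melting", "magma/lava"),
    ],
}

def print_cycle(cycle: dict) -> None:
    """Print each transition as 'start --process--> product'."""
    for start, transitions in cycle.items():
        for process, product in transitions:
            print(f"{start} --{process}--> {product}")

if __name__ == "__main__":
    print_cycle(ROCK_CYCLE)
```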
When a rock solidifies or crystallizes from melt (magma or lava), it is an igneous rock. This rock can be weathered and eroded, then redeposited and lithified into a sedimentary rock. Sedimentary rocks are mainly divided into four categories: sandstone, shale, carbonate, and evaporite. This group of classifications focuses partly on the size of sedimentary particles (sandstone and shale), and partly on mineralogy and formation processes (carbonate formation and evaporation). Igneous and sedimentary rocks can then be turned into metamorphic rocks by heat and pressure that change their mineral content, resulting in a characteristic fabric. All three types may melt again, and when this happens, new magma is formed, from which an igneous rock may once again solidify. Organic matter, such as coal, bitumen, oil, and natural gas, is linked mainly to organic-rich sedimentary rocks. To study all three types of rock, geologists evaluate the minerals of which they are composed and their other physical properties, such as texture and fabric.

Unlithified material

Geologists also study unlithified materials (referred to as superficial deposits) that lie above the bedrock. This study is often known as Quaternary geology, after the Quaternary period of geologic history, which is the most recent period of geologic time.

Magma

Magma is the original unlithified source of all igneous rocks. The active flow of molten rock is closely studied in volcanology, and igneous petrology aims to determine the history of igneous rocks from their original molten source to their final crystallization.

Whole-Earth structure

Plate tectonics

In the 1960s, it was discovered that the Earth's lithosphere, which includes the crust and rigid uppermost portion of the upper mantle, is separated into tectonic plates that move across the plastically deforming, solid, upper mantle, which is called the asthenosphere. This theory is supported by several types of observations, including seafloor spreading and the global distribution of mountain terrain and seismicity. There is an intimate coupling between the movement of the plates on the surface and the convection of the mantle (that is, the heat transfer caused by the slow movement of ductile mantle rock). Thus, oceanic parts of plates and the adjoining mantle convection currents always move in the same direction, because the oceanic lithosphere is actually the rigid upper thermal boundary layer of the convecting mantle. This coupling between rigid plates moving on the surface of the Earth and the convecting mantle is called plate tectonics. The development of plate tectonics has provided a physical basis for many observations of the solid Earth. Long linear regions of geological features are explained as plate boundaries:

Mid-ocean ridges, high regions on the seafloor where hydrothermal vents and volcanoes exist, are seen as divergent boundaries, where two plates move apart.
Arcs of volcanoes and earthquakes are interpreted as convergent boundaries, where one plate subducts, or moves, under another.
Transform boundaries, such as the San Andreas Fault system, are where plates slide horizontally past each other.

Plate tectonics has provided a mechanism for Alfred Wegener's theory of continental drift, in which the continents move across the surface of the Earth over geological time. It also provided a driving force for crustal deformation and a new setting for the observations of structural geology.
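One of the observations cited above, seafloor spreading, is quantitative at heart: oceanic crust gets older with distance from a mid-ocean ridge, and the ratio of distance to age gives a spreading rate. The sketch below only illustrates that arithmetic; the distance and age values are hypothetical placeholders, not measurements from any particular ridge.

```python
# Illustrative only: compute a half-spreading rate from the distance of
# dated oceanic crust to the ridge axis. The sample values are hypothetical.
def half_spreading_rate_mm_per_yr(distance_km: float, crust_age_myr: float) -> float:
    """Distance from the ridge axis (km) divided by crust age (Myr), in mm/yr."""
    distance_mm = distance_km * 1_000_000   # 1 km = 1e6 mm
    age_yr = crust_age_myr * 1_000_000      # 1 Myr = 1e6 yr
    return distance_mm / age_yr

if __name__ == "__main__":
    # Hypothetical example: crust dated at 10 Myr found 250 km from the axis.
    rate = half_spreading_rate_mm_per_yr(distance_km=250.0, crust_age_myr=10.0)
    print(f"Half-spreading rate: {rate:.1f} mm/yr")  # prints 25.0 mm/yr
```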
The power of the theory of plate tectonics lies in its ability to combine all of these observations into a single theory of how the lithosphere moves over the convecting mantle.

Earth structure

Advances in seismology, computer modeling, and mineralogy and crystallography at high temperatures and pressures give insights into the internal composition and structure of the Earth. Seismologists can use the arrival times of seismic waves to image the interior of the Earth. Early advances in this field showed the existence of a liquid outer core (where shear waves are not able to propagate) and a dense solid inner core. These advances led to the development of a layered model of the Earth, with a lithosphere (including crust) on top, the mantle below (separated within itself by seismic discontinuities at 410 and 660 kilometers), and the outer core and inner core below that. More recently, seismologists have been able to create detailed images of wave speeds inside the Earth in the same way a doctor images a body in a CT scan. These images have led to a much more detailed view of the interior of the Earth, and have replaced the simplified layered model with a much more dynamic model. Mineralogists have been able to use the pressure and temperature data from the seismic and modeling studies alongside knowledge of the elemental composition of the Earth to reproduce these conditions in experimental settings and measure changes within the crystal structure. These studies explain the chemical changes associated with the major seismic discontinuities in the mantle and show the crystallographic structures expected in the inner core of the Earth.

Geological time

The geological time scale encompasses the history of the Earth. It is bracketed at the earliest by the dates of the first Solar System material at 4.567 Ga (or 4.567 billion years ago) and the formation of the Earth at 4.54 Ga (4.54 billion years), which is the beginning of the Hadean eon, a division of geological time. At the later end of the scale, it is marked by the present day (in the Holocene epoch).

Important milestones on Earth

4.567 Ga (gigaannum: billion years ago): Solar System formation
4.54 Ga: Accretion, or formation, of Earth
c. 4 Ga: End of Late Heavy Bombardment, the first life
c. 3.5 Ga: Start of photosynthesis
c. 2.3 Ga: Oxygenated atmosphere, first snowball Earth
730–635 Ma (megaannum: million years ago): Second snowball Earth
541 ± 0.3 Ma: Cambrian explosion – vast multiplication of hard-bodied life; first abundant fossils; start of the Paleozoic
c. 380 Ma: First vertebrate land animals
250 Ma: Permian-Triassic extinction – 90% of all land animals die; end of Paleozoic and beginning of Mesozoic
66 Ma: Cretaceous–Paleogene extinction – dinosaurs die; end of Mesozoic and beginning of Cenozoic
c. 7 Ma: First hominins appear
3.9 Ma: First Australopithecus, direct ancestor to modern Homo sapiens, appears
200 ka (kiloannum: thousand years ago): First modern Homo sapiens appear in East Africa

Corresponding geologic timescales have also been constructed for the Moon and Mars.

Dating methods

Relative dating

Methods for relative dating were developed when geology first emerged as a natural science. Geologists still use the following principles today as a means to provide information about geological history and the timing of geological events. The principle of uniformitarianism states that the geological processes observed in operation that modify the Earth's crust at present have worked in much the same way over geological time.
A fundamental principle of geology advanced by the 18th-century Scottish physician and geologist James Hutton is that "the present is the key to the past." In Hutton's words: "the past history of our globe must be explained by what can be seen to be happening now." The principle of intrusive relationships concerns crosscutting intrusions. In geology, when an igneous intrusion cuts across a formation of sedimentary rock, it can be determined that the igneous intrusion is younger than the sedimentary rock. Different types of intrusions include stocks, laccoliths, batholiths, sills and dikes. The principle of cross-cutting relationships pertains to the formation of faults and the age of the sequences through which they cut. Faults are younger than the rocks they cut; accordingly, if a fault is found that penetrates some formations but not those on top of it, then the formations that were cut are older than the fault, and the ones that are not cut must be younger than the fault. Finding the key bed in these situations may help determine whether the fault is a normal fault or a thrust fault. The principle of inclusions and components states that, with sedimentary rocks, if inclusions (or clasts) are found in a formation, then the inclusions must be older than the formation that contains them. For example, in sedimentary rocks, it is common for gravel from an older formation to be ripped up and included in a newer layer. A similar situation with igneous rocks occurs when xenoliths are found. These foreign bodies are picked up as magma or lava flows, and are incorporated, later to cool in the matrix. As a result, xenoliths are older than the rock that contains them. The principle of original horizontality states that the deposition of sediments occurs as essentially horizontal beds. Observation of modern marine and non-marine sediments in a wide variety of environments supports this generalization (although cross-bedding is inclined, the overall orientation of cross-bedded units is horizontal). The principle of superposition states that a sedimentary rock layer in a tectonically undisturbed sequence is younger than the one beneath it and older than the one above it. Logically a younger layer cannot slip beneath a layer previously deposited. This principle allows sedimentary layers to be viewed as a form of the vertical timeline, a partial or complete record of the time elapsed from deposition of the lowest layer to deposition of the highest bed. The principle of faunal succession is based on the appearance of fossils in sedimentary rocks. As organisms exist during the same period throughout the world, their presence or (sometimes) absence provides a relative age of the formations where they appear. Based on principles that William Smith laid out almost a hundred years before the publication of Charles Darwin's theory of evolution, the principles of succession developed independently of evolutionary thought. The principle becomes quite complex, however, given the uncertainties of fossilization, localization of fossil types due to lateral changes in habitat (facies change in sedimentary strata), and that not all fossils formed globally at the same time. Absolute dating Geologists also use methods to determine the absolute age of rock samples and geological events. These dates are useful on their own and may also be used in conjunction with relative dating methods or to calibrate relative methods. 
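The arithmetic behind most absolute methods is the exponential decay law: if a mineral retains the daughter atoms produced after it passes through its closure temperature, the elapsed time follows from the measured daughter-to-parent ratio, t = (1/λ) ln(1 + D/P), with λ = ln 2 / half-life. A minimal sketch follows; the D/P value and the half-life used in the example are hypothetical, chosen only to illustrate the calculation, and the model ignores complications such as initial daughter content or branching decay.

```python
import math

def decay_constant(half_life_years: float) -> float:
    """lambda = ln(2) / half-life."""
    return math.log(2) / half_life_years

def radiometric_age(daughter_to_parent: float, half_life_years: float) -> float:
    """Age in years from a measured daughter/parent ratio, assuming a closed
    system with no initial daughter: t = (1/lambda) * ln(1 + D/P)."""
    lam = decay_constant(half_life_years)
    return math.log(1.0 + daughter_to_parent) / lam

if __name__ == "__main__":
    # Hypothetical measurement: D/P = 0.5 in a system with a 1.25-billion-year
    # half-life (roughly the order of magnitude of potassium-40, used here
    # only as an illustrative figure).
    age = radiometric_age(daughter_to_parent=0.5, half_life_years=1.25e9)
    print(f"Apparent age: {age / 1e9:.2f} billion years")
```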
At the beginning of the 20th century, advancement in geological science was facilitated by the ability to obtain accurate absolute dates for geological events using radioactive isotopes and other methods. This changed the understanding of geological time. Previously, geologists could only use fossils and stratigraphic correlation to date sections of rock relative to one another. With isotopic dates, it became possible to assign absolute ages to rock units, and these absolute dates could be applied to fossil sequences in which there was datable material, converting the old relative ages into new absolute ages. For many geological applications, isotope ratios of radioactive elements are measured in minerals that give the amount of time that has passed since a rock passed through its particular closure temperature, the point at which different radiometric isotopes stop diffusing into and out of the crystal lattice. These are used in geochronologic and thermochronologic studies. Common methods include uranium–lead dating, potassium–argon dating, argon–argon dating and uranium–thorium dating. These methods are used for a variety of applications. Dating of lava and volcanic ash layers found within a stratigraphic sequence can provide absolute age data for sedimentary rock units that do not contain radioactive isotopes and calibrate relative dating techniques. These methods can also be used to determine ages of pluton emplacement. Thermochronological techniques can be used to determine temperature profiles within the crust, the uplift of mountain ranges, and paleo-topography. Fractionation of the lanthanide series elements is used to compute ages since rocks were removed from the mantle. Other methods are used for more recent events. Optically stimulated luminescence and cosmogenic radionuclide dating are used to date surfaces and/or erosion rates. Dendrochronology can also be used for the dating of landscapes. Radiocarbon dating is used for geologically young materials containing organic carbon.

Geological development of an area

The geology of an area changes through time as rock units are deposited and intruded, and deformational processes alter their shapes and locations. Rock units are first emplaced either by deposition onto the surface or intrusion into the overlying rock. Deposition can occur when sediments settle onto the surface of the Earth and later lithify into sedimentary rock, or when volcanic material such as volcanic ash or lava flows blankets the surface. Igneous intrusions such as batholiths, laccoliths, dikes, and sills push upwards into the overlying rock, and crystallize as they intrude. After the initial sequence of rocks has been deposited, the rock units can be deformed and/or metamorphosed. Deformation typically occurs as a result of horizontal shortening, horizontal extension, or side-to-side (strike-slip) motion. These structural regimes broadly relate to convergent boundaries, divergent boundaries, and transform boundaries, respectively, between tectonic plates. When rock units are placed under horizontal compression, they shorten and become thicker. Because rock units, other than muds, do not significantly change in volume, this is accomplished in two primary ways: through faulting and folding. In the shallow crust, where brittle deformation can occur, thrust faults form, which causes the deeper rock to move on top of the shallower rock.
Because deeper rock is often older, as noted by the principle of superposition, this can result in older rocks moving on top of younger ones. Movement along faults can result in folding, either because the faults are not planar or because rock layers are dragged along, forming drag folds as slip occurs along the fault. Deeper in the Earth, rocks behave plastically and fold instead of faulting. These folds can be either those where the material in the center of the fold buckles upwards, creating "antiforms", or those where it buckles downwards, creating "synforms". If the tops of the rock units within the folds remain pointing upwards, they are called anticlines and synclines, respectively. If some of the units in the fold are facing downward, the structure is called an overturned anticline or syncline, and if all of the rock units are overturned or the correct up-direction is unknown, they are simply called by the most general terms, antiforms and synforms. Even higher pressures and temperatures during horizontal shortening can cause both folding and metamorphism of the rocks. This metamorphism causes changes in the mineral composition of the rocks and creates a foliation, or planar surface, that is related to mineral growth under stress. This can remove signs of the original textures of the rocks, such as bedding in sedimentary rocks, flow features of lavas, and crystal patterns in crystalline rocks. Extension causes the rock units as a whole to become longer and thinner. This is primarily accomplished through normal faulting and through ductile stretching and thinning. Normal faults drop rock units that are higher below those that are lower. This typically results in younger units ending up below older units. Stretching of units can result in their thinning. In fact, at one location within the Maria Fold and Thrust Belt, the entire sedimentary sequence of the Grand Canyon appears over a length of less than a meter. Rocks at depths where they can be ductilely stretched are often also metamorphosed. These stretched rocks can also pinch into lenses, known as boudins, after the French word for "sausage", because of their visual similarity. Where rock units slide past one another, strike-slip faults develop in shallow regions, and become shear zones at deeper depths where the rocks deform ductilely. The addition of new rock units, both depositionally and intrusively, often occurs during deformation. Faulting and other deformational processes result in the creation of topographic gradients, causing material on the rock unit that is increasing in elevation to be eroded by hillslopes and channels. These sediments are deposited on the rock unit that is going down. Continual motion along the fault maintains the topographic gradient in spite of the movement of sediment and continues to create accommodation space for the material to deposit. Deformational events are often also associated with volcanism and igneous activity. Volcanic ashes and lavas accumulate on the surface, and igneous intrusions enter from below. Dikes, long, planar igneous intrusions, enter along cracks, and therefore often form in large numbers in areas that are being actively deformed. This can result in the emplacement of dike swarms, such as those that are observable across the Canadian Shield, or rings of dikes around the lava tube of a volcano. All of these processes do not necessarily occur in a single environment and do not necessarily occur in a single order.
The Hawaiian Islands, for example, consist almost entirely of layered basaltic lava flows. The sedimentary sequences of the mid-continental United States and the Grand Canyon in the southwestern United States contain almost-undeformed stacks of sedimentary rocks that have remained in place since Cambrian time. Other areas are much more geologically complex. In the southwestern United States, sedimentary, volcanic, and intrusive rocks have been metamorphosed, faulted, foliated, and folded. Even older rocks, such as the Acasta gneiss of the Slave craton in northwestern Canada, the oldest known rock in the world, have been metamorphosed to the point where their origin is indiscernible without laboratory analysis. In addition, these processes can occur in stages. In many places, the Grand Canyon in the southwestern United States being a very visible example, the lower rock units were metamorphosed and deformed, and then deformation ended and the upper, undeformed units were deposited. Although any amount of rock emplacement and rock deformation can occur, and they can occur any number of times, these concepts provide a guide to understanding the geological history of an area.

Investigative methods

Geologists use a number of field, laboratory, and numerical modeling methods to decipher Earth history and to understand the processes that occur on and inside the Earth. In typical geological investigations, geologists use primary information related to petrology (the study of rocks), stratigraphy (the study of sedimentary layers), and structural geology (the study of positions of rock units and their deformation). In many cases, geologists also study modern soils, rivers, landscapes, and glaciers; investigate past and current life and biogeochemical pathways; and use geophysical methods to investigate the subsurface. Sub-specialties of geology may distinguish endogenous and exogenous geology.

Field methods

Geological field work varies depending on the task at hand. Typical fieldwork could consist of:

Geological mapping
  Structural mapping: identifying the locations of major rock units and the faults and folds that led to their placement there
  Stratigraphic mapping: pinpointing the locations of sedimentary facies (lithofacies and biofacies) or the mapping of isopachs of equal thickness of sedimentary rock
  Surficial mapping: recording the locations of soils and surficial deposits
Surveying of topographic features
  Compilation of topographic maps
  Work to understand change across landscapes, including:
    Patterns of erosion and deposition
    River-channel change through migration and avulsion
    Hillslope processes
Subsurface mapping through geophysical methods
  These methods include:
    Shallow seismic surveys
    Ground-penetrating radar
    Aeromagnetic surveys
    Electrical resistivity tomography
  They aid in:
    Hydrocarbon exploration
    Finding groundwater
    Locating buried archaeological artifacts
High-resolution stratigraphy
  Measuring and describing stratigraphic sections on the surface
  Well drilling and logging
Biogeochemistry and geomicrobiology
  Collecting samples to:
    determine biochemical pathways
    identify new species of organisms
    identify new chemical compounds
  and to use these discoveries to:
    understand early life on Earth and how it functioned and metabolized
    find important compounds for use in pharmaceuticals
Paleontology: excavation of fossil material
  For research into past life and evolution
  For museums and education
Collection of samples for geochronology and thermochronology
Glaciology: measurement of characteristics of glaciers and their motion

Petrology

In addition to identifying rocks in the field (lithology), petrologists identify rock samples in the laboratory. Two of the primary methods for identifying rocks in the laboratory are through optical microscopy and by using an electron microprobe. In an optical mineralogy analysis, petrologists analyze thin sections of rock samples using a petrographic microscope, where the minerals can be identified through their different properties in plane-polarized and cross-polarized light, including their birefringence, pleochroism, twinning, and interference properties with a conoscopic lens. In the electron microprobe, individual locations are analyzed for their exact chemical compositions and variation in composition within individual crystals. Stable and radioactive isotope studies provide insight into the geochemical evolution of rock units. Petrologists can also use fluid inclusion data and perform high temperature and pressure physical experiments to understand the temperatures and pressures at which different mineral phases appear, and how they change through igneous and metamorphic processes. This research can be extrapolated to the field to understand metamorphic processes and the conditions of crystallization of igneous rocks. This work can also help to explain processes that occur within the Earth, such as subduction and magma chamber evolution.

Structural geology

Structural geologists use microscopic analysis of oriented thin sections of geological samples to observe the fabric within the rocks, which gives information about strain within the crystalline structure of the rocks. They also plot and combine measurements of geological structures to better understand the orientations of faults and folds to reconstruct the history of rock deformation in the area. In addition, they perform analog and numerical experiments of rock deformation in large and small settings. The analysis of structures is often accomplished by plotting the orientations of various features onto stereonets.
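A minimal sketch of that plotting step follows. It assumes an equal-angle (Wulff) net of unit radius and uses the standard lower-hemisphere construction, r = tan(45° − plunge/2), to place a single measured line; the trend and plunge values in the example are hypothetical.

```python
import math

def wulff_xy(trend_deg: float, plunge_deg: float, radius: float = 1.0):
    """Project a line (trend, plunge) onto an equal-angle (Wulff) stereonet.

    Lower-hemisphere projection: radial distance r = R * tan(45 - plunge/2),
    plotted at the trend azimuth measured clockwise from north (+y axis).
    """
    r = radius * math.tan(math.radians(45.0 - plunge_deg / 2.0))
    t = math.radians(trend_deg)
    x = r * math.sin(t)   # east component
    y = r * math.cos(t)   # north component
    return x, y

if __name__ == "__main__":
    # Hypothetical measurement: a fold axis trending 060 and plunging 30 degrees.
    x, y = wulff_xy(trend_deg=60.0, plunge_deg=30.0)
    print(f"Plot at x={x:.3f}, y={y:.3f} on a unit-radius net")
```

A vertical line (plunge 90°) plots at the center of the net and a horizontal line on the primitive circle, which is a quick way to check the formula.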
A stereonet is a stereographic projection of a sphere onto a plane, in which planes are projected as lines and lines are projected as points. These can be used to find the locations of fold axes, relationships between faults, and relationships between other geological structures. Among the most well-known experiments in structural geology are those involving orogenic wedges, which are zones in which mountains are built along convergent tectonic plate boundaries. In the analog versions of these experiments, horizontal layers of sand are pulled along a lower surface into a back stop, which results in realistic-looking patterns of faulting and the growth of a critically tapered (all angles remain the same) orogenic wedge. Numerical models work in the same way as these analog models, though they are often more sophisticated and can include patterns of erosion and uplift in the mountain belt. This helps to show the relationship between erosion and the shape of a mountain range. These studies can also give useful information about pathways for metamorphism through pressure, temperature, space, and time. Stratigraphy In the laboratory, stratigraphers analyze samples of stratigraphic sections that can be returned from the field, such as those from drill cores. Stratigraphers also analyze data from geophysical surveys that show the locations of stratigraphic units in the subsurface. Geophysical data and well logs can be combined to produce a better view of the subsurface, and stratigraphers often use computer programs to do this in three dimensions. Stratigraphers can then use these data to reconstruct ancient processes occurring on the surface of the Earth, interpret past environments, and locate areas for water, coal, and hydrocarbon extraction. In the laboratory, biostratigraphers analyze rock samples from outcrop and drill cores for the fossils found in them. These fossils help scientists to date the core and to understand the depositional environment in which the rock units formed. Geochronologists precisely date rocks within the stratigraphic section to provide better absolute bounds on the timing and rates of deposition. Magnetic stratigraphers look for signs of magnetic reversals in igneous rock units within the drill cores. Other scientists perform stable-isotope studies on the rocks to gain information about past climate. Planetary geology With the advent of space exploration in the twentieth century, geologists have begun to look at other planetary bodies in the same ways that have been developed to study the Earth. This new field of study is called planetary geology (sometimes known as astrogeology) and relies on known geological principles to study other bodies of the solar system. This is a major aspect of planetary science, and largely focuses on the terrestrial planets, icy moons, asteroids, comets, and meteorites. However, some planetary geophysicists study the giant planets and exoplanets. Although the Greek-language-origin prefix geo refers to Earth, "geology" is often used in conjunction with the names of other planetary bodies when describing their composition and internal processes: examples are "the geology of Mars" and "Lunar geology". Specialized terms such as selenology (studies of the Moon), areology (of Mars), etc., are also in use. Although planetary geologists are interested in studying all aspects of other planets, a significant focus is to search for evidence of past or present life on other worlds. 
This has led to many missions whose primary or ancillary purpose is to examine planetary bodies for evidence of life. One of these is the Phoenix lander, which analyzed Martian polar soil for water, chemical, and mineralogical constituents related to biological processes.

Applied geology

Economic geology

Economic geology is a branch of geology that deals with aspects of economic minerals that humankind uses to fulfill various needs. Economic minerals are those extracted profitably for various practical uses. Economic geologists help locate and manage the Earth's natural resources, such as petroleum and coal, as well as mineral resources, which include metals such as iron, copper, and uranium.

Mining geology

Mining geology consists of the extraction of mineral and ore resources from the Earth. Some resources of economic interest include gemstones, metals such as gold and copper, and many minerals such as asbestos, magnesite, perlite, mica, phosphates, zeolites, clay, pumice, quartz, and silica, as well as elements such as sulfur, chlorine, and helium.

Petroleum geology

Petroleum geologists study the parts of the Earth's subsurface that can contain extractable hydrocarbons, especially petroleum and natural gas. Because many of these reservoirs are found in sedimentary basins, they study the formation of these basins, as well as their sedimentary and tectonic evolution and the present-day positions of the rock units.

Engineering geology

Engineering geology is the application of geological principles to engineering practice for the purpose of assuring that the geological factors affecting the location, design, construction, operation, and maintenance of engineering works are properly addressed. Engineering geology is distinct from geological engineering, particularly in North America. In the field of civil engineering, geological principles and analyses are used in order to ascertain the mechanical properties of the material on which structures are built. This allows tunnels to be built without collapsing, bridges and skyscrapers to be built with sturdy foundations, and buildings to be built that will not settle in clay and mud.

Hydrology

Geology and geological principles can be applied to various environmental problems such as stream restoration, the restoration of brownfields, and the understanding of the interaction between natural habitat and the geological environment. Groundwater hydrology, or hydrogeology, is used to locate groundwater, which can often provide a ready supply of uncontaminated water and is especially important in arid regions, and to monitor the spread of contaminants in groundwater wells.

Paleoclimatology

Geologists also obtain data through stratigraphy, boreholes, core samples, and ice cores. Ice cores and sediment cores are used for paleoclimate reconstructions, which tell geologists about past and present temperature, precipitation, and sea level across the globe. These datasets are our primary source of information on global climate change outside of instrumental data.

Natural hazards

Geologists and geophysicists study natural hazards in order to enact safe building codes and warning systems that are used to prevent loss of property and life. Examples of natural hazards that are pertinent to geology (as opposed to those that are mainly or only pertinent to meteorology) include earthquakes, volcanic eruptions, tsunamis, landslides, and sinkholes.

History

The study of the physical material of the Earth dates back at least to ancient Greece, when Theophrastus (372–287 BCE) wrote the work Peri Lithon (On Stones).
During the Roman period, Pliny the Elder wrote in detail of the many minerals and metals, then in practical use – even correctly noting the origin of amber. Additionally, in the 4th century BCE Aristotle made critical observations of the slow rate of geological change. He observed the composition of the land and formulated a theory where the Earth changes at a slow rate and that these changes cannot be observed during one person's lifetime. Aristotle developed one of the first evidence-based concepts connected to the geological realm regarding the rate at which the Earth physically changes. Abu al-Rayhan al-Biruni (973–1048 CE) was one of the earliest Persian geologists, whose works included the earliest writings on the geology of India, hypothesizing that the Indian subcontinent was once a sea. Drawing from Greek and Indian scientific literature that were not destroyed by the Muslim conquests, the Persian scholar Ibn Sina (Avicenna, 981–1037) proposed detailed explanations for the formation of mountains, the origin of earthquakes, and other topics central to modern geology, which provided an essential foundation for the later development of the science. In China, the polymath Shen Kuo (1031–1095) formulated a hypothesis for the process of land formation: based on his observation of fossil animal shells in a geological stratum in a mountain hundreds of miles from the ocean, he inferred that the land was formed by the erosion of the mountains and by deposition of silt. Georgius Agricola (1494–1555) published his groundbreaking work De Natura Fossilium in 1546 and is seen as the founder of geology as a scientific discipline. Nicolas Steno (1638–1686) is credited with the law of superposition, the principle of original horizontality, and the principle of lateral continuity: three defining principles of stratigraphy. The word geology was first used by Ulisse Aldrovandi in 1603, then by Jean-André Deluc in 1778 and introduced as a fixed term by Horace-Bénédict de Saussure in 1779. The word is derived from the Greek γῆ, gê, meaning "earth" and λόγος, logos, meaning "speech". But according to another source, the word "geology" comes from a Norwegian, Mikkel Pedersøn Escholt (1600–1669), who was a priest and scholar. Escholt first used the definition in his book titled, Geologia Norvegica (1657). William Smith (1769–1839) drew some of the first geological maps and began the process of ordering rock strata (layers) by examining the fossils contained in them. In 1763, Mikhail Lomonosov published his treatise On the Strata of Earth. His work was the first narrative of modern geology, based on the unity of processes in time and explanation of the Earth's past from the present. James Hutton (1726–1797) is often viewed as the first modern geologist. In 1785 he presented a paper entitled Theory of the Earth to the Royal Society of Edinburgh. In his paper, he explained his theory that the Earth must be much older than had previously been supposed to allow enough time for mountains to be eroded and for sediments to form new rocks at the bottom of the sea, which in turn were raised up to become dry land. Hutton published a two-volume version of his ideas in 1795. Followers of Hutton were known as Plutonists because they believed that some rocks were formed by vulcanism, which is the deposition of lava from volcanoes, as opposed to the Neptunists, led by Abraham Werner, who believed that all rocks had settled out of a large ocean whose level gradually dropped over time. The first geological map of the U.S. 
was produced in 1809 by William Maclure. In 1807, Maclure commenced the self-imposed task of making a geological survey of the United States. Almost every state in the Union was traversed and mapped by him, the Allegheny Mountains being crossed and recrossed some 50 times. The results of his unaided labours were submitted to the American Philosophical Society in a memoir entitled Observations on the Geology of the United States explanatory of a Geological Map, and published in the Society's Transactions, together with the nation's first geological map. This antedates William Smith's geological map of England by six years, although it was constructed using a different classification of rocks. Sir Charles Lyell (1797–1875) first published his famous book, Principles of Geology, in 1830. This book, which influenced the thought of Charles Darwin, successfully promoted the doctrine of uniformitarianism. This theory states that slow geological processes have occurred throughout the Earth's history and are still occurring today. In contrast, catastrophism is the theory that Earth's features formed in single, catastrophic events and remained unchanged thereafter. Though Hutton believed in uniformitarianism, the idea was not widely accepted at the time. Much of 19th-century geology revolved around the question of the Earth's exact age. Estimates varied from a few hundred thousand to billions of years. By the early 20th century, radiometric dating allowed the Earth's age to be estimated at two billion years. The awareness of this vast amount of time opened the door to new theories about the processes that shaped the planet. Some of the most significant advances in 20th-century geology have been the development of the theory of plate tectonics in the 1960s and the refinement of estimates of the planet's age. Plate tectonics theory arose from two separate geological observations: seafloor spreading and continental drift. The theory revolutionized the Earth sciences. Today the Earth is known to be approximately 4.5 billion years old.

Fields or related disciplines

Earth system science
Economic geology
Mining geology
Petroleum geology
Engineering geology
Environmental geology
Environmental science
Geoarchaeology
Geochemistry
Biogeochemistry
Isotope geochemistry
Geochronology
Geodetics
Geography
Physical geography
Technical geography
Geological engineering
Geological modelling
Geometallurgy
Geomicrobiology
Geomorphology
Geomythology
Geophysics
Glaciology
Historical geology
Hydrogeology
Meteorology
Mineralogy
Oceanography
Marine geology
Paleoclimatology
Paleontology
Micropaleontology
Palynology
Petrology
Petrophysics
Planetary geology
Plate tectonics
Regional geology
Sedimentology
Seismology
Soil science
Pedology (soil study)
Speleology
Stratigraphy
Biostratigraphy
Chronostratigraphy
Lithostratigraphy
Structural geology
Systems geology
Tectonics
Volcanology

See also

List of individual rocks

External links

One Geology: This interactive geological map of the world is an international initiative of the geological surveys around the globe. This groundbreaking project was launched in 2007 and contributed to the 'International Year of Planet Earth', becoming one of their flagship projects.
Earth Science News, Maps, Dictionary, Articles, Jobs
American Geophysical Union
American Geosciences Institute
European Geosciences Union
European Federation of Geologists
Geological Society of America
Geological Society of London
Video-interviews with famous geologists
Geology OpenTextbook
Chronostratigraphy benchmarks
The principles and objects of geology, with special reference to the geology of Egypt (1911), W. F. Hume
A New Kind of Science
A New Kind of Science is a book by Stephen Wolfram, published by his company Wolfram Research under the imprint Wolfram Media in 2002. It contains an empirical and systematic study of computational systems such as cellular automata. Wolfram calls these systems simple programs and argues that the scientific philosophy and methods appropriate for the study of simple programs are relevant to other fields of science. Contents Computation and its implications The thesis of A New Kind of Science (NKS) is twofold: that the nature of computation must be explored experimentally, and that the results of these experiments have great relevance to understanding the physical world. Simple programs The basic subject of Wolfram's "new kind of science" is the study of simple abstract rules—essentially, elementary computer programs. In almost any class of a computational system, one very quickly finds instances of great complexity among its simplest cases (after a time series of multiple iterative loops, applying the same simple set of rules on itself, similar to a self-reinforcing cycle using a set of rules). This seems to be true regardless of the components of the system and the details of its setup. Systems explored in the book include, amongst others, cellular automata in one, two, and three dimensions; mobile automata; Turing machines in 1 and 2 dimensions; several varieties of substitution and network systems; recursive functions; nested recursive functions; combinators; tag systems; register machines; reversal-addition. For a program to qualify as simple, there are several requirements: Its operation can be completely explained by a simple graphical illustration. It can be completely explained in a few sentences of human language. It can be implemented in a computer language using just a few lines of code. The number of its possible variations is small enough so that all of them can be computed. Generally, simple programs tend to have a very simple abstract framework. Simple cellular automata, Turing machines, and combinators are examples of such frameworks, while more complex cellular automata do not necessarily qualify as simple programs. It is also possible to invent new frameworks, particularly to capture the operation of natural systems. The remarkable feature of simple programs is that a significant percentage of them are capable of producing great complexity. Simply enumerating all possible variations of almost any class of programs quickly leads one to examples that do unexpected and interesting things. This leads to the question: if the program is so simple, where does the complexity come from? In a sense, there is not enough room in the program's definition to directly encode all the things the program can do. Therefore, simple programs can be seen as a minimal example of emergence. A logical deduction from this phenomenon is that if the details of the program's rules have little direct relationship to its behavior, then it is very difficult to directly engineer a simple program to perform a specific behavior. An alternative approach is to try to engineer a simple overall computational framework, and then do a brute-force search through all of the possible components for the best match. Simple programs are capable of a remarkable range of behavior. Some have been proven to be universal computers. 
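As a concrete illustration of how little machinery such a system needs, the sketch below runs an elementary cellular automaton of the kind the book catalogues: each cell is 0 or 1, and a cell's next value depends only on its current value and its two neighbours, so an entire rule fits in an eight-entry table indexed by the rule number. Rule 110, used here, is the case later proven universal; Rule 30 is another much-discussed example. This is a generic reimplementation written for illustration, not code from the book.

```python
def eca_step(cells: list[int], rule: int) -> list[int]:
    """One step of an elementary cellular automaton with periodic boundaries.

    The 8-bit rule number encodes the new cell value for each of the eight
    possible (left, center, right) neighbourhoods.
    """
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (center << 1) | right
        out.append((rule >> index) & 1)
    return out

def run_eca(rule: int = 110, width: int = 64, steps: int = 32) -> None:
    """Print the evolution from a single 'on' cell, one row per time step."""
    cells = [0] * width
    cells[width // 2] = 1
    for _ in range(steps):
        print("".join("#" if c else "." for c in cells))
        cells = eca_step(cells, rule)

if __name__ == "__main__":
    run_eca(rule=110)   # try rule=30 for the intrinsically random-looking case
```

Changing the rule argument is all it takes to wander through the space of 256 such programs, which is the kind of exhaustive survey the following sections describe.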
Others exhibit properties familiar from traditional science, such as thermodynamic behavior, continuum behavior, conserved quantities, percolation, sensitive dependence on initial conditions, and others. They have been used as models of traffic, material fracture, crystal growth, biological growth, and various sociological, geological, and ecological phenomena. Another feature of simple programs is that, according to the book, making them more complicated seems to have little effect on their overall complexity. A New Kind of Science argues that this is evidence that simple programs are enough to capture the essence of almost any complex system. Mapping and mining the computational universe In order to study simple rules and their often-complex behaviour, Wolfram argues that it is necessary to systematically explore all of these computational systems and document what they do. He further argues that this study should become a new branch of science, like physics or chemistry. The basic goal of this field is to understand and characterize the computational universe using experimental methods. The proposed new branch of scientific exploration admits many different forms of scientific production. For instance, qualitative classifications are often the results of initial forays into the computational jungle. On the other hand, explicit proofs that certain systems compute this or that function are also admissible. There are also some forms of production that are in some ways unique to this field of study. For example, the discovery of computational mechanisms that emerge in different systems but in bizarrely different forms. Another type of production involves the creation of programs for the analysis of computational systems. In the NKS framework, these themselves should be simple programs, and subject to the same goals and methodology. An extension of this idea is that the human mind is itself a computational system, and hence providing it with raw data in as effective a way as possible is crucial to research. Wolfram believes that programs and their analysis should be visualized as directly as possible, and exhaustively examined by the thousands or more. Since this new field concerns abstract rules, it can in principle address issues relevant to other fields of science. However, in general Wolfram's idea is that novel ideas and mechanisms can be discovered in the computational universe, where they can be represented in their simplest forms, and then other fields can choose among these discoveries for those they find relevant. Systematic abstract science While Wolfram advocates simple programs as a scientific discipline, he also argues that its methodology will revolutionize other fields of science. The basis of his argument is that the study of simple programs is the minimal possible form of science, grounded equally in both abstraction and empirical experimentation. Every aspect of the methodology advocated in NKS is optimized to make experimentation as direct, easy, and meaningful as possible while maximizing the chances that the experiment will do something unexpected. Just as this methodology allows computational mechanisms to be studied in their simplest forms, Wolfram argues that the process of doing so engages with the mathematical basis of the physical world, and therefore has much to offer the sciences. Wolfram argues that the computational realities of the universe make science hard for fundamental reasons. 
But he also argues that by understanding the importance of these realities, we can learn to use them in our favor. For instance, instead of reverse engineering our theories from observation, we can enumerate systems and then try to match them to the behaviors we observe. A major theme of NKS is investigating the structure of the possibility space. Wolfram argues that science is far too ad hoc, in part because the models used are too complicated and unnecessarily organized around the limited primitives of traditional mathematics. Wolfram advocates using models whose variations are enumerable and whose consequences are straightforward to compute and analyze. Philosophical underpinnings Computational irreducibility Wolfram argues that one of his achievements is in providing a coherent system of ideas that justifies computation as an organizing principle of science. For instance, he argues that the concept of computational irreducibility (that some complex computations are not amenable to short-cuts and cannot be "reduced"), is ultimately the reason why computational models of nature must be considered in addition to traditional mathematical models. Likewise, his idea of intrinsic randomness generation—that natural systems can generate their own randomness, rather than using chaos theory or stochastic perturbations—implies that computational models do not need to include explicit randomness. Principle of computational equivalence Based on his experimental results, Wolfram developed the principle of computational equivalence (PCE): the principle states that systems found in the natural world can perform computations up to a maximal ("universal") level of computational power. Most systems can attain this level. Systems, in principle, compute the same things as a computer. Computation is therefore simply a question of translating input and outputs from one system to another. Consequently, most systems are computationally equivalent. Proposed examples of such systems are the workings of the human brain and the evolution of weather systems. The principle can be restated as follows: almost all processes that are not obviously simple are of equivalent sophistication. From this principle, Wolfram draws an array of concrete deductions which he argues reinforce his theory. Possibly the most important among these is an explanation as to why we experience randomness and complexity: often, the systems we analyze are just as sophisticated as we are. Thus, complexity is not a special quality of systems, like for instance the concept of "heat," but simply a label for all systems whose computations are sophisticated. Wolfram argues that understanding this makes possible the "normal science" of the NKS paradigm. Applications and results There are a number of specific results and ideas in the NKS book, and they can be organized into several themes. One common theme of examples and applications is demonstrating how little complexity it takes to achieve interesting behavior, and how the proper methodology can discover this behavior. First, there are several cases where the NKS book introduces what was, during the book's composition, the simplest known system in some class that has a particular characteristic. Some examples include the first primitive recursive function that results in complexity, the smallest universal Turing machine, and the shortest axiom for propositional calculus. 
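The "mine the computational universe" methodology described above is easy to caricature in code: enumerate every program in a small class and apply a crude, automated screen to flag the interesting ones. The sketch below does this for the 256 elementary cellular automaton rules, using compressibility of the late rows as a rough stand-in for "complex-looking" behaviour; the compression measure and the threshold are arbitrary illustrative choices, not the book's classification scheme.

```python
import zlib

def eca_row(cells, rule):
    """One update of an elementary cellular automaton (periodic boundaries)."""
    n = len(cells)
    return [
        (rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def roughly_complex(rule, width=101, steps=200):
    """Crude screen: evolve from a single cell and ask whether the late rows
    compress poorly (arbitrary threshold), as a stand-in for visual complexity."""
    cells = [0] * width
    cells[width // 2] = 1
    history = []
    for _ in range(steps):
        cells = eca_row(cells, rule)
        history.append(bytes(cells))
    tail = b"".join(history[steps // 2:])          # ignore the initial transient
    ratio = len(zlib.compress(tail)) / len(tail)   # small ratio => repetitive
    return ratio > 0.05

if __name__ == "__main__":
    flagged = [rule for rule in range(256) if roughly_complex(rule)]
    print(f"{len(flagged)} of 256 rules flagged, e.g. {flagged[:10]}")
```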
In a similar vein, Wolfram also demonstrates many simple programs that exhibit phenomena like phase transitions, conserved quantities, continuum behavior, and thermodynamics that are familiar from traditional science. Simple computational models of natural systems like shell growth, fluid turbulence, and phyllotaxis are a final category of applications that fall in this theme. Another common theme is taking facts about the computational universe as a whole and using them to reason about fields in a holistic way. For instance, Wolfram discusses how facts about the computational universe inform evolutionary theory, SETI, free will, computational complexity theory, and philosophical fields like ontology, epistemology, and even postmodernism. Wolfram suggests that the theory of computational irreducibility may provide a resolution to the existence of free will in a nominally deterministic universe. He posits that the computational process in the brain of the being with free will is actually complex enough so that it cannot be captured in a simpler computation, due to the principle of computational irreducibility. Thus, while the process is indeed deterministic, there is no better way to determine the being's will than, in essence, to run the experiment and let the being exercise it. The book also contains a number of individual results—both experimental and analytic—about what a particular automaton computes, or what its characteristics are, using some methods of analysis. The book contains a new technical result in describing the Turing completeness of the Rule 110 cellular automaton. Very small Turing machines can simulate Rule 110, which Wolfram demonstrates using a 2-state 5-symbol universal Turing machine. Wolfram conjectures that a particular 2-state 3-symbol Turing machine is universal. In 2007, as part of commemorating the book's fifth anniversary, Wolfram's company offered a $25,000 prize for proof that this Turing machine is universal. Alex Smith, a computer science student from Birmingham, UK, won the prize later that year by proving Wolfram's conjecture. Reception Periodicals gave A New Kind of Science coverage, including articles in The New York Times, Newsweek, Wired, and The Economist. Some scientists criticized the book as abrasive and arrogant, and perceived a fatal flaw—that simple systems such as cellular automata are not complex enough to describe the degree of complexity present in evolved systems, and observed that Wolfram ignored the research categorizing the complexity of systems. Although critics accept Wolfram's result showing universal computation, they view it as minor and dispute Wolfram's claim of a paradigm shift. Others found that the work contained valuable insights and refreshing ideas. Wolfram addressed his critics in a series of blog posts. Scientific philosophy A tenet of NKS is that the simpler the system, the more likely a version of it will recur in a wide variety of more complicated contexts. Therefore, NKS argues that systematically exploring the space of simple programs will lead to a base of reusable knowledge. However, many scientists believe that of all possible parameters, only some actually occur in the universe. For instance, of all possible permutations of the symbols making up an equation, most will be essentially meaningless. NKS has also been criticized for asserting that the behavior of simple systems is somehow representative of all systems. 
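Since small universal Turing machines figure prominently in the results above (the 2-state 5-symbol machine used to simulate Rule 110, and the 2-state 3-symbol machine whose universality Smith proved), a sketch of what simulating a k-state, m-symbol Turing machine involves may help. The transition table below is a standard small example (the 2-state busy beaver), chosen only to show the mechanics; it is not the machine discussed in NKS.

from collections import defaultdict

# Transition table: (state, symbol) -> (symbol to write, head move, next state).
RULES = {
    ("A", 0): (1, +1, "B"),
    ("A", 1): (1, -1, "B"),
    ("B", 0): (1, -1, "A"),
    ("B", 1): (1, +1, "HALT"),
}

def run_turing_machine(rules, max_steps=100):
    tape = defaultdict(int)   # blank tape of 0s, unbounded in both directions
    head, state = 0, "A"
    for _ in range(max_steps):
        if state == "HALT":
            break
        symbol = tape[head]
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += move
    return state, dict(tape)

final_state, tape = run_turing_machine(RULES)
print(final_state, sorted(tape.items()))

Proving universality amounts to showing that some fixed table like this, however small, can emulate any other table when fed a suitably encoded input, which is what makes the 2,3 machine result notable.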
Methodology A common criticism of NKS is that it does not follow established scientific methodology. For instance, NKS does not establish rigorous mathematical definitions, nor does it attempt to prove theorems; and most formulas and equations are written in Mathematica rather than standard notation. Along these lines, NKS has also been criticized for being heavily visual, with much information conveyed by pictures that do not have formal meaning. It has also been criticized for not using modern research in the field of complexity, particularly the works that have studied complexity from a rigorous mathematical perspective. And it has been criticized for misrepresenting chaos theory. Utility NKS has been criticized for not providing specific results that would be immediately applicable to ongoing scientific research. There has also been criticism, implicit and explicit, that the study of simple programs has little connection to the physical universe, and hence is of limited value. Steven Weinberg has pointed out that no real-world system has been explained using Wolfram's methods in a satisfactory fashion. Mathematician Steven G. Krantz wrote, "Just because Wolfram can cook up a cellular automaton that seems to produce the spot pattern on a leopard, may we safely conclude that he understands the mechanism by which the spots are produced on the leopard, or why the spots are there, or what function (evolutionary or mating or camouflage or other) they perform?" Principle of computational equivalence (PCE) The principle of computational equivalence (PCE) has been criticized for being vague, unmathematical, and for not making directly verifiable predictions. It has also been criticized for being contrary to the spirit of research in mathematical logic and computational complexity theory, which seek to make fine-grained distinctions between levels of computational sophistication, and for wrongly conflating different kinds of universality property. Moreover, critics such as Ray Kurzweil have argued that it ignores the distinction between hardware and software; while two computers may be equivalent in power, it does not follow that any two programs they might run are also equivalent. Others suggest it is little more than a rechristening of the Church–Turing thesis. The fundamental theory (NKS Chapter 9) Wolfram's speculations about a direction towards a fundamental theory of physics have been criticized as vague and obsolete. Scott Aaronson, Professor of Computer Science at the University of Texas at Austin, also claims that Wolfram's methods cannot be compatible with both special relativity and the observed violations of Bell inequalities, and hence cannot explain the observed results of Bell tests. Konrad Zuse and Edward Fredkin pioneered the idea of a computable universe: Zuse proposed in his book Calculating Space that the world might be like a cellular automaton, and Fredkin later developed the idea further with a toy model called Salt. It has been claimed that NKS tries to take these ideas as its own, although Wolfram's model of the universe is a rewriting network rather than a cellular automaton; Wolfram himself has suggested that a cellular automaton cannot account for relativistic features such as the absence of an absolute time frame. Jürgen Schmidhuber has also charged that his work on Turing machine-computable physics, namely his idea of enumerating possible Turing-computable universes, was appropriated without attribution.
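To clarify the distinction drawn above between cellular automata and rewriting systems, here is a minimal sketch of sequential string rewriting, in which the first substring matching a rule is replaced at each step. The rules themselves are invented for illustration and are a simplified stand-in for Wolfram's actual network-rewriting models, where it is the connectivity of a graph rather than a character string that gets rewritten.

# Illustrative rewrite rules: each left-hand side is replaced by its right-hand side.
RULES = [("AB", "BA"), ("BB", "A")]

def rewrite_step(s, rules):
    # Apply the first applicable rule at its left-most match, or return None.
    for lhs, rhs in rules:
        i = s.find(lhs)
        if i != -1:
            return s[:i] + rhs + s[i + len(lhs):]
    return None

def evolve(s, rules, max_steps=20):
    history = [s]
    for _ in range(max_steps):
        nxt = rewrite_step(s, rules)
        if nxt is None:          # no rule applies: the system has halted
            break
        s = nxt
        history.append(s)
    return history

for line in evolve("ABBABA", RULES):
    print(line)

Unlike a cellular automaton, where every cell updates in lockstep against a global clock, nothing in a rewriting system fixes a single order of updates, which is loosely why Wolfram considers such models better suited to relativity's lack of an absolute time frame.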
In a 2002 review of NKS, the Nobel laureate and elementary particle physicist Steven Weinberg wrote, "Wolfram himself is a lapsed elementary particle physicist, and I suppose he can't resist trying to apply his experience with digital computer programs to the laws of nature. This has led him to the view (also considered in a 1981 paper by Richard Feynman) that nature is discrete rather than continuous. He suggests that space consists of a set of isolated points, like cells in a cellular automaton, and that even time flows in discrete steps. Following an idea of Edward Fredkin, he concludes that the universe itself would then be an automaton, like a giant computer. It's possible, but I can't see any motivation for these speculations, except that this is the sort of system that Wolfram and others have become used to in their work on computers. So might a carpenter, looking at the moon, suppose that it is made of wood." Natural selection Wolfram's claim that natural selection is not the fundamental cause of complexity in biology has led journalist Chris Lavers to state that Wolfram does not understand the theory of evolution. Originality NKS has been heavily criticized as not being original or important enough to justify its title and claims. The authoritative manner in which NKS presents a vast number of examples and arguments has been criticized as leading the reader to believe that each of these ideas was original to Wolfram; in particular, one of the most substantial new technical results presented in the book, that the rule 110 cellular automaton is Turing complete, was not proven by Wolfram. Wolfram credits the proof to his research assistant, Matthew Cook. However, the notes section at the end of his book acknowledges many of the discoveries made by these other scientists citing their names together with historical facts, although not in the form of a traditional bibliography section. Additionally, the idea that very simple rules often generate great complexity is already an established idea in science, particularly in chaos theory and complex systems. See also Digital physics Scientific reductionism Calculating Space Marcus Hutter's "Universal Artificial Intelligence" algorithm References External links A New Kind of Science free E-Book What We've Learned from NKS YouTube playlist — extensive discussion of each NKS chapter; (As of 2022, Stephen Wolfram discusses the NKS chapters in view of recent developments. Wolfram Physics Project) 2002 non-fiction books Algorithmic art Cellular automata Computer science books Complex systems theory Mathematics and art Metatheory of science Science books Self-organization Systems theory books Wolfram Research Computational science
0.787976
0.986549
0.777377
Enculturation
Enculturation is the process by which people learn the dynamics of their surrounding culture and acquire values and norms appropriate or necessary to that culture and its worldviews. Definition and history of research The term enculturation was used first by sociologist of science Harry Collins to describe one of the models whereby scientific knowledge is communicated among scientists, and is contrasted with the 'algorithmic' mode of communication. The ingredients discussed by Collins for enculturation are Learning by Immersion: whereby aspiring scientists learn by engaging in the daily activities of the laboratory, interacting with other scientists, and participating in experiments and discussions. Tacit Knowledge: highlighting the importance of tacit knowledge—knowledge that is not easily codified or written down but is acquired through experience and practice. Socialization: where individuals learn the social norms, values, and behaviours expected within the scientific community. Language and Discourse: Scientists must become fluent in the terminology, theoretical frameworks, and modes of argumentation specific to their discipline. Community Membership: recognition of the individual as a legitimate member of the scientific community. The problem tackled in the article of Harry Collins was the early experiments for the detection of gravitational waves. Enculturation is mostly studied in sociology and anthropology. The influences that limit, direct, or shape the individual (whether deliberately or not) include parents, other adults, and peers. If successful, enculturation results in competence in the language, values, and rituals of the culture. Growing up, everyone goes through their own version of enculturation. Enculturation helps form an individual into an acceptable citizen. Culture impacts everything that an individual does, regardless of whether they know about it. Enculturation is a deep-rooted process that binds together individuals. Even as a culture undergoes changes, elements such as central convictions, values, perspectives, and young raising practices remain similar. Enculturation paves way for tolerance which is highly needed for peaceful co-habitance. The process of enculturation, most commonly discussed in the field of anthropology, is closely related to socialization, a concept central to the field of sociology. Both roughly describe the adaptation of an individual into social groups by absorbing the ideas, beliefs and practices surrounding them. In some disciplines, socialization refers to the deliberate shaping of the individual. As such, the term may cover both deliberate and informal enculturation. The process of learning and absorbing culture need not be social, direct or conscious. Cultural transmission can occur in various forms, though the most common social methods include observing other individuals, being taught or being instructed. Less obvious mechanisms include learning one's culture from the media, the information environment and various social technologies, which can lead to cultural transmission and adaptation across societies. A good example of this is the diffusion of hip-hop culture into states and communities beyond its American origins. Enculturation has often been studied in the context of non-immigrant African Americans. Conrad Phillip Kottak (in Window on Humanity) writes: Enculturation is referred to as acculturation in some academic literature. However, more recent literature has signalled a difference in meaning between the two. 
Whereas enculturation describes the process of learning one's own culture, acculturation denotes learning a different culture, for example, that of a host. The latter can be linked to the idea of culture shock, which describes an emotionally jarring disconnect between the cues of one's old and new cultures. Famously, the sociologist Talcott Parsons once described children as "barbarians" of a sort, since they are fundamentally uncultured. How enculturation occurs Minority groups arriving in the U.S. may identify fully with their own cultural heritage before taking part in the process of enculturation. Enculturation can happen in several ways. Direct education means that family members, teachers, or other members of society explicitly teach a person certain beliefs, values, or expected standards of behavior. Parents may play a vital role in teaching their children standard behavior for their culture, including table manners and some aspects of polite social interactions. Strict familial and societal teaching, which often uses different forms of positive and negative reinforcement to shape behavior, can lead a person to adhere closely to their religious convictions and customs. Schools also provide a formal setting to learn national values, such as honoring a country's flag, national anthem, and other significant patriotic symbols. Participatory learning occurs as individuals take an active role in interacting with their environment and culture. Through their own engagement in meaningful activities, they learn the socio-cultural norms of their community and may adopt the related qualities and values. For example, if a school organizes an outing to collect litter in a public park, the activity helps to instill the values of respect for nature and environmental protection. Religious customs frequently emphasize participatory learning; for example, children who take part in the singing of psalms at Christmas will absorb the values and practices of the occasion. Observational learning is when knowledge is gained primarily by watching and imitating others. As long as an individual identifies with a model, believes that imitating the model will lead to good outcomes, and feels capable of reproducing the behavior, learning can occur without any explicit instruction. For example, a child fortunate enough to be raised by parents in a caring relationship will learn to be affectionate and attentive in their own future relationships. See also Civil society Dual inheritance theory Education Educational anthropology Ethnocentrism Indoctrination Intercultural competence Mores Norm (philosophy) Norm (sociology) Peer pressure Transculturation References Bibliography Further reading External links Enculturation and Acculturation Community empowerment Concepts of moral character, historical and contemporary (Stanford Encyclopedia of Philosophy) Cultural concepts Cultural studies Interculturalism
0.785181
0.990046
0.777366
Physical chemistry
Physical chemistry is the study of macroscopic and microscopic phenomena in chemical systems in terms of the principles, practices, and concepts of physics such as motion, energy, force, time, thermodynamics, quantum chemistry, statistical mechanics, analytical dynamics and chemical equilibria. Physical chemistry, in contrast to chemical physics, is predominantly (but not always) a supra-molecular science, as the majority of the principles on which it was founded relate to the bulk rather than the molecular or atomic structure alone (for example, chemical equilibrium and colloids). Some of the relationships that physical chemistry strives to understand include the effects of: Intermolecular forces that act upon the physical properties of materials (plasticity, tensile strength, surface tension in liquids). Reaction kinetics on the rate of a reaction. The identity of ions and the electrical conductivity of materials. Surface science and electrochemistry of cell membranes. Interaction of one body with another in terms of quantities of heat and work called thermodynamics. Transfer of heat between a chemical system and its surroundings during change of phase or chemical reaction taking place called thermochemistry Study of colligative properties of number of species present in solution. Number of phases, number of components and degree of freedom (or variance) can be correlated with one another with help of phase rule. Reactions of electrochemical cells. Behaviour of microscopic systems using quantum mechanics and macroscopic systems using statistical thermodynamics. Calculation of the energy of electron movement in molecules and metal complexes. Key concepts The key concepts of physical chemistry are the ways in which pure physics is applied to chemical problems. One of the key concepts in classical chemistry is that all chemical compounds can be described as groups of atoms bonded together and chemical reactions can be described as the making and breaking of those bonds. Predicting the properties of chemical compounds from a description of atoms and how they bond is one of the major goals of physical chemistry. To describe the atoms and bonds precisely, it is necessary to know both where the nuclei of the atoms are, and how electrons are distributed around them. Disciplines Quantum chemistry, a subfield of physical chemistry especially concerned with the application of quantum mechanics to chemical problems, provides tools to determine how strong and what shape bonds are, how nuclei move, and how light can be absorbed or emitted by a chemical compound. Spectroscopy is the related sub-discipline of physical chemistry which is specifically concerned with the interaction of electromagnetic radiation with matter. Another set of important questions in chemistry concerns what kind of reactions can happen spontaneously and which properties are possible for a given chemical mixture. This is studied in chemical thermodynamics, which sets limits on quantities like how far a reaction can proceed, or how much energy can be converted into work in an internal combustion engine, and which provides links between properties like the thermal expansion coefficient and rate of change of entropy with pressure for a gas or a liquid. It can frequently be used to assess whether a reactor or engine design is feasible, or to check the validity of experimental data. To a limited extent, quasi-equilibrium and non-equilibrium thermodynamics can describe irreversible changes. 
However, classical thermodynamics is mostly concerned with systems in equilibrium and reversible changes and not what actually does happen, or how fast, away from equilibrium. Which reactions do occur and how fast is the subject of chemical kinetics, another branch of physical chemistry. A key idea in chemical kinetics is that for reactants to react and form products, most chemical species must go through transition states which are higher in energy than either the reactants or the products and serve as a barrier to reaction. In general, the higher the barrier, the slower the reaction. A second is that most chemical reactions occur as a sequence of elementary reactions, each with its own transition state. Key questions in kinetics include how the rate of reaction depends on temperature and on the concentrations of reactants and catalysts in the reaction mixture, as well as how catalysts and reaction conditions can be engineered to optimize the reaction rate. The fact that how fast reactions occur can often be specified with just a few concentrations and a temperature, instead of needing to know all the positions and speeds of every molecule in a mixture, is a special case of another key concept in physical chemistry, which is that to the extent an engineer needs to know, everything going on in a mixture of very large numbers (perhaps of the order of the Avogadro constant, 6 × 10²³) of particles can often be described by just a few variables like pressure, temperature, and concentration. The precise reasons for this are described in statistical mechanics, a specialty within physical chemistry which is also shared with physics. Statistical mechanics also provides ways to predict the properties we see in everyday life from molecular properties without relying on empirical correlations based on chemical similarities. History The term "physical chemistry" was coined by Mikhail Lomonosov in 1752, when he presented a lecture course entitled "A Course in True Physical Chemistry" before the students of Petersburg University. In the preamble to these lectures he gives the definition: "Physical chemistry is the science that must explain under provisions of physical experiments the reason for what is happening in complex bodies through chemical operations". Modern physical chemistry originated in the 1860s to 1880s with work on chemical thermodynamics, electrolytes in solutions, chemical kinetics and other subjects. One milestone was the publication in 1876 by Josiah Willard Gibbs of his paper, On the Equilibrium of Heterogeneous Substances. This paper introduced several of the cornerstones of physical chemistry, such as Gibbs energy, chemical potentials, and Gibbs' phase rule. The first scientific journal specifically in the field of physical chemistry was the German journal, Zeitschrift für Physikalische Chemie, founded in 1887 by Wilhelm Ostwald and Jacobus Henricus van 't Hoff. Together with Svante August Arrhenius, these were the leading figures in physical chemistry in the late 19th century and early 20th century. All three were awarded the Nobel Prize in Chemistry between 1901 and 1909. Developments in the following decades include the application of statistical mechanics to chemical systems and work on colloids and surface chemistry, where Irving Langmuir made many contributions. Another important step was the development of quantum mechanics into quantum chemistry from the 1930s, where Linus Pauling was one of the leading names.
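As a small numerical illustration of two relationships described earlier in this section, the sketch below converts a standard Gibbs energy change into an equilibrium constant via ΔG° = −RT ln K (how far a reaction can proceed) and uses the Arrhenius equation k = A·exp(−Ea/RT) to show how a higher barrier slows a reaction. The specific numbers are arbitrary examples for illustration, not data for any particular reaction.

import math

R = 8.314  # gas constant, J/(mol*K)

def equilibrium_constant(delta_g, temperature):
    # K from the standard Gibbs energy change: delta_G = -R*T*ln(K).
    return math.exp(-delta_g / (R * temperature))

def arrhenius_rate(prefactor, activation_energy, temperature):
    # Arrhenius rate constant k = A * exp(-Ea / (R*T)).
    return prefactor * math.exp(-activation_energy / (R * temperature))

T = 298.15  # room temperature in kelvin

# Thermodynamics: a reaction with delta_G = -20 kJ/mol lies far to the product side.
print(equilibrium_constant(-20e3, T))   # K on the order of 3e3

# Kinetics: raising the barrier from 50 to 80 kJ/mol slows the reaction enormously.
slow = arrhenius_rate(1e13, 80e3, T)
fast = arrhenius_rate(1e13, 50e3, T)
print(fast / slow)                      # ratio of roughly 1.8e5

The exponential dependence on both ΔG° and Ea is the quantitative content behind the qualitative statements above that thermodynamics sets limits on how far a reaction can go and that higher barriers mean slower reactions.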
Theoretical developments have gone hand in hand with developments in experimental methods, where the use of different forms of spectroscopy, such as infrared spectroscopy, microwave spectroscopy, electron paramagnetic resonance and nuclear magnetic resonance spectroscopy, is probably the most important 20th century development. Further development in physical chemistry may be attributed to discoveries in nuclear chemistry, especially in isotope separation (before and during World War II), more recent discoveries in astrochemistry, as well as the development of calculation algorithms in the field of "additive physicochemical properties" (practically all physicochemical properties, such as boiling point, critical point, surface tension, vapor pressure, etc.—more than 20 in all—can be precisely calculated from chemical structure alone, even if the chemical molecule remains unsynthesized), and herein lies the practical importance of contemporary physical chemistry. See Group contribution method, Lydersen method, Joback method, Benson group increment theory, quantitative structure–activity relationship Journals Some journals that deal with physical chemistry include Zeitschrift für Physikalische Chemie (1887) Journal of Physical Chemistry A (from 1896 as Journal of Physical Chemistry, renamed in 1997) Physical Chemistry Chemical Physics (from 1999, formerly Faraday Transactions with a history dating back to 1905) Macromolecular Chemistry and Physics (1947) Annual Review of Physical Chemistry (1950) Molecular Physics (1957) Journal of Physical Organic Chemistry (1988) Journal of Physical Chemistry B (1997) ChemPhysChem (2000) Journal of Physical Chemistry C (2007) Journal of Physical Chemistry Letters (from 2010, combined letters previously published in the separate journals) Historical journals that covered both chemistry and physics include Annales de chimie et de physique (started in 1789, published under the name given here from 1815 to 1914). Branches and related topics Chemical thermodynamics Chemical kinetics Statistical mechanics Quantum chemistry Electrochemistry Photochemistry Surface chemistry Solid-state chemistry Spectroscopy Biophysical chemistry Materials science Physical organic chemistry Micromeritics See also List of important publications in chemistry#Physical chemistry List of unsolved problems in chemistry#Physical chemistry problems Physical biochemistry :Category:Physical chemists References External links The World of Physical Chemistry (Keith J. Laidler, 1993) Physical Chemistry from Ostwald to Pauling (John W. Servos, 1996) Physical Chemistry: neither Fish nor Fowl? (Joachim Schummer, The Autonomy of Chemistry, Würzburg, Königshausen & Neumann, 1998, pp. 135–148) The Cambridge History of Science: The modern physical and mathematical sciences (Mary Jo Nye, 2003)
0.780363
0.996066
0.777293
Lamarckism
Lamarckism, also known as Lamarckian inheritance or neo-Lamarckism, is the notion that an organism can pass on to its offspring physical characteristics that the parent organism acquired through use or disuse during its lifetime. It is also called the inheritance of acquired characteristics or more recently soft inheritance. The idea is named after the French zoologist Jean-Baptiste Lamarck (1744–1829), who incorporated the classical-era theory of soft inheritance into his theory of evolution as a supplement to his concept of orthogenesis, a drive towards complexity. Introductory textbooks contrast Lamarckism with Charles Darwin's theory of evolution by natural selection. However, Darwin's book On the Origin of Species gave credence to the idea of heritable effects of use and disuse, as Lamarck had done, and his own concept of pangenesis similarly implied soft inheritance. Many researchers from the 1860s onwards attempted to find evidence for Lamarckian inheritance, but their results have all been explained away, either by other mechanisms such as genetic contamination or as fraud. August Weismann's experiment, considered definitive in its time, is now regarded as having failed to disprove Lamarckism, as it did not address use and disuse. Later, Mendelian genetics supplanted the notion of inheritance of acquired traits, eventually leading to the development of the modern synthesis, and the general abandonment of Lamarckism in biology. Despite this, interest in Lamarckism has continued, as experimental results in the fields of epigenetics, genetics, and somatic hypermutation have demonstrated the possibility of transgenerational epigenetic inheritance of traits acquired by the previous generation, suggesting a limited validity for Lamarckian-style mechanisms. The inheritance of the hologenome, consisting of the genomes of all an organism's symbiotic microbes as well as its own genome, is also somewhat Lamarckian in effect, though entirely Darwinian in its mechanisms. Early history Origins The inheritance of acquired characteristics was proposed in ancient times and remained a current idea for many centuries. The historian of science Conway Zirkle wrote in 1935 that: Zirkle noted that Hippocrates described pangenesis, the theory that what is inherited derives from the whole body of the parent, whereas Aristotle thought it impossible; but that all the same, Aristotle implicitly agreed to the inheritance of acquired characteristics, giving the example of the inheritance of a scar, or of blindness, though noting that children do not always resemble their parents. Zirkle recorded that Pliny the Elder thought much the same. Zirkle pointed out that stories involving the idea of inheritance of acquired characteristics appear numerous times in ancient mythology and the Bible and persisted through to Rudyard Kipling's Just So Stories. The idea is mentioned in 18th century sources such as Diderot's D'Alembert's Dream. Erasmus Darwin's Zoonomia (c. 1795) suggested that warm-blooded animals develop from "one living filament... with the power of acquiring new parts" in response to stimuli, with each round of "improvements" being inherited by successive generations. Darwin's pangenesis Charles Darwin's On the Origin of Species proposed natural selection as the main mechanism for development of species, but (like Lamarck) gave credence to the idea of heritable effects of use and disuse as a supplementary mechanism.
Darwin subsequently set out his concept of pangenesis in the final chapter of his book The Variation of Animals and Plants Under Domestication (1868), which gave numerous examples to demonstrate what he thought was the inheritance of acquired characteristics. Pangenesis, which he emphasised was a hypothesis, was based on the idea that somatic cells would, in response to environmental stimulation (use and disuse), throw off 'gemmules' or 'pangenes' which travelled around the body, though not necessarily in the bloodstream. These pangenes were microscopic particles that supposedly contained information about the characteristics of their parent cell, and Darwin believed that they eventually accumulated in the germ cells where they could pass on to the next generation the newly acquired characteristics of the parents. Darwin's half-cousin, Francis Galton, carried out experiments on rabbits, with Darwin's cooperation, in which he transfused the blood of one variety of rabbit into another variety in the expectation that its offspring would show some characteristics of the first. They did not, and Galton declared that he had disproved Darwin's hypothesis of pangenesis, but Darwin objected, in a letter to the scientific journal Nature, that he had done nothing of the sort, since he had never mentioned blood in his writings. He pointed out that he regarded pangenesis as occurring in protozoa and plants, which have no blood, as well as in animals. Lamarck's evolutionary framework Between 1800 and 1830, Lamarck proposed a systematic theoretical framework for understanding evolution. He saw evolution as comprising four laws: "Life by its own force, tends to increase the volume of all organs which possess the force of life, and the force of life extends the dimensions of those parts up to an extent that those parts bring to themselves;" "The production of a new organ in an animal body, results from a new requirement arising. and which continues to make itself felt, and a new movement which that requirement gives birth to, and its upkeep/maintenance;" "The development of the organs, and their ability, are constantly a result of the use of those organs." "All that has been acquired, traced, or changed, in the physiology of individuals, during their life, is conserved through the genesis, reproduction, and transmitted to new individuals who are related to those who have undergone those changes." Lamarck's discussion of heredity In 1830, in an aside from his evolutionary framework, Lamarck briefly mentioned two traditional ideas in his discussion of heredity, in his day considered to be generally true. The first was the idea of use versus disuse; he theorized that individuals lose characteristics they do not require, or use, and develop characteristics that are useful. The second was to argue that the acquired traits were heritable. He gave as an imagined illustration the idea that when giraffes stretch their necks to reach leaves high in trees, they would strengthen and gradually lengthen their necks. These giraffes would then have offspring with slightly longer necks. In the same way, he argued, a blacksmith, through his work, strengthens the muscles in his arms, and thus his sons would have similar muscular development when they mature. 
Lamarck stated the following two laws: Première Loi: Dans tout animal qui n' a point dépassé le terme de ses développemens, l' emploi plus fréquent et soutenu d' un organe quelconque, fortifie peu à peu cet organe, le développe, l' agrandit, et lui donne une puissance proportionnée à la durée de cet emploi; tandis que le défaut constant d' usage de tel organe, l'affoiblit insensiblement, le détériore, diminue progressivement ses facultés, et finit par le faire disparoître. Deuxième Loi: Tout ce que la nature a fait acquérir ou perdre aux individus par l' influence des circonstances où leur race se trouve depuis long-temps exposée, et, par conséquent, par l' influence de l' emploi prédominant de tel organe, ou par celle d' un défaut constant d' usage de telle partie; elle le conserve par la génération aux nouveaux individus qui en proviennent, pourvu que les changemens acquis soient communs aux deux sexes, ou à ceux qui ont produit ces nouveaux individus. English translation: First Law [Use and Disuse]: In every animal which has not passed the limit of its development, a more frequent and continuous use of any organ gradually strengthens, develops and enlarges that organ, and gives it a power proportional to the length of time it has been so used; while the permanent disuse of any organ imperceptibly weakens and deteriorates it, and progressively diminishes its functional capacity, until it finally disappears. Second Law [Soft Inheritance]: All the acquisitions or losses wrought by nature on individuals, through the influence of the environment in which their race has long been placed, and hence through the influence of the predominant use or permanent disuse of any organ; all these are preserved by reproduction to the new individuals which arise, provided that the acquired modifications are common to both sexes, or at least to the individuals which produce the young. In essence, a change in the environment brings about change in "needs" (besoins), resulting in change in behaviour, causing change in organ usage and development, bringing change in form over time—and thus the gradual transmutation of the species. As the evolutionary biologists and historians of science Conway Zirkle, Michael Ghiselin, and Stephen Jay Gould have pointed out, these ideas were not original to Lamarck. Weismann's experiment August Weismann's germ plasm theory held that germline cells in the gonads contain information that passes from one generation to the next, unaffected by experience, and independent of the somatic (body) cells. This implied what came to be known as the Weismann barrier, as it would make Lamarckian inheritance from changes to the body difficult or impossible. Weismann conducted the experiment of removing the tails of 68 white mice, and those of their offspring over five generations, and reporting that no mice were born in consequence without a tail or even with a shorter tail. In 1889, he stated that "901 young were produced by five generations of artificially mutilated parents, and yet there was not a single example of a rudimentary tail or of any other abnormality in this organ." The experiment, and the theory behind it, were thought at the time to be a refutation of Lamarckism. The experiment's effectiveness in refuting Lamarck's hypothesis is doubtful, as it did not address the use and disuse of characteristics in response to the environment. 
The biologist Peter Gauthier noted in 1990 that: Ghiselin also considered the Weismann tail-chopping experiment to have no bearing on the Lamarckian hypothesis, writing in 1994 that: The acquired characteristics that figured in Lamarck's thinking were changes that resulted from an individual's own drives and actions, not from the actions of external agents. Lamarck was not concerned with wounds, injuries or mutilations, and nothing that Lamarck had set forth was tested or "disproven" by the Weismann tail-chopping experiment. The historian of science Rasmus Winther stated that Weismann had nuanced views about the role of the environment on the germ plasm. Indeed, like Darwin, he consistently insisted that a variable environment was necessary to cause variation in the hereditary material. Textbook Lamarckism The identification of Lamarckism with the inheritance of acquired characteristics is regarded by evolutionary biologists including Ghiselin as a falsified artifact of the subsequent history of evolutionary thought, repeated in textbooks without analysis, and wrongly contrasted with a falsified picture of Darwin's thinking. Ghiselin notes that "Darwin accepted the inheritance of acquired characteristics, just as Lamarck did, and Darwin even thought that there was some experimental evidence to support it." Gould wrote that in the late 19th century, evolutionists "re-read Lamarck, cast aside the guts of it ... and elevated one aspect of the mechanics—inheritance of acquired characters—to a central focus it never had for Lamarck himself." He argued that "the restriction of 'Lamarckism' to this relatively small and non-distinctive corner of Lamarck's thought must be labelled as more than a misnomer, and truly a discredit to the memory of a man and his much more comprehensive system." Neo-Lamarckism Context The period of the history of evolutionary thought between Darwin's death in the 1880s, and the foundation of population genetics in the 1920s and the beginnings of the modern evolutionary synthesis in the 1930s, is called the eclipse of Darwinism by some historians of science. During that time many scientists and philosophers accepted the reality of evolution but doubted whether natural selection was the main evolutionary mechanism. Among the most popular alternatives were theories involving the inheritance of characteristics acquired during an organism's lifetime. Scientists who felt that such Lamarckian mechanisms were the key to evolution were called neo-Lamarckians. They included the British botanist George Henslow (1835–1925), who studied the effects of environmental stress on the growth of plants, in the belief that such environmentally-induced variation might explain much of plant evolution, and the American entomologist Alpheus Spring Packard Jr., who studied blind animals living in caves and wrote a book in 1901 about Lamarck and his work. Also included were paleontologists like Edward Drinker Cope and Alpheus Hyatt, who observed that the fossil record showed orderly, almost linear, patterns of development that they felt were better explained by Lamarckian mechanisms than by natural selection. Some people, including Cope and the Darwin critic Samuel Butler, felt that inheritance of acquired characteristics would let organisms shape their own evolution, since organisms that acquired new habits would change the use patterns of their organs, which would kick-start Lamarckian evolution. 
They considered this philosophically superior to Darwin's mechanism of random variation acted on by selective pressures. Lamarckism also appealed to those, like the philosopher Herbert Spencer and the German anatomist Ernst Haeckel, who saw evolution as an inherently progressive process. The German zoologist Theodor Eimer combined Larmarckism with ideas about orthogenesis, the idea that evolution is directed towards a goal. With the development of the modern synthesis of the theory of evolution, and a lack of evidence for a mechanism for acquiring and passing on new characteristics, or even their heritability, Lamarckism largely fell from favour. Unlike neo-Darwinism, neo-Lamarckism is a loose grouping of largely heterodox theories and mechanisms that emerged after Lamarck's time, rather than a coherent body of theoretical work. 19th century Neo-Lamarckian versions of evolution were widespread in the late 19th century. The idea that living things could to some degree choose the characteristics that would be inherited allowed them to be in charge of their own destiny as opposed to the Darwinian view, which placed them at the mercy of the environment. Such ideas were more popular than natural selection in the late 19th century as it made it possible for biological evolution to fit into a framework of a divine or naturally willed plan, thus the neo-Lamarckian view of evolution was often advocated by proponents of orthogenesis. According to the historian of science Peter J. Bowler, writing in 2003: Scientists from the 1860s onwards conducted numerous experiments that purported to show Lamarckian inheritance. Some examples are described in the table. Early 20th century A century after Lamarck, scientists and philosophers continued to seek mechanisms and evidence for the inheritance of acquired characteristics. Experiments were sometimes reported as successful, but from the beginning these were either criticised on scientific grounds or shown to be fakes. For instance, in 1906, the philosopher Eugenio Rignano argued for a version that he called "centro-epigenesis", but it was rejected by most scientists. Some of the experimental approaches are described in the table. Late 20th century The British anthropologist Frederic Wood Jones and the South African paleontologist Robert Broom supported a neo-Lamarckian view of human evolution. The German anthropologist Hermann Klaatsch relied on a neo-Lamarckian model of evolution to try and explain the origin of bipedalism. Neo-Lamarckism remained influential in biology until the 1940s when the role of natural selection was reasserted in evolution as part of the modern evolutionary synthesis. Herbert Graham Cannon, a British zoologist, defended Lamarckism in his 1959 book Lamarck and Modern Genetics. In the 1960s, "biochemical Lamarckism" was advocated by the embryologist Paul Wintrebert. Neo-Lamarckism was dominant in French biology for more than a century. French scientists who supported neo-Lamarckism included Edmond Perrier (1844–1921), Alfred Giard (1846–1908), Gaston Bonnier (1853–1922) and Pierre-Paul Grassé (1895–1985). They followed two traditions, one mechanistic, one vitalistic after Henri Bergson's philosophy of evolution. In 1987, Ryuichi Matsuda coined the term "pan-environmentalism" for his evolutionary theory which he saw as a fusion of Darwinism with neo-Lamarckism. He held that heterochrony is a main mechanism for evolutionary change and that novelty in evolution can be generated by genetic assimilation. 
His views were criticized by Arthur M. Shapiro for providing no solid evidence for his theory. Shapiro noted that "Matsuda himself accepts too much at face value and is prone to wish-fulfilling interpretation." Ideological neo-Lamarckism A form of Lamarckism was revived in the Soviet Union of the 1930s when Trofim Lysenko promoted the ideologically driven research programme, Lysenkoism; this suited the ideological opposition of Joseph Stalin to genetics. Lysenkoism influenced Soviet agricultural policy which in turn was later blamed for the numerous massive crop failures experienced within Soviet states. Critique George Gaylord Simpson in his book Tempo and Mode in Evolution (1944) claimed that experiments in heredity have failed to corroborate any Lamarckian process. Simpson noted that neo-Lamarckism "stresses a factor that Lamarck rejected: inheritance of direct effects of the environment" and neo-Lamarckism is closer to Darwin's pangenesis than Lamarck's views. Simpson wrote, "the inheritance of acquired characters, failed to meet the tests of observation and has been almost universally discarded by biologists." Zirkle pointed out that Lamarck did not originate the hypothesis that acquired characteristics could be inherited, so it is incorrect to refer to it as Lamarckism: What Lamarck really did was to accept the hypothesis that acquired characters were heritable, a notion which had been held almost universally for well over two thousand years and which his contemporaries accepted as a matter of course, and to assume that the results of such inheritance were cumulative from generation to generation, thus producing, in time, new species. His individual contribution to biological theory consisted in his application to the problem of the origin of species of the view that acquired characters were inherited and in showing that evolution could be inferred logically from the accepted biological hypotheses. He would doubtless have been greatly astonished to learn that a belief in the inheritance of acquired characters is now labeled "Lamarckian," although he would almost certainly have felt flattered if evolution itself had been so designated. Peter Medawar wrote regarding Lamarckism, "very few professional biologists believe that anything of the kind occurs—or can occur—but the notion persists for a variety of nonscientific reasons." Medawar stated there is no known mechanism by which an adaptation acquired in an individual's lifetime can be imprinted on the genome and Lamarckian inheritance is not valid unless it excludes the possibility of natural selection, but this has not been demonstrated in any experiment. Martin Gardner wrote in his book Fads and Fallacies in the Name of Science (1957): A host of experiments have been designed to test Lamarckianism. All that have been verified have proved negative. On the other hand, tens of thousands of experiments— reported in the journals and carefully checked and rechecked by geneticists throughout the world— have established the correctness of the gene-mutation theory beyond all reasonable doubt... In spite of the rapidly increasing evidence for natural selection, Lamarck has never ceased to have loyal followers.... There is indeed a strong emotional appeal in the thought that every little effort an animal puts forth is somehow transmitted to his progeny. 
According to Ernst Mayr, any Lamarckian theory involving the inheritance of acquired characters has been refuted as "DNA does not directly participate in the making of the phenotype and that the phenotype, in turn, does not control the composition of the DNA." Peter J. Bowler has written that although many early scientists took Lamarckism seriously, it was discredited by genetics in the early twentieth century. Mechanisms resembling Lamarckism Studies in the field of epigenetics, genetics and somatic hypermutation have highlighted the possible inheritance of traits acquired by the previous generation. However, the characterization of these findings as Lamarckism has been disputed. Transgenerational epigenetic inheritance Epigenetic inheritance has been argued by scientists including Eva Jablonka and Marion J. Lamb to be Lamarckian. Epigenetics is based on hereditary elements other than genes that pass into the germ cells. These include methylation patterns in DNA and chromatin marks on histone proteins, both involved in gene regulation. These marks are responsive to environmental stimuli, differentially affect gene expression, and are adaptive, with phenotypic effects that persist for some generations. The mechanism may also enable the inheritance of behavioral traits, for example in chickens, rats and human populations that have experienced starvation, DNA methylation resulting in altered gene function in both the starved population and their offspring. Methylation similarly mediates epigenetic inheritance in plants such as rice. Small RNA molecules, too, may mediate inherited resistance to infection. Handel and Ramagopalan commented that "epigenetics allows the peaceful co-existence of Darwinian and Lamarckian evolution." Joseph Springer and Dennis Holley commented in 2013 that: Lamarck and his ideas were ridiculed and discredited. In a strange twist of fate, Lamarck may have the last laugh. Epigenetics, an emerging field of genetics, has shown that Lamarck may have been at least partially correct all along. It seems that reversible and heritable changes can occur without a change in DNA sequence (genotype) and that such changes may be induced spontaneously or in response to environmental factors—Lamarck's "acquired traits." Determining which observed phenotypes are genetically inherited and which are environmentally induced remains an important and ongoing part of the study of genetics, developmental biology, and medicine. The prokaryotic CRISPR system and Piwi-interacting RNA could be classified as Lamarckian, within a Darwinian framework. However, the significance of epigenetics in evolution is uncertain. Critics such as the evolutionary biologist Jerry Coyne point out that epigenetic inheritance lasts for only a few generations, so it is not a stable basis for evolutionary change. The evolutionary biologist T. Ryan Gregory contends that epigenetic inheritance should not be considered Lamarckian. According to Gregory, Lamarck did not claim that the environment directly affected living things. Instead, Lamarck "argued that the environment created needs to which organisms responded by using some features more and others less, that this resulted in those features being accentuated or attenuated, and that this difference was then inherited by offspring." Gregory has stated that Lamarckian evolution in epigenetics is more like Darwin's point of view than Lamarck's. 
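One point of contention above, Coyne's objection that epigenetic marks tend to be erased within a few generations and so may not provide a stable basis for long-term evolutionary change, can be illustrated with a deliberately simplified simulation. The per-generation reset probability used here is an arbitrary assumption chosen for illustration, not a measured biological rate.

import random

def simulate_mark_persistence(reset_probability, generations, trials=10_000, seed=1):
    # Fraction of lineages in which an induced epigenetic mark survives each
    # generation, given a fixed per-generation chance of being reset.
    random.seed(seed)
    surviving = [0] * (generations + 1)
    for _ in range(trials):
        marked = True
        surviving[0] += 1
        for g in range(1, generations + 1):
            if marked and random.random() < reset_probability:
                marked = False      # mark erased during germline reprogramming
            if marked:
                surviving[g] += 1
    return [count / trials for count in surviving]

# With a 30% chance of erasure per generation, few lineages keep the mark
# beyond a handful of generations (expected survival after g generations: 0.7**g).
for g, frac in enumerate(simulate_mark_persistence(0.3, 10)):
    print(f"generation {g}: {frac:.2f}")

A genetic change, by contrast, would persist indefinitely in such a model unless selected against, which is the substance of the criticism summarized above.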
In 2007, David Haig wrote that research into epigenetic processes does allow a Lamarckian element in evolution but the processes do not challenge the main tenets of the modern evolutionary synthesis as modern Lamarckians have claimed. Haig argued for the primacy of DNA and evolution of epigenetic switches by natural selection. Haig has written that there is a "visceral attraction" to Lamarckian evolution from the public and some scientists, as it posits the world with a meaning, in which organisms can shape their own evolutionary destiny. Thomas Dickens and Qazi Rahman (2012) have argued that epigenetic mechanisms such as DNA methylation and histone modification are genetically inherited under the control of natural selection and do not challenge the modern synthesis. They dispute the claims of Jablonka and Lamb on Lamarckian epigenetic processes. In 2015, Khursheed Iqbal and colleagues discovered that although "endocrine disruptors exert direct epigenetic effects in the exposed fetal germ cells, these are corrected by reprogramming events in the next generation." Also in 2015, Adam Weiss argued that bringing back Lamarck in the context of epigenetics is misleading, commenting, "We should remember [Lamarck] for the good he contributed to science, not for things that resemble his theory only superficially. Indeed, thinking of CRISPR and other phenomena as Lamarckian only obscures the simple and elegant way evolution really works." Somatic hypermutation and reverse transcription to germline In the 1970s, the Australian immunologist Edward J. Steele developed a neo-Lamarckian theory of somatic hypermutation within the immune system and coupled it to the reverse transcription of RNA derived from body cells to the DNA of germline cells. This reverse transcription process supposedly enabled characteristics or bodily changes acquired during a lifetime to be written back into the DNA and passed on to subsequent generations. The mechanism was meant to explain why homologous DNA sequences from the VDJ gene regions of parent mice were found in their germ cells and seemed to persist in the offspring for a few generations. The mechanism involved the somatic selection and clonal amplification of newly acquired antibody gene sequences generated via somatic hypermutation in B-cells. The messenger RNA products of these somatically novel genes were captured by retroviruses endogenous to the B-cells and were then transported through the bloodstream where they could breach the Weismann or soma-germ barrier and reverse transcribe the newly acquired genes into the cells of the germ line, in the manner of Darwin's pangenes. The historian of biology Peter J. Bowler noted in 1989 that other scientists had been unable to reproduce his results, and described the scientific consensus at the time: Bowler commented that "[Steele's] work was bitterly criticized at the time by biologists who doubted his experimental results and rejected his hypothetical mechanism as implausible." Hologenome theory of evolution The hologenome theory of evolution, while Darwinian, has Lamarckian aspects. An individual animal or plant lives in symbiosis with many microorganisms, and together they have a "hologenome" consisting of all their genomes. 
The hologenome can vary like any other genome by mutation, sexual recombination, and chromosome rearrangement, but in addition it can vary when populations of microorganisms increase or decrease (resembling Lamarckian use and disuse), and when it gains new kinds of microorganism (resembling Lamarckian inheritance of acquired characteristics). These changes are then passed on to offspring. The mechanism is largely uncontroversial, and natural selection does sometimes occur at whole system (hologenome) level, but it is not clear that this is always the case. Baldwin effect The Baldwin effect, named after the psychologist James Mark Baldwin by George Gaylord Simpson in 1953, proposes that the ability to learn new behaviours can improve an animal's reproductive success, and hence the course of natural selection on its genetic makeup. Simpson stated that the mechanism was "not inconsistent with the modern synthesis" of evolutionary theory, though he doubted that it occurred very often or could be proven to occur. He noted that the Baldwin effect provided a reconciliation between the neo-Darwinian and neo-Lamarckian approaches, something that the modern synthesis had seemed to render unnecessary. In particular, the effect allows animals to adapt to a new stress in the environment through behavioural changes, followed by genetic change. This somewhat resembles Lamarckism but without requiring animals to inherit characteristics acquired by their parents. The Baldwin effect is broadly accepted by Darwinists. In sociocultural evolution Within the field of cultural evolution, Lamarckism has been applied as a mechanism for dual inheritance theory. Gould viewed culture as a Lamarckian process whereby older generations transmitted adaptive information to offspring via the concept of learning. In the history of technology, components of Lamarckism have been used to link cultural development to human evolution by considering technology as extensions of human anatomy. References Bibliography . Retrieved 2015-10-26. Further reading Translation of Lamarck, ou, Le mythe du précurseur (1979) Contains the BBC Reith Lectures "The Future of Man." "Consists of papers given at a workshop on the origins of music held in Fiesole, Italy, May 1997, the first of a series called Florentine Workshops in Biomusicology." "Essays ... based upon papers read at a conference held at the University of Edinburgh ... 1959." "Annual address of the president of the Biological Society of Washington. Delivered January 24, 1891. (From the Proceedings, vol. VI.)" . External links Jean-Baptiste Lamarck (1744-1829) at the University of California Museum of Paleontology Non-Darwinian evolution History of evolutionary biology Biology theories Jean-Baptiste Lamarck
0.778949
0.997788
0.777227
Panmixia
Panmixia (or panmixis) means uniform random fertilization. A panmictic population is one where all potential parents may contribute equally to the gamete pool, and that these gametes are uniformly distributed within the gamete population (gamodeme). This assumes that there are no hybridising restrictions within the parental population: neither genetics, cytogenetics nor behavioural; and neither spatial nor temporal (see also Quantitative genetics for further discussion). Therefore, all gamete recombination (fertilization) is uniformly possible. Both the Wahlund effect and the Hardy Weinberg equilibrium assume that the overall population is panmictic. In genetics and heredity, random mating usually implies the hybridising (mating) of individuals regardless of any spatial, physical, genetical, temporal or social preference. That is, the mating between two organisms is not influenced by any environmental, nor hereditary interaction. Hence, potential mates have an equal chance of being contributors to the fertilizing gamete pool. If there is no random sub-sampling of gametes involved in the fertilization cohort, panmixia has occurred. Such uniform random mating is distinct from lack of natural selection: in viability selection for instance, selection occurs before mating. Description In simple terms, panmixia (or panmicticism) is the ability of individuals in a population to interbreed without restrictions; individuals are able to move about freely within their habitat, possibly over a range of hundreds to thousands of miles, and thus breed with other members of the population. To signify the importance of this, imagine several different finite populations of the same species (for example: a grazing herbivore), isolated from each other by some physical characteristic of the environment (dense forest areas separating grazing lands). As time progresses, natural selection and genetic drift will slowly move each population toward genetic differentiation that would make each population genetically unique (that could eventually lead to speciation events or extirpation). However, if the separating factor is removed before this happens (e.g. a road is cut through the forest), and the individuals are allowed to move about freely, the individual populations will still be able to interbreed. As the species's populations interbreed over time, they become more genetically uniform, functioning again as a single panmictic population. In attempting to describe the mathematical properties of structured populations, Sewall Wright proposed a "factor of Panmixia" (P) to include in the equations describing the gene frequencies in a population, and accounting for a population's tendency towards panmixia, while a "factor of Fixation" (F) would account for a population's departure from the Hardy–Weinberg expectation, due to less than panmictic mating. In this formulation, the two quantities are complementary, i.e. P = 1 − F. From this factor of fixation, he later developed the F statistics. Background information In a panmictic species, all of the individuals of a single species are potential partners, and the species gives no mating restrictions throughout the population. Panmixia can also be referred to as random mating, referring to a population that randomly chooses their mate, rather than sorting between the adults of the population. Panmixia allows for species to reach genetic diversity through gene flow more efficiently than monandry species. 
Background information In a panmictic species, all individuals of a single species are potential mating partners, and there are no mating restrictions within the population. Panmixia can also be referred to as random mating: individuals pair at random rather than sorting among the adults of the population. Panmixia allows a species to reach genetic diversity through gene flow more efficiently than monandrous species do. However, outside population factors, like drought and limited food sources, can affect the way any species mates. When scientists examine a species' mating to understand its mating system, they look at factors like genetic markers, genetic differentiation, and the gene pool. Panmictic species A panmictic population of Monostroma latissimum, a marine green alga, shows sympatric speciation in the southwest Japanese islands. Although panmictic, the population is diversifying. Dawson's burrowing bee, Amegilla dawsoni, may be forced to aggregate in common mating areas due to uneven resource distribution in its harsh desert environment. Pantala flavescens is considered to form a single global panmictic population. Related experiments and species Anguilla rostrata, or the American eel, exhibits panmixia throughout the entire species. This allows the eel to have phenotypic variation in its offspring and to survive in a wide range of environmental conditions. A 2016 study published in BMC Evolutionary Biology examined Pachygrapsus marmoratus, the marbled crab, which had been regarded as a panmictic species; the study reported genetic differentiation associated with geographic breaks across the crab's distribution range rather than panmixia. In a heterogeneous environment such as the forests of Oregon, United States, Douglas squirrels (Tamiasciurus douglasii) exhibit local patterns of adaptation. In a study conducted by Chaves (2014), a population along an entire transect was found to be panmictic. Traits observed in this study included skull shape and fur color. Swordfish (Xiphias gladius) in the Indian Ocean have been found to form a single panmictic population. Genetic markers analyzed in the study carried out by Muths et al. (2013) showed large spatial and temporal homogeneity in genetic structure, sufficient to consider the Indian Ocean swordfish a single panmictic population. See also Population genetics Quantitative genetics Assortative mating (one form of non-random mating, where similar phenotypes hybridise) Disassortative mating (where phenotypic opposites are hybridised) Monogamy: A mating system in which one male mates with just one female, and one female mates with just one male, in the breeding season Polygyny: A mating system in which a male fertilizes the eggs of several partners in the breeding season Sexual selection: A form of natural selection that occurs when individuals vary in their ability to compete with others for mates or to attract members of the opposite sex Fitness: A measure of the genes contributed to the next generation by an individual, often stated in terms of the number of surviving offspring produced by the individual
Nature conservation
Nature conservation is the moral philosophy and conservation movement focused on protecting species from extinction, maintaining and restoring habitats, enhancing ecosystem services, and protecting biological diversity. A range of values underlie conservation, which can be guided by biocentrism, anthropocentrism, ecocentrism, and sentientism, environmental ideologies that inform ecocultural practices and identities. There has recently been a movement towards evidence-based conservation, which calls for greater use of scientific evidence to improve the effectiveness of conservation efforts. As of 2018, 15% of land and 7.3% of the oceans were protected. Many environmentalists set a target of protecting 30% of land and marine territory by 2030. In 2021, 16.64% of land and 7.9% of the oceans were protected. The 2022 IPCC report on climate impacts and adaptation underlines the need to conserve 30% to 50% of the Earth's land, freshwater and ocean areas – echoing the 30% goal of the U.N.'s Convention on Biological Diversity. Introduction Conservation goals include conserving habitat, preventing deforestation, maintaining soil organic matter, halting species extinction, reducing overfishing, and mitigating climate change. Different philosophical outlooks guide conservationists towards these different goals. The principal value underlying many expressions of the conservation ethic is that the natural world has intrinsic and intangible worth along with utilitarian value – a view carried forward by parts of the scientific conservation movement and some of the older Romantic schools of the ecology movement. Philosophers have attached intrinsic value to different aspects of nature, whether this is individual organisms (biocentrism) or ecological wholes such as species or ecosystems (ecoholism). More utilitarian schools of conservation have an anthropocentric outlook and seek a proper valuation of the local and global impacts of human activity upon nature in terms of their effect upon human wellbeing, now and for posterity. How such values are assessed and exchanged among people determines the social, political and personal restraints and imperatives by which conservation is practiced. This is a view common in the modern environmental movement. There is increasing interest in extending the responsibility for human wellbeing to include the welfare of sentient animals. In 2022, the United Kingdom introduced the Animal Welfare (Sentience) Act, which lists all vertebrates, decapod crustaceans and cephalopods as sentient beings. Branches of conservation ethics focusing on sentient individuals include ecofeminism and compassionate conservation. In the United States, the year 1864 saw the publication of two books which laid the foundation for the Romantic and Utilitarian conservation traditions in America. The posthumous publication of Henry David Thoreau's The Maine Woods established the grandeur of unspoiled nature as a citadel to nourish the spirit of man. A very different book from George Perkins Marsh, Man and Nature, later subtitled "The Earth as Modified by Human Action", catalogued his observations of man exhausting and altering the land from which his sustenance derives. The consumer conservation ethic has been defined as the attitudes and behaviors held and engaged in by individuals and families that ultimately serve to reduce overall societal consumption of energy. The conservation movement has emerged from advances in moral reasoning.
Increasing numbers of philosophers and scientists have made its maturation possible by subjecting the relationships between human beings and other organisms to rigorous examination. This social ethic primarily relates to local purchasing, moral purchasing, the sustained and efficient use of renewable resources, the moderation of destructive use of finite resources, and the prevention of harm to common resources such as air and water quality, the natural functions of a living earth, and cultural values in a built environment. These practices are used to slow the accelerating rate at which extinction is occurring. The origins of this ethic can be traced back to many different philosophical and religious traditions; such practices have been advocated for centuries. In the past, conservationism was categorized under a spectrum of views, ranging from anthropocentric, utilitarian conservationism to radical ecocentric green eco-political views. More recently, these major movements have been grouped into what is now known as the conservation ethic. The person credited with formulating the conservation ethic in the United States is former president Theodore Roosevelt. Terminology The term "conservation" was coined by Gifford Pinchot in 1907. He told his close friend, United States President Theodore Roosevelt, who used it for a national conference of governors in 1908. In common usage, the term refers to the activity of systematically protecting natural resources such as forests, including biological diversity. Carl F. Jordan has offered a formal definition of biological conservation. While this usage is not new, the idea of biological conservation has been combined with principles from ecology, biogeography, anthropology, economics, and sociology to maintain biodiversity. The term "conservation" itself may cover concepts such as cultural diversity and genetic diversity, as well as movements such as environmental conservation, seedbank curation (preservation of seeds), and gene bank coordination (preservation of animals' genetic material). These are often summarized as the priority of respecting diversity. Much of the recent movement in conservation can be considered a resistance to commercialism and globalization. Slow Food is a consequence of rejecting these as moral priorities and embracing a slower, more locally focused lifestyle. Sustainable living is a lifestyle that people are beginning to adopt, one that promotes decisions that help protect biodiversity. Small lifestyle changes that promote sustainability can eventually accumulate into meaningful gains for biological diversity. Supporting the ecolabeling of fishery products, choosing sustainably produced food, or keeping the lights off during the day are some examples of sustainable living. However, sustainable living is not a simple and uncomplicated approach. The 1987 Brundtland Report expounds on the notion of sustainability as a process of change that looks different for everyone: "It is not a fixed state of harmony, but rather a process of change in which the exploitation of resources, the direction of investments, the orientation of technological development, and institutional change are made consistent with future as well as present needs. We do not pretend that the process is easy or straightforward." Simply put, sustainable living makes a difference by combining many individual actions that encourage the protection of biological diversity. Practice Distinct trends exist regarding conservation development.
The need for conserving land has only recently intensified during what some scholars refer to as the Capitalocene epoch. This era marks the beginning of colonialism, globalization, and the Industrial Revolution, which have led to global land change as well as climate change. While many countries' efforts to preserve species and their habitats have been government-led, those in north-western Europe tended to arise out of middle-class and aristocratic interest in natural history, expressed at the level of the individual and of national, regional or local learned societies. Thus countries like Britain, the Netherlands, and Germany had what would be called non-governmental organizations – in the shape of the Royal Society for the Protection of Birds, the National Trust and County Naturalists' Trusts (dating back to 1889, 1895, and 1912 respectively), Natuurmonumenten, Provincial Conservation Trusts for each Dutch province, Vogelbescherming, etc. – a long time before there were national parks and national nature reserves. This in part reflects the absence of wilderness areas in heavily cultivated Europe, as well as a longstanding interest in laissez-faire government in some countries, like the UK. It is therefore no coincidence that John Muir, the Scottish-born founder of the national park movement (and hence of government-sponsored conservation), did his sterling work in the US, where he was the driving force behind the establishment of such national parks as Yosemite and Yellowstone. Nowadays, officially more than 10 percent of the world is legally protected in some way or other, and in practice private fundraising is insufficient to pay for the effective management of so much land with protective status. Protected areas in developing countries, where probably as many as 70–80 percent of the species of the world live, still enjoy very little effective management and protection. Some countries, such as Mexico, have non-profit civil organizations and landowners dedicated to protecting vast private properties, as is the case with Hacienda Chichen's Maya Jungle Reserve and Bird Refuge in Chichen Itza, Yucatán. The Adopt A Ranger Foundation has calculated that worldwide about 140,000 rangers are needed for the protected areas in developing and transition countries. There are no data on how many rangers are employed at the moment, but probably fewer than half the protected areas in developing and transition countries have any rangers at all, and those that do are at least 50% short. This implies a worldwide deficit of roughly 105,000 rangers in developing and transition countries.
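The 105,000 figure is consistent with the cited estimates under one simple reading of them. The sketch below is an illustration of that arithmetic, not a calculation from the foundation itself; it assumes the 140,000-ranger requirement splits evenly between protected areas with no rangers and areas staffed at only half the needed level.

```python
# Illustrative arithmetic behind the ranger-deficit estimate (assumptions noted above).
rangers_needed = 140_000                      # cited requirement for developing/transition countries

unstaffed_requirement = rangers_needed * 0.5  # areas assumed to have no rangers at all
staffed_requirement = rangers_needed * 0.5    # areas assumed to have some rangers
staffed_deficit = staffed_requirement * 0.5   # those areas are taken to be 50% short

total_deficit = unstaffed_requirement + staffed_deficit
print(total_deficit)                          # 105000.0, matching the figure quoted in the text
```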
The terms conservation and preservation are frequently conflated outside the academic, scientific, and professional literature. The United States' National Park Service offers the following explanation of the important ways in which these two terms represent very different conceptions of environmental protection ethics: "During the environmental movement of the early 20th century, two opposing factions emerged: conservationists and preservationists. Conservationists sought to regulate human use while preservationists sought to eliminate human impact altogether." C. Anne Claus presents a further distinction among conservation practices, dividing conservation into conservation-far and conservation-near. Conservation-far protects nature by separating it from humans and safeguarding it, for example through the creation of preserves or national parks. These are meant to keep the flora and fauna away from human influence and have become a staple method in the West. Conservation-near, by contrast, is conservation through connection: reconnecting people to nature through traditions and beliefs in order to foster a desire to protect it. The idea is that rather than forcing people to comply with a separation from nature, conservationists work with locals and their traditions to find conservation efforts that work for all. Evidence-based conservation Evidence-based conservation is the application of evidence in conservation management actions and policy making. It is defined as systematically assessing scientific information from published, peer-reviewed publications and texts, practitioners' experiences, independent expert assessment, and local and indigenous knowledge on a specific conservation topic. This includes assessing the current effectiveness of different management interventions, threats and emerging problems, and economic factors. Evidence-based conservation arose from the observation that decision making in conservation was often based on intuition or practitioner experience, disregarding other forms of evidence of successes and failures (e.g. scientific information). This has led to costly and poor outcomes. Evidence-based conservation provides access to information that will support decision making through an evidence-based framework of "what works" in conservation. The evidence-based approach to conservation is based on evidence-based practice, which started in medicine and later spread to nursing, education, psychology, and other fields. It is part of the larger movement towards evidence-based practices. See also Conservation biology Conservation community, recent term for controlled-growth land use development Cryoconservation of animal genetic resources Dark green environmentalism Environmental history of the United States Environmental protection Forest conservation Geoconservation Index of environmental articles List of environmental issues List of environmental organizations Natural capital Natural environment Natural resource Relationship between animal ethics and environmental ethics Sustainable agriculture Trail ethics Water conservation Wildlife conservation 30 by 30 External links Protected Areas and Conservation at Our World in Data Dictionary of the History of Ideas: Conservation of Natural Resources For Future Generations, a Canadian documentary on how the conservation ethic influenced national parks
Metaphysics
Metaphysics is the branch of philosophy that examines the basic structure of reality. It is traditionally seen as the study of mind-independent features of the world, but some modern theorists view it as an inquiry into the fundamental categories of human understanding. It is sometimes characterized as first philosophy to suggest that it is more fundamental than other forms of philosophical inquiry. Metaphysics encompasses a wide range of general and abstract topics. It investigates the nature of existence, the features all entities have in common, and their division into categories of being. An influential division is between particulars and universals. Particulars are individual unique entities, like a specific apple. Universals are general repeatable entities that characterize particulars, like the color red. Modal metaphysics examines what it means for something to be possible or necessary. Metaphysicians also explore the concepts of space, time, and change, and their connection to causality and the laws of nature. Other topics include how mind and matter are related, whether everything in the world is predetermined, and whether there is free will. Metaphysicians use various methods to conduct their inquiry. Traditionally, they rely on rational intuitions and abstract reasoning but have more recently also included empirical approaches associated with scientific theories. Due to the abstract nature of its topic, metaphysics has received criticisms questioning the reliability of its methods and the meaningfulness of its theories. Metaphysics is relevant to many fields of inquiry that often implicitly rely on metaphysical concepts and assumptions. The roots of metaphysics lie in antiquity with speculations about the nature and origin of the universe, like those found in the Upanishads in ancient India, Daoism in ancient China, and pre-Socratic philosophy in ancient Greece. During the subsequent medieval period in the West, discussions about the nature of universals were influenced by the philosophies of Plato and Aristotle. The modern period saw the emergence of various comprehensive systems of metaphysics, many of which embraced idealism. In the 20th century, there was a "revolt against idealism"; metaphysics was for a time declared meaningless and was then revived with various criticisms of earlier theories and new approaches to metaphysical inquiry. Definition Metaphysics is the study of the most general features of reality, including existence, objects and their properties, possibility and necessity, space and time, change, causation, and the relation between matter and mind. It is one of the oldest branches of philosophy. The precise nature of metaphysics is disputed and its characterization has changed in the course of history. Some approaches see metaphysics as a unified field and give a wide-sweeping definition by understanding it as the study of "fundamental questions about the nature of reality" or as an inquiry into the essences of things. Another approach doubts that the different areas of metaphysics share a set of underlying features and provides instead a fine-grained characterization by listing all the main topics investigated by metaphysicians. Some definitions are descriptive by providing an account of what metaphysicians do while others are normative and prescribe what metaphysicians ought to do.
Two historically influential definitions in ancient and medieval philosophy understand metaphysics as the science of the first causes and as the study of being qua being, that is, the topic of what all beings have in common and to what fundamental categories they belong. In the modern period, the scope of metaphysics expanded to include topics such as the distinction between mind and body and free will. Some philosophers follow Aristotle in describing metaphysics as "first philosophy", suggesting that it is the most basic inquiry upon which all other branches of philosophy depend in some way. Metaphysics is traditionally understood as a study of mind-independent features of reality. Starting with Immanuel Kant's critical philosophy, an alternative conception gained prominence that focuses on conceptual schemes rather than external reality. Kant distinguishes transcendent metaphysics, which aims to describe the objective features of reality beyond sense experience, from critical metaphysics, which outlines the aspects and principles underlying all human thought and experience. Philosopher P. F. Strawson further explored the role of conceptual schemes, contrasting descriptive metaphysics, which articulates conceptual schemes commonly used to understand the world, with revisionary metaphysics, which aims to produce better conceptual schemes. Metaphysics differs from the individual sciences by studying the most general and abstract aspects of reality. The individual sciences, by contrast, examine more specific and concrete features and restrict themselves to certain classes of entities, such as the focus on physical things in physics, living entities in biology, and cultures in anthropology. It is disputed to what extent this contrast is a strict dichotomy rather than a gradual continuum. Etymology The word metaphysics has its origin in the ancient Greek words metá (μετά, meaning "after" or "beyond") and phusiká (φυσικά), as a short form of ta metá ta phusiká, meaning "what comes after physics". This is often interpreted to mean that metaphysics discusses topics that, due to their generality and comprehensiveness, lie beyond the realm of physics and its focus on empirical observation. Metaphysics got its name by a historical accident when Aristotle's book on this subject was published. Aristotle did not use the term metaphysics but his editor (likely Andronicus of Rhodes) may have coined it for its title to indicate that this book should be studied after Aristotle's book on physics: literally, "after physics". The term entered the English language through the Latin word metaphysica. Branches The nature of metaphysics can also be characterized in relation to its main branches. An influential division from early modern philosophy distinguishes between general and special or specific metaphysics. General metaphysics, also called ontology, takes the widest perspective and studies the most fundamental aspects of being. It investigates the features that all entities share and how entities can be divided into different categories. Categories are the most general kinds, such as substance, property, relation, and fact. Ontologists research which categories there are, how they depend on one another, and how they form a system of categories that provides a comprehensive classification of all entities. Special metaphysics considers being from narrower perspectives and is divided into subdisciplines based on the perspective they take.
Metaphysical cosmology examines changeable things and investigates how they are connected to form a world as a totality extending through space and time. Rational psychology focuses on metaphysical foundations and problems concerning the mind, such as its relation to matter and the freedom of the will. Natural theology studies the divine and its role as the first cause. The scope of special metaphysics overlaps with other philosophical disciplines, making it unclear whether a topic belongs to it or to areas like philosophy of mind and theology. Applied metaphysics is a relatively young subdiscipline. It belongs to applied philosophy and studies the applications of metaphysics, both within philosophy and other fields of inquiry. In areas like ethics and philosophy of religion, it addresses topics like the ontological foundations of moral claims and religious doctrines. Beyond philosophy, its applications include the use of ontologies in artificial intelligence, economics, and sociology to classify entities. In psychiatry and medicine, it examines the metaphysical status of diseases. Meta-metaphysics is the metatheory of metaphysics and investigates the nature and methods of metaphysics. It examines how metaphysics differs from other philosophical and scientific disciplines and assesses its relevance to them. Even though discussions of these topics have a long history in metaphysics, meta-metaphysics has only recently developed into a systematic field of inquiry. Topics Existence and categories of being Metaphysicians often regard existence or being as one of the most basic and general concepts. To exist means to form part of reality, distinguishing real entities from imaginary ones. According to the orthodox view, existence is a property of properties: to say that something exists is to say that the corresponding property is instantiated. A different position states that existence is a property of individuals, meaning that it is similar to other properties, such as shape or size. It is controversial whether all entities have this property. According to Alexius Meinong, there are nonexistent objects, including merely possible objects like Santa Claus and Pegasus. A related question is whether existence is the same for all entities or whether there are different modes or degrees of existence. For instance, Plato held that Platonic forms, which are perfect and immutable ideas, have a higher degree of existence than matter, which can only imperfectly reflect Platonic forms. Another key concern in metaphysics is the division of entities into distinct groups based on underlying features they share. Theories of categories provide a system of the most fundamental kinds or the highest genera of being by establishing a comprehensive inventory of everything. One of the earliest theories of categories was proposed by Aristotle, who outlined a system of 10 categories. He argued that substances (e.g. man and horse) are the most important category since all other categories, like quantity (e.g. four), quality (e.g. white), and place (e.g. in Athens), are said of substances and depend on them. Kant understood categories as fundamental principles underlying human understanding and developed a system of 12 categories, divided into the four classes quantity, quality, relation, and modality. More recent theories of categories were proposed by C. S. Peirce, Edmund Husserl, Samuel Alexander, Roderick Chisholm, and E. J. Lowe. Many philosophers rely on the contrast between concrete and abstract objects.
According to a common view, concrete objects, like rocks, trees, and human beings, exist in space and time, undergo changes, and impact each other as cause and effect, whereas abstract objects, like numbers and sets, exist outside space and time, are immutable, and do not engage in causal relations. Particulars Particulars are individual entities and include both concrete objects, like Aristotle, the Eiffel Tower, or a specific apple, and abstract objects, like the number 2 or a specific set in mathematics. Also called individuals, they are unique, non-repeatable entities and contrast with universals, like the color red, which can at the same time exist in several places and characterize several particulars. A widely held view is that particulars instantiate universals but are not themselves instantiated by something else, meaning that they exist in themselves while universals exist in something else. Substratum theory analyzes each particular as a substratum, also called bare particular, together with various properties. The substratum confers individuality to the particular while the properties express its qualitative features or what it is like. This approach is rejected by bundle theorists, who state that particulars are only bundles of properties without an underlying substratum. Some bundle theorists include in the bundle an individual essence, called haecceity, to ensure that each bundle is unique. Another proposal for concrete particulars is that they are individuated by their space-time location. Concrete particulars encountered in everyday life, like rocks, tables, and organisms, are complex entities composed of various parts. For example, a table is made up of a tabletop and legs, each of which is itself made up of countless particles. The relation between parts and wholes is studied by mereology. The problem of the many is about which groups of entities form mereological wholes, for instance, whether a dust particle on the tabletop is part of the table. According to mereological universalists, every collection of entities forms a whole, meaning that the parts of the table without the dust particle form one whole while they together with it form a second whole. Mereological moderatists hold that certain conditions must be met for a group of entities to compose a whole, for example, that the entities touch one another. Mereological nihilists reject the idea of wholes altogether, claiming that there are no tables and chairs but only particles that are arranged table-wise and chair-wise. A related mereological problem is whether there are simple entities that have no parts, as atomists claim, or not, as continuum theorists contend. Universals Universals are general entities, encompassing both properties and relations, that express what particulars are like and how they resemble one another. They are repeatable, meaning that they are not limited to a unique existent but can be instantiated by different particulars at the same time. For example, the particulars Nelson Mandela and Mahatma Gandhi instantiate the universal humanity, similar to how a strawberry and a ruby instantiate the universal red. A topic discussed since ancient philosophy, the problem of universals consists in the challenge of characterizing the ontological status of universals. Realists argue that universals are real, mind-independent entities that exist in addition to particulars. 
According to Platonic realists, universals exist independently of particulars, which implies that the universal red would continue to exist even if there were no red things. A more moderate form of realism, inspired by Aristotle, states that universals depend on particulars, meaning that they are only real if they are instantiated. Nominalists reject the idea that universals exist in either form. For them, the world is composed exclusively of particulars. Conceptualists offer an intermediate position, stating that universals exist, but only as concepts in the mind used to order experience by classifying entities. Natural and social kinds are often understood as special types of universals. Entities belonging to the same natural kind share certain fundamental features characteristic of the structure of the natural world. In this regard, natural kinds are not an artificially constructed classification but are discovered, usually by the natural sciences, and include kinds like electrons and tigers. Scientific realists and anti-realists disagree about whether natural kinds exist. Social kinds, like money and baseball, are studied by social metaphysics and characterized as useful social constructions that, while not purely fictional, do not reflect the fundamental structure of mind-independent reality. Possibility and necessity The concepts of possibility and necessity convey what can or must be the case, expressed in statements like "it is possible to find a cure for cancer" and "it is necessary that two plus two equals four". They belong to modal metaphysics, which investigates the metaphysical principles underlying them, in particular, why some modal statements are true while others are false. Some metaphysicians hold that modality is a fundamental aspect of reality, meaning that besides facts about what is the case, there are additional facts about what could or must be the case. A different view argues that modal truths are not about an independent aspect of reality but can be reduced to non-modal characteristics, for example, to facts about what properties or linguistic descriptions are compatible with each other or to fictional statements. Borrowing a term from German philosopher Gottfried Wilhelm Leibniz's theodicy, many metaphysicians use the concept of possible worlds to analyze the meaning and ontological ramifications of modal statements. A possible world is a complete and consistent way things could have been. For example, the dinosaurs were wiped out in the actual world but there are possible worlds in which they are still alive. According to possible world semantics, a statement is possibly true if it is true in at least one possible world, whereas it is necessarily true if it is true in all possible worlds. Modal realists argue that possible worlds exist as concrete entities in the same sense as the actual world, with the main difference being that the actual world is the world we live in while other possible worlds are inhabited by counterparts. This view is controversial and various alternatives have been suggested, for example, that possible worlds only exist as abstract objects or are similar to stories told in works of fiction.
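The definitions of possibility and necessity given above translate almost verbatim into code. The short Python sketch below is only an illustration (the worlds and propositions are invented for the example): it represents each possible world as an assignment of truth values to propositions and evaluates possibility as truth in at least one world and necessity as truth in every world.

```python
# Toy possible-world semantics: possible = true in some world, necessary = true in all worlds.
# The worlds and propositions are hypothetical examples, not taken from the text.

worlds = {
    "actual":      {"dinosaurs_alive": False, "two_plus_two_is_four": True},
    "alternative": {"dinosaurs_alive": True,  "two_plus_two_is_four": True},
}

def possibly(proposition: str) -> bool:
    """True if the proposition holds in at least one possible world."""
    return any(valuation[proposition] for valuation in worlds.values())

def necessarily(proposition: str) -> bool:
    """True if the proposition holds in every possible world."""
    return all(valuation[proposition] for valuation in worlds.values())

print(possibly("dinosaurs_alive"))          # True: one world keeps the dinosaurs alive
print(necessarily("dinosaurs_alive"))       # False: they are absent from the actual world
print(necessarily("two_plus_two_is_four"))  # True: holds in every world considered here
```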
Space, time, and change Space and time are dimensions that entities occupy. Spacetime realists state that space and time are fundamental aspects of reality and exist independently of the human mind. Spacetime idealists, by contrast, hold that space and time are constructs of the human mind, created to organize and make sense of reality. Spacetime absolutism or substantivalism understands spacetime as a distinct object, with some metaphysicians conceptualizing it as a container that holds all other entities within it. Spacetime relationism sees spacetime not as an object but as a network of relations between objects, such as the spatial relation of being next to and the temporal relation of coming before. In the metaphysics of time, an important contrast is between the A-series and the B-series. According to the A-series theory, the flow of time is real, meaning that events are categorized into the past, present, and future. The present continually moves forward in time and events that are in the present now will eventually change their status and lie in the past. From the perspective of the B-series theory, time is static, and events are ordered by the temporal relations earlier-than and later-than without any essential difference between past, present, and future. Eternalism holds that past, present, and future are equally real, whereas presentism asserts that only entities in the present exist. Material objects persist through time and change in the process, like a tree that grows or loses leaves. The main ways of conceptualizing persistence through time are endurantism and perdurantism. According to endurantism, material objects are three-dimensional entities that are wholly present at each moment. As they change, they gain or lose properties but otherwise remain the same. Perdurantists see material objects as four-dimensional entities that extend through time and are made up of different temporal parts. At each moment, only one part of the object is present, not the object as a whole. Change means that an earlier part is qualitatively different from a later part. For example, when a banana ripens, there is an unripe part followed by a ripe part. Causality Causality is the relation between cause and effect whereby one entity produces or affects another entity. For instance, if a person bumps a glass and spills its contents then the bump is the cause and the spill is the effect. Besides the single-case causation between particulars in this example, there is also general-case causation expressed in statements such as "smoking causes cancer". The term agent causation is used when people and their actions cause something. Causation is usually interpreted deterministically, meaning that a cause always brings about its effect. This view is rejected by probabilistic theories, which claim that the cause merely increases the probability that the effect occurs. This view can accommodate the claim that smoking causes cancer even though cancer does not develop in every single case. The regularity theory of causation, inspired by David Hume's philosophy, states that causation is nothing but a constant conjunction in which the mind apprehends that one phenomenon, like putting one's hand in a fire, is always followed by another phenomenon, like a feeling of pain. According to nomic regularity theories, regularities manifest as laws of nature studied by science. Counterfactual theories focus not on regularities but on how effects depend on their causes. They state that effects owe their existence to their causes and would not occur without them. According to primitivism, causation is a basic concept that cannot be analyzed in terms of non-causal concepts, such as regularities or dependence relations. One form of primitivism identifies causal powers inherent in entities as the underlying mechanism.
Eliminativists reject the above theories by holding that there is no causation. Mind and free will Mind encompasses phenomena like thinking, perceiving, feeling, and desiring as well as the underlying faculties responsible for these phenomena. The mind–body problem is the challenge of clarifying the relation between physical and mental phenomena. According to Cartesian dualism, minds and bodies are distinct substances. They causally interact with each other in various ways but can, at least in principle, exist on their own. This view is rejected by monists, who argue that reality is made up of only one kind. According to idealism, everything is mental, including physical objects, which may be understood as ideas or perceptions of conscious minds. Materialists, by contrast, state that all reality is at its core material. Some deny that mind exists but the more common approach is to explain mind in terms of certain aspects of matter, such as brain states, behavioral dispositions, or functional roles. Neutral monists argue that reality is fundamentally neither material nor mental and suggest that matter and mind are both derivative phenomena. A key aspect of the mind–body problem is the hard problem of consciousness, or how to explain that physical systems like brains can produce phenomenal consciousness. The status of free will as the ability of a person to choose their actions is a central aspect of the mind–body problem. Metaphysicians are interested in the relation between free will and causal determinism, the view that everything in the universe, including human behavior, is determined by preceding events and the laws of nature. It is controversial whether causal determinism is true, and, if so, whether this would imply that there is no free will. According to incompatibilism, free will cannot exist in a deterministic world since there is no true choice or control if everything is determined. Hard determinists infer from this that there is no free will, whereas libertarians conclude that determinism must be false. Compatibilists offer a third perspective, arguing that determinism and free will do not exclude each other, for instance, because a person can still act in tune with their motivation and choices even if they are determined by other forces. Free will plays a key role in ethics regarding the moral responsibility people have for what they do. Others Identity is a relation that every entity has to itself as a form of sameness. It refers to numerical identity when the very same entity is involved, as in the statement "the morning star is the evening star" (both are the planet Venus). In a slightly different sense, it encompasses qualitative identity, also called exact similarity and indiscernibility, which occurs when two distinct entities are exactly alike, such as perfect identical twins. The principle of the indiscernibility of identicals is widely accepted and holds that numerically identical entities exactly resemble one another. The converse principle, known as identity of indiscernibles or Leibniz's Law, is more controversial and states that two entities are numerically identical if they exactly resemble one another. Another distinction is between synchronic and diachronic identity. Synchronic identity relates an entity to itself at the same time, whereas diachronic identity is about the same entity at different times, as in statements like "the table I bought last year is the same as the table in my dining room now".
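Stated schematically in second-order notation, the two principles just described differ only in the direction of the implication; the rendering below is a standard textbook formulation rather than a formula taken from this text.

```latex
% Indiscernibility of identicals: identical entities share all their properties.
\forall x \, \forall y \, \bigl( x = y \rightarrow \forall F \, (Fx \leftrightarrow Fy) \bigr)

% Identity of indiscernibles: entities sharing all their properties are identical.
\forall x \, \forall y \, \bigl( \forall F \, (Fx \leftrightarrow Fy) \rightarrow x = y \bigr)
```

The first schema corresponds to the widely accepted principle, the second to the more controversial converse.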
Personal identity is a related topic in metaphysics that uses the term identity in a slightly different sense and concerns questions like what personhood is or what makes someone a person. Various contemporary metaphysicians rely on the concepts of truth, truth-bearer, and truthmaker to conduct their inquiry. Truth is a property of being in accord with reality. Truth-bearers are entities that can be true or false, such as linguistic statements and mental representations. A truthmaker of a statement is the entity whose existence makes the statement true. For example, the statement "a tomato is red" is true because there exists a red tomato as its truthmaker. Based on this observation, it is possible to pursue metaphysical research by asking what the truthmakers of statements are, with different areas of metaphysics being dedicated to different types of statements. According to this view, modal metaphysics asks what makes statements about what is possible and necessary true while the metaphysics of time is interested in the truthmakers of temporal statements about the past, present, and future. Methodology Metaphysicians employ a variety of methods to develop metaphysical theories and formulate arguments for and against them. Traditionally, a priori methods have been the dominant approach. They rely on rational intuition and abstract reasoning from general principles rather than sensory experience. A posteriori approaches, by contrast, ground metaphysical theories in empirical observations and scientific theories. Some metaphysicians incorporate perspectives from fields such as physics, psychology, linguistics, and history into their inquiry. The two approaches are not mutually exclusive: it is possible to combine elements from both. The method a metaphysician chooses often depends on their understanding of the nature of metaphysics, for example, whether they see it as an inquiry into the mind-independent structure of reality, as metaphysical realists claim, or the principles underlying thought and experience, as some metaphysical anti-realists contend. A priori approaches often rely on intuitions, that is, non-inferential impressions about the correctness of specific claims or general principles. For example, arguments for the A-theory of time, which states that time flows from the past through the present and into the future, often rely on pre-theoretical intuitions associated with the sense of the passage of time. Some approaches use intuitions to establish a small set of self-evident fundamental principles, known as axioms, and employ deductive reasoning to build complex metaphysical systems by drawing conclusions from these axioms. Intuition-based approaches can be combined with thought experiments, which help evoke and clarify intuitions by linking them to imagined situations. They use counterfactual thinking to assess the possible consequences of these situations. For example, to explore the relation between matter and consciousness, some theorists compare humans to philosophical zombies, hypothetical creatures identical to humans but without conscious experience. A related method relies on commonly accepted beliefs instead of intuitions to formulate arguments and theories. The common-sense approach is often used to criticize metaphysical theories that deviate significantly from how the average person thinks about an issue. For example, common-sense philosophers have argued that mereological nihilism is false since it implies that commonly accepted things, like tables, do not exist.
Conceptual analysis, a method particularly prominent in analytic philosophy, aims to decompose metaphysical concepts into component parts to clarify their meaning and identify essential relations. In phenomenology, the method of eidetic variation is used to investigate essential structures underlying phenomena. This method involves imagining an object and varying its features to determine which ones are essential and cannot be changed. The transcendental method is a further approach and examines the metaphysical structure of reality by observing what entities there are and studying the conditions of possibility without which these entities could not exist. Some approaches give less importance to a priori reasoning and view metaphysics as a practice continuous with the empirical sciences that generalizes their insights while making their underlying assumptions explicit. This approach is known as naturalized metaphysics and is closely associated with the work of Willard Van Orman Quine. He relies on the idea that true sentences from the sciences and other fields have ontological commitments, that is, they imply that certain entities exist. For example, if the sentence "some electrons are bonded to protons" is true then it can be used to justify that electrons and protons exist. Quine used this insight to argue that one can learn about metaphysics by closely analyzing scientific claims to understand what kind of metaphysical picture of the world they presuppose. In addition to methods of conducting metaphysical inquiry, there are various methodological principles used to decide between competing theories by comparing their theoretical virtues. Ockham's Razor is a well-known principle that gives preference to simple theories, in particular, those that assume that few entities exist. Other principles consider explanatory power, theoretical usefulness, and proximity to established beliefs. Criticism Despite its status as one of the main branches of philosophy, metaphysics has received numerous criticisms questioning its legitimacy as a field of inquiry. One criticism argues that metaphysical inquiry is impossible because humans lack the cognitive capacities needed to access the ultimate nature of reality. This line of thought leads to skepticism about the possibility of metaphysical knowledge. Empiricists often follow this idea, like Hume, who argued that there is no good source of metaphysical knowledge since metaphysics lies outside the field of empirical knowledge and relies on dubious intuitions about the realm beyond sensory experience. A related argument favoring the unreliability of metaphysical theorizing points to the deep and lasting disagreements about metaphysical issues, suggesting a lack of overall progress. Another criticism holds that the problem lies not with human cognitive abilities but with metaphysical statements themselves, which some claim are neither true nor false but meaningless. According to logical positivists, for instance, the meaning of a statement is given by the procedure used to verify it, usually through the observations that would confirm it. Based on this controversial assumption, they argue that metaphysical statements are meaningless since they make no testable predictions about experience. A slightly weaker position allows metaphysical statements to have meaning while holding that metaphysical disagreements are merely verbal disputes about different ways to describe the world. 
According to this view, the disagreement in the metaphysics of composition about whether there are tables or only particles arranged table-wise is a trivial debate about linguistic preferences without any substantive consequences for the nature of reality. The position that metaphysical disputes have no meaning or no significant point is called metaphysical or ontological deflationism. This view is opposed by so-called serious metaphysicians, who contend that metaphysical disputes are about substantial features of the underlying structure of reality. A closely related debate between ontological realists and anti-realists concerns the question of whether there are any objective facts that determine which metaphysical theories are true. A different criticism, formulated by pragmatists, sees the fault of metaphysics not in its cognitive ambitions or the meaninglessness of its statements, but in its practical irrelevance and lack of usefulness. Martin Heidegger criticized traditional metaphysics, saying that it fails to distinguish between individual entities and being as their ontological ground. His attempt to reveal the underlying assumptions and limitations in the history of metaphysics to "overcome metaphysics" influenced Jacques Derrida's method of deconstruction. Derrida employed this approach to criticize metaphysical texts for relying on opposing terms, like presence and absence, which he thought were inherently unstable and contradictory. There is no consensus about the validity of these criticisms and whether they affect metaphysics as a whole or only certain issues or approaches in it. For example, it could be the case that certain metaphysical disputes are merely verbal while others are substantive. Relation to other disciplines Metaphysics is related to many fields of inquiry by investigating their basic concepts and relation to the fundamental structure of reality. For example, the natural sciences rely on concepts such as law of nature, causation, necessity, and spacetime to formulate their theories and predict or explain the outcomes of experiments. While scientists primarily focus on applying these concepts to specific situations, metaphysics examines their general nature and how they depend on each other. For instance, physicists formulate laws of nature, like laws of gravitation and thermodynamics, to describe how physical systems behave under various conditions. Metaphysicians, by contrast, examine what all laws of nature have in common, asking whether they merely describe contingent regularities or express necessary relations. New scientific discoveries have also influenced existing and inspired new metaphysical theories. Einstein's theory of relativity, for instance, prompted various metaphysicians to conceive space and time as a unified dimension rather than as independent dimensions. Empirically focused metaphysicians often rely on scientific theories to ground their theories about the nature of reality in empirical observations. Similar issues arise in the social sciences where metaphysicians investigate their basic concepts and analyze their metaphysical implications. This includes questions like whether social facts emerge from non-social facts, whether social groups and institutions have mind-independent existence, and how they persist through time. 
Metaphysical assumptions and topics in psychology and psychiatry include questions about the relation between body and mind, whether the nature of the human mind is historically fixed, and what the metaphysical status of diseases is. Metaphysics is similar to both physical cosmology and theology in its exploration of the first causes and the universe as a whole. Key differences are that metaphysics relies on rational inquiry while physical cosmology gives more weight to empirical observations and theology incorporates divine revelation and other faith-based doctrines. Historically, cosmology and theology were considered subfields of metaphysics. Metaphysics, in the form of ontology, plays a central role in computer science, where ontologies are used to classify objects and formally represent information about them. Unlike metaphysicians, computer scientists are usually not interested in providing a single all-encompassing characterization of reality as a whole. Instead, they employ many different ontologies, each one concerned only with a limited domain of entities. For instance, an organization may use an ontology with categories such as person, company, address, and name to represent information about clients and employees. Ontologies provide standards or conceptualizations for encoding and storing information in a structured way, enabling computational processes to use and transform their information for a variety of purposes. Some knowledge bases integrate information from various domains, which brings with it the challenge of handling data that was formulated using diverse ontologies. They address this by providing an upper ontology that defines concepts at a higher level of abstraction, applicable to all domains. Influential upper ontologies include Suggested Upper Merged Ontology and Basic Formal Ontology. Logic as the study of correct reasoning is often used by metaphysicians as a tool to engage in their inquiry and express insights through precise logical formulas. Another relation between the two fields concerns the metaphysical assumptions associated with logical systems. Many logical systems like first-order logic rely on existential quantifiers to express existential statements. For instance, in the logical formula ∃x Horse(x), the existential quantifier ∃ is applied to the predicate Horse to express that there are horses. Following Quine, various metaphysicians assume that existential quantifiers carry ontological commitments, meaning that existential statements imply that the entities over which one quantifies are part of reality.
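As a concrete, if simplified, illustration of the kind of domain ontology described above, the sketch below encodes the example categories person, company, and address as a small Python class hierarchy under a generic upper-level Entity type. The class and attribute names are hypothetical and chosen only to mirror the example; they do not correspond to any published ontology.

```python
# A miniature domain ontology in the spirit of the client/employee example above.
# Categories and attributes are illustrative only, not a standard upper ontology.
from dataclasses import dataclass, field

@dataclass
class Entity:                 # upper-level category shared by every domain
    name: str

@dataclass
class Address(Entity):        # category for locations referenced by other categories
    city: str

@dataclass
class Person(Entity):         # domain category: clients, employees, ...
    home: Address

@dataclass
class Company(Entity):        # domain category: organizations
    headquarters: Address
    employees: list[Person] = field(default_factory=list)

office = Address(name="Main office", city="Springfield")
alice = Person(name="Alice", home=office)
acme = Company(name="Acme", headquarters=office, employees=[alice])
print(acme.employees[0].home.city)   # structured queries follow the category relations
```

Sharing the Entity layer across otherwise independent domain hierarchies plays the role the text assigns to an upper ontology.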
History The history of metaphysics examines how the inquiry into the basic structure of reality has evolved over time. Metaphysics originated in the ancient period from speculations about the nature and origin of the cosmos. In ancient India, starting in the 7th century BCE, the Upanishads were written as religious and philosophical texts that examine how ultimate reality constitutes the ground of all being. They further explore the nature of the self and how it can reach liberation by understanding ultimate reality. This period also saw the emergence of Buddhism in the 6th century BCE, which denies the existence of an independent self and understands the world as a cyclic process. At about the same time in ancient China, the school of Daoism was formed and explored the natural order of the universe, known as Dao, and how it is characterized by the interplay of yin and yang as two correlated forces. In ancient Greece, metaphysics emerged in the 6th century BCE with the pre-Socratic philosophers, who gave rational explanations of the cosmos as a whole by examining the first principles from which everything arises. Building on their work, Plato (427–347 BCE) formulated his theory of forms, which states that eternal forms or ideas possess the highest kind of reality while the material world is only an imperfect reflection of them. Aristotle (384–322 BCE) accepted Plato's idea that there are universal forms but held that they cannot exist on their own and instead depend on matter. He also proposed a system of categories and developed a comprehensive framework of the natural world through his theory of the four causes. Starting in the 4th century BCE, Hellenistic philosophy explored the rational order underlying the cosmos and the idea that it is made up of indivisible atoms. Neoplatonism emerged towards the end of the ancient period in the 3rd century CE and introduced the idea of "the One" as the transcendent and ineffable source of all creation. Meanwhile, in Indian Buddhism, the Madhyamaka school developed the idea that all phenomena are inherently empty, lacking a permanent essence. The consciousness-only doctrine of the Yogācāra school stated that experienced objects are mere transformations of consciousness and do not reflect external reality. The Hindu school of Samkhya philosophy introduced a metaphysical dualism with pure consciousness and matter as its fundamental categories. In China, the school of Xuanxue explored metaphysical problems such as the contrast between being and non-being. Medieval Western philosophy was profoundly shaped by ancient Greek philosophy. Boethius (477–524 CE) sought to reconcile Plato's and Aristotle's theories of universals, proposing that universals can exist both in matter and mind. His theory inspired the development of nominalism and conceptualism, as in the thought of Peter Abelard (1079–1142 CE). Thomas Aquinas (1224–1274 CE) understood metaphysics as the discipline investigating different meanings of being, such as the contrast between substance and accident, and principles applying to all beings, such as the principle of identity. William of Ockham (1285–1347 CE) proposed Ockham's razor, a methodological principle for choosing between competing metaphysical theories. Arabic–Persian philosophy flourished from the early 9th century CE to the late 12th century CE, integrating ancient Greek philosophies to interpret and clarify the teachings of the Quran. Avicenna (980–1037 CE) developed a comprehensive philosophical system that examined the contrast between existence and essence and distinguished between contingent and necessary existence. Medieval India saw the emergence of the monist school of Advaita Vedanta in the 8th century CE, which holds that everything is one and that the idea of many entities existing independently is an illusion. In China, Neo-Confucianism arose in the 9th century CE and explored the concept of li as the rational principle that is the ground of being and reflects the order of the universe. In the early modern period, René Descartes (1596–1650) developed a substance dualism according to which body and mind exist as independent entities that causally interact. This idea was rejected by Baruch Spinoza (1632–1677), who formulated a monist philosophy suggesting that there is only one substance with both physical and mental attributes that develop side-by-side without interacting.
Gottfried Wilhelm Leibniz (1646–1716) introduced the concept of possible worlds and articulated a metaphysical system known as monadology, which views the universe as a collection of simple substances synchronized without causal interaction. Christian Wolff (1679–1754) conceptualized the scope of metaphysics by distinguishing between general and special metaphysics. According to the idealism of George Berkeley (1685–1753), everything is mental, including material objects, which are ideas perceived by the mind. David Hume (1711–1776) made various contributions to metaphysics, including the regularity theory of causation and the idea that there are no necessary connections between distinct entities. His empiricist outlook led him to criticize metaphysical theories that seek ultimate principles inaccessible to sensory experience. This skeptical outlook was embraced by Immanuel Kant (1724–1804), who tried to reconceptualize metaphysics as an inquiry into the basic principles and categories of thought and understanding rather than seeing it as an attempt to comprehend mind-independent reality. Many developments in the later modern period were shaped by Kant's philosophy. German idealists adopted his idealistic outlook in their attempt to find a unifying principle as the foundation of all reality. Georg Wilhelm Friedrich Hegel (1770–1831) developed a comprehensive system of philosophy that examines how absolute spirit manifests itself. He inspired the British idealism of Francis Herbert Bradley (1846–1924), who interpreted absolute spirit as the all-inclusive totality of being. Arthur Schopenhauer (1788–1860) was a strong critic of German idealism and articulated a different metaphysical vision, positing a blind and irrational will as the underlying principle of reality. Pragmatists like C. S. Peirce (1839–1914) and John Dewey (1859–1952) conceived metaphysics as an observational science of the most general features of reality and experience. At the turn of the 20th century in analytic philosophy, philosophers such as Bertrand Russell (1872–1970) and G. E. Moore (1873–1958) led a "revolt against idealism". Logical atomists, like Russell and the early Ludwig Wittgenstein (1889–1951), conceived the world as a multitude of atomic facts, which later inspired metaphysicians such as D. M. Armstrong (1926–2014). Alfred North Whitehead (1861–1947) developed process metaphysics as an attempt to provide a holistic description of both the objective and the subjective realms. Rudolf Carnap (1891–1970) and other logical positivists formulated a wide-ranging criticism of metaphysical statements, arguing that they are meaningless because there is no way to verify them. Other criticisms of traditional metaphysics identified misunderstandings of ordinary language as the source of many traditional metaphysical problems or challenged complex metaphysical deductions by appealing to common sense. The decline of logical positivism led to a revival of metaphysical theorizing. Willard Van Orman Quine (1908–2000) tried to naturalize metaphysics by connecting it to the empirical sciences. His student David Lewis (1941–2001) employed the concept of possible worlds to formulate his modal realism. Saul Kripke (1940–2022) helped revive discussions of identity and essentialism, distinguishing necessity as a metaphysical notion from the epistemic notion of the a priori.
In continental philosophy, Edmund Husserl (1859–1938) engaged in ontology through a phenomenological description of experience, while his student Martin Heidegger (1889–1976) developed fundamental ontology to clarify the meaning of being. Heidegger's philosophy inspired general criticisms of metaphysics by postmodern thinkers like Jacques Derrida (1930–2004). Gilles Deleuze's (1925–1995) approach to metaphysics challenged traditionally influential concepts like substance, essence, and identity by reconceptualizing the field through alternative notions such as multiplicity, event, and difference. See also Computational metaphysics Doctor of Metaphysics Enrico Berti's classification of metaphysics Feminist metaphysics Fundamental question of metaphysics List of metaphysicians Metaphysical grounding External links Metaphysics at Encyclopædia Britannica
Ecological economics
Ecological economics, bioeconomics, ecolonomy, eco-economics, or ecol-econ is both a transdisciplinary and an interdisciplinary field of academic research addressing the interdependence and coevolution of human economies and natural ecosystems, both intertemporally and spatially. By treating the economy as a subsystem of Earth's larger ecosystem, and by emphasizing the preservation of natural capital, the field of ecological economics is differentiated from environmental economics, which is the mainstream economic analysis of the environment. One survey of German economists found that ecological and environmental economics are different schools of economic thought, with ecological economists emphasizing strong sustainability and rejecting the proposition that physical (human-made) capital can substitute for natural capital (see the section on weak versus strong sustainability below). Ecological economics was founded in the 1980s as a modern discipline on the works of and interactions between various European and American academics (see the section on History and development below). The related field of green economics is in general a more politically applied form of the subject. Ecological economics has been defined by its focus on nature, justice, and time. Issues of intergenerational equity, irreversibility of environmental change, uncertainty of long-term outcomes, and sustainable development guide ecological economic analysis and valuation. Ecological economists have questioned fundamental mainstream economic approaches such as cost-benefit analysis, and the separability of economic values from scientific research, contending that economics is unavoidably normative, i.e. prescriptive, rather than positive or descriptive. Positional analysis, which attempts to incorporate time and justice issues, is proposed as an alternative. Ecological economics shares several of its perspectives with feminist economics, including the focus on sustainability, nature, justice and care values. Karl Marx also commented on the relationship between capital and ecology, a concern now developed in ecosocialism. History and development The antecedents of ecological economics can be traced back to the Romantics of the 19th century as well as some Enlightenment political economists of that era. Concerns over population were expressed by Thomas Malthus, while John Stuart Mill predicted the desirability of the stationary state of an economy. Mill thereby anticipated later insights of modern ecological economists, but without having had their experience of the social and ecological costs of the Post–World War II economic expansion. In 1880, Marxian economist Sergei Podolinsky attempted to theorize a labor theory of value based on embodied energy; his work was read and critiqued by Marx and Engels. Otto Neurath developed an ecological approach based on a natural economy whilst employed by the Bavarian Soviet Republic in 1919. He argued that a market system failed to take into account the needs of future generations, and that a socialist economy required calculation in kind, the tracking of all the different materials, rather than synthesising them into money as a general equivalent. In this he was criticised by neo-liberal economists such as Ludwig von Mises and Friedrich Hayek in what became known as the socialist calculation debate. The debate on energy in economic systems can also be traced back to Nobel prize-winning radiochemist Frederick Soddy (1877–1956).
In his book Wealth, Virtual Wealth and Debt (1926), Soddy criticized the prevailing view of the economy as a perpetual motion machine, capable of generating infinite wealth – a criticism expanded upon by later ecological economists such as Nicholas Georgescu-Roegen and Herman Daly. European predecessors of ecological economics include K. William Kapp (1950), Karl Polanyi (1944), and Romanian economist Nicholas Georgescu-Roegen (1971). Georgescu-Roegen, who would later mentor Herman Daly at Vanderbilt University, provided ecological economics with a modern conceptual framework based on the material and energy flows of economic production and consumption. His magnum opus, The Entropy Law and the Economic Process (1971), is credited by Daly as a fundamental text of the field, alongside Soddy's Wealth, Virtual Wealth and Debt. Some key concepts of what is now ecological economics are evident in the writings of Kenneth Boulding and E.F. Schumacher, whose book Small Is Beautiful – A Study of Economics as if People Mattered (1973) was published just a few years before the first edition of Herman Daly's comprehensive and persuasive Steady-State Economics (1977). The first organized meetings of ecological economists occurred in the 1980s. These began in 1982, at the instigation of Lois Banner, with a meeting held in Sweden (including Robert Costanza, Herman Daly, Charles Hall, Bruce Hannon, H.T. Odum, and David Pimentel). Most were ecosystem ecologists or mainstream environmental economists, with the exception of Daly. In 1987, Daly and Costanza edited an issue of Ecological Modelling to test the waters. A book entitled Ecological Economics, by Joan Martinez Alier, was published later that year. Alier renewed interest in the approach developed by Otto Neurath during the interwar period. The year 1989 saw the foundation of the International Society for Ecological Economics and publication of its journal, Ecological Economics, by Elsevier. Robert Costanza was the first president of the society and first editor of the journal, which is currently edited by Richard Howarth. Other figures include ecologists C.S. Holling and H.T. Odum, biologist Gretchen Daily, and physicist Robert Ayres. In the Marxian tradition, sociologist John Bellamy Foster and CUNY geography professor David Harvey explicitly center ecological concerns in political economy. Articles by Inge Ropke (2004, 2005) and Clive Spash (1999) cover the development and modern history of ecological economics and explain its differentiation from resource and environmental economics, as well as some of the controversy between American and European schools of thought. An article by Robert Costanza, David Stern, Lining He, and Chunbo Ma responded to a call by Mick Common to determine the foundational literature of ecological economics by using citation analysis to examine which books and articles have had the most influence on the development of the field. However, citation analysis has itself proven controversial, and similar work has been criticized by Clive Spash for attempting to pre-determine what is regarded as influential in ecological economics through study design and data manipulation. In addition, the journal Ecological Economics has itself been criticized for swamping the field with mainstream economics. Schools of thought Various competing schools of thought exist in the field. Some are close to resource and environmental economics while others are far more heterodox in outlook.
An example of the latter is the European Society for Ecological Economics. An example of the former is the Swedish Beijer International Institute of Ecological Economics. Clive Spash has argued for the classification of the ecological economics movement, and more generally work by different economic schools on the environment, into three main categories. These are the mainstream new resource economists, the new environmental pragmatists, and the more radical social ecological economists. International survey work comparing the relevance of the categories for mainstream and heterodox economists shows some clear divisions between environmental and ecological economists. A growing field of radical social-ecological theory is degrowth economics. Degrowth addresses both biophysical limits and global inequality while rejecting neoliberal economics. Degrowth prioritizes grassroots initiatives in pursuit of progressive socio-ecological goals, adhering to ecological limits by shrinking the human ecological footprint (see Differences from mainstream economics below). It involves an equitable downscaling of both production and consumption of resources in order to adhere to biophysical limits. Degrowth draws on Marxian economics, viewing the growth of ever more "efficient" systems as a source of the alienation of nature and humanity. Economic movements like degrowth reject the idea of growth itself. Some degrowth theorists call for an "exit of the economy". Critics of the degrowth movement include new resource economists, who point to the gaining momentum of sustainable development. These economists highlight the positive aspects of a green economy, which include equitable access to renewable energy and a commitment to eradicate global inequality through sustainable development (see Green economics below). Examples of heterodox ecological economic experiments include the Catalan Integral Cooperative and the Solidarity Economy Networks in Italy. Both of these grassroots movements use communitarian-based economies and consciously reduce their ecological footprint by limiting material growth and adapting to regenerative agriculture. Non-traditional approaches to ecological economics Cultural and heterodox applications of economic interaction around the world have begun to be included as ecological economic practices. E.F. Schumacher introduced examples of non-western economic ideas to mainstream thought in his book Small is Beautiful, where he addresses neoliberal economics through the lens of natural harmony in Buddhist economics. This emphasis on natural harmony is witnessed in diverse cultures across the globe. Buen Vivir is a traditional socio-economic movement in South America that rejects the western development model of economics. Meaning "good life", Buen Vivir emphasizes harmony with nature, cultural plurality, coexistence, and the inseparability of nature and the material. Value is not attributed to material accumulation; it instead takes a more spiritual and communitarian approach to economic activity. Ecological Swaraj originated in India and is an evolving world view of human interactions within the ecosystem. This train of thought respects biophysical limits and non-human species, pursuing equity and social justice through direct democracy and grassroots leadership. Social well-being is paired with spiritual, physical, and material well-being. These movements are unique to their regions, but their values can be seen across the globe in indigenous traditions, such as the Ubuntu philosophy in South Africa.
Differences from mainstream economics Ecological economics differs from mainstream economics in that it heavily reflects on the ecological footprint of human interactions in the economy. This footprint is measured by the impact of human activities on natural resources and the waste generated in the process. Ecological economists aim to minimize the ecological footprint, taking into account the scarcity of global and regional resources and their accessibility to an economy. Some ecological economists prioritise adding natural capital to the typical capital asset analysis of land, labor, and financial capital. These ecological economists use tools from mathematical economics, as in mainstream economics, but may apply them more closely to the natural world. Whereas mainstream economists tend to be technological optimists, ecological economists are inclined to be technological sceptics. They reason that the natural world has a limited carrying capacity and that its resources may run out. Since destruction of important environmental resources could be practically irreversible and catastrophic, ecological economists are inclined to justify cautionary measures based on the precautionary principle. As ecological economists try to minimize these potential disasters, calculating the fallout of environmental destruction becomes a humanitarian issue as well. Already, the Global South has seen trends of mass migration due to environmental changes. Climate refugees from the Global South are adversely affected by changes in the environment, and some scholars point to global wealth inequality within the current neoliberal economic system as a source of this issue. The most cogent example of how the different theories treat similar assets is tropical rainforest ecosystems, most obviously the Yasuni region of Ecuador. While this area has substantial deposits of bitumen, it is also one of the most diverse ecosystems on Earth, and some estimates suggest it holds over 200 undiscovered medical substances in its genomes – most of which would be destroyed by logging the forest or mining the bitumen. Effectively, the instructional capital of the genomes is undervalued by analyses that view the rainforest primarily as a source of wood, oil/tar and perhaps food. Increasingly the carbon credit for leaving the extremely carbon-intensive ("dirty") bitumen in the ground is also valued – the government of Ecuador set a price of US$350M for an oil lease with the intent of selling it to someone committed to never exercising it at all and instead preserving the rainforest. While this natural capital and ecosystem services approach has proven popular amongst many, it has also been contested as failing to address the underlying problems with mainstream economics, growth, market capitalism and monetary valuation of the environment. Critiques concern the need to create a more meaningful relationship with Nature and the non-human world than is evident in the instrumentalism of shallow ecology and the environmental economists' commodification of everything external to the market system. Nature and ecology A simple circular flow of income diagram is replaced in ecological economics by a more complex flow diagram reflecting the input of solar energy, which sustains natural inputs and environmental services which are then used as units of production. Once consumed, natural inputs pass out of the economy as pollution and waste.
The potential of an environment to provide services and materials is referred to as an "environment's source function", and this function is depleted as resources are consumed or pollution contaminates the resources. The "sink function" describes an environment's ability to absorb and render harmless waste and pollution: when waste output exceeds the limit of the sink function, long-term damage occurs. Some persistent pollutants, such as persistent organic pollutants and nuclear waste, are absorbed very slowly or not at all; ecological economists emphasize minimizing "cumulative pollutants". Pollutants affect human health and the health of the ecosystem. The economic value of natural capital and ecosystem services is accepted by mainstream environmental economics, but is emphasized as especially important in ecological economics. Ecological economists may begin by estimating how to maintain a stable environment before assessing the cost in dollar terms. Ecological economist Robert Costanza led an attempted valuation of the global ecosystem in 1997. Initially published in Nature, the article arrived at a value of $33 trillion, with a range from $16 trillion to $54 trillion (in 1997, total global GDP was $27 trillion). Half of the value went to nutrient cycling. The open oceans, continental shelves, and estuaries had the highest total value, and the highest per-hectare values went to estuaries, swamps/floodplains, and seagrass/algae beds. The work was criticized by articles in Ecological Economics Volume 25, Issue 1, but the critics acknowledged the positive potential for economic valuation of the global ecosystem. The Earth's carrying capacity is a central issue in ecological economics. Early economists such as Thomas Malthus pointed out the finite carrying capacity of the earth, which was also central to the MIT study Limits to Growth. Diminishing returns suggest that productivity increases will slow if major technological progress is not made. Food production may become a problem, as erosion, an impending water crisis, and soil salinity (from irrigation) reduce the productivity of agriculture. Ecological economists argue that industrial agriculture, which exacerbates these problems, is not sustainable agriculture, and are generally favorably inclined toward organic farming, which also reduces the output of carbon. Global wild fisheries are believed to have peaked and begun a decline, with valuable habitat such as estuaries in critical condition. The aquaculture or farming of piscivorous fish, like salmon, does not help solve the problem because they need to be fed products from other fish. Studies have shown that salmon farming has major negative impacts on wild salmon, as well as on the forage fish that need to be caught to feed them. Since animals are higher on the trophic level, they are less efficient sources of food energy. Reduced consumption of meat would reduce the demand for food, but as nations develop, they tend to adopt high-meat diets similar to that of the United States. Genetically modified food (GMF), a conventional solution to the problem, presents numerous problems – Bt corn produces its own Bacillus thuringiensis toxin/protein, but the emergence of pest resistance is believed to be only a matter of time. Global warming is now widely acknowledged as a major issue, with all national scientific academies expressing agreement on its importance. As population growth intensifies and energy demand increases, the world faces an energy crisis.
Some economists and scientists forecast a global ecological crisis if energy use is not contained – the Stern report is an example. The disagreement has sparked a vigorous debate on the issues of discounting and intergenerational equity. Ethics Mainstream economics has attempted to become a value-free 'hard science', but ecological economists argue that value-free economics is generally not realistic. Ecological economics is more willing to entertain alternative conceptions of utility, efficiency, and cost-benefits such as positional analysis or multi-criteria analysis. Ecological economics is typically viewed as economics for sustainable development, and may have goals similar to green politics. Green economics In international, regional, and national policy circles, the concept of the green economy grew in popularity, at first as a response to the financial crisis and then as a vehicle for growth and development. The United Nations Environment Programme (UNEP) defines a 'green economy' as one that focuses on human well-being and natural influences and on an economic order that can generate high-salary jobs. In 2011, the definition was developed further so that 'green' refers to an economy that is not only resourceful and well-organized but also equitable, guaranteeing a fair shift to an economy that is low-carbon, resource-efficient, and socially inclusive. The ideas and studies regarding the green economy denote a fundamental shift towards more effective, resourceful, environment-friendly and resource-saving technologies that could lessen emissions and alleviate the adverse consequences of climate change, while at the same time confronting issues of resource exhaustion and grave environmental degradation. As an indispensable requirement and vital precondition to realizing sustainable development, adherents of the green economy robustly promote good governance. To boost local investment and foreign ventures, it is crucial to have a stable and predictable macroeconomic environment. Likewise, such an environment will also need to be transparent and accountable. In the absence of a substantial and solid governance structure, the prospect of shifting towards a sustainable development route would be slim. In achieving a green economy, competent institutions and governance systems are vital in guaranteeing the efficient execution of strategies, guidelines, campaigns, and programmes. Shifting to a green economy demands a fresh mindset and an innovative outlook on doing business. It likewise necessitates new capacities and skill sets among workers and professionals who can function competently across sectors and work as effective members of multi-disciplinary teams. To achieve this goal, vocational training packages must be developed with a focus on greening the sectors. Simultaneously, the educational system needs to be assessed as well in order to incorporate the environmental and social considerations of various disciplines. Topics Among the topics addressed by ecological economics are methodology, allocation of resources, weak versus strong sustainability, energy economics, energy accounting and balance, environmental services, cost shifting, modeling, and monetary policy. Methodology A primary objective of ecological economics (EE) is to ground economic thinking and practice in physical reality, especially in the laws of physics (particularly the laws of thermodynamics) and in knowledge of biological systems.
It accepts as a goal the improvement of human well-being through development, and seeks to ensure achievement of this through planning for the sustainable development of ecosystems and societies. Of course, the terms development and sustainable development are themselves far from uncontroversial. In his book Development Betrayed, Richard B. Norgaard argues that traditional economics has hijacked the terminology of development. Well-being in ecological economics is also differentiated from welfare as found in mainstream economics and the 'new welfare economics' from the 1930s which informs resource and environmental economics. This entails a limited, preference-utilitarian conception of value, i.e., Nature is valuable to our economies because people will pay for its services, such as clean air, clean water, and encounters with wilderness. Ecological economics is distinguishable from neoclassical economics primarily by its assertion that the economy is embedded within an environmental system. Ecology deals with the energy and matter transactions of life and the Earth, and the human economy is by definition contained within this system. Ecological economists argue that neoclassical economics has ignored the environment, at best considering it to be a subset of the human economy. The neoclassical view ignores much of what the natural sciences have taught us about the contributions of nature to the creation of wealth, e.g., the planetary endowment of scarce matter and energy, along with the complex and biologically diverse ecosystems that provide goods and ecosystem services directly to human communities: micro- and macro-climate regulation, water recycling, water purification, storm water regulation, waste absorption, food and medicine production, pollination, protection from solar and cosmic radiation, the view of a starry night sky, etc. There has then been a move to regard such things as natural capital and ecosystem functions as goods and services. However, this is far from uncontroversial within ecology or ecological economics due to the potential for narrowing down values to those found in mainstream economics and the danger of merely regarding Nature as a commodity. This has been referred to as ecologists 'selling out on Nature'. There is then a concern that ecological economics has failed to learn from the extensive literature in environmental ethics about how to structure a plural value system. Allocation of resources Resource and neoclassical economics focus primarily on the efficient allocation of resources and less on the two other problems of importance to ecological economics: distribution (equity) and the scale of the economy relative to the ecosystems upon which it relies. Ecological economics makes a clear distinction between growth (quantitative increase in economic output) and development (qualitative improvement of the quality of life), while arguing that neoclassical economics confuses the two. Ecological economists point out that beyond modest levels, increased per-capita consumption (the typical economic measure of "standard of living") may not always lead to improvement in human well-being, but may have harmful effects on the environment and broader societal well-being. This situation is sometimes referred to as uneconomic growth.
Weak versus strong sustainability Ecological economics challenges the conventional approach towards natural resources, claiming that it undervalues natural capital by considering it as interchangeable with human-made capital – labor and technology. The impending depletion of natural resources and increase of climate-changing greenhouse gases should motivate us to examine how political, economic and social policies can benefit from alternative energy. Reducing dependence on fossil fuels in any one of these policy areas tends to benefit at least one of the others. For instance, photovoltaic (solar) panels have a 15% efficiency when absorbing the sun's energy, but their construction demand has increased 120% within both commercial and residential properties. Additionally, this construction has led to a roughly 30% increase in labor demand (Chen). The potential for the substitution of man-made capital for natural capital is an important debate in ecological economics and the economics of sustainability. There is a continuum of views among economists between the strongly neoclassical positions of Robert Solow and Martin Weitzman, at one extreme, and the 'entropy pessimists', notably Nicholas Georgescu-Roegen and Herman Daly, at the other. Neoclassical economists tend to maintain that man-made capital can, in principle, replace all types of natural capital. This is known as the weak sustainability view, essentially that every technology can be improved upon or replaced by innovation, and that there is a substitute for any and all scarce materials. At the other extreme, the strong sustainability view argues that the stock of natural resources and ecological functions are irreplaceable. From the premises of strong sustainability, it follows that economic policy has a fiduciary responsibility to the greater ecological world, and that sustainable development must therefore take a different approach to valuing natural resources and ecological functions. Recently, Stanislav Shmelev developed a new methodology for the assessment of progress at the macro scale based on multi-criteria methods, which allows consideration of different perspectives, including strong and weak sustainability or conservationists versus industrialists, and aims to find a 'middle way' by providing a strong neo-Keynesian economic push without putting excessive pressure on natural resources, including water, or producing emissions, directly or indirectly. Energy economics A key concept of energy economics is net energy gain, which recognizes that all energy sources require an initial energy investment in order to produce energy. To be useful, the energy return on energy invested (EROEI) has to be greater than one. The net energy gain from the production of coal, oil and gas has declined over time as the easiest-to-produce sources have been most heavily depleted. In traditional energy economics, surplus energy is often seen as something to be capitalized on – either by storing it for future use or by converting it into economic growth. Ecological economics generally rejects the view of energy economics that growth in the energy supply is related directly to well-being, focusing instead on biodiversity and creativity – or natural capital and individual capital, in the terminology sometimes adopted to describe these economically. In practice, ecological economics focuses primarily on the key issues of uneconomic growth and quality of life.
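To make the net-energy idea concrete, the sketch below computes EROEI and net energy gain for a hypothetical energy source; the figures (5 units of energy invested to deliver 60) are illustrative assumptions, not data from the energy economics literature.

```python
def eroei(energy_delivered: float, energy_invested: float) -> float:
    """Energy return on energy invested: usable energy delivered divided by
    the energy spent finding, extracting, and delivering it."""
    return energy_delivered / energy_invested

# Hypothetical figures for illustration only (arbitrary energy units).
delivered = 60.0  # energy the source delivers to society
invested = 5.0    # energy spent on extraction, processing, and transport

print(f"EROEI = {eroei(delivered, invested):.1f}")      # 12.0, well above 1
print(f"Net energy gain = {delivered - invested:.1f}")  # 55.0 units left for other uses
```

A ratio below one would mean the source consumes more energy than it yields; the gradual decline in EROEI for coal, oil and gas as the easiest deposits are exhausted is part of why ecological economists resist equating growth in energy supply with well-being.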
Ecological economists are inclined to acknowledge that much of what is important in human well-being is not analyzable from a strictly economic standpoint, and they suggest an interdisciplinary approach combining social and natural sciences as a means to address this. When considering surplus energy, ecological economists state that it could be used for activities that do not directly contribute to economic productivity but instead enhance societal and environmental well-being. This concept of dépense, as developed by Georges Bataille, offers a novel perspective on the management of surplus energy within economies. This concept encourages a shift from growth-centric models to approaches that prioritise sustainable and meaningful expenditures of excess resources. Thermoeconomics is based on the proposition that the role of energy in biological evolution should be defined and understood through the second law of thermodynamics, but also in terms of such economic criteria as productivity, efficiency, and especially the costs and benefits (or profitability) of the various mechanisms for capturing and utilizing available energy to build biomass and do work. As a result, thermoeconomics is often discussed in the field of ecological economics, which itself is related to the fields of sustainability and sustainable development. Exergy analysis is performed in the field of industrial ecology to use energy more efficiently. The term exergy was coined by Zoran Rant in 1956, but the concept was developed by J. Willard Gibbs. In recent decades, utilization of exergy has spread outside of physics and engineering to the fields of industrial ecology, ecological economics, systems ecology, and energetics. Energy accounting and balance An energy balance can be used to track energy through a system. It is a very useful tool for determining resource use and environmental impacts, using the first and second laws of thermodynamics to determine how much energy is needed at each point in a system, in what form that energy is supplied, and at what environmental cost. The energy accounting system keeps track of energy in, energy out, and non-useful energy versus work done, and transformations within the system. Scientists have written and speculated on different aspects of energy accounting (Stabile, Donald R., "Veblen and the Political Economy of the Engineer: the radical thinker and engineering leaders came to technocratic ideas at the same time," American Journal of Economics and Sociology 45:1, 1986, pp. 43–44). Ecosystem services and their valuation Ecological economists agree that ecosystems produce enormous flows of goods and services to human beings, playing a key role in producing well-being. At the same time, there is intense debate about how and when to place values on these benefits. A study was carried out by Costanza and colleagues to determine the 'value' of the services provided by the environment. This was determined by averaging values obtained from a range of studies conducted in very specific contexts and then transferring these without regard to that context. Dollar figures were averaged to a per-hectare number for different types of ecosystem, e.g. wetlands and oceans. A total was then produced which came out at 33 trillion US dollars (1997 values), more than twice the total GDP of the world at the time of the study.
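As an illustration of the benefit-transfer arithmetic described above, the sketch below multiplies an average per-hectare service value by the global area of each ecosystem type and sums the results. The per-hectare values and areas are made-up placeholders chosen only to show the mechanics, not the figures used by Costanza and colleagues.

```python
# Hypothetical per-hectare ecosystem service values (US$/ha/yr) and global areas (ha).
# Placeholder numbers for illustration; not the 1997 study's estimates.
biomes = {
    "estuaries":       {"usd_per_ha": 20_000, "area_ha": 180e6},
    "wetlands":        {"usd_per_ha": 15_000, "area_ha": 330e6},
    "open ocean":      {"usd_per_ha": 250,    "area_ha": 33_200e6},
    "tropical forest": {"usd_per_ha": 2_000,  "area_ha": 1_900e6},
}

# Total annual value = sum over biomes of (per-hectare value x global area).
total = sum(b["usd_per_ha"] * b["area_ha"] for b in biomes.values())
print(f"Illustrative global ecosystem service value: ${total / 1e12:.1f} trillion per year")
```

Because the total scales directly with whichever site-specific studies feed the per-hectare averages, the method is highly sensitive to context, which is exactly what the criticism discussed next targets.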
This study was criticized by pre-ecological and even some environmental economists – for being inconsistent with assumptions of financial capital valuation – and by ecological economists – for being inconsistent with an ecological economics focus on biological and physical indicators. The whole idea of treating ecosystems as goods and services to be valued in monetary terms remains controversial. A common objection is that life is precious or priceless, but within cost-benefit analysis and other standard economic methods this demonstrably degrades to life being treated as worthless. Reducing human bodies to financial values is a necessary part of mainstream economics, and not always in the direct terms of insurance or wages. One example of this in practice is the value of a statistical life, a dollar value assigned to one life and used to evaluate the costs of small changes in risk to life, such as exposure to one pollutant. Economics, in principle, assumes that conflict is reduced by agreeing on voluntary contractual relations and prices instead of simply fighting or coercing or tricking others into providing goods or services. In doing so, a provider agrees to surrender time and take bodily risks and other (reputation, financial) risks. Ecosystems are no different from other bodies economically, except insofar as they are far less replaceable than typical labour or commodities. Despite these issues, many ecologists and conservation biologists are pursuing ecosystem valuation. Biodiversity measures in particular appear to be the most promising way to reconcile financial and ecological values, and there are many active efforts in this regard. The growing field of biodiversity finance began to emerge in 2008 in response to many specific proposals, such as the Ecuadoran Yasuni proposal (Multinational Monitor, September 2007; accessed December 23, 2012) or similar ones in the Congo. US news outlets treated the stories as a "threat" to "drill a park", reflecting a previously dominant view that NGOs and governments had the primary responsibility to protect ecosystems. However, Peter Barnes and other commentators have recently argued that a guardianship/trustee/commons model is far more effective and takes the decisions out of the political realm. The commodification of other ecological relations, as in carbon credits and direct payments to farmers to preserve ecosystem services, likewise enables private parties to play more direct roles in protecting biodiversity, but is also controversial in ecological economics. The United Nations Food and Agriculture Organization achieved near-universal agreement in 2008 that such payments directly valuing ecosystem preservation and encouraging permaculture were the only practical way out of a food crisis. The holdouts were all English-speaking countries that export GMOs and promote "free trade" agreements that facilitate their own control of the world transport network: the US, UK, Canada and Australia. Not 'externalities', but cost shifting Ecological economics is founded upon the view that the neoclassical economics (NCE) assumption that environmental and community costs and benefits are mutually canceling "externalities" is not warranted. Joan Martinez Alier, for instance, shows that the bulk of consumers are automatically excluded from having an impact upon the prices of commodities, as these consumers are future generations who have not been born yet.
The assumption behind future discounting, namely that future goods will be cheaper than present goods, has been criticized by David Pearce and by the recent Stern Report (although the Stern Report itself does employ discounting and has been criticized for this and other reasons by ecological economists such as Clive Spash); a simple numerical illustration of discounting appears below. Concerning these externalities, some, like the eco-businessman Paul Hawken, argue along orthodox economic lines that the only reason goods produced unsustainably are usually cheaper than goods produced sustainably is a hidden subsidy, paid by the non-monetized human environment, community or future generations. These arguments are developed further by Hawken and by Amory and Hunter Lovins to promote their vision of an environmental capitalist utopia in Natural Capitalism: Creating the Next Industrial Revolution. In contrast, ecological economists, like Joan Martinez-Alier, appeal to a different line of reasoning. Rather than assuming some (new) form of capitalism is the best way forward, an older ecological economic critique questions the very idea of internalizing externalities as providing some corrective to the current system. The work by Karl William Kapp explains why the concept of "externality" is a misnomer. In fact, the modern business enterprise operates on the basis of shifting costs onto others as normal practice to make profits. Charles Eisenstein has argued that this method of privatising profits while socialising the costs through externalities, passing the costs to the community, to the natural environment or to future generations, is inherently destructive. As social ecological economist Clive Spash has noted, externality theory fallaciously assumes environmental and social problems are minor aberrations in an otherwise perfectly functioning efficient economic system. Internalizing the odd externality does nothing to address the structural systemic problem and fails to recognize the all-pervasive nature of these supposed 'externalities'. Ecological-economic modeling Mathematical modeling is a powerful tool that is used in ecological economic analysis. Various approaches and techniques include evolutionary, input-output and neo-Austrian modeling, entropy and thermodynamic models, multi-criteria and agent-based modeling, the environmental Kuznets curve, and stock-flow consistent model frameworks (Faucheux, S., Pearce, D., and Proops, J., eds., 1995, Models of Sustainable Development, Edward Elgar). System dynamics and GIS are techniques applied, among others, to spatial dynamic landscape simulation modeling. The matrix accounting methods of Christian Felber provide a more sophisticated method for identifying "the common good". Monetary theory and policy Ecological economics draws upon its work on resource allocation and strong sustainability to address monetary policy. Drawing upon a transdisciplinary literature, ecological economics roots its policy work in monetary theory and its goals of sustainable scale, just distribution, and efficient allocation. Ecological economics' work on monetary theory and policy can be traced to Frederick Soddy's work on money. The field considers questions such as the growth imperative of interest-bearing debt, the nature of money, and alternative policy proposals such as alternative currencies and public banking.
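Because the choice of discount rate drives much of the disagreement over the Stern Report noted above, the sketch below shows how standard exponential discounting shrinks a far-future environmental damage. The damage figure and the rates are assumptions chosen only to illustrate the sensitivity, not values taken from the Stern Review or its critics.

```python
def present_value(future_value: float, rate: float, years: int) -> float:
    """Standard exponential discounting: PV = FV / (1 + r)**t."""
    return future_value / (1 + rate) ** years

damage = 1e12  # hypothetical $1 trillion of environmental damage 100 years from now
for rate in (0.001, 0.014, 0.05):  # near-zero, Stern-like, and conventional rates
    pv = present_value(damage, rate, 100)
    print(f"discount rate {rate:.1%}: present value = ${pv / 1e9:,.0f} billion")
```

At a conventional 5% rate the damage all but disappears from today's accounts (roughly $8 billion), while near-zero rates keep it economically significant, which is why ecological economists treat the discount rate as an ethical choice about intergenerational equity rather than a purely technical parameter.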
Criticism Assigning monetary value to natural resources such as biodiversity, and the emergent ecosystem services, is often viewed as a key process in influencing economic practices, policy, and decision-making (Dasgupta, P., "Nature's role in sustaining economic development," Philos Trans R Soc Lond B Biol Sci 365(1537), 2010, pp. 5–11). While this idea is becoming more and more accepted among ecologists and conservationists, some argue that it is inherently false. McCauley argues that ecological economics and the resulting ecosystem-service-based conservation can be harmful. He describes four main problems with this approach: Firstly, it seems to be assumed that all ecosystem services are financially beneficial. This is undermined by a basic characteristic of ecosystems: they do not act specifically in favour of any single species. While certain services might be very useful to us, such as coastal protection from hurricanes by mangroves, others might cause financial or personal harm, such as wolves hunting cattle. The complexity of ecosystems makes it challenging to weigh up the value of a given species. Wolves play a critical role in regulating prey populations; the absence of such an apex predator in the Scottish Highlands has caused the overpopulation of deer, preventing afforestation, which increases the risk of flooding and damage to property. Secondly, allocating monetary value to nature would make its conservation reliant on markets that fluctuate. This can lead to devaluation of services that were previously considered financially beneficial. Such is the case of the bees in a forest near former coffee plantations in Finca Santa Fe, Costa Rica. The pollination services were valued at over US$60,000 a year, but soon after the study, coffee prices dropped and the fields were replanted with pineapple. Pineapple does not require bees to be pollinated, so the value of their service dropped to zero. Thirdly, conservation programmes pursued for the sake of financial benefit underestimate human ingenuity in inventing and replacing ecosystem services by artificial means. McCauley argues that such proposals have a short lifespan, as the history of technology is largely a history of humanity developing artificial alternatives to nature's services, and with time the cost of such services tends to decrease. This would also lead to the devaluation of ecosystem services. Lastly, it should not be assumed that conserving ecosystems is always financially beneficial as opposed to alteration. In the case of the introduction of the Nile perch to Lake Victoria, the ecological consequence was the decimation of native fauna. However, this same event is praised by the local communities as they gain significant financial benefits from trading the fish. McCauley argues that, for these reasons, trying to convince decision-makers to conserve nature for monetary reasons is not the path to be followed, and that appealing to morality is instead the ultimate way to campaign for the protection of nature.
See also Agroecology Circular economy Critique of political economy Deep ecology Earth Economics (policy think tank) Eco-economic decoupling Eco-socialism Ecofeminism Ecological economists (category) Ecological model of competition Ecological values of mangrove Energy quality Harrington paradox Green accounting Gund Institute for Ecological Economics Index of Sustainable Economic Welfare International Society for Ecological Economics Natural capital accounting Natural resource economics Outline of green politics Social metabolism Spaceship Earth Steady-state economy Further reading Common, M. and Stagl, S. (2005). Ecological Economics: An Introduction. New York: Cambridge University Press. Costanza, R., Cumberland, J. H., Daly, H., Goodland, R., Norgaard, R. B. (1997). An Introduction to Ecological Economics. St. Lucie Press and International Society for Ecological Economics (e-book at the Encyclopedia of Earth). Daly, H. (1980). Economics, Ecology, Ethics: Essays Toward a Steady-State Economy. W.H. Freeman and Company. Daly, H. and Townsend, K. (eds.) (1993). Valuing The Earth: Economics, Ecology, Ethics. Cambridge, Mass.; London, England: MIT Press. Daly, H. (1994). "Steady-state Economics". In: Ecology – Key Concepts in Critical Theory, edited by C. Merchant. Humanities Press. Daly, H., and J. B. Cobb (1994). For the Common Good: Redirecting the Economy Toward Community, the Environment, and a Sustainable Future. Beacon Press. Daly, H. (1997). Beyond Growth: The Economics of Sustainable Development. Beacon Press. Daly, H. (2015). "Economics for a Full World." Great Transition Initiative, https://www.greattransition.org/publication/economics-for-a-full-world. Daly, H., and J. Farley (2010). Ecological Economics: Principles and Applications. Island Press. Fragio, A. (2022). Historical Epistemology of Ecological Economics. Springer. Georgescu-Roegen, N. (1999). The Entropy Law and the Economic Process. iUniverse Press. Greer, J. M. (2011). The Wealth of Nature: Economics as if Survival Mattered. New Society Publishers. Hesmyr, Atle Kultorp (2020). Civilization: Its Economic Basis, Historical Lessons and Future Prospects. Nisus Publications. Huesemann, Michael H., and Joyce A. Huesemann (2011). Technofix: Why Technology Won't Save Us or the Environment. New Society Publishers, Gabriola Island, British Columbia, Canada, 464 pp. Jackson, Tim (2009). Prosperity without Growth – Economics for a Finite Planet. London: Routledge/Earthscan. Kevlar, M. (2014). Eco-Economics on the Horizon: Economics and Human Nature from a Behavioural Perspective. Krishnan R., Harris J. M., and N. R. Goodwin (1995). A Survey of Ecological Economics. Island Press. Martinez-Alier, J. (1990). Ecological Economics: Energy, Environment and Society. Oxford, England: Basil Blackwell. Martinez-Alier, J., Ropke, I. (eds.) (2008). Recent Developments in Ecological Economics, 2 vols. E. Elgar, Cheltenham, UK. Soddy, F. A. (1926). Wealth, Virtual Wealth and Debt. London, England: George Allen & Unwin. Stern, D. I. (1997). "Limits to substitution and irreversibility in production and consumption: A neoclassical interpretation of ecological economics". Ecological Economics 21(3): 197–215. Tacconi, L. (2000). Biodiversity and Ecological Economics: Participation, Values, and Resource Management. London, UK: Earthscan Publications. Vatn, A. (2005). Institutions and the Environment. Cheltenham: Edward Elgar. Vianna Franco, M. P., and A. Missemer (2022). A History of Ecological Economic Thought.
London & New York: Routledge. Vinje, Victor Condorcet (2015). Economics as if Soil & Health Matters. Nisus Publications. Walker, J. (2020). More Heat than Life: The Tangled Roots of Ecology, Energy, and Economics. Springer.
Histology
Histology, also known as microscopic anatomy or microanatomy, is the branch of biology that studies the microscopic anatomy of biological tissues. Histology is the microscopic counterpart to gross anatomy, which looks at larger structures visible without a microscope. Although one may divide microscopic anatomy into organology (the study of organs), histology (the study of tissues), and cytology (the study of cells), modern usage places all of these topics under the field of histology. In medicine, histopathology is the branch of histology that includes the microscopic identification and study of diseased tissue. In the field of paleontology, the term paleohistology refers to the histology of fossil organisms. Biological tissues Animal tissue classification There are four basic types of animal tissues: muscle tissue, nervous tissue, connective tissue, and epithelial tissue. All animal tissues are considered to be subtypes of these four principal tissue types (for example, blood is classified as connective tissue, since the blood cells are suspended in an extracellular matrix, the plasma). Plant tissue classification For plants, the study of their tissues falls under the field of plant anatomy, with the following four main types: Dermal tissue Vascular tissue Ground tissue Meristematic tissue Medical histology Histopathology is the branch of histology that includes the microscopic identification and study of diseased tissue. It is an important part of anatomical pathology and surgical pathology, as accurate diagnosis of cancer and other diseases often requires histopathological examination of tissue samples. Trained physicians, frequently licensed pathologists, perform histopathological examination and provide diagnostic information based on their observations. Occupations The field of histology that includes the preparation of tissues for microscopic examination is known as histotechnology. Job titles for the trained personnel who prepare histological specimens for examination are numerous and include histotechnicians, histotechnologists, histology technicians and technologists, medical laboratory technicians, and biomedical scientists. Sample preparation Most histological samples need preparation before microscopic observation; these methods depend on the specimen and method of observation. Fixation Chemical fixatives are used to preserve and maintain the structure of tissues and cells; fixation also hardens tissues, which aids in cutting the thin sections of tissue needed for observation under the microscope. Fixatives generally preserve tissues (and cells) by irreversibly cross-linking proteins. The most widely used fixative for light microscopy is 10% neutral buffered formalin, or NBF (4% formaldehyde in phosphate buffered saline). For electron microscopy, the most commonly used fixative is glutaraldehyde, usually as a 2.5% solution in phosphate buffered saline. Other fixatives used for electron microscopy are osmium tetroxide or uranyl acetate. The main action of these aldehyde fixatives is to cross-link amino groups in proteins through the formation of methylene bridges (-CH2-), in the case of formaldehyde, or by C5H10 cross-links in the case of glutaraldehyde. This process, while preserving the structural integrity of the cells and tissue, can damage the biological functionality of proteins, particularly enzymes. Formalin fixation leads to degradation of mRNA, miRNA, and DNA as well as denaturation and modification of proteins in tissues.
However, extraction and analysis of nucleic acids and proteins from formalin-fixed, paraffin-embedded tissues is possible using appropriate protocols. Selection and trimming Selection is the choice of relevant tissue in cases where it is not necessary to put the entire original tissue mass through further processing. The remainder may remain fixed in case it needs to be examined at a later time. Trimming is the cutting of tissue samples in order to expose the relevant surfaces for later sectioning. It also creates tissue samples of appropriate size to fit into cassettes. Embedding Tissues are embedded in a harder medium both as a support and to allow the cutting of thin tissue slices. In general, water must first be removed from tissues (dehydration) and replaced with a medium that either solidifies directly, or with an intermediary fluid (clearing) that is miscible with the embedding media. Paraffin wax For light microscopy, paraffin wax is the most frequently used embedding material. Paraffin is immiscible with water, the main constituent of biological tissue, so water must first be removed in a series of dehydration steps. Samples are transferred through a series of progressively more concentrated ethanol baths, up to 100% ethanol, to remove remaining traces of water. Dehydration is followed by a clearing agent (typically xylene, although other environmentally safe substitutes are in use), which removes the alcohol and is miscible with the wax; finally, melted paraffin wax is added to replace the xylene and infiltrate the tissue. In most histology or histopathology laboratories, the dehydration, clearing, and wax infiltration are carried out in tissue processors, which automate this process. Once infiltrated with paraffin, tissues are oriented in molds which are filled with wax; once positioned, the wax is cooled, solidifying the block and tissue. Other materials Paraffin wax does not always provide a sufficiently hard matrix for cutting very thin sections (which are especially important for electron microscopy). Paraffin wax may also be too soft in relation to the tissue, the heat of the melted wax may alter the tissue in undesirable ways, or the dehydrating or clearing chemicals may harm the tissue. Alternatives to paraffin wax include epoxy, acrylic, agar, gelatin, celloidin, and other types of waxes. In electron microscopy, epoxy resins are the most commonly employed embedding media, but acrylic resins are also used, particularly where immunohistochemistry is required. For tissues to be cut in a frozen state, tissues are placed in a water-based embedding medium. Pre-frozen tissues are placed into molds with the liquid embedding material, usually a water-based glycol, OCT, TBS, Cryogen, or resin, which is then frozen to form hardened blocks. Sectioning For light microscopy, a knife mounted in a microtome is used to cut tissue sections (typically 5–15 micrometers thick), which are mounted on a glass microscope slide. For transmission electron microscopy (TEM), a diamond or glass knife mounted in an ultramicrotome is used to cut tissue sections between 50 and 150 nanometers thick. A limited number of manufacturers produce microtomes, including vibrating microtomes, commonly referred to as vibratomes, primarily for research and clinical studies. Leica Biosystems is additionally known for producing light-microscopy products used in research and clinical studies.
Staining Biological tissue has little inherent contrast in either the light or electron microscope. Staining is employed to give both contrast to the tissue as well as highlighting particular features of interest. When the stain is used to target a specific chemical component of the tissue (and not the general structure), the term histochemistry is used. Light microscopy Hematoxylin and eosin (H&E stain) is one of the most commonly used stains in histology to show the general structure of the tissue. Hematoxylin stains cell nuclei blue; eosin, an acidic dye, stains the cytoplasm and other tissues in different shades of pink. In contrast to H&E, which is used as a general stain, there are many techniques that more selectively stain cells, cellular components, and specific substances. A commonly performed histochemical technique that targets a specific chemical is the Perls' Prussian blue reaction, used to demonstrate iron deposits in diseases like hemochromatosis. The Nissl method for Nissl substance and Golgi's method (and related silver stains), which are useful in identifying neurons, are other examples of more specific stains. Historadiography In historadiography, a slide (sometimes stained histochemically) is X-rayed. More commonly, autoradiography is used in visualizing the locations to which a radioactive substance has been transported within the body, such as cells in S phase (undergoing DNA replication), which incorporate tritiated thymidine, or sites to which radiolabeled nucleic acid probes bind in in situ hybridization. For autoradiography on a microscopic level, the slide is typically dipped into liquid nuclear tract emulsion, which dries to form the exposure film. Individual silver grains in the film are visualized with dark field microscopy. Immunohistochemistry Recently, antibodies have been used to specifically visualize proteins, carbohydrates, and lipids. This process is called immunohistochemistry, or when the stain is a fluorescent molecule, immunofluorescence. This technique has greatly increased the ability to identify categories of cells under a microscope. Other advanced techniques, such as nonradioactive in situ hybridization, can be combined with immunochemistry to identify specific DNA or RNA molecules with fluorescent probes or tags that can be used for immunofluorescence and enzyme-linked fluorescence amplification (especially alkaline phosphatase and tyramide signal amplification). Fluorescence microscopy and confocal microscopy are used to detect fluorescent signals with good intracellular detail. Electron microscopy For electron microscopy, heavy metals are typically used to stain tissue sections. Uranyl acetate and lead citrate are commonly used to impart contrast to tissue in the electron microscope. Specialized techniques Cryosectioning Similar to the frozen section procedure employed in medicine, cryosectioning is a method to rapidly freeze, cut, and mount sections of tissue for histology. The tissue is usually sectioned on a cryostat or freezing microtome. The frozen sections are mounted on a glass slide and may be stained to enhance the contrast between different tissues. Unfixed frozen sections can be used for studies requiring enzyme localization in tissues and cells. Tissue fixation is required for certain procedures such as antibody-linked immunofluorescence staining.
Frozen sections are often prepared during surgical removal of tumors to allow rapid identification of tumor margins, as in Mohs surgery, or determination of tumor malignancy, when a tumor is discovered incidentally during surgery. Ultramicrotomy Ultramicrotomy is a method of preparing extremely thin sections for transmission electron microscope (TEM) analysis. Tissues are commonly embedded in epoxy or other plastic resin. Very thin sections (less than 0.1 micrometer in thickness) are cut using diamond or glass knives on an ultramicrotome. Artifacts Artifacts are structures or features in tissue that interfere with normal histological examination. Artifacts interfere with histology by changing the tissues appearance and hiding structures. Tissue processing artifacts can include pigments formed by fixatives, shrinkage, washing out of cellular components, color changes in different tissues types and alterations of the structures in the tissue. An example is mercury pigment left behind after using Zenker's fixative to fix a section. Formalin fixation can also leave a brown to black pigment under acidic conditions. History In the 17th century the Italian Marcello Malpighi used microscopes to study tiny biological entities; some regard him as the founder of the fields of histology and microscopic pathology. Malpighi analyzed several parts of the organs of bats, frogs and other animals under the microscope. While studying the structure of the lung, Malpighi noticed its membranous alveoli and the hair-like connections between veins and arteries, which he named capillaries. His discovery established how the oxygen breathed in enters the blood stream and serves the body. In the 19th century histology was an academic discipline in its own right. The French anatomist Xavier Bichat introduced the concept of tissue in anatomy in 1801, and the term "histology", coined to denote the "study of tissues", first appeared in a book by Karl Meyer in 1819. Bichat described twenty-one human tissues, which can be subsumed under the four categories currently accepted by histologists. The usage of illustrations in histology, deemed as useless by Bichat, was promoted by Jean Cruveilhier. In the early 1830s Purkynĕ invented a microtome with high precision. During the 19th century many fixation techniques were developed by Adolph Hannover (solutions of chromates and chromic acid), Franz Schulze and Max Schultze (osmic acid), Alexander Butlerov (formaldehyde) and Benedikt Stilling (freezing). Mounting techniques were developed by Rudolf Heidenhain (1824–1898), who introduced gum Arabic; Salomon Stricker (1834–1898), who advocated a mixture of wax and oil; and Andrew Pritchard (1804–1884) who, in 1832, used a gum/isinglass mixture. In the same year, Canada balsam appeared on the scene, and in 1869 Edwin Klebs (1834–1913) reported that he had for some years embedded his specimens in paraffin. The 1906 Nobel Prize in Physiology or Medicine was awarded to histologists Camillo Golgi and Santiago Ramon y Cajal. They had conflicting interpretations of the neural structure of the brain based on differing interpretations of the same images. Ramón y Cajal won the prize for his correct theory, and Golgi for the silver-staining technique that he invented to make it possible. 
Future directions In vivo histology Currently there is intense interest in developing techniques for in vivo histology (predominantly using MRI), which would enable doctors to non-invasively gather information about healthy and diseased tissues in living patients, rather than from fixed tissue samples. See also National Society for Histotechnology Slice preparation Notes References External links Histotechnology Staining Histochemistry Anatomy Laboratory healthcare occupations
Integrated farming
Integrated farming (IF), integrated production, or integrated farm management is a whole farm management system which aims to deliver more sustainable agriculture without compromising the quality or quantity of agricultural products. Integrated farming combines modern tools and technologies with traditional practices according to a given site and situation, often employing many different cultivation techniques in a small growing area. Definition The International Organization of Biological Control (IOBC) describes integrated farming according to the UNI 11233-2009 European standard as a farming system where high-quality organic food, animal feed, fiber, and renewable energy are produced by using resources such as soil, water, air, and nature as well as regulating factors to farm sustainably and with as few polluting inputs as possible. Particular emphasis is placed on an integrated organic approach which views the farm and its environmental surroundings as an intricately cross-linked whole, on the fundamental role and function of agro-ecosystems, on nutrient cycles, which are balanced and adapted to the demands of specific crops, and on the health and welfare of livestock residing on the farm. Preserving and enhancing soil fertility, maintaining and improving biodiversity, and adhering to ethical and social criteria are indispensable basic elements. Crop protection takes into account all biological, technical, and chemical methods, which then are balanced carefully with objectives to protect the environment, to maintain economic profitability, and to fulfill social or cultural requirements. The European Initiative for Sustainable Development in Agriculture (EISA) has an Integrated Farming Framework, which provides additional explanations on key aspects of integrated farming. These include: Organization & Planning, Human & Social Capital, Energy Efficiency, Water Use & Protection, Climate Change & Air Quality, Soil Management, Crop Nutrition, Crop Health & Protection, Animal Husbandry, Health & Welfare, Landscape & Nature Conservation, and Waste Management Pollution Control. In the UK, LEAF (Linking Environment and Farming) promotes a comparable model and defines Integrated Farm Management (IFM) as a whole-farm business approach that delivers more sustainable farming. LEAF's Integrated Farm Management consists of nine interrelated sections: Organization & Planning, Soil Management & Fertility, Crop Health & Protection, Pollution Control & By-Product Management, Animal Husbandry, Energy Efficiency, Water Management, and Landscape & Nature Conservation. Classification The Food and Agriculture Organization of the United Nations (FAO) promotes Integrated Pest Management (IPM) as the preferred approach to crop protection and regards it as a pillar of both sustainable intensification of crop production and pesticide risk reduction. IPM, thus, is an indispensable element of Integrated Crop Management, which in turn is an essential part of the holistic integrated farming approach towards sustainable agriculture. In France, the Forum des Agriculteurs Responsables Respectueux de l'Environnement (FARRE) defines a set of common principles and practices to help farmers achieve these goals. 
These principles include: producing sufficient high-quality food, fibre, and industrial raw materials; meeting the demands of society; maintaining a viable farming business; caring for the environment; and sustaining natural resources. The practices include: organization and management; monitoring and auditing; crop protection; animal husbandry; soil and water management; crop nutrition; energy management; waste management and pollution prevention; wildlife and landscape management; and crop rotation and variety choice. Keller (1986, quoted in Lütke Entrup et al., 1998) highlights that integrated crop management is not to be understood as a compromise between different agricultural production systems. Rather, it must be understood as a production system with targeted, dynamic, and continuous use and development of methods based on knowledge obtained from experiences in so-called conventional farming. In addition to natural scientific findings, impulses from organic farming are also taken up. History Integrated Pest Management can be seen as a starting point for a holistic approach to agricultural production. Following the excessive use of crop protection chemicals, first steps in IPM were taken in fruit production at the end of the 1950s. The concept was then further developed globally in all major crops. On the basis of results of the system-oriented IPM approach, models for integrated crop management were developed. Initially, animal husbandry was not seen as part of such integrated approaches (Lütke Entrup et al., 1998). In the years to follow, various national and regional initiatives and projects were formed. These include LEAF (Linking Environment And Farming) in the UK, FNL (Fördergemeinschaft Nachhaltige Landwirtschaft e.V.) in Germany, FARRE (Forum des Agriculteurs Responsables Respectueux de l'Environnement) in France, FILL (Fördergemeinschaft Integrierte Landbewirtschaftung Luxemburg) in Luxembourg, and OiB (Odling i Balans) in Sweden. However, there are few if any figures available on the uptake of integrated farming systems in the major crops throughout Europe, which has led to a recommendation by the European Economic and Social Committee in February 2014 that the EU should carry out an in-depth analysis of integrated production in Europe in order to obtain insights into the current situation and potential developments. There is evidence, however, that between 60 and 80% of pome, stone, and soft fruits were grown, controlled, and marketed according to "Integrated Production Guidelines" in 1999 in Germany. LEAF is a sustainable farming organization established in the UK in 1991 which promotes the uptake and knowledge sharing of integrated farm management by the LEAF Network, a series of LEAF demonstration farms and innovation centres. The LEAF Marque System was established in 2003 and is an environmental assurance system recognising more sustainably farmed products. The principles of integrated farm management (IFM) underpin the requirements of LEAF Marque certification, as set out in the LEAF Marque Standard. LEAF Marque is a global system and adopts a whole farm approach, certifying the entire farm business and its products. In 2019, LEAF Marque businesses were in 29 countries, and 39% of UK fruit and vegetables were grown by LEAF Marque-certified businesses. Animal husbandry and integrated crop management (ICM) are often just two branches of one agricultural enterprise.
In modern agriculture, animal husbandry and crop production must be understood as interlinked sectors which cannot be looked at in isolation, as the context of agricultural systems leads to tight interdependencies. Uncoupling animal husbandry from arable production (too high stocking rates) is therefore not considered in accordance with the principles and objectives of integrated farming (Lütke Entrup et al., 1998). Accordingly, holistic concepts for integrated farming or integrated farm management, such as the EISA Integrated Farming Framework, and the concept of sustainable agriculture are increasingly developed, promoted, and implemented at the global level. Related to the 'sustainable intensification' of agriculture, an objective which is in part controversial, efficiency of resource use becomes increasingly important today. Environmental impacts of agricultural production depend on the efficiency achieved when using natural resources and all other means of production. The input per kg of output, the output per kg of input, and the output achieved per hectare of land (a limited resource in the light of world population growth) are decisive figures for evaluating the efficiency and the environmental impact of agricultural systems. Efficiency parameters therefore offer important evidence of how the efficiency and environmental impacts of agriculture can be judged and where improvements can or must be made. Against this background, documentation as well as certification schemes and farm audits, such as LEAF Marque in the UK and 33 other countries throughout the world, become more and more important tools to evaluate, and further improve, agricultural practices. Although they are far more product- or sector-oriented, the SAI Platform principles and practices and GlobalGap, for example, pursue similar approaches. Objectives Integrated farming is based on attention to detail, continuous improvement, and managing all available resources. Being bound to sustainable development, the three underlying dimensions (economic development, social development, and environmental protection) are thoroughly considered in the practical implementation of integrated farming. However, the need for profitability is a decisive prerequisite: to be sustainable, the system must be profitable, as profits make it possible to support all activities outlined in the IF Framework. As a management and planning strategy, integrated farming incorporates regular benchmarking of goals against results. The EISA Integrated Farming Framework places a strong emphasis on farmers' understanding of their own performance. Farmers become aware of accomplishments as well as inadequacies by evaluating their performance on a regular basis, and by paying attention to detail, they may continuously work on improving the entire farming operation as well as their economic performance: according to research in the United Kingdom, lowering fertilizer and chemical inputs to levels proportionate to crop demand allowed for cost reductions ranging from £2,500 to £10,000 per year and per farm. Prevalence Following first developments in the 1950s, various approaches to integrated pest management, integrated crop management, integrated production, and integrated farming were developed worldwide, including in Germany, Switzerland, the US, Australia, and India.
As the implementation of integrated farming should be handled according to the given site and situation instead of following strict rules and recipes, the concept is applicable all over the world. Criticism Environmental organizations have criticized integrated farming. That is in part because European Organic Regulations, such as (EC) No 834/2007 or the new draft from 2014, exist, whereas there are no comparable regulations for integrated farming. Whereas organic farming is a legally protected designation, in Germany for example, the EU Commission has not yet considered developing a comparable framework or blueprint for integrated farming. When products are marketed as Controlled Integrated Produce, the corresponding control mechanisms and quality labels are not based on national or European directives but are established and handled by private organizations and quality schemes such as LEAF Marque. References Further reading Lütke Entrup, N., Onnen, O., and Teichgräber, B., 1998: Zukunftsfähige Landwirtschaft – Integrierter Landbau in Deutschland und Europa – Studie zur Entwicklung und den Perspektiven. Heft 14/1998, Fördergemeinschaft Integrierter Pflanzenbau, Bonn. (Available in German only) Oerke, E.-C., Dehne, H.-W., Schönbeck, F., and Weber, A., 1994: Crop Production and Crop Protection – Estimated Losses in Major Food and Cash Crops. Elsevier, Amsterdam, Lausanne, New York, Oxford, Shannon, Tokyo. Sustainable agriculture
Bioturbation
Bioturbation is defined as the reworking of soils and sediments by animals or plants. It includes burrowing, ingestion, and defecation of sediment grains. Bioturbating activities have a profound effect on the environment and are thought to be a primary driver of biodiversity. The formal study of bioturbation began in the 1800s by Charles Darwin experimenting in his garden. The disruption of aquatic sediments and terrestrial soils through bioturbating activities provides significant ecosystem services. These include the alteration of nutrients in aquatic sediment and overlying water, shelter to other species in the form of burrows in terrestrial and water ecosystems, and soil production on land. Bioturbators are deemed ecosystem engineers because they alter resource availability to other species through the physical changes they make to their environments. This type of ecosystem change affects the evolution of cohabitating species and the environment, which is evident in trace fossils left in marine and terrestrial sediments. Other bioturbation effects include altering the texture of sediments (diagenesis), bioirrigation, and displacement of microorganisms and non-living particles. Bioturbation is sometimes confused with the process of bioirrigation, however these processes differ in what they are mixing; bioirrigation refers to the mixing of water and solutes in sediments and is an effect of bioturbation. Walruses, salmon, and pocket gophers are examples of large bioturbators. Although the activities of these large macrofaunal bioturbators are more conspicuous, the dominant bioturbators are small invertebrates, such as earthworms, polychaetes, ghost shrimp, mud shrimp, and midge larvae. The activities of these small invertebrates, which include burrowing and ingestion and defecation of sediment grains, contribute to mixing and the alteration of sediment structure. Functional groups Bioturbators have been organized by a variety of functional groupings based on either ecological characteristics or biogeochemical effects. While the prevailing categorization is based on the way bioturbators transport and interact with sediments, the various groupings likely stem from the relevance of a categorization mode to a field of study (such as ecology or sediment biogeochemistry) and an attempt to concisely organize the wide variety of bioturbating organisms in classes that describe their function. Examples of categorizations include those based on feeding and motility, feeding and biological interactions, and mobility modes. The most common set of groupings are based on sediment transport and are as follows: Gallery-diffusers create complex tube networks within the upper sediment layers and transport sediment through feeding, burrow construction, and general movement throughout their galleries. Gallery-diffusers are heavily associated with burrowing polychaetes, such as Nereis diversicolor and Marenzelleria spp. Biodiffusers transport sediment particles randomly over short distances as they move through sediments. Animals mostly attributed to this category include bivalves such as clams, and amphipod species, but can also include larger vertebrates, such as bottom-dwelling fish and rays that feed along the sea floor. Biodiffusers can be further divided into two subgroups, which include epifaunal (organisms that live on the surface sediments) biodiffusers and surface biodiffusers. This subgrouping may also include gallery-diffusers, reducing the number of functional groups. 
Upward-conveyors are oriented head-down in sediments, where they feed at depth and transport sediment through their guts to the sediment surface. Major upward-conveyor groups include burrowing polychaetes like the lugworm, Arenicola marina, and thalassinid shrimps. Downward-conveyor species are oriented with their heads towards the sediment-water interface and defecation occurs at depth. Their activities transport sediment from the surface to deeper sediment layers as they feed. Notable downward-conveyors include those in the peanut worm family, Sipunculidae. Regenerators are categorized by their ability to release sediment to the overlying water column, which is then dispersed as they burrow. After regenerators abandon their burrows, water flow at the sediment surface can push in and collapse the burrow. Examples of regenerator species include fiddler and ghost crabs. Ecological roles The evaluation of the ecological role of bioturbators has largely been species-specific. However, their ability to transport solutes, such as dissolved oxygen, enhance organic matter decomposition and diagenesis, and alter sediment structure has made them important for the survival and colonization by other macrofaunal and microbial communities. Microbial communities are greatly influenced by bioturbator activities, as increased transport of more energetically favorable oxidants, such as oxygen, to typically highly reduced sediments at depth alters the microbial metabolic processes occurring around burrows. As bioturbators burrow, they also increase the surface area of sediments across which oxidized and reduced solutes can be exchanged, thereby increasing the overall sediment metabolism. This increase in sediment metabolism and microbial activity further results in enhanced organic matter decomposition and sediment oxygen uptake. In addition to the effects of burrowing activity on microbial communities, studies suggest that bioturbator fecal matter provides a highly nutritious food source for microbes and other macrofauna, thus enhancing benthic microbial activity. This increased microbial activity by bioturbators can contribute to increased nutrient release to the overlying water column. Nutrients released from enhanced microbial decomposition of organic matter, notably limiting nutrients, such as ammonium, can have bottom-up effects on ecosystems and result in increased growth of phytoplankton and bacterioplankton. Burrows offer protection from predation and harsh environmental conditions. For example, termites (Macrotermes bellicosus) burrow and create mounds that have a complex system of air ducts and evaporation devices that create a suitable microclimate in an unfavorable physical environment. Many species are attracted to bioturbator burrows because of their protective capabilities. The shared use of burrows has enabled the evolution of symbiotic relationships between bioturbators and the many species that utilize their burrows. For example, gobies, scale-worms, and crabs live in the burrows made by innkeeper worms. Social interactions provide evidence of co-evolution between hosts and their burrow symbionts. This is exemplified by shrimp-goby associations. Shrimp burrows provide shelter for gobies and gobies serve as a scout at the mouth of the burrow, signaling the presence of potential danger. In contrast, the blind goby Typhlogobius californiensis lives within the deep portion of Callianassa shrimp burrows where there is not much light. 
The blind goby is an example of a species that is an obligate commensalist, meaning their existence depends on the host bioturbator and its burrow. Although newly hatched blind gobies have fully developed eyes, their eyes become withdrawn and covered by skin as they develop. They show evidence of commensal morphological evolution because it is hypothesized that the lack of light in the burrows where the blind gobies reside is responsible for the evolutionary loss of functional eyes. Bioturbators can also inhibit the presence of other benthic organisms by smothering, exposing other organisms to predators, or resource competition. While thalassinidean shrimps can provide shelter for some organisms and cultivate interspecies relationships within burrows, they have also been shown to have strong negative effects on other species, especially those of bivalves and surface-grazing gastropods, because thalassinidean shrimps can smother bivalves when they resuspend sediment. They have also been shown to exclude or inhibit polychaetes, cumaceans, and amphipods. This has become a serious issue in the northwestern United States, as ghost and mud shrimp (thalassinidean shrimp) are considered pests to bivalve aquaculture operations. The presence of bioturbators can have both negative and positive effects on the recruitment of larvae of conspecifics (those of the same species) and those of other species, as the resuspension of sediments and alteration of flow at the sediment-water interface can affect the ability of larvae to burrow and remain in sediments. This effect is largely species-specific, as species differences in resuspension and burrowing modes have variable effects on fluid dynamics at the sediment-water interface. Deposit-feeding bioturbators may also hamper recruitment by consuming recently settled larvae. Biogeochemical effects Since its onset around 539 million years ago, bioturbation has been responsible for changes in ocean chemistry, primarily through nutrient cycling. Bioturbators played, and continue to play, an important role in nutrient transport across sediments. For example, bioturbating animals are hypothesized to have affected the cycling of sulfur in the early oceans. According to this hypothesis, bioturbating activities had a large effect on the sulfate concentration in the ocean. Around the Cambrian-Precambrian boundary (539 million years ago), animals begin to mix reduced sulfur from ocean sediments to overlying water causing sulfide to oxidize, which increased the sulfate composition in the ocean. During large extinction events, the sulfate concentration in the ocean was reduced. Although this is difficult to measure directly, seawater sulfur isotope compositions during these times indicates bioturbators influenced the sulfur cycling in the early Earth. Bioturbators have also altered phosphorus cycling on geologic scales. Bioturbators mix readily available particulate organic phosphorus (P) deeper into ocean sediment layers which prevents the precipitation of phosphorus (mineralization) by increasing the sequestration of phosphorus above normal chemical rates. The sequestration of phosphorus limits oxygen concentrations by decreasing production on a geologic time scale. This decrease in production results in an overall decrease in oxygen levels, and it has been proposed that the rise of bioturbation corresponds to a decrease in oxygen levels of that time. 
The negative feedback of animals sequestering phosphorus in the sediments and subsequently reducing oxygen concentrations in the environment limits the intensity of bioturbation in this early environment. Organic contaminants Bioturbation can either enhance or reduce the flux of contaminants from the sediment to the water column, depending on the mechanism of sediment transport. In polluted sediments, bioturbating animals can mix the surface layer and cause the release of sequestered contaminants into the water column. Upward-conveyor species, like polychaete worms, are efficient at moving contaminated particles to the surface. Invasive animals can remobilize contaminants previously considered to be buried at a safe depth. In the Baltic Sea, the invasive Marenzelleria species of polychaete worms can burrow to 35-50 centimeters which is deeper than native animals, thereby releasing previously sequestered contaminants. However, bioturbating animals that live in the sediment (infauna) can also reduce the flux of contaminants to the water column by burying hydrophobic organic contaminants into the sediment. Burial of uncontaminated particles by bioturbating organisms provides more absorptive surfaces to sequester chemical pollutants in the sediments. Ecosystem impacts Nutrient cycling is still affected by bioturbation in the modern Earth. Some examples in the terrestrial and aquatic ecosystems are below. Terrestrial Plants and animals utilize soil for food and shelter, disturbing the upper soil layers and transporting chemically weathered rock called saprolite from the lower soil depths to the surface. Terrestrial bioturbation is important in soil production, burial, organic matter content, and downslope transport. Tree roots are sources of soil organic matter, with root growth and stump decay also contributing to soil transport and mixing. Death and decay of tree roots first delivers organic matter to the soil and then creates voids, decreasing soil density. Tree uprooting causes considerable soil displacement by producing mounds, mixing the soil, or inverting vertical sections of soil. Burrowing animals, such as earth worms and small mammals, form passageways for air and water transport which changes the soil properties, such as the vertical particle-size distribution, soil porosity, and nutrient content. Invertebrates that burrow and consume plant detritus help produce an organic-rich topsoil known as the soil biomantle, and thus contribute to the formation of soil horizons. Small mammals such as pocket gophers also play an important role in the production of soil, possibly with an equal magnitude to abiotic processes. Pocket gophers form above-ground mounds, which moves soil from the lower soil horizons to the surface, exposing minimally weathered rock to surface erosion processes, speeding soil formation. Pocket gophers are thought to play an important role in the downslope transport of soil, as the soil that forms their mounds is more susceptible to erosion and subsequent transport. Similar to tree root effects, the construction of burrows-even when backfilled- decreases soil density. The formation of surface mounds also buries surface vegetation, creating nutrient hotspots when the vegetation decomposes, increasing soil organic matter. Due to the high metabolic demands of their burrow-excavating subterranean lifestyle, pocket gophers must consume large amounts of plant material. 
Though this has a detrimental effect on individual plants, the net effect of pocket gophers is increased plant growth from their positive effects on soil nutrient content and physical soil properties. Freshwater Important sources of bioturbation in freshwater ecosystems include benthivorous (bottom-dwelling) fish, macroinvertebrates such as worms, insect larvae, crustaceans and molluscs, and seasonal influences from anadromous (migrating) fish such as salmon. Anadromous fish migrate from the sea into fresh-water rivers and streams to spawn. Macroinvertebrates act as biological pumps for moving material between the sediments and water column, feeding on sediment organic matter and transporting mineralized nutrients into the water column. Both benthivorous and anadromous fish can affect ecosystems by decreasing primary production through sediment re-suspension, the subsequent displacement of benthic primary producers, and recycling nutrients from the sediment back into the water column. Lakes and ponds The sediments of lake and pond ecosystems are rich in organic matter, with higher organic matter and nutrient contents in the sediments than in the overlying water. Nutrient regeneration through sediment bioturbation moves nutrients into the water column, thereby enhancing the growth of aquatic plants and phytoplankton (primary producers). The major nutrients of interest in this flux are nitrogen and phosphorus, which often limit the levels of primary production in an ecosystem. Bioturbation increases the flux of mineralized (inorganic) forms of these elements, which can be directly used by primary producers. In addition, bioturbation increases the water column concentrations of nitrogen- and phosphorus-containing organic matter, which can then be consumed by fauna and mineralized. Lake and pond sediments often transition from the aerobic (oxygen containing) character of the overlying water to the anaerobic (without oxygen) conditions of the lower sediment over sediment depths of only a few millimeters; therefore, even bioturbators of modest size can affect this transition in the chemical characteristics of sediments. By mixing anaerobic sediments into the water column, bioturbators allow aerobic processes to interact with the re-suspended sediments and the newly exposed bottom sediment surfaces. Macroinvertebrates including chironomid (non-biting midge) larvae and tubificid worms (detritus worms) are important agents of bioturbation in these ecosystems and have different effects based on their respective feeding habits. Tubificid worms do not form burrows; they are upward conveyors. Chironomids, on the other hand, form burrows in the sediment, acting as bioirrigators and aerating the sediments, and are downward conveyors. This activity, combined with chironomids' respiration within their burrows, decreases available oxygen in the sediment and increases the loss of nitrates through enhanced rates of denitrification. The increased oxygen input to sediments by macroinvertebrate bioirrigation, coupled with bioturbation at the sediment-water interface, complicates the total flux of phosphorus. While bioturbation results in a net flux of phosphorus into the water column, the bio-irrigation of the sediments with oxygenated water enhances the adsorption of phosphorus onto iron-oxide compounds, thereby reducing the total flux of phosphorus into the water column.
The presence of macroinvertebrates in sediment can initiate bioturbation due to their status as an important food source for benthivorous fish such as carp. Of the bioturbating, benthivorous fish species, carp in particular are important ecosystem engineers and their foraging and burrowing activities can alter the water quality characteristics of ponds and lakes. Carp increase water turbidity by the re-suspension of benthic sediments. This increased turbidity limits light penetration and coupled with increased nutrient flux from the sediment into the water column, inhibits the growth of macrophytes (aquatic plants) favoring the growth of phytoplankton in the surface waters. Surface phytoplankton colonies benefit from both increased suspended nutrients and from recruitment of buried phytoplankton cells released from the sediments by the fish bioturbation. Macrophyte growth has also been shown to be inhibited by displacement from the bottom sediments due to fish burrowing. Rivers and streams River and stream ecosystems show similar responses to bioturbation activities, with chironomid larvae and tubificid worm macroinvertebrates remaining as important benthic agents of bioturbation. These environments can also be subject to strong season bioturbation effects from anadromous fish. Salmon function as bioturbators on both gravel to sand-sized sediment and a nutrient scale, by moving and re-working sediments in the construction of redds (gravel depressions or "nests" containing eggs buried under a thin layer of sediment) in rivers and streams and by mobilization of nutrients. The construction of salmon redds functions to increase the ease of fluid movement (hydraulic conductivity) and porosity of the stream bed. In select rivers, if salmon congregate in large enough concentrations in a given area of the river, the total sediment transport from redd construction can equal or exceed the sediment transport from flood events. The net effect on sediment movement is the downstream transfer of gravel, sand and finer materials and enhancement of water mixing within the river substrate. The construction of salmon redds increases sediment and nutrient fluxes through the hyporheic zone (area between surface water and groundwater) of rivers and effects the dispersion and retention of marine derived nutrients (MDN) within the river ecosystem. MDN are delivered to river and stream ecosystems by the fecal matter of spawning salmon and the decaying carcasses of salmon that have completed spawning and died. Numerical modeling suggests that residence time of MDN within a salmon spawning reach is inversely proportional to the amount of redd construction within the river. Measurements of respiration within a salmon-bearing river in Alaska further suggest that salmon bioturbation of the river bed plays a significant role in mobilizing MDN and limiting primary productivity while salmon spawning is active. The river ecosystem was found to switch from a net autotrophic to heterotrophic system in response to decreased primary production and increased respiration. The decreased primary production in this study was attributed to the loss of benthic primary producers who were dislodged due to bioturbation, while increased respiration was thought to be due to increased respiration of organic carbon, also attributed to sediment mobilization from salmon redd construction. 
While marine derived nutrients are generally thought to increase productivity in riparian and freshwater ecosystems, several studies have suggested that temporal effects of bioturbation should be considered when characterizing salmon influences on nutrient cycles. Marine Major marine bioturbators range from small infaunal invertebrates to fish and marine mammals. In most marine sediments, however, they are dominated by small invertebrates, including polychaetes, bivalves, burrowing shrimp, and amphipods. Shallow and coastal Coastal ecosystems, such as estuaries, are generally highly productive, which results in the accumulation of large quantities of detritus (organic waste). These large quantities, in addition to typically small sediment grain size and dense populations, make bioturbators important in estuarine respiration. Bioturbators enhance the transport of oxygen into sediments through irrigation and increase the surface area of oxygenated sediments through burrow construction. Bioturbators also transport organic matter deeper into sediments through general reworking activities and production of fecal matter. This ability to replenish oxygen and other solutes at sediment depth allows for enhanced respiration by both bioturbators as well as the microbial community, thus altering estuarine elemental cycling. The effects of bioturbation on the nitrogen cycle are well-documented. Coupled denitrification and nitrification are enhanced due to increased oxygen and nitrate delivery to deep sediments and increased surface area across which oxygen and nitrate can be exchanged. The enhanced nitrification-denitrification coupling contributes to greater removal of biologically available nitrogen in shallow and coastal environments, which can be further enhanced by the excretion of ammonium by bioturbators and other organisms residing in bioturbator burrows. While both nitrification and denitrification are enhanced by bioturbation, the effects of bioturbators on denitrification rates have been found to be greater than that on rates of nitrification, further promoting the removal of biologically available nitrogen. This increased removal of biologically available nitrogen has been suggested to be linked to increased rates of nitrogen fixation in microenvironments within burrows, as indicated by evidence of nitrogen fixation by sulfate-reducing bacteria via the presence of nifH (nitrogenase) genes. Bioturbation by walrus feeding is a significant source of sediment and biological community structure and nutrient flux in the Bering Sea. Walruses feed by digging their muzzles into the sediment and extracting clams through powerful suction. By digging through the sediment, walruses rapidly release large amounts of organic material and nutrients, especially ammonium, from the sediment to the water column. Additionally, walrus feeding behavior mixes and oxygenates the sediment and creates pits in the sediment which serve as new habitat structures for invertebrate larvae. Deep sea Bioturbation is important in the deep sea because deep-sea ecosystem functioning depends on the use and recycling of nutrients and organic inputs from the photic zone. In low energy regions (areas with relatively still water), bioturbation is the only force creating heterogeneity in solute concentration and mineral distribution in the sediment. 
It has been suggested that higher benthic diversity in the deep sea could lead to more bioturbation which, in turn, would increase the transport of organic matter and nutrients to benthic sediments. Through the consumption of surface-derived organic matter, animals living on the sediment surface facilitate the incorporation of particulate organic carbon (POC) into the sediment where it is consumed by sediment dwelling animals and bacteria. Incorporation of POC into the food webs of sediment dwelling animals promotes carbon sequestration by removing carbon from the water column and burying it in the sediment. In some deep-sea sediments, intense bioturbation enhances manganese and nitrogen cycling. Mathematical modelling The role of bioturbators in sediment biogeochemistry makes bioturbation a common parameter in sediment biogeochemical models, which are often numerical models built using ordinary and partial differential equations. Bioturbation is typically represented as DB, or the biodiffusion coefficient, and is described by a diffusion and, sometimes, an advective term. This representation and subsequent variations account for the different modes of mixing by functional groups and bioirrigation that results from them. The biodiffusion coefficient is usually measured using radioactive tracers such as Pb210, radioisotopes from nuclear fallout, introduced particles including glass beads tagged with radioisotopes or inert fluorescent particles, and chlorophyll a. Biodiffusion models are then fit to vertical distributions (profiles) of tracers in sediments to provide values for DB. Parameterization of bioturbation, however, can vary, as newer and more complex models can be used to fit tracer profiles. Unlike the standard biodiffusion model, these more complex models, such as expanded versions of the biodiffusion model, random walk, and particle-tracking models, can provide more accuracy, incorporate different modes of sediment transport, and account for more spatial heterogeneity. Evolution The onset of bioturbation had a profound effect on the environment and the evolution of other organisms. Bioturbation is thought to have been an important co-factor of the Cambrian Explosion, during which most major animal phyla appeared in the fossil record over a short time. Predation arose during this time and promoted the development of hard skeletons, for example bristles, spines, and shells, as a form of armored protection. It is hypothesized that bioturbation resulted from this skeleton formation. These new hard parts enabled animals to dig into the sediment to seek shelter from predators, which created an incentive for predators to search for prey in the sediment (see Evolutionary Arms Race). Burrowing species fed on buried organic matter in the sediment which resulted in the evolution of deposit feeding (consumption of organic matter within sediment). Prior to the development of bioturbation, laminated microbial mats were the dominant biological structures of the ocean floor and drove much of the ecosystem functions. As bioturbation increased, burrowing animals disturbed the microbial mat system and created a mixed sediment layer with greater biological and chemical diversity. This greater biological and chemical diversity is thought to have led to the evolution and diversification of seafloor-dwelling species. An alternate, less widely accepted hypothesis for the origin of bioturbation exists. 
The trace fossil Nenoxites is thought to be the earliest record of bioturbation, predating the Cambrian Period. The fossil is dated to 555 million years, which places it in the Ediacaran Period. The fossil indicates a 5 centimeter depth of bioturbation in muddy sediments by a burrowing worm. This is consistent with food-seeking behavior, as there tended to be more food resources in the mud than the water column. However, this hypothesis requires more precise geological dating to rule out an early Cambrian origin for this specimen. The evolution of trees during the Devonian Period enhanced soil weathering and increased the spread of soil due to bioturbation by tree roots. Root penetration and uprooting also enhanced soil carbon storage by enabling mineral weathering and the burial of organic matter. Fossil record Patterns or traces of bioturbation are preserved in lithified rock. The study of such patterns is called ichnology, or the study of "trace fossils", which, in the case of bioturbators, are fossils left behind by digging or burrowing animals. This can be compared to the footprint left behind by these animals. In some cases bioturbation is so pervasive that it completely obliterates sedimentary structures, such as laminated layers or cross-bedding. Thus, it affects the disciplines of sedimentology and stratigraphy within geology. The study of bioturbator ichnofabrics uses the depth of the fossils, the cross-cutting of fossils, and the sharpness (or how well defined) of the fossil to assess the activity that occurred in old sediments. Typically the deeper the fossil, the better preserved and well defined the specimen. Important trace fossils from bioturbation have been found in marine sediments from tidal, coastal and deep sea sediments. In addition sand dune, or Eolian, sediments are important for preserving a wide variety of fossils. Evidence of bioturbation has been found in deep-sea sediment cores including into long records, although the act extracting the core can disturb the signs of bioturbation, especially at shallower depths. Arthropods, in particular are important to the geologic record of bioturbation of Eolian sediments. Dune records show traces of burrowing animals as far back as the lower Mesozoic (250 Million years ago), although bioturbation in other sediments has been seen as far back as 550 Ma. Research history Bioturbation's importance for soil processes and geomorphology was first realized by Charles Darwin, who devoted his last scientific book to the subject (The Formation of Vegetable Mould through the Action of Worms). Darwin spread chalk dust over a field to observe changes in the depth of the chalk layer over time. Excavations 30 years after the initial deposit of chalk revealed that the chalk was buried 18 centimeters under the sediment, which indicated a burial rate of 6 millimeters per year. Darwin attributed this burial to the activity of earthworms in the sediment and determined that these disruptions were important in soil formation. In 1891, geologist Nathaniel Shaler expanded Darwin's concept to include soil disruption by ants and trees. The term "bioturbation" was later coined by Rudolf Richter in 1952 to describe structures in sediment caused by living organisms. Since the 1980s, the term "bioturbation" has been widely used in soil and geomorphology literature to describe the reworking of soil and sediment by plants and animals. 
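As a concrete illustration of the biodiffusion modelling described under Mathematical modelling above, the following is a minimal sketch that treats bioturbation as one-dimensional diffusion of a surface-deposited tracer through the mixed layer. The biodiffusion coefficient, grid spacing, time step, and run length are hypothetical values chosen only to make the example run; they are not taken from any of the studies cited here.

```python
# Minimal 1-D biodiffusion sketch: dC/dt = D_B * d2C/dz2
# A pulse of tracer deposited at the sediment surface is mixed downward.
# All parameter values are illustrative, not fitted to a published profile.

D_B = 10.0       # biodiffusion coefficient, cm^2 per year (hypothetical)
dz = 0.5         # grid spacing, cm
dt = 0.01        # time step, years (keeps D_B*dt/dz**2 <= 0.5 for stability)
depth_cm = 20.0
years = 5.0

n = int(depth_cm / dz) + 1
conc = [0.0] * n
conc[0] = 1.0    # tracer pulse initially confined to the surface layer

for _ in range(int(years / dt)):
    new = conc[:]
    for i in range(1, n - 1):
        new[i] = conc[i] + D_B * dt / dz**2 * (conc[i + 1] - 2 * conc[i] + conc[i - 1])
    new[0] = new[1]      # no-flux boundary at the sediment-water interface
    new[-1] = new[-2]    # no-flux boundary at depth
    conc = new

# Print the mixed profile every 2 cm; in practice D_B is estimated by fitting
# a profile like this to measured tracer data (e.g. excess 210Pb).
for i in range(0, n, int(2.0 / dz)):
    print(f"{i * dz:5.1f} cm  {conc[i]:.4f}")
```

Fitting such a simulated profile to a measured one is the basic procedure behind the biodiffusion coefficients reported for marine and lake sediments; more elaborate models add advection, bioirrigation, or particle-tracking as noted above.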
See also Argillipedoturbation Bioirrigation Zoophycos References External links Nereis Park (the World of Bioturbation) Worm Cam Biological oceanography Aquatic ecology Limnology Pedology Physical oceanography Sedimentology
Biomagnification
Biomagnification, also known as bioamplification or biological magnification, is the increase in concentration of a substance, e.g., a pesticide, in the tissues of organisms at successively higher levels in a food chain. This increase can occur as a result of: Persistence – where the substance cannot be broken down by environmental processes. Food chain energetics – where the substance's concentration increases progressively as it moves up a food chain. Low or non-existent rate of internal degradation or excretion of the substance – mainly due to water-insolubility. Biological magnification often refers to the process whereby substances such as pesticides or heavy metals work their way into lakes, rivers and the ocean, and then move up the food chain in progressively greater concentrations as they are incorporated into the diet of aquatic organisms such as zooplankton, which in turn are eaten perhaps by fish, which then may be eaten by bigger fish, large birds, animals, or humans. The substances become increasingly concentrated in tissues or internal organs as they move up the chain. Bioaccumulants are substances that increase in concentration in living organisms as they take in contaminated air, water, or food, because the substances are very slowly metabolized or excreted. Processes Although sometimes used interchangeably with "bioaccumulation", an important distinction is drawn between the two, and with bioconcentration. Bioaccumulation occurs within a trophic level, and is the increase in the concentration of a substance in certain tissues of organisms' bodies due to absorption from food and the environment. Bioconcentration is defined as occurring when uptake from the water is greater than excretion. Thus, bioconcentration and bioaccumulation occur within an organism, and biomagnification occurs across trophic (food chain) levels. Biodilution is also a process that occurs across all trophic levels in an aquatic environment; it is the opposite of biomagnification, occurring when a pollutant decreases in concentration as it progresses up a food web. Many chemicals that bioaccumulate are highly soluble in fats (lipophilic) and insoluble in water (hydrophobic). For example, though mercury is only present in small amounts in seawater, it is absorbed by algae (generally as methylmercury). Methylmercury is one of the most harmful mercury molecules. It is efficiently absorbed, but only very slowly excreted by organisms. Bioaccumulation and bioconcentration result in buildup in the adipose tissue of successive trophic levels: zooplankton, small nekton, larger fish, etc. Anything which eats these fish also consumes the higher level of mercury the fish have accumulated. This process explains why predatory fish such as swordfish and sharks or birds like osprey and eagles have higher concentrations of mercury in their tissue than could be accounted for by direct exposure alone. For example, herring contains mercury at approximately 0.01 parts per million (ppm) and shark contains mercury at greater than 1 ppm. DDT is a pesticide known to biomagnify, which is one of the most significant reasons it was deemed harmful to the environment by the EPA and other organizations. DDT is one of the least soluble chemicals known and accumulates progressively in adipose tissue, and as the fat is consumed by predators, the amounts of DDT biomagnify. The degree of enrichment from one trophic level to the next is often expressed as a biomagnification factor, illustrated in the sketch below.
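The following is a minimal sketch of that calculation, using the mercury concentrations quoted above (herring roughly 0.01 ppm, shark above 1 ppm); the seawater/algae and zooplankton values are invented here purely to show a multi-step chain.

```python
# Biomagnification factor (BMF) = concentration in consumer / concentration in its food.
# The herring and shark values come from the text above; the first two entries
# are hypothetical, added only to illustrate enrichment over several steps.
food_chain = [
    ("seawater/algae (as methylmercury)", 0.0001),  # hypothetical baseline, ppm
    ("zooplankton", 0.001),                          # hypothetical, ppm
    ("herring", 0.01),                               # from the text, ppm
    ("shark", 1.0),                                  # from the text (">1 ppm"), ppm
]

for (prey, c_prey), (predator, c_pred) in zip(food_chain, food_chain[1:]):
    bmf = c_pred / c_prey
    print(f"{predator} eating {prey}: BMF = {bmf:.0f}")

overall = food_chain[-1][1] / food_chain[0][1]
print(f"Overall enrichment from baseline to top predator: {overall:.0f}x")
```

A BMF greater than 1 at each step is the signature of biomagnification; a value below 1 would instead indicate biodilution.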
A well known example of the harmful effects of DDT biomagnification is the significant decline in North American populations of predatory birds such as bald eagles and peregrine falcons, due to DDT-caused eggshell thinning in the 1950s. DDT is now a banned substance in many parts of the world. Current status In a review of a large number of studies, Suedel et al. concluded that although biomagnification is probably more limited in occurrence than previously thought, there is good evidence that DDT, DDE, PCBs, toxaphene, and the organic forms of mercury and arsenic do biomagnify in nature. For other contaminants, bioconcentration and bioaccumulation account for their high concentrations in organism tissues. More recently, Gray reached a similar conclusion, attributing high tissue concentrations to substances remaining in the organisms and not being diluted to non-threatening concentrations. The success of top predatory-bird recovery (bald eagles, peregrine falcons) in North America following the ban on DDT use in agriculture is testament to the importance of recognizing and responding to biomagnification. Substances that biomagnify Two common groups that are known to biomagnify are chlorinated hydrocarbons, also known as organochlorines, and inorganic compounds like methylmercury or heavy metals. Both are lipophilic and not easily degraded. Novel organic substances like organochlorines are not easily degraded because organisms lack previous exposure and have thus not evolved specific detoxification and excretion mechanisms, as there has been no selection pressure from them. These substances are consequently known as "persistent organic pollutants" or POPs. Metals are not degradable because they are chemical elements. Organisms, particularly those subject to naturally high levels of exposure to metals, have mechanisms to sequester and excrete metals. Problems arise when organisms are exposed to higher concentrations than usual, which they cannot excrete rapidly enough to prevent damage. Persistent heavy metals, such as lead, cadmium, mercury, and arsenic, can have a wide variety of adverse health effects across species. Novel organic substances DDT (dichlorodiphenyltrichloroethane). Hexachlorobenzene (HCB). PCBs (polychlorinated biphenyls). Toxaphene. Monomethylmercury. See also Mercury in fish Methylmercury Dichlorodiphenyldichloroethylene Toxaphene References External links Fisk AT, Hoekstra PF, Borga K, and DCG Muir, 2003. Biomagnification. Mar. Pollut. Bull. 46 (4): 522-524 Ecotoxicology Food chains Pollution
Biological carbon fixation
Biological carbon fixation, or carbon assimilation, is the process by which living organisms convert inorganic carbon (particularly carbon dioxide) to organic compounds. These organic compounds are then used to store energy and as structures for other biomolecules. Carbon is primarily fixed through photosynthesis, but some organisms use chemosynthesis in the absence of sunlight. Chemosynthesis is carbon fixation driven by chemical energy rather than by sunlight. The process of biological carbon fixation plays a crucial role in the global carbon cycle, as it serves as the primary mechanism for removing CO2 (carbon dioxide) from the atmosphere and incorporating it into living biomass. The primary production of organic compounds allows carbon to enter the biosphere. Carbon is considered essential for life as a base element for building organic compounds. Carbon forms the basis of biogeochemical cycles (or nutrient cycles) and drives communities of living organisms. Understanding biological carbon fixation is essential for comprehending ecosystem dynamics, climate regulation, and the sustainability of life on Earth. Organisms that grow by fixing carbon, such as most plants and algae, are called autotrophs. These include photoautotrophs (which use sunlight) and lithoautotrophs (which use inorganic oxidation). Heterotrophs, such as animals and fungi, are not capable of carbon fixation but are able to grow by consuming the carbon fixed by autotrophs or other heterotrophs. Six natural or autotrophic carbon fixation pathways are currently known. They are: i) the Calvin-Benson-Bassham (Calvin) cycle, ii) the reverse Krebs (rTCA) cycle, iii) the reductive acetyl-CoA (Wood-Ljungdahl) pathway, iv) the 3-hydroxypropionate (3-HP) bicycle, v) the 3-hydroxypropionate/4-hydroxybutyrate (3-HP/4-HB) cycle, and vi) the dicarboxylate/4-hydroxybutyrate (DC/4-HB) cycle. "Fixed carbon," "reduced carbon," and "organic carbon" may all be used interchangeably to refer to various organic compounds. Net vs. gross CO2 fixation The primary form of inorganic carbon that is fixed is carbon dioxide (CO2). It is estimated that approximately 250 billion tons of carbon dioxide are converted by photosynthesis annually. The majority of the fixation occurs in terrestrial environments, especially the tropics. The gross amount of carbon dioxide fixed is much larger, since approximately 40% is consumed by respiration following photosynthesis. Historically, it is estimated that approximately 2×10¹¹ billion tons of carbon have been fixed since the origin of life. Overview of pathways Six autotrophic carbon fixation pathways are known: the Calvin cycle, the reverse Krebs cycle, the reductive acetyl-CoA pathway, the 3-HP bicycle, the 3-HP/4-HB cycle, and the DC/4-HB cycle. The Calvin cycle is found in plants, algae, cyanobacteria, aerobic proteobacteria, and purple bacteria. The Calvin cycle fixes carbon in the chloroplasts of plants and algae, and in the cyanobacteria. It also fixes carbon in the anoxygenic photosynthesis in one type of Pseudomonadota called purple bacteria, and in some non-phototrophic Pseudomonadota. Of the other autotrophic pathways, two are known only in bacteria (the reductive citric acid cycle and the 3-hydroxypropionate cycle), two only in archaea (two variants of the 3-hydroxypropionate cycle), and one in both bacteria and archaea (the reductive acetyl-CoA pathway). Sulfur- and hydrogen-oxidizing bacteria often use the Calvin cycle or the reductive citric acid cycle.
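Before turning to the individual pathways, a small arithmetic sketch of the net-versus-gross figures quoted above may help; it reads the ~250 billion tons of CO2 per year as the net figure and applies the ~40% respiratory loss to the gross flux, which is one plausible reading of the numbers, not a definitive carbon budget.

```python
# Net vs. gross CO2 fixation, using the figures quoted in the text.
# Assumption (hedged): the 250 Gt CO2/yr figure is taken as net fixation,
# and respiration consumes ~40% of the gross flux.
net_co2 = 250.0           # billion tons CO2 per year (from the text)
respired_fraction = 0.40  # fraction of gross fixation lost to respiration (from the text)

gross_co2 = net_co2 / (1.0 - respired_fraction)
print(f"Implied gross fixation: ~{gross_co2:.0f} billion tons CO2 per year")

# Converting CO2 mass to carbon mass (molar masses ~44 and ~12 g/mol):
net_carbon = net_co2 * 12.0 / 44.0
print(f"Net fixation expressed as carbon: ~{net_carbon:.0f} billion tons C per year")
```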
List of pathways Calvin cycle The Calvin cycle accounts for 90% of biological carbon fixation. Consuming adenosine triphosphate (ATP) and nicotinamide adenine dinucleotide phosphate (NADPH), the Calvin cycle in plants accounts for the predominance of carbon fixation on land. In algae and cyanobacteria, it accounts for the dominance of carbon fixation in the oceans. The Calvin cycle converts carbon dioxide into sugar, as triose phosphate (TP), which is glyceraldehyde 3-phosphate (GAP) together with dihydroxyacetone phosphate (DHAP):
3 CO2 + 12 e− + 12 H+ + Pi → TP + 4 H2O
An alternative perspective accounts for NADPH (source of e−) and ATP:
3 CO2 + 6 NADPH + 6 H+ + 9 ATP + 5 H2O → TP + 6 NADP+ + 9 ADP + 8 Pi
The formula for inorganic phosphate (Pi) is HOPO3^2− + 2 H+. Formulas for triose and TP are C2H3O2-CH2OH and C2H3O2-CH2OPO3^2− + 2 H+. Reverse Krebs cycle The reverse Krebs cycle, also known as the reverse TCA cycle (rTCA) or reductive citric acid cycle, is an alternative to the standard Calvin-Benson cycle for carbon fixation. It has been found in strictly anaerobic or microaerobic bacteria (such as Aquificales) and anaerobic archaea. It was discovered by Evans, Buchanan and Arnon in 1966 working with the photosynthetic green sulfur bacterium Chlorobium limicola. In particular, it is one of the most used pathways in hydrothermal vents by the Campylobacterota. This feature allows primary production in the ocean's aphotic environments, or "dark primary production." Without it, there would be no primary production in aphotic environments, which would lead to habitats without life. The cycle involves the biosynthesis of acetyl-CoA from two molecules of CO2. The key steps of the reverse Krebs cycle are:
Oxaloacetate to malate, using NADH + H+: Oxaloacetate + NADH/H+ → Malate + NAD+
Fumarate to succinate, catalyzed by an oxidoreductase, fumarate reductase: Fumarate + FADH2 ⇌ Succinate + FAD
Succinate to succinyl-CoA, an ATP-dependent step: Succinate + ATP + CoA → Succinyl-CoA + ADP + Pi
Succinyl-CoA to alpha-ketoglutarate, using one molecule of CO2: Succinyl-CoA + CO2 + Fd(red) → Alpha-ketoglutarate + Fd(ox)
Alpha-ketoglutarate to isocitrate, using NADPH + H+ and another molecule of CO2: Alpha-ketoglutarate + CO2 + NAD(P)H/H+ → Isocitrate + NAD(P)+
Citrate converted into oxaloacetate and acetyl-CoA; this is an ATP-dependent step and the key enzyme is ATP citrate lyase: Citrate + ATP + CoA → Oxaloacetate + Acetyl-CoA + ADP + Pi
This pathway is cyclic due to the regeneration of the oxaloacetate. The bacteria Gammaproteobacteria and Riftia pachyptila switch from the Calvin-Benson cycle to the rTCA cycle in response to concentrations of H2S. Reductive acetyl CoA pathway The reductive acetyl-CoA pathway, also known as the Wood-Ljungdahl pathway, uses CO2 as an electron acceptor and carbon source, and H2 as an electron donor, to form acetic acid. This metabolism is widespread within the phylum Bacillota, especially in the Clostridia. The pathway is also used by methanogens, which are mainly Euryarchaeota, and several anaerobic chemolithoautotrophs, such as sulfate-reducing bacteria and archaea. It is probably performed also by the Brocadiales, an order of Planctomycetota that oxidize ammonia in anaerobic conditions. Hydrogenotrophic methanogenesis, which is only found in certain archaea and accounts for 80% of global methanogenesis, is also based on the reductive acetyl CoA pathway.
Carbon monoxide dehydrogenase/acetyl-CoA synthase is the oxygen-sensitive enzyme that permits the reduction of CO2 to CO and the synthesis of acetyl-CoA in several reactions. One branch of this pathway, the methyl branch, is similar but non-homologous between bacteria and archaea. In this branch, CO2 is reduced to a methyl residue bound to a cofactor. The intermediates differ (formate in bacteria and formyl-methanofuran in archaea), as do the carriers (tetrahydrofolate in bacteria and tetrahydropterins in archaea) and the enzymes forming the cofactor-bound methyl group. By contrast, the carbonyl branch is homologous between the two domains and consists of the reduction of another molecule of CO2 to a carbonyl residue bound to an enzyme, catalyzed by CO dehydrogenase/acetyl-CoA synthase. This key enzyme is also the catalyst for the formation of acetyl-CoA starting from the products of the previous reactions, the methyl and the carbonyl residues. This carbon fixation pathway requires only one molecule of ATP for the production of one molecule of pyruvate, which makes this process one of the main choices for energy-limited chemolithoautotrophs living in anaerobic conditions. 3-Hydroxypropionate [3-HP] bicycle The 3-hydroxypropionate bicycle, also known as the 3-HP/malyl-CoA cycle, was discovered only in 1989. It is utilized by green non-sulfur phototrophs of the family Chloroflexaceae, including the best-known member of this family, Chloroflexus aurantiacus, in which the pathway was discovered and demonstrated. The 3-hydroxypropionate bicycle is composed of two cycles, and its name comes from 3-hydroxypropionate, a characteristic intermediate of the pathway. The first cycle is a route for the synthesis of glyoxylate. During this cycle, two equivalents of bicarbonate are fixed by the action of two enzymes: acetyl-CoA carboxylase catalyzes the carboxylation of acetyl-CoA to malonyl-CoA, and propionyl-CoA carboxylase catalyzes the carboxylation of propionyl-CoA to methylmalonyl-CoA. From this point a series of reactions leads to the formation of glyoxylate, which thus becomes part of the second cycle. In the second cycle, glyoxylate is condensed with approximately one equivalent of propionyl-CoA, forming β-methylmalyl-CoA. This, in turn, is then converted through a series of reactions into citramalyl-CoA. The citramalyl-CoA is split into pyruvate and acetyl-CoA by the enzyme MMC lyase. At this point the pyruvate is released, while the acetyl-CoA is reused and carboxylated again to malonyl-CoA, thus reconstituting the cycle. A total of 19 reactions are involved in the 3-hydroxypropionate bicycle, and 13 multifunctional enzymes are used. The multifunctionality of these enzymes is an important feature of this pathway, which altogether allows the fixation of three bicarbonate molecules. It is a very expensive pathway: 7 ATP molecules are used for the synthesis of the new pyruvate and 3 ATP for the triose phosphate. An important characteristic of this cycle is that it allows the co-assimilation of numerous compounds, making it suitable for mixotrophic organisms. Cycles related to the 3-hydroxypropionate cycle A variant of the 3-hydroxypropionate cycle was found to operate in the aerobic, extremely thermoacidophilic archaeon Metallosphaera sedula. This pathway is called the 3-hydroxypropionate/4-hydroxybutyrate (3-HP/4-HB) cycle. Yet another variant of the 3-hydroxypropionate cycle is the dicarboxylate/4-hydroxybutyrate (DC/4-HB) cycle.
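The ATP costs quoted in the pathway descriptions above can be put on a common per-CO2 basis with simple bookkeeping. The sketch below is illustrative only: it uses the figures given in the text (9 ATP per triose phosphate for the Calvin cycle, about 1 ATP per pyruvate for the reductive acetyl-CoA pathway, 7 ATP per pyruvate for the 3-HP bicycle), assumes three carbon atoms fixed per product molecule, and deliberately ignores the very different reductant requirements (NADPH, ferredoxin, FADH2), so it is not a full bioenergetic comparison.

```python
# Rough ATP bookkeeping per CO2 fixed, using only the figures quoted in the text above.
# Reductant costs (NADPH, ferredoxin, etc.) are ignored, so treat the output as illustrative.

pathways = {
    # name: (ATP per product molecule, carbons fixed per product molecule)
    "Calvin cycle (per triose phosphate)": (9, 3),
    "Reductive acetyl-CoA / Wood-Ljungdahl (per pyruvate)": (1, 3),
    "3-HP bicycle (per pyruvate)": (7, 3),
}

for name, (atp, carbons) in pathways.items():
    print(f"{name}: {atp / carbons:.2f} ATP per carbon fixed")
```

On these numbers the Wood-Ljungdahl pathway is by far the cheapest in ATP terms, consistent with its use by energy-limited anaerobic chemolithoautotrophs, while the 3-HP bicycle is the most expensive of the three.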
The DC/4-HB cycle was discovered in anaerobic archaea and was proposed in 2008 for the hyperthermophilic archaeon Ignicoccus hospitalis. Enoyl-CoA carboxylases/reductases CO2 fixation is also catalyzed by enoyl-CoA carboxylases/reductases. Non-autotrophic pathways Although heterotrophs do not use carbon dioxide as their primary carbon source, some carbon dioxide is incorporated in their metabolism. Notably, pyruvate carboxylase consumes carbon dioxide (as bicarbonate ions) as part of gluconeogenesis, and carbon dioxide is consumed in various anaplerotic reactions. 6-phosphogluconate dehydrogenase catalyzes the reductive carboxylation of ribulose 5-phosphate to 6-phosphogluconate in E. coli under elevated CO2 concentrations. Carbon isotope discrimination Some carboxylases, particularly RuBisCO, preferentially bind the lighter carbon stable isotope carbon-12 over the heavier carbon-13. This is known as carbon isotope discrimination and results in carbon-12 to carbon-13 ratios in the plant that are higher than in the free air. Measurement of this ratio is important in the evaluation of water use efficiency in plants, and also in assessing the possible or likely sources of carbon in global carbon cycle studies. Biological carbon fixation in soils In addition to photosynthetic and chemosynthetic processes, biological carbon fixation occurs in soil through the activity of microorganisms, such as bacteria and fungi. These soil microbes play a crucial role in the global carbon cycle by sequestering carbon from decomposed organic matter and recycling it back into the soil, thereby contributing to soil fertility and ecosystem productivity. In soil environments, organic matter derived from dead plant and animal material undergoes decomposition, a process carried out by a diverse community of microorganisms. During decomposition, complex organic compounds are broken down into simpler molecules by the action of enzymes produced by bacteria, fungi, and other soil organisms. As organic matter is decomposed, carbon is released in various forms, including carbon dioxide (CO2) and dissolved organic carbon (DOC). However, not all of the carbon released during decomposition is immediately lost to the atmosphere; a significant portion is retained in the soil through processes collectively known as soil carbon sequestration. Soil microbes, particularly bacteria and fungi, play a pivotal role in this process by incorporating decomposed organic carbon into their biomass or by facilitating the formation of stable organic compounds, such as humus and soil organic matter. One key mechanism by which soil microbes sequester carbon is through the process of microbial biomass production. Bacteria and fungi assimilate carbon from decomposed organic matter into their cellular structures as they grow and reproduce. This microbial biomass serves as a reservoir for stored carbon in the soil, effectively sequestering carbon from the atmosphere. Additionally, soil microbes contribute to the formation of stable soil organic matter through the synthesis of extracellular polymers, enzymes, and other biochemical compounds. These substances help bind soil particles together, forming aggregates that protect organic carbon from microbial decomposition and physical erosion. Over time, these aggregates accumulate in the soil, resulting in the formation of soil organic matter, which can persist for centuries to millennia.
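Returning briefly to the carbon isotope discrimination described above, a small worked example may make the ratio measurement concrete. This is a generic sketch of the standard δ13C bookkeeping, not data from this article; the reference ratio and the example values are typical textbook numbers given only for illustration.

```python
# Illustrative sketch of standard delta-13C arithmetic used to express carbon isotope
# discrimination. The reference ratio and example values are typical textbook figures.

R_VPDB = 0.011237  # approximate 13C/12C ratio of the VPDB reference standard

def delta13C(r_sample: float, r_standard: float = R_VPDB) -> float:
    """delta-13C in per mil: deviation of a sample's 13C/12C ratio from the standard."""
    return (r_sample / r_standard - 1.0) * 1000.0

def discrimination(delta_air: float, delta_plant: float) -> float:
    """Photosynthetic discrimination (capital delta) in per mil, from air and plant delta-13C."""
    return (delta_air - delta_plant) / (1.0 + delta_plant / 1000.0)

# Example: atmospheric CO2 near -8 per mil and a typical C3 plant near -27 per mil.
print(f"Discrimination: {discrimination(-8.0, -27.0):.1f} per mil")
```

With these inputs the discrimination comes out near 19-20 per mil, the sort of value commonly reported for C3 plants; plants with higher ratios of carbon-12 to carbon-13 show larger discrimination values.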
The sequestration of carbon in soil not only helps slow the accumulation of atmospheric CO2 and mitigate climate change but also enhances soil fertility, water retention, and nutrient cycling, thereby supporting plant growth and ecosystem productivity. Consequently, understanding the role of soil microbes in biological carbon fixation is essential for managing soil health, mitigating climate change, and promoting sustainable land management practices. Biological carbon fixation is a fundamental process that sustains life on Earth by regulating atmospheric CO2 levels, supporting the growth of plants and other photosynthetic organisms, and maintaining ecological balance. See also Blue carbon Nitrogen fixation Oxygen cycle Biogeochemical cycles References Further reading Photosynthesis Carbon Metabolic pathways Atmospheric chemistry Microbiology
Descriptive research
Descriptive research is used to describe characteristics of a population or phenomenon being studied. It does not answer questions about how/when/why the characteristics occurred. Rather it addresses the "what" question (what are the characteristics of the population or situation being studied?). The characteristics used to describe the situation or population are usually some kind of categorical scheme, also known as descriptive categories. For example, the periodic table categorizes the elements. Scientists use knowledge about the nature of electrons, protons and neutrons to devise this categorical scheme. We now take for granted the periodic table, yet it took descriptive research to devise it. Descriptive research generally precedes explanatory research. For example, over time the periodic table's description of the elements allowed scientists to explain chemical reactions and make sound predictions about what would happen when elements were combined. However, descriptive research cannot describe what caused a situation. Thus, descriptive research cannot be used as the basis of a causal relationship, where one variable affects another. In other words, descriptive research can be said to have a low requirement for internal validity. The description is used for frequencies, averages, and other statistical calculations. Often the best approach, prior to writing descriptive research, is to conduct a survey investigation. Qualitative research often has the aim of description, and researchers may follow up with examinations of why the observations exist and what the implications of the findings are. Social science research In addition, the conceptualizing of descriptive research (categorization or taxonomy) precedes the hypotheses of explanatory research. (For a discussion of how the underlying conceptualization of exploratory research, descriptive research and explanatory research fit together, see: Conceptual framework.) Descriptive research can be statistical research. The main objective of this type of research is to describe the data and characteristics of what is being studied. The idea behind this type of research is to study frequencies, averages, and other statistical calculations. Although this research is highly accurate, it does not gather the causes behind a situation. Descriptive research is mainly done when a researcher wants to gain a better understanding of a topic; that is, it analyzes the past rather than the future. Descriptive research is the exploration of existing phenomena whose detailed facts are not yet known to the researcher. Descriptive science Descriptive science is a category of science that involves descriptive research; that is, observing, recording, describing, and classifying phenomena. Descriptive research is sometimes contrasted with hypothesis-driven research, which is focused on testing a particular hypothesis by means of experimentation. David A. Grimaldi and Michael S. Engel suggest that descriptive science in biology is currently undervalued and misunderstood: "Descriptive" in science is a pejorative, almost always preceded by "merely," and typically applied to the array of classical -ologies and -omies: anatomy, archaeology, astronomy, embryology, morphology, paleontology, taxonomy, botany, cartography, stratigraphy, and the various disciplines of zoology, to name a few. [...] First, an organism, object, or substance is not described in a vacuum, but rather in comparison with other organisms, objects, and substances. 
[...] Second, descriptive science is not necessarily low-tech science, and high tech is not necessarily better. [...] Finally, a theory is only as good as what it explains and the evidence (i.e., descriptions) that supports it. A negative attitude by scientists toward descriptive science is not limited to biological disciplines: Lord Rutherford's notorious quote, "All science is either physics or stamp collecting," displays a clear negative attitude about descriptive science, and it is known that he was dismissive of astronomy, which at the beginning of the 20th century was still gathering largely descriptive data about stars, nebulae, and galaxies, and was only beginning to develop a satisfactory integration of these observations within the framework of physical law, a cornerstone of the philosophy of physics. Descriptive versus design sciences Ilkka Niiniluoto has used the terms "descriptive sciences" and "design sciences" as an updated version of the distinction between basic and applied science. According to Niiniluoto, descriptive sciences are those that seek to describe reality, while design sciences seek useful knowledge for human activities. See also Methodology Normative science Procedural knowledge Scientific method References External links Descriptive Research from BYU linguistics department Research Descriptive statistics Philosophy of science
Bioecological model
The bioecological model of development is the mature and final revision of Urie Bronfenbrenner's ecological systems theory. The primary focus of ecological systems theory is on the systemic examination of contextual variability in development processes. It focuses on the world outside the developing person and how they are affected by it. After publication of The Ecology of Human Development, Bronfenbrenner's first comprehensive statement of ecological systems theory, additional refinements were added to the theory. Whereas earlier statements of ecological systems theory focused on characteristics of the environment, the goal of the bioecological model was to explicate how characteristics of the developing person influenced the environments to which the person was exposed and how they were affected by the environment. The bioecological model is strongly influenced by Bronfenbrenner's collaborations with Stephen Ceci. Whereas much of Bronfenbrenner's work had focused on social development and the influence of social environments on development, Ceci's work focuses on memory and intelligence. The bioecological model reflects Ceci's work on contextual variability in intelligence and cognition and Bronfenbrenner's interest in developmentally instigative characteristics - how people help to create their own environments. Evolution of Bronfenbrenner's theory Bronfenbrenner's initial investigations into contextual variability in developmental processes can be seen in the 1950s in the analysis of differences in methods of parental discipline as a function of historical time and social class. It was further developed in his work on the differential effects of parental discipline on boys and girls in the 1960s and on the convergence of socialization processes in the US and USSR in the 1970s. These works were expressed in the experimental variations built into the development and implementation of the Head Start program. Bronfenbrenner informally discussed new ideas concerning ecological systems theory throughout the late 1970s and early 1980s during lectures and presentations to the psychological community. Bronfenbrenner published a major statement of ecological systems theory in American Psychologist, articulated it in a series of propositions and hypotheses in his most cited book, The Ecology of Human Development, and further developed it in The Bioecological Model of Human Development and later writings. Bronfenbrenner's early thinking was strongly influenced by other developmentalists and social psychologists who studied developmental processes as contextually bound and dependent on the meaning of experience as defined by the developing person. One strong influence was Lev Vygotsky, a Russian psychologist who emphasized that learning always occurs in, and cannot be separated from, a social context. A second influence was Kurt Lewin, a German forerunner of ecological systems models who focused on a person's psychological activities that occur within a kind of psychological field, including all the events in the past, present, and future that shape and affect an individual. The centrality of the person's interpretation of their environment, and its phenomenological nature, was built on the work of Thomas & Thomas: "(i)f men define situations as real they are real in their consequences". Bronfenbrenner was also influenced by his colleague, Stephen J. Ceci, with whom he co-authored the article "Nature-nurture reconceptualized in developmental perspective: A bioecological theory" in 1994. 
Ceci is a developmental psychologist who redefined modern developmental psychology's approach to intellectual development. He focused on predicting a pattern of associations among ecological, genetic, and cognitive variables as a function of proximal processes. Together, Bronfenbrenner and Ceci published the beginnings of the bioecological model and made it an accessible framework to use in understanding developmental processes. History The history of bioecological systems theory is divided into two periods. The first period resulted in the publication of Bronfenbrenner's theory of ecological systems theory, titled The Ecology of Human Development, in 1979. Bronfenbrenner described the second period as a time of criticism and evaluation of his original work. The development of ecological systems theory arose because Bronfenbrenner noted a lack of focus on the role of context in terms of development. He argued the environment in which children operate is important because development may be shaped by their interactions with the specific environment. He urged his colleagues to study development in terms of ecological contexts, that is the normal environments of children (schools, homes, daycares). Researchers heeded his advice and a great deal of research flourished in the early 1980s that focused on context. However, where prior research was ignoring context, Bronfenbrenner felt current research focused too much on context and ignored development. In his justification for a new theory, Bronfenbrenner wrote he was not pleased with the direction of research in the mid 1980s and that he felt there were other realms of development that were overlooked. In comparison to the original theory, bioecological systems theory adds more emphasis to the person in the context of development. Additionally, Bronfenbrenner chose to leave out key features of the ecological systems theory (e.g., ecological validity and ecological experiments) during his development of bioecological systems theory. As a whole, Bronfenbrenner's new theory continued to go through a series of transformations as he continuously analyzed different factors in human development. Critical components of bioecological systems theory did not emerge all at once. Instead, his ideas evolved and adapted to the research and ideas of the times. For example, the role of proximal processes, which is now recognized as a key feature of bioecological systems theory, did not emerge until the 1990s. This theory went through a series of transformations and elaborations until 2005 when Bronfenbrenner died. Process–Person–Context–Time Bronfenbrenner further developed the model by adding the chronosystem, which refers to how the person and environments change over time. He also placed a greater emphasis on processes and the role of the biological person. The Process–Person–Context–Time Model (PPCT) has since become the bedrock of the bioecological model. PPCT includes four concepts. The interactions between the concepts form the basis for the theory. 1. Process – Bronfenbrenner viewed proximal processes as the primary mechanism for development, featuring them in two central propositions of the bioecological model. Proposition 1: [H]uman development takes place through processes of progressively more complex reciprocal interaction between an active, evolving biopsychological human organism and the persons, objects, and symbols in its immediate external environment. 
To be effective, the interaction must occur on a fairly regular basis over extended periods of time. Such enduring forms of interaction in the immediate environment are referred to as proximal processes. Proximal processes are the development processes of systematic interaction between person and environment. Bronfenbrenner identifies group and solitary activities such as playing with other children or reading as mechanisms through which children come to understand their world and formulate ideas about their place within it. However, processes function differently depending on the person and the context. Proposition 2: The form, power, content, and direction of the proximal processes effecting development vary systematically as a joint function of the characteristics of the developing person; of the environment—both immediate and more remote—in which the processes are taking place; the nature of the developmental outcomes under consideration; and the social continuities and changes occurring over time through the life course and the historical period during which the person has lived. 2. Person – Bronfenbrenner acknowledged the role that personal characteristics of individuals play in social interactions. He identified three personal characteristics that can significantly influence proximal processes across the lifespan. Demand characteristics such as age, gender or physical appearance set processes in motion, acting as “personal stimulus” characteristics. Resource characteristics are not as immediately recognizable and include mental and emotional resources such as past experiences, intelligence, and skills as well as material resources such as access to housing, education, and responsive caregivers. Force characteristics are related to variations in motivation, persistence and temperament. Bronfenbrenner notes that even when children have equivalent access to resources, their developmental courses may differ as a function of characteristics such as drive to succeed and persistence in the face of hardship. In doing this, Bronfenbrenner provides a rationale for how environments (i.e., the systems mentioned above under “The Original Model: Ecological Systems Theory”) influence personal characteristics, yet also suggests personal characteristics can change environments. 3. Context – Context involves five interconnected systems, which are based on Bronfenbrenner’s original model, ecological systems theory. The microsystem describes environments such as home or school in which children spend significant time interacting. Mesosystems are interrelations between microsystems. The exosystem describes events that have important indirect influence on development (e.g., a parent consistently working late). The macrosystem is a feature of any group (culture, subculture) that share values and belief systems. The chronosystem describes historical circumstances that affect contexts at all other levels. 4. Time – Time has a prominent place in this developmental model. It is constituted at three levels: micro, meso, and macro. Micro-time refers to what is happening during specific episodes of proximal processes. Meso-time refers to the extent to which the processes occur in the person’s environment, such as over the course of days, weeks or years. Macro-time (or the chronosystem) focuses on the shifting expectancies in wider culture. This functions both within and across generations and affects proximal processes across the lifespan. 
Thus, the bioecological model highlights the importance of understanding a person's development within environmental systems. It further explains that both the person and the environment affect one another bidirectionally. Although even Bronfenbrenner himself critiqued the falsifiability of the model, the bioecological model has real world applications for developmental research, practice, and policies (as demonstrated below). Research implications In addition to adding to the theoretical understanding of human development, the bioecological model lends itself to changes in the conceptualization of the research endeavor. In some of his earliest comments on the state of developmental research, Bronfenbrenner lamented that developmental research concerned itself with studying “strange behavior of children in strange situations for the briefest possible period of time”. He proposed, rather, that developmental science should take as its goal a study of children in context in order to best determine which processes are naturally “developmentally generative” (promote development) and which are naturally “developmentally disruptive” (prevent development). Bronfenbrenner set up a contrast to the traditional “confirmatory” approach to hypothesis testing (in which research is done to “confirm” that a hypothesis is correct or incorrect) when specifying the types of research needed to support the bioecological model of development. In Bronfenbrenner's view, the dynamic nature of the model calls for “primarily generative” research designs that explore interactions between proximal processes (see Proposition 1) and the developing person, environment, time, and developmental outcome (Proposition 2). Bronfenbrenner called this type of research the “discovery mode” of developmental science. To best capture such dynamic processes, developmental research designs would ideally be longitudinal (over time), rather than cross-sectional (a single point in time), and conducted in children's natural environments, rather than a laboratory. Such designs would thus occur in schools, homes, day-care centers, and other environments in which proximal processes are most likely to occur. The bioecological model also proposes that the most scientifically rich studies would include more than one distinct but theoretically related proximal process in the same design. Indeed, studies that claim to be based upon bioecological theory should include elements of process, person, context, and time, and should include explicit explanation and acknowledgement if one of the elements is lacking. Based on the interactions of proposed elements of the PPCT model, appropriate statistical analyses of PPCT data would likely include explorations of mediation and moderation effects, as well as multilevel modeling of data to account for the nesting of different components of the model. Moreover, research that includes both genetic and environmental components would capture even more of the bioecological model's elements. Ecological techno-subsystem The ecological systems theory emerged before the advent of Internet revolution and the developmental influence of then available technology (e.g., television) was conceptually situated in the child's microsystem. Johnson and Puplampu, for instance, proposed in 2008 the ecological techno-subsystem, a dimension of the microsystem. 
This microsystem comprises both child interaction with living elements (e.g., peers, parents, teachers) and non-living elements (e.g., hardware, gadgets) of communication, information, and recreation technologies in immediate or direct environments. Johnson published a validation study in 2010. Neo-ecological Theory Whereas the theory of the techno-subsystem merely highlights the influence that digital technologies have on the development of an individual within the microsystem, Navarro and Tudge argue that the virtual world should be given its own consideration throughout the bioecological model. They suggest two key modifications as a way to incorporate Bronfenbrenner's theory into our technologized world: The microsystem should be delineated to include the distinct forms in which an individual lives: the physical microsystem and the virtual microsystem. The role of the macrosystem, specifically the cultural influence of digital technology, should be emphasized in understanding human development. See also Ecological systems theory Diathesis-stress model References Genetics Developmental psychology Systems psychology
Gradualism
Gradualism, from the Latin gradus ("step"), is a hypothesis, a theory or a tenet assuming that change comes about gradually or that variation is gradual in nature and happens over time as opposed to in large steps. Uniformitarianism, incrementalism, and reformism are similar concepts. Gradualism can also refer to desired, controlled change in society, institutions, or policies. For example, social democrats and democratic socialists see the socialist society as achieved through gradualism. Geology and biology In the natural sciences, gradualism is the theory which holds that profound change is the cumulative product of slow but continuous processes, often contrasted with catastrophism. The theory was proposed in 1795 by James Hutton, a Scottish geologist, and was later incorporated into Charles Lyell's theory of uniformitarianism. Tenets from both theories were applied to biology and formed the basis of early evolutionary theory. Charles Darwin was influenced by Lyell's Principles of Geology, which explained both uniformitarian methodology and theory. Using uniformitarianism, which states that one cannot make an appeal to any force or phenomenon which cannot presently be observed (see catastrophism), Darwin theorized that the evolutionary process must occur gradually, not in saltations, since saltations are not presently observed, and extreme deviations from the usual phenotypic variation would be more likely to be selected against. Gradualism is often confused with the concept of phyletic gradualism. It is a term coined by Stephen Jay Gould and Niles Eldredge to contrast with their model of punctuated equilibrium, which is gradualist itself, but argues that most evolution is marked by long periods of evolutionary stability (called stasis), which is punctuated by rare instances of branching evolution. Phyletic gradualism is a model of evolution which theorizes that most speciation is slow, uniform and gradual. When evolution occurs in this mode, it is usually by the steady transformation of a whole species into a new one (through a process called anagenesis). In this view no clear line of demarcation exists between an ancestral species and a descendant species, unless splitting occurs. Punctuated gradualism is a microevolutionary hypothesis that refers to a species that has "relative stasis over a considerable part of its total duration [and] underwent periodic, relatively rapid, morphologic change that did not lead to lineage branching". It is one of the three common models of evolution. While the traditional model of palaeontology, the phylogenetic model, states that features evolved slowly without any direct association with speciation, the relatively newer and more controversial idea of punctuated equilibrium claims that major evolutionary changes do not happen over a gradual period but in localized, rare, rapid events of branching speciation. Punctuated gradualism is considered to be a variation of these models, lying somewhere in between the phyletic gradualism model and the punctuated equilibrium model. It states that speciation is not needed for a lineage to rapidly evolve from one equilibrium to another but may show rapid transitions between long-stable states. Politics and society In politics, gradualism is the hypothesis that social change can be achieved in small, discrete increments rather than in abrupt strokes such as revolutions or uprisings. Gradualism is one of the defining features of political liberalism and reformism. 
Machiavellian politics pushes politicians to espouse gradualism. Gradualism in social change implemented through reformist means is a moral principle to which the Fabian Society is committed. In a more general way, reformism is the assumption that gradual changes through and within existing institutions can ultimately change a society's fundamental economic system and political structures; and that an accumulation of reforms can lead to the emergence of an entirely different economic system and form of society than present-day capitalism. That hypothesis of social change grew out of opposition to revolutionary socialism, which contends that revolution is necessary for fundamental structural changes to occur. In socialist politics and within the socialist movement, the concept of gradualism is frequently distinguished from reformism, with the former insisting that short-term goals need to be formulated and implemented in such a way that they inevitably lead into long-term goals. It is most commonly associated with the libertarian socialist concept of dual power and is seen as a middle way between reformism and revolutionism. Martin Luther King Jr. was opposed to the idea of gradualism as a method of eliminating segregation. The United States government wanted to try to integrate African-Americans and European-Americans slowly into the same society, but many believed it was a way for the government to put off actually doing anything about racial segregation. Conspiracy theories In the terminology of NWO-related speculations, gradualism refers to the gradual implementation of a totalitarian world government. Linguistics and language change In linguistics, language change is seen as gradual, the product of chain reactions and subject to cyclic drift. The view that creole languages are the product of catastrophism is heavily disputed. Morality Christianity Buddhism, Theravada and Yoga Gradualism is the approach of certain schools of Buddhism and other Eastern philosophies (e.g. Theravada or Yoga), that enlightenment can be achieved step by step, through an arduous practice. The opposite approach, that insight is attained all at once, is called subitism. The debate on the issue was very important to the history of the development of Zen, which rejected gradualism, and to the establishment of the opposite approach within Tibetan Buddhism, after the Debate of Samye. It was continued in other schools of Indian and Chinese philosophy. Philosophy Contradictorial gradualism is the paraconsistent treatment of fuzziness developed by Lorenzo Peña which regards true contradictions as situations wherein a state of affairs enjoys only partial existence. See also Evolution Uniformitarianism Incrementalism Normalization (sociology) Reformism Catastrophism Saltation Punctuated equilibrium Accelerationism Boiling frog References Geology theories Rate of evolution Liberalism Social democracy Democratic socialism Historical linguistics Social theories
Unicellular organism
A unicellular organism, also known as a single-celled organism, is an organism that consists of a single cell, unlike a multicellular organism that consists of multiple cells. Organisms fall into two general categories: prokaryotic organisms and eukaryotic organisms. Most prokaryotes are unicellular and are classified into bacteria and archaea. Many eukaryotes are multicellular, but some are unicellular such as protozoa, unicellular algae, and unicellular fungi. Unicellular organisms are thought to be the oldest form of life, with early protocells possibly emerging 3.5–4.1 billion years ago. Although some prokaryotes live in colonies, they are not specialised cells with differing functions. These organisms live together, and each cell must carry out all life processes to survive. In contrast, even the simplest multicellular organisms have cells that depend on each other to survive. Most multicellular organisms have a unicellular life-cycle stage. Gametes, for example, are reproductive unicells for multicellular organisms. Additionally, multicellularity appears to have evolved independently many times in the history of life. Some organisms are partially unicellular, like Dictyostelium discoideum. Additionally, unicellular organisms can be multinucleate, like Caulerpa, Plasmodium, and Myxogastria. Evolutionary hypothesis Primitive protocells were the precursors to today's unicellular organisms. Although the origin of life is largely still a mystery, in the currently prevailing theory, known as the RNA world hypothesis, early RNA molecules would have been the basis for catalyzing organic chemical reactions and self-replication. Compartmentalization was necessary for chemical reactions to be more likely as well as to differentiate reactions with the external environment. For example, an early RNA replicator ribozyme may have replicated other replicator ribozymes of different RNA sequences if not kept separate. Such hypothetic cells with an RNA genome instead of the usual DNA genome are called 'ribocells' or 'ribocytes'. When amphiphiles like lipids are placed in water, the hydrophobic tails aggregate to form micelles and vesicles, with the hydrophilic ends facing outwards. Primitive cells likely used self-assembling fatty-acid vesicles to separate chemical reactions and the environment. Because of their simplicity and ability to self-assemble in water, it is likely that these simple membranes predated other forms of early biological molecules. Prokaryotes Prokaryotes lack membrane-bound organelles, such as mitochondria or a nucleus. Instead, most prokaryotes have an irregular region that contains DNA, known as the nucleoid. Most prokaryotes have a single, circular chromosome, which is in contrast to eukaryotes, which typically have linear chromosomes. Nutritionally, prokaryotes have the ability to utilize a wide range of organic and inorganic material for use in metabolism, including sulfur, cellulose, ammonia, or nitrite. Prokaryotes are relatively ubiquitous in the environment and some (known as extremophiles) thrive in extreme environments. Bacteria Bacteria are one of the world's oldest forms of life, and are found virtually everywhere in nature. Many common bacteria have plasmids, which are short, circular, self-replicating DNA molecules that are separate from the bacterial chromosome. Plasmids can carry genes responsible for novel abilities, of current critical importance being antibiotic resistance. Bacteria predominantly reproduce asexually through a process called binary fission. 
However, about 80 different species can undergo a sexual process referred to as natural genetic transformation. Transformation is a bacterial process for transferring DNA from one cell to another, and is apparently an adaptation for repairing DNA damage in the recipient cell. In addition, plasmids can be exchanged through the use of a pilus in a process known as conjugation. The photosynthetic cyanobacteria are arguably the most successful bacteria, and changed the early atmosphere of the earth by oxygenating it. Stromatolites, structures made up of layers of calcium carbonate and trapped sediment left over from cyanobacteria and associated community bacteria, left behind extensive fossil records. The existence of stromatolites gives an excellent record as to the development of cyanobacteria, which are represented across the Archaean (4 billion to 2.5 billion years ago), Proterozoic (2.5 billion to 540 million years ago), and Phanerozoic (540 million years ago to present day) eons. Many of the fossilized stromatolites of the world can be found in Western Australia. There, some of the oldest stromatolites have been found, some dating back to about 3,430 million years ago. Clonal aging occurs naturally in bacteria, and is apparently due to the accumulation of damage that can happen even in the absence of external stressors. Archaea Hydrothermal vents release heat and hydrogen sulfide, allowing extremophiles to survive using chemolithotrophic growth. Archaea are generally similar in appearance to bacteria, hence their original classification as bacteria, but have significant molecular differences, most notably in their membrane structure and ribosomal RNA. Sequencing of ribosomal RNA showed that the Archaea most likely split from bacteria and were the precursors to modern eukaryotes, to which they are phylogenetically more closely related. As their name suggests, Archaea comes from a Greek word archaios, meaning original, ancient, or primitive. Some archaea inhabit the most biologically inhospitable environments on earth, and this is believed to in some ways mimic the early, harsh conditions that life was likely exposed to. Examples of these archaeal extremophiles are as follows: Thermophiles, optimum growth temperature of 50 °C-110 °C, including the genera Pyrobaculum, Pyrodictium, Pyrococcus, Thermus aquaticus and Melanopyrus. Psychrophiles, optimum growth temperature of less than 15 °C, including the genera Methanogenium and Halorubrum. Alkaliphiles, optimum growth pH of greater than 8, including the genus Natronomonas. Acidophiles, optimum growth pH of less than 3, including the genera Sulfolobus and Picrophilus. Piezophiles (also known as barophiles), which prefer high pressure up to 130 MPa, such as deep ocean environments, including the genera Methanococcus and Pyrococcus. Halophiles, which grow optimally in high salt concentrations between 0.2 M and 5.2 M NaCl, including the genera Haloarcula, Haloferax, and Halococcus. Methanogens are a significant subset of archaea and include many extremophiles, but are also ubiquitous in wetland environments as well as the rumen and hindgut of animals. This process (methanogenesis) utilizes hydrogen to reduce carbon dioxide to methane, capturing energy in the usable form of adenosine triphosphate. They are the only known organisms capable of producing methane. Under stressful environmental conditions that cause DNA damage, some species of archaea aggregate and transfer DNA between cells. 
The function of this transfer appears to be to replace damaged DNA sequence information in the recipient cell by undamaged sequence information from the donor cell. Eukaryotes Eukaryotic cells contain membrane bound organelles. Some examples include mitochondria, a nucleus, or the Golgi apparatus. Prokaryotic cells probably transitioned into eukaryotic cells between 2.0 and 1.4 billion years ago. This was an important step in evolution. In contrast to prokaryotes, eukaryotes reproduce by using mitosis and meiosis. Sex appears to be a ubiquitous and ancient, and inherent attribute of eukaryotic life. Meiosis, a true sexual process, allows for efficient recombinational repair of DNA damage and a greater range of genetic diversity by combining the DNA of the parents followed by recombination. Metabolic functions in eukaryotes are more specialized as well by sectioning specific processes into organelles. The endosymbiotic theory holds that mitochondria and chloroplasts have bacterial origins. Both organelles contain their own sets of DNA and have bacteria-like ribosomes. It is likely that modern mitochondria were once a species similar to Rickettsia, with the parasitic ability to enter a cell. However, if the bacteria were capable of respiration, it would have been beneficial for the larger cell to allow the parasite to live in return for energy and detoxification of oxygen. Chloroplasts probably became symbionts through a similar set of events, and are most likely descendants of cyanobacteria. While not all eukaryotes have mitochondria or chloroplasts, mitochondria are found in most eukaryotes, and chloroplasts are found in all plants and algae. Photosynthesis and respiration are essentially the reverse of one another, and the advent of respiration coupled with photosynthesis enabled much greater access to energy than fermentation alone. Protozoa Protozoa are largely defined by their method of locomotion, including flagella, cilia, and pseudopodia. While there has been considerable debate on the classification of protozoa caused by their sheer diversity, in one system there are currently seven phyla recognized under the kingdom Protozoa: Euglenozoa, Amoebozoa, Choanozoa sensu Cavalier-Smith, Loukozoa, Percolozoa, Microsporidia and Sulcozoa. Protozoa, like plants and animals, can be considered heterotrophs or autotrophs. Autotrophs like Euglena are capable of producing their energy using photosynthesis, while heterotrophic protozoa consume food by either funneling it through a mouth-like gullet or engulfing it with pseudopods, a form of phagocytosis. While protozoa reproduce mainly asexually, some protozoa are capable of sexual reproduction. Protozoa with sexual capability include the pathogenic species Plasmodium falciparum, Toxoplasma gondii, Trypanosoma brucei, Giardia duodenalis and Leishmania species. Ciliophora, or ciliates, are a group of protists that utilize cilia for locomotion. Examples include Paramecium, Stentors, and Vorticella. Ciliates are widely abundant in almost all environments where water can be found, and the cilia beat rhythmically in order to propel the organism. Many ciliates have trichocysts, which are spear-like organelles that can be discharged to catch prey, anchor themselves, or for defense. Ciliates are also capable of sexual reproduction, and utilize two nuclei unique to ciliates: a macronucleus for normal metabolic control and a separate micronucleus that undergoes meiosis. 
Examples of such ciliates are Paramecium and Tetrahymena that likely employ meiotic recombination for repairing DNA damage acquired under stressful conditions. The Amebozoa utilize pseudopodia and cytoplasmic flow to move in their environment. Entamoeba histolytica is the cause of amebic dysentery. Entamoeba histolytica appears to be capable of meiosis. Unicellular algae Unicellular algae are plant-like autotrophs and contain chlorophyll. They include groups that have both multicellular and unicellular species: Euglenophyta, flagellated, mostly unicellular algae that occur often in fresh water. In contrast to most other algae, they lack cell walls and can be mixotrophic (both autotrophic and heterotrophic). An example is Euglena gracilis. Chlorophyta (green algae), mostly unicellular algae found in fresh water. The chlorophyta are of particular importance because they are believed to be most closely related to the evolution of land plants. Diatoms, unicellular algae that have siliceous cell walls. They are the most abundant form of algae in the ocean, although they can be found in fresh water as well. They account for about 40% of the world's primary marine production, and produce about 25% of the world's oxygen. Diatoms are very diverse, and comprise about 100,000 species. Dinoflagellates, unicellular flagellated algae, with some that are armored with cellulose. Dinoflagellates can be mixotrophic, and are the algae responsible for red tide. Some dinoflagellates, like Pyrocystis fusiformis, are capable of bioluminescence. Unicellular fungi Unicellular fungi include the yeasts. Fungi are found in most habitats, although most are found on land. Yeasts reproduce through mitosis, and many use a process called budding, where most of the cytoplasm is held by the mother cell. Saccharomyces cerevisiae ferments carbohydrates into carbon dioxide and alcohol, and is used in the making of beer and bread. S. cerevisiae is also an important model organism, since it is a eukaryotic organism that is easy to grow. It has been used to research cancer and neurodegenerative diseases as well as to understand the cell cycle. Furthermore, research using S. cerevisiae has played a central role in understanding the mechanism of meiotic recombination and the adaptive function of meiosis. Candida spp. are responsible for candidiasis, causing infections of the mouth and/or throat (known as thrush) and vagina (commonly called yeast infection). Macroscopic unicellular organisms Most unicellular organisms are of microscopic size and are thus classified as microorganisms. However, some unicellular protists and bacteria are macroscopic and visible to the naked eye. Examples include: Brefeldia maxima, a slime mold, examples have been reported up to a centimetre thick with a surface area of over a square metre and weighed up to around 20 kg Xenophyophores, protozoans of the phylum Foraminifera, are the largest examples known, with Syringammina fragilissima achieving a diameter of up to Nummulite, foraminiferans Valonia ventricosa, an alga of the class Chlorophyceae, can reach a diameter of Acetabularia, algae Caulerpa, algae, may grow to 3 metres long Gromia sphaerica, amoeba, Thiomargarita magnifica is the largest bacterium, reaching a length of up to 20 mm Epulopiscium fishelsoni, a bacterium Stentor, ciliates nicknamed trumpet animalcules Bursaria, largest colpodean ciliates. 
See also Abiogenesis Asexual reproduction Colonial organism Individuality in biology Largest organisms Modularity in biology Multicellular organism Sexual reproduction Superorganism References Microorganisms
Environmental science
Environmental science is an interdisciplinary academic field that integrates physics, biology, meteorology, mathematics and geography (including ecology, chemistry, plant science, zoology, mineralogy, oceanography, limnology, soil science, geology and physical geography, and atmospheric science) into the study of the environment and the solution of environmental problems. Environmental science emerged from the fields of natural history and medicine during the Enlightenment. Today it provides an integrated, quantitative, and interdisciplinary approach to the study of environmental systems. Environmental studies incorporates more of the social sciences for understanding human relationships, perceptions and policies towards the environment. Environmental engineering focuses on design and technology for improving environmental quality in every aspect. Environmental scientists seek to understand the earth's physical, chemical, biological, and geological processes, and to use that knowledge to understand how issues such as alternative energy systems, pollution control and mitigation, natural resource management, and the effects of global warming and climate change influence and affect the natural systems and processes of earth. Environmental issues almost always include an interaction of physical, chemical, and biological processes. Environmental scientists bring a systems approach to the analysis of environmental problems. Key skills of an effective environmental scientist include the ability to analyze relationships across space and time as well as to carry out quantitative analysis. Environmental science came alive as a substantive, active field of scientific investigation in the 1960s and 1970s driven by (a) the need for a multi-disciplinary approach to analyze complex environmental problems, (b) the arrival of substantive environmental laws requiring specific environmental protocols of investigation and (c) the growing public awareness of a need for action in addressing environmental problems. Events that spurred this development included the publication of Rachel Carson's landmark environmental book Silent Spring, along with major environmental issues becoming very public, such as the 1969 Santa Barbara oil spill and the Cuyahoga River of Cleveland, Ohio, "catching fire" (also in 1969); these events helped increase the visibility of environmental issues and create this new field of study. Terminology In common usage, "environmental science" and "ecology" are often used interchangeably, but technically, ecology refers only to the study of organisms and their interactions with each other as well as how they interrelate with their environment. Ecology could be considered a subset of environmental science, which can also involve, for example, purely chemical or public health issues that ecologists would be unlikely to study. In practice, there are considerable similarities between the work of ecologists and other environmental scientists. There is substantial overlap between ecology and environmental science with the disciplines of fisheries, forestry, and wildlife. History Ancient civilizations Historical concern for environmental issues is well documented in archives around the world. Ancient civilizations were mainly concerned with what is now known as environmental science insofar as it related to agriculture and natural resources. Scholars believe that early interest in the environment began around 6000 BCE when ancient civilizations in Israel and Jordan collapsed due to deforestation. 
As a result, in 2700 BCE the first legislation limiting deforestation was established in Mesopotamia. Two hundred years later, in 2500 BCE, a community residing in the Indus River Valley observed the nearby river system in order to improve sanitation. This involved manipulating the flow of water to account for public health. In the Western Hemisphere, numerous ancient Central American city-states collapsed around 1500 BCE due to soil erosion from intensive agriculture. Those remaining from these civilizations paid greater attention to the impact of farming practices on the sustainability of the land and its stable food production. Furthermore, in 1450 BCE the Minoan civilization on the Greek island of Crete declined due to deforestation and the resulting environmental degradation of natural resources. Pliny the Elder somewhat addressed the environmental concerns of ancient civilizations in the text Naturalis Historia, written between 77 and 79 CE, which provided an overview of many related subsets of the discipline. Although warfare and disease were of primary concern in ancient society, environmental issues played a crucial role in the survival and power of different civilizations. As more communities recognized the importance of the natural world to their long-term success, an interest in studying the environment came into existence. Beginnings of environmental science 18th century In 1735, the concept of binomial nomenclature was introduced by Carolus Linnaeus as a way to classify all living organisms, influenced by earlier works of Aristotle. His text, Systema Naturae, represents one of the earliest culminations of knowledge on the subject, providing a means to identify different species based partially on how they interact with their environment. 19th century In the 1820s, scientists were studying the properties of gases, particularly those in the Earth's atmosphere and their interactions with heat from the Sun. Later that century, studies suggested that the Earth had experienced an Ice Age and that warming of the Earth was partially due to what are now known as greenhouse gases (GHG). The concept of the greenhouse effect was introduced, although climate science was not yet recognized as an important topic in environmental science due to minimal industrialization and lower rates of greenhouse gas emissions at the time. 20th century In the 1900s, the discipline of environmental science as it is known today began to take shape. The century is marked by significant research, literature, and international cooperation in the field. In the early 20th century, criticism from dissenters downplayed the effects of global warming. At this time, few researchers were studying the dangers of fossil fuels. After a 1.3 degrees Celsius temperature anomaly was found in the Atlantic Ocean in the 1940s, however, scientists renewed their studies of gaseous heat trapping from the greenhouse effect (although only carbon dioxide and water vapor were known to be greenhouse gases then). Nuclear development following the Second World War allowed environmental scientists to intensively study the effects of carbon and make advancements in the field. Further knowledge from archaeological evidence, particularly ice core sampling, brought to light the changes in climate over time. Environmental science was brought to the forefront of society in 1962 when Rachel Carson published an influential piece of environmental literature, Silent Spring. 
Carson's writing led the American public to pursue environmental safeguards, such as bans on harmful chemicals like the insecticide DDT. Another important work, The Tragedy of the Commons, was published by Garrett Hardin in 1968 in response to accelerating natural degradation. In 1969, environmental science once again became a household term after two striking disasters: Ohio's Cuyahoga River caught fire due to the amount of pollution in its waters and a Santa Barbara oil spill endangered thousands of marine animals, both receiving prolific media coverage. Consequently, the United States adopted an abundance of environmental measures, including the Clean Water Act and the Great Lakes Water Quality Agreement. The following year, in 1970, the first ever Earth Day was celebrated worldwide and the United States Environmental Protection Agency (EPA) was formed, legitimizing the study of environmental science in government policy. In the next two years, the United Nations created the United Nations Environment Programme (UNEP) at a conference in Stockholm, Sweden, to address global environmental degradation. Much of the interest in environmental science throughout the 1970s and the 1980s was characterized by major disasters and social movements. In 1978, hundreds of people were relocated from Love Canal, New York after carcinogenic pollutants were found to be buried underground near residential areas. The next year, in 1979, the nuclear power plant on Three Mile Island in Pennsylvania suffered a partial meltdown and raised concerns about the dangers of radioactive waste and the safety of nuclear energy. In response to landfills and toxic waste often disposed of near their homes, the official Environmental Justice Movement was started by a Black community in North Carolina in 1982. Two years later, toxic methyl isocyanate gas was released from a pesticide plant disaster in Bhopal, India, harming hundreds of thousands of people living near the disaster site, the effects of which are still felt today. In a groundbreaking discovery in 1985, a British team of researchers studying Antarctica found evidence of a hole in the ozone layer, inspiring global agreements banning the use of chlorofluorocarbons (CFCs), which were previously used in nearly all aerosols and refrigerants. Notably, in 1986, the meltdown at the Chernobyl nuclear power plant in Ukraine released radioactive material into the environment, leading to international studies on the ramifications of environmental disasters. Over the next couple of years, the Brundtland Commission (formally the World Commission on Environment and Development) published a report titled Our Common Future, the Montreal Protocol on ozone-depleting substances was signed, and the Intergovernmental Panel on Climate Change (IPCC) was formed, as international communication focused on finding solutions for climate change and degradation. In the late 1980s, Exxon was fined after its tanker Exxon Valdez spilled large quantities of crude oil off the coast of Alaska; the resulting cleanup involved the work of environmental scientists. After hundreds of oil wells were burned in combat in 1991, warfare between Iraq and Kuwait polluted the surrounding atmosphere to just below the air quality threshold believed to be life-threatening. 21st century Many niche disciplines of environmental science have emerged over the years, although climatology is one of the best-known topics. Since the 2000s, environmental scientists have focused on modeling the effects of climate change and encouraging global cooperation to minimize potential damages. 
In 2002, the Society for the Environment as well as the Institute of Air Quality Management were founded to share knowledge and develop solutions around the world. Later, in 2008, the United Kingdom became the first country to pass legislation (the Climate Change Act) that aims to reduce carbon dioxide output to a specified threshold. In 2016 the Paris Agreement, the successor to the Kyoto Protocol, entered into force; it sets concrete goals to reduce greenhouse gas emissions and aims to restrict the rise in global temperature to a maximum of 2 degrees Celsius. The agreement is one of the most expansive international efforts to limit the effects of global warming to date. Most environmental disasters in this time period involve crude oil pollution or the effects of rising temperatures. In 2010, BP was responsible for the largest American oil spill in the Gulf of Mexico, known as the Deepwater Horizon spill, which killed a number of the company's workers and released large amounts of crude oil into the water. Furthermore, throughout this century, much of the world has been ravaged by widespread wildfires and water scarcity, prompting regulations on the sustainable use of natural resources as determined by environmental scientists. The 21st century is marked by significant technological advancements. New technology in environmental science has transformed how researchers gather information about various topics in the field. Research in engines, fuel efficiency, and decreasing emissions from vehicles since the Industrial Revolution has reduced the amount of carbon and other pollutants released into the atmosphere. Furthermore, investment in researching and developing clean energy (i.e. wind, solar, hydroelectric, and geothermal power) has significantly increased in recent years, indicating the beginnings of the divestment from fossil fuel use. Geographic information systems (GIS) are used to observe sources of air or water pollution through satellites and digital imagery analysis. This technology allows for advanced farming techniques like precision agriculture as well as monitoring water usage in order to set market prices. In the field of water quality, developed strains of natural and engineered bacteria contribute to bioremediation, the treatment of wastewater for future use. This method is more eco-friendly and cheaper than manual cleanup or treatment of wastewater. Most notably, the expansion of computer technology has allowed for large data collection, advanced analysis, historical archives, public awareness of environmental issues, and international scientific communication. The ability to crowdsource on the Internet, for example, represents the process of collectivizing knowledge from researchers around the world to create increased opportunity for scientific progress. With crowdsourcing, data is released to the public for personal analyses which can later be shared as new information is found. Another technological development, blockchain, is used to monitor and regulate global fisheries. By tracking the path of fish through global markets, environmental scientists can observe whether certain species are being overharvested to the point of extinction. Additionally, remote sensing allows for the detection of features of the environment without physical intervention. The resulting digital imagery is used to create increasingly accurate models of environmental processes, climate change, and much more.
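As a rough illustration of the satellite-based image analysis described above, the sketch below computes the widely used normalized difference vegetation index (NDVI) from red and near-infrared reflectance bands. The NDVI formula is standard, but the tiny arrays, the 0.4 threshold, and the variable names are invented for the example; real workflows operate on full satellite scenes rather than hand-typed values.

```python
# Minimal NDVI sketch: the kind of per-pixel index computed from satellite
# imagery to monitor vegetation and land use.  The "bands" below are small
# synthetic arrays standing in for real red and near-infrared reflectance.
import numpy as np

red = np.array([[0.10, 0.12, 0.30],
                [0.08, 0.25, 0.35],
                [0.05, 0.07, 0.40]])   # red reflectance (0..1)
nir = np.array([[0.60, 0.55, 0.32],
                [0.65, 0.30, 0.33],
                [0.70, 0.68, 0.38]])   # near-infrared reflectance (0..1)

# NDVI = (NIR - Red) / (NIR + Red); values near +1 suggest dense vegetation,
# values near 0 suggest bare soil or built surfaces.
ndvi = (nir - red) / (nir + red)

vegetated = ndvi > 0.4                 # illustrative threshold, not a standard
print(np.round(ndvi, 2))
print("vegetated fraction:", vegetated.mean())
```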
Advancements to remote sensing technology are particularly useful in locating the nonpoint sources of pollution and analyzing ecosystem health through image analysis across the electromagnetic spectrum. Lastly, thermal imaging technology is used in wildlife management to catch and discourage poachers and other illegal wildlife traffickers from killing endangered animals, proving useful for conservation efforts. Artificial intelligence has also been used to predict the movement of animal populations and protect the habitats of wildlife. Components Atmospheric sciences Atmospheric sciences focus on the Earth's atmosphere, with an emphasis upon its interrelation to other systems. Atmospheric sciences can include studies of meteorology, greenhouse gas phenomena, atmospheric dispersion modeling of airborne contaminants, sound propagation phenomena related to noise pollution, and even light pollution. Taking the example of the global warming phenomenon, physicists create computer models of atmospheric circulation and infrared radiation transmission, chemists examine the inventory of atmospheric chemicals and their reactions, biologists analyze the plant and animal contributions to carbon dioxide fluxes, and specialists such as meteorologists and oceanographers add additional breadth in understanding the atmospheric dynamics. Ecology As defined by the Ecological Society of America, "Ecology is the study of the relationships between living organisms, including humans, and their physical environment; it seeks to understand the vital connections between plants and animals and the world around them." Ecologists might investigate the relationship between a population of organisms and some physical characteristic of their environment, such as concentration of a chemical; or they might investigate the interaction between two populations of different organisms through some symbiotic or competitive relationship. For example, an interdisciplinary analysis of an ecological system which is being impacted by one or more stressors might include several related environmental science fields. In an estuarine setting where a proposed industrial development could impact certain species by water and air pollution, biologists would describe the flora and fauna, chemists would analyze the transport of water pollutants to the marsh, physicists would calculate air pollution emissions and geologists would assist in understanding the marsh soils and bay muds. Environmental chemistry Environmental chemistry is the study of chemical alterations in the environment. Principal areas of study include soil contamination and water pollution. The topics of analysis include chemical degradation in the environment, multi-phase transport of chemicals (for example, evaporation from a solvent-containing lake to yield solvent as an air pollutant), and chemical effects upon biota. As an example study, consider the case of a leaking solvent tank which has entered the habitat soil of an endangered species of amphibian. To understand the extent of soil contamination and the subsurface transport of the solvent, a computer model would be implemented. Chemists would then characterize the molecular bonding of the solvent to the specific soil type, and biologists would study the impacts upon soil arthropods, plants, and ultimately pond-dwelling organisms that are the food of the endangered amphibian.
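To make the modeling step in the solvent example a little more tangible, here is a deliberately simplified single-box sketch: a constant leak adds solvent to a soil volume while first-order degradation removes it. Every number in it (leak rate, rate constant, soil volume) is an invented placeholder rather than data from the scenario, and a real assessment would rely on a proper multi-phase transport model.

```python
# Toy single-box model for a leaking solvent tank: constant leak in,
# first-order degradation out.  All numbers are invented for illustration.
leak_rate = 0.5      # kg of solvent entering the soil per day (assumed)
k = 0.05             # first-order degradation rate constant, 1/day (assumed)
soil_volume = 200.0  # cubic metres of affected soil (assumed)

mass = 0.0           # kg of solvent currently in the soil box
dt = 1.0             # time step, days

for day in range(1, 181):
    # dM/dt = leak_rate - k * M, advanced with a simple Euler step
    mass += dt * (leak_rate - k * mass)
    if day % 30 == 0:
        concentration = mass / soil_volume          # kg per cubic metre
        print(f"day {day:3d}: {mass:6.2f} kg in soil, "
              f"{concentration * 1000:6.2f} g per cubic metre")

# The mass approaches the steady state leak_rate / k = 10 kg, the kind of
# back-of-the-envelope figure a full transport model would refine.
```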
Geosciences Geosciences include environmental geology, environmental soil science, volcanic phenomena and evolution of the Earth's crust. In some classification systems this can also include hydrology, including oceanography. As an example study of soil erosion, soil scientists would make calculations of surface runoff. Fluvial geomorphologists would assist in examining sediment transport in overland flow. Physicists would contribute by assessing the changes in light transmission in the receiving waters. Biologists would analyze subsequent impacts to aquatic flora and fauna from increases in water turbidity. Regulations driving the studies In the United States the National Environmental Policy Act (NEPA) of 1969 set forth requirements for analysis of federal government actions (such as highway construction projects and land management decisions) in terms of specific environmental criteria. Numerous state laws have echoed these mandates, applying the principles to local-scale actions. The upshot has been an explosion of documentation and study of environmental consequences before development actions are taken. One can examine the specifics of environmental science by reading examples of Environmental Impact Statements prepared under NEPA such as: Wastewater treatment expansion options discharging into the San Diego/Tijuana Estuary, Expansion of the San Francisco International Airport, Development of the Houston Metro transportation system, Expansion of the metropolitan Boston MBTA transit system, and Construction of Interstate 66 through Arlington, Virginia. In England and Wales the Environment Agency (EA), formed in 1996, is a public body for protecting and improving the environment; it enforces the regulations listed on the communities and local government site (formerly the Office of the Deputy Prime Minister). The agency was set up under the Environment Act 1995 as an independent body and works closely with the UK Government to enforce the regulations. See also Environmental monitoring Environmental planning Environmental statistics Environmental informatics Glossary of environmental science List of environmental studies topics References External links Glossary of environmental terms – Global Development Research Center Earth sciences
Chaos theory
Chaos theory is an interdisciplinary area of scientific study and branch of mathematics. It focuses on underlying patterns and deterministic laws of dynamical systems that are highly sensitive to initial conditions. These were once thought to have completely random states of disorder and irregularities. Chaos theory states that within the apparent randomness of chaotic complex systems, there are underlying patterns, interconnection, constant feedback loops, repetition, self-similarity, fractals and self-organization. The butterfly effect, an underlying principle of chaos, describes how a small change in one state of a deterministic nonlinear system can result in large differences in a later state (meaning there is sensitive dependence on initial conditions). A metaphor for this behavior is that a butterfly flapping its wings in Brazil can cause a tornado in Texas. Small differences in initial conditions, such as those due to errors in measurements or due to rounding errors in numerical computation, can yield widely diverging outcomes for such dynamical systems, rendering long-term prediction of their behavior impossible in general. This can happen even though these systems are deterministic, meaning that their future behavior follows a unique evolution and is fully determined by their initial conditions, with no random elements involved. In other words, the deterministic nature of these systems does not make them predictable. This behavior is known as deterministic chaos, or simply chaos. The theory was summarized by Edward Lorenz as: "Chaos: When the present determines the future, but the approximate present does not approximately determine the future." Chaotic behavior exists in many natural systems, including fluid flow, heartbeat irregularities, weather and climate. It also occurs spontaneously in some systems with artificial components, such as road traffic. This behavior can be studied through the analysis of a chaotic mathematical model or through analytical techniques such as recurrence plots and Poincaré maps. Chaos theory has applications in a variety of disciplines, including meteorology, anthropology, sociology, environmental science, computer science, engineering, economics, ecology, and pandemic crisis management. The theory formed the basis for such fields of study as complex dynamical systems, edge of chaos theory and self-assembly processes. Introduction Chaos theory concerns deterministic systems whose behavior can, in principle, be predicted. Chaotic systems are predictable for a while and then 'appear' to become random. The amount of time for which the behavior of a chaotic system can be effectively predicted depends on three things: how much uncertainty can be tolerated in the forecast, how accurately its current state can be measured, and a time scale depending on the dynamics of the system, called the Lyapunov time. Some examples of Lyapunov times are: chaotic electrical circuits, about 1 millisecond; weather systems, a few days (unproven); the inner solar system, 4 to 5 million years. In chaotic systems, the uncertainty in a forecast increases exponentially with elapsed time. Hence, mathematically, doubling the forecast time more than squares the proportional uncertainty in the forecast: if the proportional uncertainty grows as r(t) = r(0) e^(λt) with r(0) < 1, then r(2T) = r(T)^2 / r(0), which exceeds r(T)^2. This means, in practice, a meaningful prediction cannot be made over an interval of more than two or three times the Lyapunov time. When meaningful predictions cannot be made, the system appears random. Chaotic dynamics In common usage, "chaos" means "a state of disorder". However, in chaos theory, the term is defined more precisely.
Although no universally accepted mathematical definition of chaos exists, a commonly used definition, originally formulated by Robert L. Devaney, says that to classify a dynamical system as chaotic, it must have these properties: it must be sensitive to initial conditions, it must be topologically transitive, and it must have dense periodic orbits. In some cases, the last two properties above have been shown to actually imply sensitivity to initial conditions. In the discrete-time case, this is true for all continuous maps on metric spaces. In these cases, while it is often the most practically significant property, "sensitivity to initial conditions" need not be stated in the definition. If attention is restricted to intervals, the second property implies the other two. An alternative and a generally weaker definition of chaos uses only the first two properties in the above list. Sensitivity to initial conditions Sensitivity to initial conditions means that each point in a chaotic system is arbitrarily closely approximated by other points that have significantly different future paths or trajectories. Thus, an arbitrarily small change or perturbation of the current trajectory may lead to significantly different future behavior. Sensitivity to initial conditions is popularly known as the "butterfly effect", so-called because of the title of a paper given by Edward Lorenz in 1972 to the American Association for the Advancement of Science in Washington, D.C., entitled Predictability: Does the Flap of a Butterfly's Wings in Brazil set off a Tornado in Texas?. The flapping wing represents a small change in the initial condition of the system, which causes a chain of events that prevents the predictability of large-scale phenomena. Had the butterfly not flapped its wings, the trajectory of the overall system could have been vastly different. As suggested in Lorenz's book entitled The Essence of Chaos, published in 1993, "sensitive dependence can serve as an acceptable definition of chaos". In the same book, Lorenz defined the butterfly effect as: "The phenomenon that a small alteration in the state of a dynamical system will cause subsequent states to differ greatly from the states that would have followed without the alteration." The above definition is consistent with the sensitive dependence of solutions on initial conditions (SDIC). An idealized skiing model was developed to illustrate the sensitivity of time-varying paths to initial positions. A predictability horizon can be determined before the onset of SDIC (i.e., prior to significant separations of initial nearby trajectories). A consequence of sensitivity to initial conditions is that if we start with a limited amount of information about the system (as is usually the case in practice), then beyond a certain time, the system would no longer be predictable. This is most prevalent in the case of weather, which is generally predictable only about a week ahead. This does not mean that one cannot assert anything about events far in the future—only that some restrictions on the system are present. For example, we know that the temperature of the surface of the earth will not naturally rise above or fall below certain extreme bounds (during the current geologic era), but we cannot predict exactly which day will have the hottest temperature of the year. In more mathematical terms, the Lyapunov exponent measures the sensitivity to initial conditions, in the form of rate of exponential divergence from the perturbed initial conditions.
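To see this divergence numerically, the sketch below iterates two orbits of the logistic map x → 4x(1 − x), a map discussed later in this article, from starting values that differ by one part in a trillion and prints how quickly they separate. The starting value, the size of the perturbation, and the print interval are arbitrary choices made for the demonstration.

```python
# Sensitive dependence on initial conditions for the logistic map
# x -> 4*x*(1 - x): two orbits started 1e-12 apart are iterated together,
# and their separation grows roughly exponentially until it saturates at
# the size of the attractor.
def logistic(x):
    return 4.0 * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-12
for n in range(1, 41):
    x, y = logistic(x), logistic(y)
    if n % 5 == 0:
        print(f"n={n:2d}  separation={abs(x - y):.3e}")

# For this map the largest Lyapunov exponent is known to equal ln 2 (about
# 0.693), so while the separation is still small it should grow, on average,
# by roughly a factor of two per iteration.
```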
More specifically, given two starting trajectories in the phase space that are infinitesimally close, with initial separation δZ(0), the two trajectories end up diverging at a rate given by |δZ(t)| ≈ e^(λt) |δZ(0)|, where t is the time and λ is the Lyapunov exponent. The rate of separation depends on the orientation of the initial separation vector, so a whole spectrum of Lyapunov exponents can exist. The number of Lyapunov exponents is equal to the number of dimensions of the phase space, though it is common to just refer to the largest one. For example, the maximal Lyapunov exponent (MLE) is most often used, because it determines the overall predictability of the system. A positive MLE is usually taken as an indication that the system is chaotic. In addition to the above property, other properties related to sensitivity of initial conditions also exist. These include, for example, measure-theoretical mixing (as discussed in ergodic theory) and properties of a K-system. Non-periodicity A chaotic system may have sequences of values for the evolving variable that exactly repeat themselves, giving periodic behavior starting from any point in that sequence. However, such periodic sequences are repelling rather than attracting, meaning that if the evolving variable is outside the sequence, however close, it will not enter the sequence and in fact, will diverge from it. Thus for almost all initial conditions, the variable evolves chaotically with non-periodic behavior. Topological mixing Topological mixing (or the weaker condition of topological transitivity) means that the system evolves over time so that any given region or open set of its phase space eventually overlaps with any other given region. This mathematical concept of "mixing" corresponds to the standard intuition, and the mixing of colored dyes or fluids is an example of a chaotic system. Topological mixing is often omitted from popular accounts of chaos, which equate chaos with only sensitivity to initial conditions. However, sensitive dependence on initial conditions alone does not give chaos. For example, consider the simple dynamical system produced by repeatedly doubling an initial value. This system has sensitive dependence on initial conditions everywhere, since any pair of nearby points eventually becomes widely separated. However, this example has no topological mixing, and therefore has no chaos. Indeed, it has extremely simple behavior: all points except 0 tend to positive or negative infinity. Topological transitivity A map f is said to be topologically transitive if for any pair of non-empty open sets U and V, there exists k > 0 such that f^k(U) ∩ V is non-empty. Topological transitivity is a weaker version of topological mixing. Intuitively, if a map is topologically transitive then given a point x and a region V, there exists a point y near x whose orbit passes through V. This implies that it is impossible to decompose the system into two open sets. An important related theorem is the Birkhoff Transitivity Theorem. It is easy to see that the existence of a dense orbit implies topological transitivity. The Birkhoff Transitivity Theorem states that if X is a second countable, complete metric space, then topological transitivity implies the existence of a dense set of points in X that have dense orbits. Density of periodic orbits For a chaotic system to have dense periodic orbits means that every point in the space is approached arbitrarily closely by periodic orbits. The one-dimensional logistic map defined by x → 4 x (1 – x) is one of the simplest systems with density of periodic orbits.
For example, (5 − √5)/8 → (5 + √5)/8 → (5 − √5)/8 (or approximately 0.3454915 → 0.9045085 → 0.3454915) is an (unstable) orbit of period 2, and similar orbits exist for periods 4, 8, 16, etc. (indeed, for all the periods specified by Sharkovskii's theorem). Sharkovskii's theorem is the basis of the Li and Yorke (1975) proof that any continuous one-dimensional system that exhibits a regular cycle of period three will also display regular cycles of every other length, as well as completely chaotic orbits. Strange attractors Some dynamical systems, like the one-dimensional logistic map defined by x → 4 x (1 – x), are chaotic everywhere, but in many cases chaotic behavior is found only in a subset of phase space. The cases of most interest arise when the chaotic behavior takes place on an attractor, since then a large set of initial conditions leads to orbits that converge to this chaotic region. An easy way to visualize a chaotic attractor is to start with a point in the basin of attraction of the attractor, and then simply plot its subsequent orbit. Because of the topological transitivity condition, this is likely to produce a picture of the entire final attractor; orbits plotted in this way give a picture of the general shape of the Lorenz attractor. This attractor results from a simple three-dimensional model of the Lorenz weather system. The Lorenz attractor is perhaps one of the best-known chaotic system diagrams, probably because it is not only one of the first, but it is also one of the most complex, and as such gives rise to a very interesting pattern that, with a little imagination, looks like the wings of a butterfly. Unlike fixed-point attractors and limit cycles, the attractors that arise from chaotic systems, known as strange attractors, have great detail and complexity. Strange attractors occur in both continuous dynamical systems (such as the Lorenz system) and in some discrete systems (such as the Hénon map). Other discrete dynamical systems have a repelling structure called a Julia set, which forms at the boundary between basins of attraction of fixed points. Julia sets can be thought of as strange repellers. Both strange attractors and Julia sets typically have a fractal structure, and the fractal dimension can be calculated for them. Coexisting attractors In contrast to single-type chaotic solutions, recent studies using Lorenz models have emphasized the importance of considering various types of solutions. For example, coexisting chaotic and non-chaotic attractors may appear within the same model (e.g., the double pendulum system) using the same modeling configurations but different initial conditions. The findings of attractor coexistence, obtained from classical and generalized Lorenz models, suggested a revised view that "the entirety of weather possesses a dual nature of chaos and order with distinct predictability", in contrast to the conventional view of "weather is chaotic". Minimum complexity of a chaotic system Discrete chaotic systems, such as the logistic map, can exhibit strange attractors whatever their dimensionality. In contrast, for continuous dynamical systems, the Poincaré–Bendixson theorem shows that a strange attractor can only arise in three or more dimensions. Finite-dimensional linear systems are never chaotic; for a dynamical system to display chaotic behavior, it must be either nonlinear or infinite-dimensional. The Poincaré–Bendixson theorem states that a two-dimensional differential equation has very regular behavior.
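The "plot its subsequent orbit" recipe described above can be tried in a few lines on the Hénon map mentioned in this section. The parameter values a = 1.4 and b = 0.3 are the classic choices and are an assumption here, since the text does not state them; a coarse character grid stands in for a proper figure of the attractor.

```python
# Visualizing a strange attractor by iterating a map and recording its orbit.
# The Hénon map is used with the classic parameter values a = 1.4, b = 0.3
# (assumed; the article does not specify them).
a, b = 1.4, 0.3
x, y = 0.0, 0.0
points = []
for i in range(10000):
    x, y = 1.0 - a * x * x + y, b * x
    if i > 100:                      # discard the transient before plotting
        points.append((x, y))

cols, rows = 70, 20
xs = [p[0] for p in points]
ys = [p[1] for p in points]
xmin, xmax, ymin, ymax = min(xs), max(xs), min(ys), max(ys)
grid = [[" "] * cols for _ in range(rows)]
for px, py in points:
    c = int((px - xmin) / (xmax - xmin) * (cols - 1))
    r = int((py - ymin) / (ymax - ymin) * (rows - 1))
    grid[rows - 1 - r][c] = "*"      # crude stand-in for a scatter plot
print("\n".join("".join(row) for row in grid))
```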
The Lorenz attractor discussed below is generated by a system of three differential equations such as: dx/dt = σ(y − x), dy/dt = x(ρ − z) − y, dz/dt = xy − βz, where x, y, and z make up the system state, t is time, and σ, ρ, and β are the system parameters. Five of the terms on the right hand side are linear, while two are quadratic; a total of seven terms. Another well-known chaotic attractor is generated by the Rössler equations, which have only one nonlinear term out of seven. Sprott found a three-dimensional system with just five terms that had only one nonlinear term, which exhibits chaos for certain parameter values. Zhang and Heidel showed that, at least for dissipative and conservative quadratic systems, three-dimensional quadratic systems with only three or four terms on the right-hand side cannot exhibit chaotic behavior. The reason is, simply put, that solutions to such systems are asymptotic to a two-dimensional surface and therefore solutions are well behaved. While the Poincaré–Bendixson theorem shows that a continuous dynamical system on the Euclidean plane cannot be chaotic, two-dimensional continuous systems with non-Euclidean geometry can still exhibit some chaotic properties. Perhaps surprisingly, chaos may also occur in linear systems, provided they are infinite dimensional. A theory of linear chaos is being developed in a branch of mathematical analysis known as functional analysis. The above set of three ordinary differential equations has been referred to as the three-dimensional Lorenz model. Since 1963, higher-dimensional Lorenz models have been developed in numerous studies for examining the impact of an increased degree of nonlinearity, as well as its collective effect with heating and dissipations, on solution stability. Infinite dimensional maps The straightforward generalization of coupled discrete maps is based upon a convolution integral which mediates the interaction between spatially distributed maps: ψ_{n+1}(r, t) = ∫ K(r − r′, t) f[ψ_n(r′, t)] dr′, where the kernel K is a propagator derived as the Green's function of a relevant physical system, and f might be a logistic-map-like or a complex map. For examples of complex maps, the Julia set or the Ikeda map may serve. When wave propagation problems at a given distance and wavelength are considered, the kernel K may take the form of the Green's function for the Schrödinger equation. Jerk systems In physics, jerk is the third derivative of position with respect to time. As such, differential equations of the form J(d³x/dt³, d²x/dt², dx/dt, x) = 0 are sometimes called jerk equations. It has been shown that a jerk equation, which is equivalent to a system of three first order, ordinary, non-linear differential equations, is in a certain sense the minimal setting for solutions showing chaotic behavior. This motivates mathematical interest in jerk systems. Systems involving a fourth or higher derivative are accordingly called hyperjerk systems. A jerk system's behavior is described by a jerk equation, and for certain jerk equations, simple electronic circuits can model solutions. These circuits are known as jerk circuits. One of the most interesting properties of jerk circuits is the possibility of chaotic behavior. In fact, certain well-known chaotic systems, such as the Lorenz attractor and the Rössler map, are conventionally described as a system of three first-order differential equations that can combine into a single (although rather complicated) jerk equation. Another example of a jerk equation with a nonlinearity in the magnitude of x is: d³x/dt³ + A d²x/dt² + dx/dt − |x| + 1 = 0. Here, A is an adjustable parameter.
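As a numerical complement, the sketch below rewrites the jerk equation just given as three first-order equations (the reduction described in the text) and advances it with a fixed-step fourth-order Runge–Kutta integrator. The equation form is the one reconstructed above, the parameter value A = 3/5 is the one quoted below as giving a chaotic solution, and the initial condition and step size are arbitrary, so this is an illustrative sketch rather than a definitive implementation.

```python
# Jerk equation x''' + A*x'' + x' - |x| + 1 = 0 rewritten as three
# first-order equations:  x' = v,  v' = w,  w' = -A*w - v + |x| - 1,
# integrated with a fixed-step fourth-order Runge-Kutta scheme.
A = 0.6  # the value 3/5 quoted in the text for chaotic behavior

def f(state):
    x, v, w = state
    return (v, w, -A * w - v + abs(x) - 1.0)

def rk4_step(state, h):
    k1 = f(state)
    k2 = f(tuple(s + 0.5 * h * k for s, k in zip(state, k1)))
    k3 = f(tuple(s + 0.5 * h * k for s, k in zip(state, k2)))
    k4 = f(tuple(s + h * k for s, k in zip(state, k3)))
    return tuple(s + h / 6.0 * (a1 + 2 * a2 + 2 * a3 + a4)
                 for s, a1, a2, a3, a4 in zip(state, k1, k2, k3, k4))

state = (0.0, 0.0, 0.0)   # arbitrary starting point
h = 0.01                  # arbitrary step size
for step in range(1, 60001):
    state = rk4_step(state, h)
    if step % 10000 == 0:
        print(f"t = {step * h:6.1f}  x = {state[0]:+.4f}")

# In the chaotic regime the x values should wander irregularly instead of
# settling onto a fixed point or a simple cycle; if a particular start
# happens to escape, it simply lies outside the attractor's basin and a
# different initial condition can be tried.
```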
The jerk equation above has a chaotic solution for A = 3/5 and can be implemented with a simple jerk circuit in which the required nonlinearity is brought about by two diodes. In such a circuit, all resistors are of equal value, except R_A = R/A = 5R/3, and all capacitors are of equal size. The dominant frequency is 1/(2πRC). The output of op amp 0 corresponds to the x variable, the output of op amp 1 corresponds to the first derivative of x, and the output of op amp 2 corresponds to the second derivative. Similar circuits only require one diode or no diodes at all. See also the well-known Chua's circuit, one basis for chaotic true random number generators. The ease of construction of the circuit has made it a ubiquitous real-world example of a chaotic system. Spontaneous order Under the right conditions, chaos spontaneously evolves into a lockstep pattern. In the Kuramoto model, four conditions suffice to produce synchronization in a chaotic system. Examples include the coupled oscillation of Christiaan Huygens' pendulums, fireflies, neurons, the London Millennium Bridge resonance, and large arrays of Josephson junctions. Moreover, from the theoretical physics standpoint, dynamical chaos itself, in its most general manifestation, is a spontaneous order. The essence here is that most orders in nature arise from the spontaneous breakdown of various symmetries. This large family of phenomena includes elasticity, superconductivity, ferromagnetism, and many others. According to the supersymmetric theory of stochastic dynamics, chaos, or more precisely, its stochastic generalization, is also part of this family. The corresponding symmetry being broken is the topological supersymmetry which is hidden in all stochastic (partial) differential equations, and the corresponding order parameter is a field-theoretic embodiment of the butterfly effect. History James Clerk Maxwell first emphasized the "butterfly effect", and is seen as being one of the earliest to discuss chaos theory, with work in the 1860s and 1870s. An early proponent of chaos theory was Henri Poincaré. In the 1880s, while studying the three-body problem, he found that there can be orbits that are nonperiodic, and yet not forever increasing nor approaching a fixed point. In 1898, Jacques Hadamard published an influential study of the chaotic motion of a free particle gliding frictionlessly on a surface of constant negative curvature, called "Hadamard's billiards". Hadamard was able to show that all trajectories are unstable, in that all particle trajectories diverge exponentially from one another, with a positive Lyapunov exponent. Chaos theory began in the field of ergodic theory. Later studies, also on the topic of nonlinear differential equations, were carried out by George David Birkhoff, Andrey Nikolaevich Kolmogorov, Mary Lucy Cartwright and John Edensor Littlewood, and Stephen Smale. Although chaotic planetary motion had not been observed, experimentalists had encountered turbulence in fluid motion and nonperiodic oscillation in radio circuits without the benefit of a theory to explain what they were seeing. Despite initial insights in the first half of the twentieth century, chaos theory became formalized as such only after mid-century, when it first became evident to some scientists that linear theory, the prevailing system theory at that time, simply could not explain the observed behavior of certain experiments like that of the logistic map.
What had been attributed to measure imprecision and simple "noise" was considered by chaos theorists as a full component of the studied systems. In 1959 Boris Valerianovich Chirikov proposed a criterion for the emergence of classical chaos in Hamiltonian systems (Chirikov criterion). He applied this criterion to explain some experimental results on plasma confinement in open mirror traps. This is regarded as the very first physical theory of chaos, which succeeded in explaining a concrete experiment. Boris Chirikov himself is considered a pioneer in classical and quantum chaos. The main catalyst for the development of chaos theory was the electronic computer. Much of the mathematics of chaos theory involves the repeated iteration of simple mathematical formulas, which would be impractical to do by hand. Electronic computers made these repeated calculations practical, while figures and images made it possible to visualize these systems. As a graduate student in Chihiro Hayashi's laboratory at Kyoto University, Yoshisuke Ueda was experimenting with analog computers and noticed, on November 27, 1961, what he called "randomly transitional phenomena". Yet his advisor did not agree with his conclusions at the time, and did not allow him to report his findings until 1970. Edward Lorenz was an early pioneer of the theory. His interest in chaos came about accidentally through his work on weather prediction in 1961. Lorenz and his collaborators Ellen Fetter and Margaret Hamilton were using a simple digital computer, a Royal McBee LGP-30, to run weather simulations. They wanted to see a sequence of data again, and to save time they started the simulation in the middle of its course. They did this by entering a printout of the data that corresponded to conditions in the middle of the original simulation. To their surprise, the weather the machine began to predict was completely different from the previous calculation. They tracked this down to the computer printout. The computer worked with 6-digit precision, but the printout rounded variables off to a 3-digit number, so a value like 0.506127 printed as 0.506. This difference is tiny, and the consensus at the time would have been that it should have no practical effect. However, Lorenz discovered that small changes in initial conditions produced large changes in long-term outcome. Lorenz's discovery, which gave its name to Lorenz attractors, showed that even detailed atmospheric modeling cannot, in general, make precise long-term weather predictions. In 1963, Benoit Mandelbrot, studying information theory, discovered that noise in many phenomena (including stock prices and telephone circuits) was patterned like a Cantor set, a set of points with infinite roughness and detail. Mandelbrot described both the "Noah effect" (in which sudden discontinuous changes can occur) and the "Joseph effect" (in which persistence of a value can occur for a while, yet suddenly change afterwards). In 1967, he published "How long is the coast of Britain? Statistical self-similarity and fractional dimension", showing that a coastline's length varies with the scale of the measuring instrument, resembles itself at all scales, and is infinite in length for an infinitesimally small measuring device. Observing that a ball of twine appears as a point when viewed from far away (0-dimensional), a ball when viewed from fairly near (3-dimensional), or a curved strand (1-dimensional), he argued that the dimensions of an object are relative to the observer and may be fractional.
An object whose irregularity is constant over different scales ("self-similarity") is a fractal (examples include the Menger sponge, the Sierpiński gasket, and the Koch curve or snowflake, which is infinitely long yet encloses a finite space and has a fractal dimension of circa 1.2619). In 1982, Mandelbrot published The Fractal Geometry of Nature, which became a classic of chaos theory. In December 1977, the New York Academy of Sciences organized the first symposium on chaos, attended by David Ruelle, Robert May, James A. Yorke (coiner of the term "chaos" as used in mathematics), Robert Shaw, and the meteorologist Edward Lorenz. The following year Pierre Coullet and Charles Tresser published "Itérations d'endomorphismes et groupe de renormalisation", and Mitchell Feigenbaum's article "Quantitative Universality for a Class of Nonlinear Transformations" finally appeared in a journal, after 3 years of referee rejections. Thus Feigenbaum (1975) and Coullet & Tresser (1978) discovered the universality in chaos, permitting the application of chaos theory to many different phenomena. In 1979, Albert J. Libchaber, during a symposium organized in Aspen by Pierre Hohenberg, presented his experimental observation of the bifurcation cascade that leads to chaos and turbulence in Rayleigh–Bénard convection systems. He was awarded the Wolf Prize in Physics in 1986 along with Mitchell J. Feigenbaum for their inspiring achievements. In 1986, the New York Academy of Sciences co-organized with the National Institute of Mental Health and the Office of Naval Research the first important conference on chaos in biology and medicine. There, Bernardo Huberman presented a mathematical model of the eye tracking dysfunction among people with schizophrenia. This led to a renewal of physiology in the 1980s through the application of chaos theory, for example, in the study of pathological cardiac cycles. In 1987, Per Bak, Chao Tang and Kurt Wiesenfeld published a paper in Physical Review Letters describing for the first time self-organized criticality (SOC), considered one of the mechanisms by which complexity arises in nature. Alongside largely lab-based approaches such as the Bak–Tang–Wiesenfeld sandpile, many other investigations have focused on large-scale natural or social systems that are known (or suspected) to display scale-invariant behavior. Although these approaches were not always welcomed (at least initially) by specialists in the subjects examined, SOC has nevertheless become established as a strong candidate for explaining a number of natural phenomena, including earthquakes, (which, long before SOC was discovered, were known as a source of scale-invariant behavior such as the Gutenberg–Richter law describing the statistical distribution of earthquake sizes, and the Omori law describing the frequency of aftershocks), solar flares, fluctuations in economic systems such as financial markets (references to SOC are common in econophysics), landscape formation, forest fires, landslides, epidemics, and biological evolution (where SOC has been invoked, for example, as the dynamical mechanism behind the theory of "punctuated equilibria" put forward by Niles Eldredge and Stephen Jay Gould). Given the implications of a scale-free distribution of event sizes, some researchers have suggested that another phenomenon that should be considered an example of SOC is the occurrence of wars. 
These investigations of SOC have included both attempts at modelling (either developing new models or adapting existing ones to the specifics of a given natural system), and extensive data analysis to determine the existence and/or characteristics of natural scaling laws. Also in 1987 James Gleick published Chaos: Making a New Science, which became a best-seller and introduced the general principles of chaos theory as well as its history to the broad public. Initially the domain of a few, isolated individuals, chaos theory progressively emerged as a transdisciplinary and institutional discipline, mainly under the name of nonlinear systems analysis. Alluding to Thomas Kuhn's concept of a paradigm shift expounded in The Structure of Scientific Revolutions (1962), many "chaologists" (as some described themselves) claimed that this new theory was an example of such a shift, a thesis upheld by Gleick. The availability of cheaper, more powerful computers has broadened the applicability of chaos theory. Currently, chaos theory remains an active area of research, involving many different disciplines such as mathematics, topology, physics, social systems, population modeling, biology, meteorology, astrophysics, information theory, computational neuroscience, pandemic crisis management, etc. Lorenz's pioneering contributions to chaotic modeling Throughout his career, Professor Edward Lorenz authored a total of 61 research papers, out of which 58 were solely authored by him. Commencing with the 1960 conference in Japan, Lorenz embarked on a journey of developing diverse models aimed at uncovering SDIC and chaotic features. A recent review of Lorenz's model progression spanning from 1960 to 2008 revealed his adeptness at employing varied physical systems to illustrate chaotic phenomena. These systems encompassed quasi-geostrophic systems, the conservative vorticity equation, the Rayleigh–Bénard convection equations, and the shallow water equations. Moreover, Lorenz can be credited with the early application of the logistic map to explore chaotic solutions, a milestone he achieved ahead of his colleagues (e.g. Lorenz 1964). In 1972, Lorenz coined the term "butterfly effect" as a metaphor to discuss whether a small perturbation could eventually create a tornado with a three-dimensional, organized, and coherent structure. While connected to the original butterfly effect based on sensitive dependence on initial conditions, its metaphorical variant carries distinct nuances. To commemorate this milestone, a reprint book containing invited papers that deepen our understanding of both butterfly effects was officially published to celebrate the 50th anniversary of the metaphorical butterfly effect. A popular but inaccurate analogy for chaos The sensitive dependence on initial conditions (i.e., butterfly effect) has been illustrated using the following folklore: For want of a nail, the shoe was lost. For want of a shoe, the horse was lost. For want of a horse, the rider was lost. For want of a rider, the battle was lost. For want of a battle, the kingdom was lost. And all for the want of a horseshoe nail. Based on the above, many people mistakenly believe that the impact of a tiny initial perturbation monotonically increases with time and that any tiny perturbation can eventually produce a large impact on numerical integrations.
However, in 2008, Lorenz stated that he did not feel that this verse described true chaos but that it better illustrated the simpler phenomenon of instability and that the verse implicitly suggests that subsequent small events will not reverse the outcome. Based on the analysis, the verse only indicates divergence, not boundedness. Boundedness is important for the finite size of a butterfly pattern. In a recent study, the characteristic of the aforementioned verse was denoted as "finite-time sensitive dependence". Applications Although chaos theory was born from observing weather patterns, it has become applicable to a variety of other situations. Some areas benefiting from chaos theory today are geology, mathematics, biology, computer science, economics, engineering, finance, meteorology, philosophy, anthropology, physics, politics, population dynamics, and robotics. A few categories are listed below with examples, but this is by no means a comprehensive list as new applications are appearing. Cryptography Chaos theory has been used for many years in cryptography. In the past few decades, chaos and nonlinear dynamics have been used in the design of hundreds of cryptographic primitives. These algorithms include image encryption algorithms, hash functions, secure pseudo-random number generators, stream ciphers, watermarking, and steganography. The majority of these algorithms are based on uni-modal chaotic maps and a big portion of these algorithms use the control parameters and the initial condition of the chaotic maps as their keys. From a wider perspective, without loss of generality, the similarities between the chaotic maps and the cryptographic systems are the main motivation for the design of chaos based cryptographic algorithms. One type of encryption, secret key or symmetric key, relies on diffusion and confusion, which is modeled well by chaos theory. Another type of computing, DNA computing, when paired with chaos theory, offers a way to encrypt images and other information. Many of the DNA–chaos cryptographic algorithms have been shown to be insecure, or the technique applied has been suggested to be inefficient. Robotics Robotics is another area that has recently benefited from chaos theory. Instead of robots acting in a trial-and-error type of refinement to interact with their environment, chaos theory has been used to build a predictive model. Chaotic dynamics have been exhibited by passive walking biped robots. Biology For over a hundred years, biologists have been keeping track of populations of different species with population models. Most models are continuous, but recently scientists have been able to implement chaotic models in certain populations. For example, a study on models of Canadian lynx showed there was chaotic behavior in the population growth. Chaos can also be found in ecological systems, such as hydrology. While a chaotic model for hydrology has its shortcomings, there is still much to learn from looking at the data through the lens of chaos theory. Another biological application is found in cardiotocography. Fetal surveillance is a delicate balance of obtaining accurate information while being as noninvasive as possible. Better models of warning signs of fetal hypoxia can be obtained through chaotic modeling. As Perry points out, modeling of chaotic time series in ecology is helped by constraint. There is always potential difficulty in distinguishing real chaos from chaos that is only in the model.
Hence constraint in the model and/or duplicate time series data for comparison will be helpful in constraining the model to something close to reality, for example Perry & Wall 1984. Gene-for-gene co-evolution sometimes shows chaotic dynamics in allele frequencies. Adding variables exaggerates this: Chaos is more common in models incorporating additional variables to reflect additional facets of real populations. Robert M. May himself did some of these foundational crop co-evolution studies, and this in turn helped shape the entire field. Even for a steady environment, merely combining one crop and one pathogen may result in quasi-periodic or chaotic oscillations in the pathogen population. Economics It is possible that economic models can also be improved through an application of chaos theory, but predicting the health of an economic system and what factors influence it most is an extremely complex task. Economic and financial systems are fundamentally different from those in the classical natural sciences since the former are inherently stochastic in nature, as they result from the interactions of people, and thus pure deterministic models are unlikely to provide accurate representations of the data. The empirical literature that tests for chaos in economics and finance presents very mixed results, in part due to confusion between specific tests for chaos and more general tests for non-linear relationships. Chaos could be found in economics by means of recurrence quantification analysis. In fact, Orlando et al., by means of the so-called recurrence quantification correlation index, were able to detect hidden changes in time series. Then, the same technique was employed to detect transitions from laminar (regular) to turbulent (chaotic) phases as well as differences between macroeconomic variables, and to highlight hidden features of economic dynamics. Finally, chaos theory could help in modeling how an economy operates as well as in embedding shocks due to external events such as COVID-19. Finite predictability in weather and climate Due to the sensitive dependence of solutions on initial conditions (SDIC), also known as the butterfly effect, chaotic systems like the Lorenz 1963 model imply a finite predictability horizon. This means that while accurate predictions are possible over a finite time period, they are not feasible over an infinite time span. Considering the nature of Lorenz's chaotic solutions, the committee led by Charney et al. in 1966 extrapolated a doubling time of five days from a general circulation model, suggesting a predictability limit of two weeks. This connection between the five-day doubling time and the two-week predictability limit was also recorded in a 1969 report by the Global Atmospheric Research Program (GARP). To acknowledge the combined direct and indirect influences from the Mintz and Arakawa model and Lorenz's models, as well as the leadership of Charney et al., Shen et al. refer to the two-week predictability limit as the "Predictability Limit Hypothesis," drawing an analogy to Moore's Law. AI-extended modeling framework In AI-driven large language models, responses can exhibit sensitivities to factors like alterations in formatting and variations in prompts. These sensitivities are akin to butterfly effects.
Although classifying AI-powered large language models as classical deterministic chaotic systems poses challenges, chaos-inspired approaches and techniques (such as ensemble modeling) may be employed to extract reliable information from these expansive language models (see also "Butterfly Effect in Popular Culture"). Other areas In chemistry, predicting gas solubility is essential to manufacturing polymers, but models using particle swarm optimization (PSO) tend to converge to the wrong points. An improved version of PSO has been created by introducing chaos, which keeps the simulations from getting stuck. In celestial mechanics, especially when observing asteroids, applying chaos theory leads to better predictions about when these objects will approach Earth and other planets. Four of the five moons of Pluto rotate chaotically. In quantum physics and electrical engineering, the study of large arrays of Josephson junctions benefitted greatly from chaos theory. Closer to home, coal mines have always been dangerous places where frequent natural gas leaks cause many deaths. Until recently, there was no reliable way to predict when they would occur. But these gas leaks have chaotic tendencies that, when properly modeled, can be predicted fairly accurately. Chaos theory can be applied outside of the natural sciences, but historically nearly all such studies have suffered from lack of reproducibility; poor external validity; and/or inattention to cross-validation, resulting in poor predictive accuracy (if out-of-sample prediction has even been attempted). Glass and Mandell and Selz have found that no EEG study has as yet indicated the presence of strange attractors or other signs of chaotic behavior. Redington and Reidbord (1992) attempted to demonstrate that the human heart could display chaotic traits. They monitored the changes in between-heartbeat intervals for a single psychotherapy patient as she moved through periods of varying emotional intensity during a therapy session. Results were admittedly inconclusive. Not only were there ambiguities in the various plots the authors produced to purportedly show evidence of chaotic dynamics (spectral analysis, phase trajectory, and autocorrelation plots), but also when they attempted to compute a Lyapunov exponent as more definitive confirmation of chaotic behavior, the authors found they could not reliably do so. In their 1995 paper, Metcalf and Allen maintained that they uncovered in animal behavior a pattern of period doubling leading to chaos. The authors examined a well-known response called schedule-induced polydipsia, by which an animal deprived of food for certain lengths of time will drink unusual amounts of water when the food is at last presented. The control parameter (r) operating here was the length of the interval between feedings, once resumed. The authors were careful to test a large number of animals and to include many replications, and they designed their experiment so as to rule out the likelihood that changes in response patterns were caused by different starting places for r. Time series and first delay plots provide the best support for the claims made, showing a fairly clear march from periodicity to irregularity as the feeding times were increased. The various phase trajectory plots and spectral analyses, on the other hand, do not match up well enough with the other graphs or with the overall theory to lead inexorably to a chaotic diagnosis. 
For example, the phase trajectories do not show a definite progression towards greater and greater complexity (and away from periodicity); the process seems quite muddied. Also, where Metcalf and Allen saw periods of two and six in their spectral plots, there is room for alternative interpretations. All of this ambiguity necessitates some serpentine, post-hoc explanation to show that the results fit a chaotic model. By adapting a model of career counseling to include a chaotic interpretation of the relationship between employees and the job market, Amundson and Bright found that better suggestions can be made to people struggling with career decisions. Modern organizations are increasingly seen as open complex adaptive systems with fundamental natural nonlinear structures, subject to internal and external forces that may contribute chaos. For instance, team building and group development is increasingly being researched as an inherently unpredictable system, as the uncertainty of different individuals meeting for the first time makes the trajectory of the team unknowable. Traffic forecasting may benefit from applications of chaos theory. Better predictions of when congestion will occur would allow measures to be taken to disperse it before it occurs. Combining chaos theory principles with a few other methods has led to a more accurate short-term prediction model. Chaos theory has been applied to environmental water cycle data (also hydrological data), such as rainfall and streamflow. These studies have yielded controversial results, because the methods for detecting a chaotic signature are often relatively subjective. Early studies tended to "succeed" in finding chaos, whereas subsequent studies and meta-analyses called those studies into question and provided explanations for why these datasets are not likely to have low-dimension chaotic dynamics. See also Examples of chaotic systems Advected contours Arnold's cat map Bifurcation theory Bouncing ball dynamics Chua's circuit Cliodynamics Coupled map lattice Double pendulum Duffing equation Dynamical billiards Economic bubble Gaspard-Rice system Hénon map Horseshoe map List of chaotic maps Rössler attractor Standard map Swinging Atwood's machine Tilt A Whirl Other related topics Amplitude death Anosov diffeomorphism Catastrophe theory Causality Chaos as topological supersymmetry breaking Chaos machine Chaotic mixing Chaotic scattering Control of chaos Determinism Edge of chaos Emergence Mandelbrot set Kolmogorov–Arnold–Moser theorem Ill-conditioning Ill-posedness Nonlinear system Patterns in nature Predictability Quantum chaos Santa Fe Institute Shadowing lemma Synchronization of chaos Unintended consequence People Ralph Abraham Michael Berry Leon O. Chua Ivar Ekeland Doyne Farmer Martin Gutzwiller Brosl Hasslacher Michel Hénon Aleksandr Lyapunov Norman Packard Otto Rössler David Ruelle Oleksandr Mikolaiovich Sharkovsky Robert Shaw Floris Takens James A. Yorke George M. Zaslavsky References Further reading Articles Textbooks Semitechnical and popular works Christophe Letellier, Chaos in Nature, World Scientific Publishing Company, 2012.
John Briggs and David Peat, Turbulent Mirror: An Illustrated Guide to Chaos Theory and the Science of Wholeness, Harper Perennial 1990, 224 pp. John Briggs and David Peat, Seven Life Lessons of Chaos: Spiritual Wisdom from the Science of Change, Harper Perennial 2000, 224 pp. Predrag Cvitanović, Universality in Chaos, Adam Hilger 1989, 648 pp. Leon Glass and Michael C. Mackey, From Clocks to Chaos: The Rhythms of Life, Princeton University Press 1988, 272 pp. James Gleick, Chaos: Making a New Science, New York: Penguin, 1988. 368 pp. L Douglas Kiel, Euel W Elliott (ed.), Chaos Theory in the Social Sciences: Foundations and Applications, University of Michigan Press, 1997, 360 pp. Arvind Kumar, Chaos, Fractals and Self-Organisation; New Perspectives on Complexity in Nature, National Book Trust, 2003. Hans Lauwerier, Fractals, Princeton University Press, 1991. Edward Lorenz, The Essence of Chaos, University of Washington Press, 1996. David Peak and Michael Frame, Chaos Under Control: The Art and Science of Complexity, Freeman, 1994. Heinz-Otto Peitgen and Dietmar Saupe (Eds.), The Science of Fractal Images, Springer 1988, 312 pp. Nuria Perpinya, Caos, virus, calma. La Teoría del Caos aplicada al desórden artístico, social y político, Páginas de Espuma, 2021. Clifford A. Pickover, Computers, Pattern, Chaos, and Beauty: Graphics from an Unseen World, St Martins Pr 1991. Clifford A. Pickover, Chaos in Wonderland: Visual Adventures in a Fractal World, St Martins Pr 1994. Ilya Prigogine and Isabelle Stengers, Order Out of Chaos, Bantam 1984. David Ruelle, Chance and Chaos, Princeton University Press 1993. Ivars Peterson, Newton's Clock: Chaos in the Solar System, Freeman, 1993. Manfred Schroeder, Fractals, Chaos, and Power Laws, Freeman, 1991. Ian Stewart, Does God Play Dice?: The Mathematics of Chaos, Blackwell Publishers, 1990. Steven Strogatz, Sync: The emerging science of spontaneous order, Hyperion, 2003. Yoshisuke Ueda, The Road To Chaos, Aerial Pr, 1993. M. Mitchell Waldrop, Complexity: The Emerging Science at the Edge of Order and Chaos, Simon & Schuster, 1992. Antonio Sawaya, Financial Time Series Analysis: Chaos and Neurodynamics Approach, Lambert, 2012. External links Nonlinear Dynamics Research Group with Animations in Flash The Chaos group at the University of Maryland The Chaos Hypertextbook. An introductory primer on chaos and fractals ChaosBook.org An advanced graduate textbook on chaos (no fractals) Society for Chaos Theory in Psychology & Life Sciences Nonlinear Dynamics Research Group at CSDC, Florence, Italy Nonlinear dynamics: how science comprehends chaos, talk presented by Sunny Auyang, 1998. Nonlinear Dynamics. Models of bifurcation and chaos by Elmer G. Wiens Gleick's Chaos (excerpt) Systems Analysis, Modelling and Prediction Group at the University of Oxford A page about the Mackey-Glass equation High Anxieties — The Mathematics of Chaos (2008) BBC documentary directed by David Malone The chaos theory of evolution – article published in New Scientist featuring similarities of evolution and non-linear systems including fractal nature of life and chaos. Jos Leys, Étienne Ghys and Aurélien Alvarez, Chaos, A Mathematical Adventure. Nine films about dynamical systems, the butterfly effect and chaos theory, intended for a wide audience.
"Chaos Theory", BBC Radio 4 discussion with Susan Greenfield, David Papineau & Neil Johnson (In Our Time, May 16, 2002) Chaos: The Science of the Butterfly Effect (2019) an explanation presented by Derek Muller Copyright note Complex systems theory Computational fields of study