Compartmentalization (psychology)
Compartmentalization is a form of psychological defense mechanism in which thoughts and feelings that seem to conflict are kept separated or isolated from each other in the mind. Those with post-traumatic stress disorder may use compartmentalization to separate positive and negative self-aspects. It may be a form of mild dissociation; example scenarios that suggest compartmentalization include acting in an isolated moment in a way that logically defies one's own moral code, or dividing one's unpleasant work duties from one's desire to relax. Its purpose is to avoid cognitive dissonance, the mental discomfort and anxiety caused by holding conflicting values, cognitions, emotions, or beliefs within oneself. Compartmentalization allows these conflicting ideas to co-exist by inhibiting direct or explicit acknowledgement and interaction between separate compartmentalized self-states.

Psychoanalytic views
Psychoanalysis considers that whereas isolation separates thoughts from feeling, compartmentalization separates different (incompatible) cognitions from each other. As a secondary, intellectual defense, it may be linked to rationalization. It is also related to the phenomenon of neurotic typing, whereby everything must be classified into mutually exclusive and watertight categories. It has been said that when thinking about death, people end up compartmentalizing, holding denial and acceptance at the same time; both modes have the effect of making the thinking individual very passive. Otto Kernberg has used the term "bridging interventions" for the therapist's attempts to straddle and contain contradictory and compartmentalized components of the patient's mind.

Vulnerability
Compartmentalization can be positive, negative, or integrated, depending on the context and the person. Compartmentalization may lead to hidden vulnerabilities related to self-organization and self-esteem in those who use it as a major defense mechanism. When a negative self-aspect is activated, it may cause a drop in self-esteem and mood; the observed vulnerability is attributed to this drop.

Social identity
Conflicting social identities may be dealt with by compartmentalizing them and dealing with each only in a context-dependent way.

Post-traumatic stress disorder (PTSD) and compartmentalization
Those who have PTSD often compartmentalize positive and negative self-aspects more than those without PTSD; this helps keep the negative self-aspects from overtaking the positive ones. A positive self-concept can be kept safe through compartmentalization, particularly for those who have experienced sexual trauma and subsequently been diagnosed with PTSD.

Mindfulness and compartmentalization
Mindfulness meditation may help reduce compartmentalized self-knowledge. In addition, people with greater trait mindfulness may hold less negative self-concepts.

Literary examples
In his novel The Human Factor, Graham Greene has one of his corrupt officials use the rectangular boxes of Ben Nicholson's art as a guide to avoiding moral responsibility for bureaucratic decision-making, a way to compartmentalize oneself within one's own separately colored box. Doris Lessing considered that the essential theme of The Golden Notebook was "that we must not divide things off, must not compartmentalise. 'Bound. Free. Good. Bad. Yes. No. Capitalism. Socialism. Sex. Love...'".
See also: Catharsis, Confirmation bias, Doublethink, Idealization and devaluation, Intellectualization, Psychodynamics, Rationalization (psychology), Sublimation, Suspension of disbelief.
Positive psychology
Positive psychology is a field of psychological theory and research concerned with the optimal functioning of people, groups, and institutions. It studies "positive subjective experience, positive individual traits, and positive institutions... it aims to improve quality of life." Positive psychology began as a new domain of psychology in 1998 when Martin Seligman chose it as the theme for his term as president of the American Psychological Association. It is a reaction against past practices, which tended to focus on mental illness and emphasized maladaptive behavior and negative thinking. It builds on the humanistic movement of Abraham Maslow and Carl Rogers, which encourages an emphasis on happiness, well-being, and purpose. Positive psychology largely relies on concepts from the Western philosophical tradition, such as the Aristotelian concept of eudaimonia, which is typically rendered in English as "flourishing", "the good life", or "happiness". Positive psychologists study empirically the conditions and processes that contribute to flourishing, subjective well-being, and happiness, often using these terms interchangeably. Positive psychologists suggest a number of factors that may contribute to happiness and subjective well-being, for example: social ties with a spouse, family, friends, colleagues, and wider networks; membership in clubs or social organizations; physical exercise; and the practice of meditation. Spiritual practice and religious commitment are another possible source of increased well-being. Happiness may rise with increasing income, though it may plateau or even fall when no further gains are made or after a certain cut-off amount. Positive psychology has practical applications in fields related to education, the workplace, community development, and mental healthcare. This domain of psychology aims to enrich individuals' lives by promoting well-being and fostering positive experiences and characteristics, thus contributing to a more fulfilling and meaningful life.

History
Influences from ancient history
Before the use of the term "positive psychology", there were researchers who focused on topics that would now be included under its umbrella. Some view positive psychology as a meeting of Eastern thought, such as Buddhism, and Western psychodynamic approaches. The historical roots of positive psychology are found in the teachings of Aristotle, whose Nicomachean Ethics is a description of the theory and practice of human flourishing (which he referred to as eudaimonia), of the tutelage necessary to achieve it, and of the psychological obstacles to its practice. It teaches the cultivation of virtues as the means of attaining happiness and well-being.

Influences from psychological domains and theoretical foundations
Scientific research on well-being dates back to the 1950s. Several humanistic psychologists, most notably Maslow, Carl Rogers, and Erich Fromm, developed theories and practices pertaining to human happiness and flourishing. More recently, positive psychologists have found empirical support for the humanistic theories of flourishing. In 1984, psychologist Ed Diener published his tripartite model of subjective well-being, which posited "three distinct but often related components of wellbeing: frequent positive affect, infrequent negative affect, and cognitive evaluations such as life satisfaction." In this model, cognitive, affective, and contextual factors contribute to subjective well-being.
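To make the shape of this tripartite model concrete, here is a minimal, illustrative Python sketch; the attribute names, the 0-1 and 0-10 scales, and the simple affect-balance arithmetic are assumptions made for the example, not Diener's actual survey instruments or scoring procedures.

    from dataclasses import dataclass

    @dataclass
    class SubjectiveWellBeingProfile:
        """Toy representation of the three components (illustrative only)."""
        positive_affect: float    # frequency of pleasant emotions, assumed 0-1
        negative_affect: float    # frequency of unpleasant emotions, assumed 0-1
        life_satisfaction: float  # cognitive evaluation of one's life, assumed 0-10

        def summary(self) -> dict:
            # In the model, frequent positive affect and infrequent negative affect
            # both push subjective well-being upward; life satisfaction is the
            # cognitive component and is reported separately here.
            return {
                "affect_balance": self.positive_affect - self.negative_affect,
                "life_satisfaction": self.life_satisfaction,
            }

    # Hypothetical respondent: mostly pleasant emotions, rare unpleasant ones,
    # and a fairly high evaluation of life as a whole.
    respondent = SubjectiveWellBeingProfile(0.75, 0.25, 7.5)
    print(respondent.summary())  # {'affect_balance': 0.5, 'life_satisfaction': 7.5}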
According to Diener and Suh, subjective well-being is "based on the idea that how each person thinks and feels about his or her life is important." Carol Ryff's six-factor model of psychological well-being was first published in 1989. It postulates that self-acceptance, personal growth, purpose in life, environmental mastery, autonomy, and positive relations with others are key to well-being. According to Corey Keyes, who collaborated with Carol Ryff and uses the term flourishing as a central concept, mental well-being has three components: hedonic (i.e. subjective or emotional), psychological, and social well-being. Hedonic well-being concerns emotional aspects of well-being, whereas psychological and social well-being, e.g. eudaimonic well-being, concerns skills, abilities, and optimal functioning. This tripartite model of mental well-being has received cross-cultural empirical support. Key Figures The positive psychology movement was first founded in 1998 by Martin Seligman. He was concerned about the fact that mainstream psychology was too focused on disease, disorders, and disabilities rather than wellbeing, resilience, and recovery. His goal was to apply the methodological, scientific, scholarly and organizational strengths of mainstream psychology to facilitate well-being rather than illness and disease. The field has been influenced by humanistic as well as psychodynamic approaches to treatment. Predating the use of the term "positive psychology", researchers within the field of psychology had focused on topics that would now be included under this new denomination. The term "positive psychology" dates at least to 1954, when Abraham Maslow's Motivation and Personality was published with a final chapter titled "Toward a Positive Psychology." In the second edition published in 1970, he removed that chapter, saying in the preface that "a positive psychology is at least available today though not very widely." There have been indications that psychologists since the 1950s have increasingly focused on promoting mental health rather than merely treating mental illness. From the beginning of psychology, the field addressed the human experience using the "Disease model," studying and identifying the dysfunction of a person. In the opening sentence of his book Authentic Happiness, Seligman claimed: "for the last half century psychology has been consumed with a single topic only—mental illness," expanding on Maslow's comments. He urged psychologists to continue the earlier missions of psychology of nurturing talent and improving normal life. Development The first positive psychology summit took place in 1999. The First International Conference on Positive Psychology took place in 2002. In September 2005, the first master's program in applied positive psychology (MAPP) was launched at the University of Pennsylvania. In 2006, a course on positive psychology at Harvard University was one of the most popular courses on offer. In June 2009, the First World Congress on Positive Psychology took place in Philadelphia. The field of positive psychology today is most advanced in the United States, Canada, Western Europe, and Australia. Core Concepts Definition Martin Seligman and Mihaly Csikszentmihalyi define positive psychology as "the scientific study of positive human functioning and flourishing on multiple levels that include the biological, personal, relational, institutional, cultural, and global dimensions of life." 
Core principles
Positive psychology concerns eudaimonia, a word that means human thriving or flourishing. A "good life" is defined by psychologists and philosophers as consisting of authentic expression of self, a sense of well-being, and active engagement in life. Positive psychology aims to complement and extend traditional problem-focused psychology. It concerns positive states (e.g. happiness), positive traits (e.g. talents, interests, strengths of character), positive relationships, and positive institutions, and how these apply to physical health. Seligman proposes that a person can best promote their well-being by nurturing their character strengths. Seligman identifies other possible goals of positive psychology: families and schools that allow children to grow, workplaces that aim for satisfaction and high productivity, and teaching others about positive psychology. A basic premise of positive psychology is that human actions arise from our anticipations about the future; these anticipations are informed by our past experiences. Those who practice positive psychology attempt psychological interventions that foster positive attitudes toward one's subjective experiences, individual traits, and life events. The goal is to minimize pathological thoughts that may arise in a hopeless mindset and to develop a sense of optimism toward life. Positive psychologists seek to encourage acceptance of one's past, excitement and optimism about one's future, and a sense of contentment and well-being in the present.

Happiness
Happiness can be defined in two general ways: as an enjoyable state of mind, and as the living of an enjoyable life.

Quality of life
Quality of life is how well a person is living and functioning in life. It encompasses more than just physical and mental well-being; it can also include socioeconomic factors. The term can be perceived differently in different cultures and regions around the world.

Research topics
According to Seligman and Christopher Peterson, positive psychology addresses three issues: positive emotions, positive individual traits, and positive institutions. Positive emotions concern being content with one's past, being happy in the present, and having hope for the future. Positive individual traits are one's strengths and virtues. Positive institutions are strengths that better a community of people. According to Peterson, positive psychologists are concerned with four topics: positive experiences, enduring psychological traits, positive relationships, and positive institutions. He also states that topics of interest to researchers in the field are states of pleasure or flow, values, strengths, virtues, talents, as well as the ways that these can be promoted by social systems and institutions.

Theoretical frameworks
There is no accepted "gold standard" theory in positive psychology. The work of Seligman is regularly cited, as is the work of Csikszentmihalyi and older models of well-being, such as Ryff's six-factor model of psychological well-being and Diener's tripartite model of subjective well-being. Later, Paul Wong introduced the concept of second-wave positive psychology.
Initial theory: three paths to happiness In Authentic Happiness (2002) Seligman proposed three kinds of a happy life that can be investigated: Pleasant life: research into the pleasant life, or the "life of enjoyment", examines how people optimally experience, forecast, and savor the positive feelings and emotions that are part of normal and healthy living (e.g., relationships, hobbies, interests, entertainment, etc.). Seligman says this most transient element of happiness may be the least important. Good Life: investigation of the beneficial effects of immersion, absorption, and flow felt by people when optimally engaged with their primary activities, is the study of the Good Life, or the "life of engagement". Flow is experienced when there is a match between a person's strengths and their current task, i.e., when one feels confident of accomplishing a chosen or assigned task. Related concepts include self-efficacy and play. Meaningful Life: inquiry into the meaningful life, or "life of affiliation", questions how people derive a positive sense of well-being, belonging, meaning, and purpose from being part of and contributing back to something larger and more enduring than themselves (e.g., nature, social groups, organizations, movements, traditions, belief systems). PERMA In Flourish (2011), Seligman argued that the last category of his proposed three kinds of a happy life, "meaningful life", can be considered as three different categories. The resulting summary for this theory is the mnemonic acronym PERMA: Positive Emotions, Engagement, Relationships, Meaning and purpose, and Accomplishments. Positive emotions include a wide range of feelings, not just happiness and joy, but excitement, satisfaction, pride, and awe, amongst others. These are connected to positive outcomes, such as longer life and healthier social relationships. Engagement refers to involvement in activities that draw and build upon one's interests. Csikszentmihalyi explains true engagement as flow, a state of deep effortless involvement, a feeling of intensity that leads to a sense of ecstasy and clarity. The task being done must call upon a particular skill and it should be possible while being a little bit challenging. Engagement involves passion for and concentration on the task at hand—complete absorption and loss of self-consciousness. Relationships are essential in fueling positive emotions, whether they are work-related, familial, romantic, or platonic. As Peterson puts it, "other people matter." Humans receive, share, and spread positivity to others through relationships. Relationships are important in bad times and good times. Relationships can be strengthened by reacting to one another positively. Typically positive things take place in the presence of other people. Meaning is also known as purpose, and answers the question of "why?" Discovering a clear "why" puts everything into context from work to relationships to other parts of life. Finding meaning is learning that there is something greater than oneself. Working with meaning drives people to continue striving for a desirable goal. Accomplishments are the pursuit of success and mastery. Unlike the other parts of PERMA, they are sometimes pursued even when accomplishments do not result in positive emotions, meaning, or relationships. Accomplishments can activate other elements of PERMA, such as pride, under positive emotion. Accomplishments can be individual or community-based, fun-based, or work-based. 
Each of the five PERMA elements was selected according to three criteria: It contributes to well-being. It is pursued for its own sake. It is defined and measured independently of the other elements. Character Strengths and Virtues The Character Strengths and Virtues (CSV) handbook (2004) was the first attempt by Seligman and Peterson to identify and classify positive psychological traits of human beings. Much like the Diagnostic and Statistical Manual of Mental Disorders (DSM) of general psychology, the CSV provided a theoretical framework to assist in understanding strengths and virtues and for developing practical applications for positive psychology. It identified six classes of virtues (i.e., "core virtues"), underlying 24 measurable character strengths. The CSV suggested these six virtues have a historical basis in the vast majority of cultures and that they can lead to increased happiness when built upon. Notwithstanding numerous cautions and caveats, this suggestion of universality leads to three theories: The study of positive human qualities broadens the scope of psychological research to include mental wellness. The leaders of the positive psychology movement challenge moral relativism by suggesting people are "evolutionarily predisposed" toward certain virtues. Virtue has a biological basis. The organization of the six virtues and 24 strengths is as follows: Wisdom and knowledge: creativity, curiosity, open-mindedness, love of learning, perspective, innovation, prudence Courage: bravery, persistence, vitality, zest Humanity: love, kindness, social intelligence Justice: citizenship, fairness, leadership, integrity, excellence Temperance: forgiveness and mercy, humility, self-control Transcendence: appreciation of beauty, gratitude, hope, humor, spirituality Subsequent research challenged the need for six virtues. Instead, researchers suggested the 24 strengths are more accurately grouped into just three or four categories: Intellectual Strengths, Interpersonal Strengths, and Temperance Strengths, or alternatively, Interpersonal Strengths, Fortitude, Vitality, and Cautiousness. These strengths, and their classifications, have emerged independently elsewhere in literature on values. Paul Thagard described some examples. Flow In the 1970s, Csikszentmihalyi began studying flow, a state of absorption in which one's abilities are well-matched to the demands at-hand. He often refers to it as "optimal experience". Flow is characterized by intense concentration, loss of self-awareness, a feeling of being perfectly challenged (neither bored nor overwhelmed), and a sense that "time is flying." Flow is intrinsically rewarding; it can also assist in the achievement of goals (e.g., winning a game) or improving skills (e.g., becoming a better chess player). Anyone can experience flow and it can be felt in different domains, such as play, creativity, and work. Flow is achieved when the challenge of the situation meets one's personal abilities. A mismatch of challenge for someone of low skills results in a state of anxiety and feeling overwhelmed; insufficient challenge for someone highly skilled results in boredom. A good example of this would be an adult reading a children's book. They would not feel challenged enough to be engaged or motivated in the reading. Csikszentmihalyi explained this using various combinations of challenge and skills to predict psychological states. 
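As a rough illustration of how these challenge-skill combinations can be mapped to the four states listed next, here is a minimal Python sketch; the 0-1 scales and the 0.5 cut-off are assumptions made for the example, not values drawn from Csikszentmihalyi's research.

    def flow_state(challenge: float, skill: float, threshold: float = 0.5) -> str:
        """Classify a (challenge, skill) pair into one of four states.

        Both inputs are assumed to be normalized to a 0-1 scale; the cut-off
        separating "low" from "high" is an arbitrary illustrative choice.
        """
        high_challenge = challenge >= threshold
        high_skill = skill >= threshold
        if high_challenge and high_skill:
            return "flow"
        if high_challenge:
            return "anxiety"
        if high_skill:
            return "relaxation"
        return "apathy"

    # An adult reading a children's book: very low challenge, high skill.
    print(flow_state(challenge=0.1, skill=0.9))   # relaxation
    # A demanding task that is well matched to a skilled person.
    print(flow_state(challenge=0.8, skill=0.85))  # flow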
The four states are:
Apathy: low challenge and low skill
Relaxation: low challenge and high skill
Anxiety: high challenge and low skill
Flow: high challenge and high skill
Accordingly, an adult reading a children's book would most likely be in the relaxation state: the adult has no need to worry that the task will be more than they can handle. Challenge is a well-founded explanation for how one enters the flow state and sustains intense concentration. However, other factors contribute. For example, one must be intrinsically motivated to participate in the activity or challenge. If the person is not interested in the task, there is no possibility of their being absorbed into the flow state.

Benefits
Flow can help in parenting. When flow is enhanced in both parents and children, parents can thrive better in their roles. A positively oriented parenting style results in children who experience lower levels of stress and improved well-being. Flow also has benefits in a school setting. When students are in a state of flow they are fully engaged, leading to better retention of information, and they have a more enjoyable and rewarding experience. This state can also reduce stress, which helps students' mental health and well-being, and it builds resilience, helping students to overcome challenges or setbacks by teaching them a growth mindset. Most teachers and parents want students to become more engaged and interested in the classroom, but the design of the education system has not been able to account for such needs. One school implemented a program called PASS. The school acknowledged that students needed more challenge and individual advancement, which it referred to as sport culture. The PASS program integrated an elective class in which students could immerse themselves; activities included self-paced learning, mastery-based learning, performance learning, and so on. Flow also benefits general well-being. It is a positive and intrinsically motivating experience that is known to "produce intense feelings of enjoyment", and it can improve our lives by making them happier and more meaningful. Csikszentmihalyi found that personal growth and development generate happiness, and flow is a positive experience because it provides an opportunity for such development.

Negatives
While flow can be beneficial to students, those who experience flow can become overly focused on a particular task, which can lead them to neglect other important aspects of their learning. In positive psychology more broadly, there can be misunderstandings about what clinicians and laypeople define as positive. In certain instances, positive qualities, such as optimism, can be detrimental to health and therefore appear as negative qualities. Conversely, negative processes, such as anxiety, can be conducive to health and stability and thus appear as positive qualities. A second wave of positive psychology has further identified and characterized "positive" and "negative" complexes through the use of critical and dialectical thinking. Researchers in 2016 chose to identify these characteristics via two complexes: post-traumatic growth and love, as well as optimism versus pessimism.

Second-wave positive psychology
Paul Wong introduced the idea of a second wave of positive psychology, focused on the pursuit of meaning in life, which he contrasted with the pursuit of happiness in life.
Ivtzan, Lomas, Hefferon, and Worth have recast positive psychology as being about positive outcomes or positive mental health, and have explored the positive outcomes of embracing negative emotions and pessimism. Second-wave positive psychology proposes that it is better to accept and transform the meaning of suffering than to avoid suffering. In 2016, Lomas and Ivtzan proposed that human flourishing (their goal for positive psychology) is about embracing the dialectical interplay of positive and negative: phenomena cannot be determined to be positive or negative independent of context. Some of their examples included:
The dialectic of optimism and pessimism: optimism is associated with longevity, but strategic pessimism can lead to more effective planning and decision making.
The dialectic of self-esteem and humility: self-esteem is related to well-being, but the pursuit of self-esteem can increase depression; humility can be either low self-opinion or a source of prosocial action.
The dialectic of forgiveness and anger: forgiveness has been associated with well-being, but people who are more forgiving of abuse may suffer prolonged abuse; while anger has been presented as a destructive emotion, it can also be a moral emotion drawn upon to confront injustices.
In 2019, Wong proposed four principles of second-wave positive psychology:
accepting and confronting with courage the reality that life is full of evil and suffering
recognizing that sustainable well-being can only be achieved by overcoming suffering and the dark side of life
recognizing that everything in life comes in polarities and that an adaptive balance must be achieved through dialectics
learning from indigenous psychology, such as the ancient wisdom of finding deep joy in bad situations
Second-wave positive psychology is sometimes abbreviated as PP 2.0.

Third-wave positive psychology
The third wave of positive psychology emphasizes going beyond the individual to take a deeper look at the groups and systems in which we live. It also promotes becoming more interdisciplinary and multicultural and incorporating more methodologies. In broadening the scope and exploring the systemic and socio-cultural dimensions of people's lived realities, there are four specific areas of focus:
The focus of enquiry: becoming more interested in emergent paradigms such as "systems-informed positive psychology", which incorporates principles and concepts from the systems sciences to optimize human social systems and the individuals within them.
Disciplines: becoming more multi- and interdisciplinary, as reflected in hybrid formulations like positive education, which combines traditional education with research-based ways of increasing happiness and overall well-being.
Cultural contexts: becoming more multicultural and global.
Methodologies: embracing other paradigms and ways of knowing, such as qualitative and mixed-methods approaches, rather than relying solely on quantitative research.

Research and evidence
Subject-matter and methodological developments have expanded the field of positive psychology beyond its core theories and methods. Positive psychology is now a global area of study, with various national indices tracking citizens' happiness ratings.

Research findings
Research in positive psychology, well-being, and happiness, and the theories of Diener, Ryff, Keyes, and Seligman, covers a broad range of topics including "the biological, personal, relational, institutional, cultural, and global dimensions of life."
A meta-analysis of 49 studies showed that Positive Psychology Interventions (PPI) produced improvements in well-being and lower depression levels; the PPIs studied included writing gratitude letters, learning optimistic thinking, replaying positive life experiences, and socializing with others. In a later meta-analysis of 39 studies with 6,139 participants, the outcomes were positive. Three to six months after a PPI the effects on subjective well-being and psychological well-being were still significant. However the positive effect was weaker than in the earlier meta-analysis; the authors concluded that this was because they only used higher-quality studies. The PPIs they considered included counting blessings, kindness practices, making personal goals, showing gratitude, and focusing on personal strengths. Another review of PPIs found that over 78% of intervention studies were conducted in Western countries. In the textbook Positive Psychology: The Science of Happiness, authors Compton and Hoffman give the "Top Down Predictors" of well-being as high self esteem, optimism, self efficacy, a sense of meaning in life, and positive relationships with others. The personality traits most associated with well-being are extraversion, agreeability, and low levels of neuroticism. In a study published in 2020, students were enrolled in a positive psychology course that focused on improving happiness and well-being through teaching about positive psychology. The participants answered questions pertaining to the five PERMA categories. At the end of the semester those same students reported significantly higher scores in all categories (p<.001) except for engagement which was significant at p<.05. The authors stated, “Not only do students learn and get credit, there is also a good chance that many will reap the benefits in what is most important to them—their health, happiness, and well-being.” A systematic review was conducted to explore the impact of positive psychology interventions on breast cancer patients' mental and physical well-being. The review analyzed multiple studies that examined interventions such as mindfulness, gratitude practices, and strengths-based approaches in improving quality of life for those diagnosed with breast cancer. Results consistently demonstrated that these interventions significantly reduced symptoms of anxiety, depression, and stress, while fostering resilience, optimism, and emotional well-being. Furthermore, positive psychology approaches were found to enhance patients' adherence to treatment and improve their ability to cope with the challenges of illness and recovery. This review highlights the growing evidence for incorporating positive psychology into breast cancer care, underscoring its potential to support both mental health and holistic recovery in patients. A systematic review and meta-analysis was conducted using various positive psychology techniques to enhance the well-being of individuals with psychiatric and somatic disorders, including breast cancer patients. Key methods included mindfulness-based interventions, gratitude exercises, and strength identification, which aimed to build emotional resilience. Additionally, practices like savoring, cognitive reappraisal, and self-compassion were employed to foster positive emotions and coping strategies. These interventions significantly contributed to reducing distress and promoting overall psychological health by encouraging patients to focus on positive aspects of life despite their challenges. 
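The meta-analyses summarized above pool effect sizes across studies. As a hedged illustration of the arithmetic involved, the following Python sketch performs simple fixed-effect, inverse-variance pooling on invented numbers; it does not reproduce the models or data of the cited reviews.

    import math

    # Hypothetical per-study effect sizes (standardized mean differences) and
    # standard errors; all numbers are invented for illustration.
    studies = [
        {"effect": 0.30, "se": 0.10},
        {"effect": 0.45, "se": 0.15},
        {"effect": 0.20, "se": 0.08},
    ]

    # Fixed-effect (inverse-variance) pooling: each study is weighted by 1 / SE^2.
    weights = [1 / s["se"] ** 2 for s in studies]
    pooled = sum(w * s["effect"] for w, s in zip(weights, studies)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))

    print(f"pooled effect = {pooled:.3f}, "
          f"95% CI = ({pooled - 1.96 * pooled_se:.3f}, {pooled + 1.96 * pooled_se:.3f})")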
Academic methods Quantitative Quantitative methods in positive psychology include p-technique factor analysis, dynamic factor analysis, interindividual differences and structural equation modeling, spectral analysis and item response models, dynamic systems analysis, latent growth analysis, latent-class models, hierarchical linear modeling, measurement invariance, experimental methods, behavior genetics, and integration of quantitative and qualitative approaches. Qualitative Grant J. Rich explored the use of qualitative methodology to study positive psychology. Rich addresses the popularity of quantitative methods in studying the empirical questions that positive psychology presents. He argues that there is an "overemphasis" on quantitative methods and suggests implementing qualitative methods, such as semi-structured interviews, observations, fieldwork, creative artwork, and focus groups. Rich states that qualitative approaches will further promote the "flourishing of positive psychology" and encourages such practice. Behavioral interventions Changing happiness levels through interventions is a further methodological advancement in the study of positive psychology, and has been the focus of various academic and scientific psychological publications. Happiness-enhancing interventions include expressing kindness, gratitude, optimism, humility, awe, and mindfulness. One behavioral experiment used two six-week interventions: one involving the performance of acts of kindness, and one focused on gratitude which emphasized the counting of one's blessings. The study participants who went through the behavioral interventions reported higher levels of happiness and well-being than those who did not participate in either intervention. Another study found that the interventions of expressing optimism and expressing gratitude enhanced subjective well-being in participants who took part in the intervention for eight months. The researchers concluded that interventions are "most successful when participants know about, endorse, and commit to the intervention." The article provides support that when people enthusiastically take part in behavioral interventions, such as expression of optimism and gratitude, they may increase happiness and subjective well-being. Another study examined the interaction effects between gratitude and humility through behavior interventions. The interventions were writing a gratitude letter and writing a 14-day diary. In both interventions, the researchers found that gratitude and humility are connected and are "mutually reinforcing." The study also discusses how gratitude, and its associated humility, may lead to more positive emotional states and subjective well-being. A series of experiments showed a positive effect of awe on subjective well-being. People who felt awe also reported feeling they had more time, more preference for experiential expenditures than material expenditures, and greater life satisfaction. Experiences that heighten awe may lead to higher levels of life satisfaction and, in turn, higher levels of happiness and subjective well-being. Mindfulness interventions may also increase subjective well-being in people who mindfully meditate. Being mindful in meditation includes awareness and observation of one's meditation practice, with non-reactive and non-judgmental sentiments during meditation. National indices of happiness The creation of various national indices of happiness have expanded the field of positive psychology to a global scale. 
In a January 2000 article in American Psychologist, psychologist Ed Diener argued for the creation of a national happiness index in the United States. Such an index could provide measurements of happiness, or subjective well-being, within the United States and across many other countries in the world. Diener argued that national indices would be helpful markers or indicators of population happiness, providing a sense of current ratings and a tracker of happiness across time. Diener proposed that the national index include various sub-measurements of subjective well-being, including "pleasant affect, unpleasant affect, life satisfaction, fulfillment, and more specific states such as stress, affection, trust, and joy." In 2012, the first World Happiness Report was published. The World Happiness Report was initiated by the UN General Assembly in June 2011, when it passed the Bhutanese Resolution. The Bhutanese Resolution called for nations across the world to "give more importance to happiness and well-being in determining how to achieve and measure social and economic development." The data for the World Happiness Reports is collected in partnership with the Gallup World Poll's life evaluations and annual happiness rankings. The World Happiness Report bases its national rankings on how happy constituents believe themselves to be. The first World Happiness Report, published in 2012, detailed the state of world happiness, the causes of happiness and misery, policy implications from happiness reports, and three case studies of subjective well-being for 1) Bhutan and its Gross National Happiness index, 2) the U.K. Office for National Statistics Experience, and 3) happiness in the member countries within the OECD. The 2020 World Happiness Report, the eighth in the series of reports, was the first to include happiness rankings of cities across the world, in addition to rankings of 156 countries. The city of Helsinki, Finland was reported as the city with the highest subjective well-being ranking, and the country of Finland was reported as the country with the highest subjective well-being ranking. The 2020 report provided insights on happiness based on environmental conditions, social conditions, urban-rural happiness differentials, and sustainable development. It also provided possible explanations for why Nordic countries have consistently ranked in the top ten happiest countries in the World Happiness Report, such as Nordic countries' high-quality government benefits and protections to its citizens, including welfare benefits and well-operated democratic institutions, as well as social connections, bonding, and trust. Additional national well-being indices and reported statistics include the Gallup Global Emotions Report, Sharecare Community Well-Being Index, Global Happiness Council's Global Happiness and Well-being Policy Report, Happy Planet Index, OECD Better Life Index, and UN Human Development Reports. Applications Positive psychology influenced other academic fields of study and scholarship, notably organizational behavior, education, and psychiatry. Positive organizational scholarship Positive organizational scholarship (POS), also referred to as positive organizational behavior (POB), began as an application of positive psychology to the field of organizational behavior. An early use of the term was in Positive Organizational Scholarship: Foundations of a New Discipline (2003), edited by Ross School of Business professors Kim S. Cameron, Jane E. Dutton, and Robert E. Quinn. 
The editors promote "the best of the human condition", such as goodness, compassion, resilience, and positive human potential, as an organizational goal as important as financial success. The goal of POS is to study the factors that create positive work experiences and successful, people-oriented outcomes. The 2011 volume The Oxford Handbook of Positive Organizational Scholarship covers such topics as positive human resource practices, positive organizational practices, and positive leadership and change. It applies positive psychology to the workplace context, covering areas such as positive individual attributes, positive emotions, strengths and virtues, and positive relationships.

Psychiatry
Positive psychology influenced psychiatry and led to more widespread promotion of practices including well-being therapy, positive psychotherapy, and the integration of positive psychology into therapeutic practice. The benefits of positive influences can be seen in practices such as positive psychological interventions (PPIs): interventions designed to promote positive outcomes through positive activities, for example praising and encouraging adaptive personality traits. Research has found that PPIs have the potential to improve medical and psychiatric outcomes for individuals with depression and suicidal ideation. The Department of Developmental Services (DDS) conducted a case study of a 52-year-old single gay man with bipolar II disorder, attention deficit hyperactivity disorder (ADHD), and low self-esteem. The study also focused on promoting kind behavior and on identifying and encouraging internal strengths in order to reach a fulfilling life. PPIs aim to increase an individual's sense of purpose by consistently reinforcing positive personality traits and behavior. Psychoanalysis can be used to treat mental illnesses such as depression, as it focuses on identifying the conscious and unconscious motivations and behaviors affecting one's life; treatment with psychoanalysis typically runs over a longer duration and uses free association to promote personal growth. Cognitive behavioral therapy (CBT), a common type of psychotherapy in the United States, combines cognitive and behavioral approaches, attending both to the motivations behind behavior and to objectively measurable behavior itself. Therapies such as exposure therapy help people with specific aspects of depression or with more specific conditions such as social phobia. Positive psychology may also assist those recovering from traumatic brain injury (TBI). TBI rehabilitation practices rely on bettering the patient's life by getting them to engage (or re-engage) in normal everyday practices, an idea related to tenets of positive psychology. While the empirical evidence for positive psychology in this setting is limited, its focus on small successes, optimism, and prosocial behavior promises to improve the social and emotional well-being of TBI patients.

Media and industry
Positive psychology is a subject of popular books and films, and influences the wellness industry.

Books
Several popular psychology books have been written for a general audience. Ilona Boniwell's Positive Psychology in a Nutshell provided a summary of the research.
According to Boniwell, well-being is related to optimism, extraversion, social connections (i.e., close friendships), being married, having engaging work, religion or spirituality, leisure, good sleep and exercise, social class (through lifestyle differences and better coping methods), and subjective health (what you think about your health). Boniwell writes that well-being is not related to age, physical attractiveness, money (once basic needs are met), gender (women are more often depressed but also more often joyful), educational level, having children, moving to a sunnier climate, crime prevention, housing, and objective health (what doctors say). Sonja Lyubomirsky's The How of Happiness provides advice on how to improve happiness. According to this book, people should create new habits, seek out new emotions, use variety and timing to prevent hedonic adaptation, and enlist others to support the creation of those new habits. Lyubomirsky recommends twelve happiness activities, including savoring life, learning to forgive, and living in the present. Stumbling on Happiness by Daniel Gilbert shares positive psychology research suggesting that people are often poor at predicting what will make them happy and that people are prone to misevaluating the causes of their happiness. He notes that the subjectivity of well-being and happiness often is the most difficult challenge to overcome in predicting future happiness, as our future selves may have different perspectives on life than our current selves. Films The film industry noticed positive psychology, and films have spurred new research within positive psychology. Happy is a full-length documentary film covering positive psychology and neuroscience. It highlights case studies on happiness across diverse cultures and geographies. The film features interviews with notable positive psychologists and scholars, including Gilbert, Diener, Lyubomirsky, and Csikszentmihalyi. For several years, the Positive Psychology News website included a section on Positive Psychology Movie Awards that highlighted feature films that featured messages of positive psychology. The VIA Institute has researched positive psychology as represented in feature films. Contemporary and popular films that promote or represent character strengths are the basis for various academic articles. Wellness industry The growing popularity and attention given to positive psychology research has influenced industry growth, development, and consumption of products and services meant to cater to wellness and well-being. According to the Global Wellness Institute, as of 2020, the global wellness economy is valued at trillion; the key sectors of the industry included Nutrition, Personal Care and Beauty, and Physical activity, while the Mental wellness and Public health sectors made up over billion. Companies highlight happiness and well-being in their marketing strategies. Food and beverage companies such as Coca-Cola and Pocky—whose motto is "Share happiness!"—emphasize happiness in their commercials, branding, and descriptions. CEOs at retail companies such as Zappos have profited by publishing books detailing how they deliver happiness, while Amazon's logo features a dimpled smile. Criticism and Limitations Many aspects of positive psychology have been criticized. Reality distortion Positive Illusions In 1988, psychologists Shelley E. Taylor and Jonathan D. Brown co-authored a Psychological Bulletin article that coined the phrase positive illusions. 
Positive illusions are the cognitive processes people engage in when they self-aggrandize or self-enhance. They are unrealistically positive or self-affirming attitudes that individuals hold about themselves, their position, or their environment. They are attitudes of extreme optimism that endure in the face of facts and real conditions. Taylor and Brown suggest that positive illusions protect people from negative feedback that they might receive, and this, in turn, preserves their psychological adaptation and subjective well-being. However, later research found that positive illusions and related attitudes lead to psychological maladaptive conditions such as poorer social relationships, expressions of narcissism, and negative workplace outcomes, thus reducing the positive effects that positive illusions have on subjective well-being, overall happiness, and life satisfaction. Kirk Schneider, editor of the Journal of Humanistic Psychology, pointed to research showing high positivity correlates with positive illusion, which distorts reality. High positivity or flourishing could make one incapable of psychological growth, unable to self-reflect, and prone to holding racial biases. By contrast, negativity, sometimes evidenced in mild to moderate depression, is correlated with less distortion of reality. Therefore, Schneider argues, negativity might play an important role: engaging in conflict and acknowledging appropriate negativity, including certain negative emotions like guilt, might better promote flourishing. Schneider wrote: "perhaps genuine happiness is not something you aim at, but is... a by-product of a life well lived—and a life well lived does not settle on the programmed or neatly calibrated." Role of negativity Barbara S. Held, a professor at Bowdoin College, argues that positive psychology has faults: negative side effects, negativity within the positive psychology movement, and the division in the field of psychology caused by differing opinions of psychologists on positive psychology. She notes the movement's lack of consistency regarding the role of negativity. She also raises issues with the simplistic approach taken by some psychologists in the application of positive psychology. A "one size fits all" approach is arguably not beneficial; she suggests a need for individual differences to be incorporated into its application. By teaching young people that being confident and optimistic leads to success, when they are unsuccessful they may believe this is because they are insecure or pessimistic. This could lead them to believe that any negative internal thought or feeling they may experience is damaging to their happiness and should be steered clear of completely. Held prefers the Second Wave Positive Psychology message of embracing the dialectic nature of positive and negative, and questions the need to call it "positive" psychology. Toxic positivity One critical response to positive psychology concerns "toxic positivity". Toxic positivity is when people do not fully acknowledge, process, or manage the entire spectrum of human emotion, including anger and sadness. This genre of criticism argues that positive psychology places too much importance on "upbeat thinking, while shunting challenging and difficult experiences to the side." 
People who engage in a constant chase for positive experiences or states of high subjective well-being may inadvertently stigmatize negative emotional conditions, such as depression, or may suppress natural emotional responses, such as sadness, regret, or stress. Furthermore, by not allowing negative emotional states to be experienced, or by suppressing and hiding negative emotional responses, people may suffer harmful physical, cardiovascular, and respiratory consequences. Opponents of toxic positivity advocate accepting and fully experiencing negative emotional states.

Methodological and philosophical critiques
Richard Lazarus critiqued positive psychology's methodological and philosophical components. He holds that giving more detail and insight into the positive is not bad, but should not come at the expense of the negative, because the two are inseparable. Among his critiques:
Positive psychology's use of correlational and cross-sectional research designs to indicate causality between the movement's ideas and healthy lives may hide other factors and time differences that account for the observed differences.
Emotions cannot be categorized dichotomously into positive and negative; emotions are subjective and rich in social and relational meaning. Emotions are fluid, and the contexts in which they appear change over time. Lazarus states that "all emotions have the potential of being either one or the other, or both, on different occasions, and even on the same occasion when an emotion is experienced by different persons".
Individual differences are neglected in most social science research. Many research designs focus on the statistical significance of group effects while overlooking differences among people.
Social science researchers tend not to adequately define and measure emotions. Most assessments are quick checklists that do not provide adequate debriefing, and many researchers do not differentiate between fluid emotional states and relatively stable personality traits.
Lazarus holds that positive psychology claims to be new and innovative, but the majority of research on stress and coping theory makes many of the same claims. The movement attempts to uplift and reinforce the positive aspects of one's life, yet everyone experiences stress and hardship; coping with these events should be regarded not as adapting to failure but as successfully navigating stress, a perspective the movement, in his view, does not hold.
Another critique of positive psychology is that it has been developed from a Eurocentric worldview. Intersectionality has become a methodological concern regarding studies within positive psychology. A literature review conducted in 2022 noted several criticisms of the field, including a lack of conceptual thinking, problematic measurements, poor replication of results, self-isolation from mainstream psychology, decontextualization, and its use in the service of capitalism.

Narrow focus
In 2003, Ian Sample noted that "Positive psychologists also stand accused of burying their heads in the sand and ignoring that depressed, even merely unhappy people, have real problems that need dealing with." He quoted Steven Wolin, a clinical psychiatrist at George Washington University, as saying that the study of positive psychology is just a reiteration of older ways of thinking, and that there is not much scientific research to support the efficacy of this method.
Psychological researcher Shelly Gable retorts that positive psychology is just bringing a balance to a side of psychology that is glaringly understudied. She points to imbalances favoring research into negative psychological well-being in cognitive psychology, health psychology, and social psychology. Psychologist Jack Martin maintains that positive psychology is not unique in its optimistic approach to emotional well-being, stating that other forms of psychology, such as counseling and educational psychology, are also interested in positive human fulfillment. He says while positive psychology pushes for schools to be more student-centered and able to foster positive self-images in children, a lack of focus on self-control may prevent children from making full contributions to society. If positive psychology is not implemented correctly, it can cause more harm than good. This is the case, for example, when interventions in school are coercive (in the sense of being imposed on everyone without regard for the child's reason for negativity) and fail to take each student's context into account. Applications and Misapplications The US Army's Comprehensive Soldier Fitness program The Comprehensive Soldier Fitness (CSF) program was established in 2008 by then-Chief of Staff of the United States Army, General George W. Casey, Jr., in an effort to address increasing rates of drug abuse, family violence, PTSD, and suicide among soldiers. The Army contracted with Seligman's Positive Psychology Center at the University of Pennsylvania to supply a program based on the center's Penn Resiliency Program, which was designed for 10- to 14-year-old children. Although Seligman proposed starting with a small-scale pilot-test, General Casey insisted on immediately rolling out the CSF to the entire Army. Interviewed for the journal Monitor on Psychology of the American Psychological Association, Seligman said that "This is the largest study—1.1 million soldiers—psychology has ever been involved in." According to journalist Jesse Singal, "It would become one of the largest mental-health interventions geared at a single population in the history of humanity, and possibly the most expensive." Some psychologists criticized the CSF for various reasons. Nicholas J.L. Brown wrote: "The idea that techniques that have demonstrated, at best, marginal effects in reducing depressive symptoms in school-age children could also prevent the onset of a condition that is associated with some of the most extreme situations with which humans can be confronted is a remarkable one that does not seem to be backed up by empirical evidence." Stephen Soldz of the Boston Graduate School of Psychoanalysis cited Seligman's acknowledgment that the CSF is a gigantic study rather than a program based on proven techniques, and questioned the ethics of requiring soldiers to participate in research without informed consent. Soldz also criticized the CSF training for trying to build up-beat attitudes toward combat: "Might soldiers who have been trained to resiliently view combat as a growth opportunity be more likely to ignore or under-estimate real dangers, thereby placing themselves, their comrades, or civilians at heightened risk of harm?" In 2021 the Chronicle of Higher Education carried a debate between Singal and Seligman about whether, with the CSF well into its second decade, there was any solid evidence of its effectiveness. 
Singal cited studies that, he said, failed to find any measurable benefits from such positive psychology techniques, and he criticized the Army's own reports as methodologically unsound and lacking peer review. Seligman said that Singal had misinterpreted the studies and ignored the Army's positive feedback from soldiers, one of whom told Seligman that "if I had had this training years ago, it would have saved my marriage." Equality Gaps and Underrepresentation Demographic Diversity Positive psychology has historically been critiqued for its lack of demographic diversity, both in terms of its research populations and its theoretical frameworks. Much of the early research in positive psychology was conducted predominantly with Western, educated, industrialized, rich, and democratic (WEIRD) populations, leading to concerns about the generalizability of its findings across different demographic groups. Recent studies have highlighted the need for more inclusive research that encompasses a broader range of cultural and socioeconomic backgrounds to ensure that positive psychology interventions are applicable and effective for diverse populations. Cultural Sensitivity The concept of cultural sensitivity is crucial for positive psychology, yet it has faced criticism for insufficient consideration of cultural contexts. Positive psychology's principles, such as subjective well-being and character strengths, may not universally apply or be valued equally across all cultures. For instance, in collectivist cultures, individuals prioritize collective well-being over individual happiness, and thus, frameworks like the PERMA model may need adaptation to reflect these values. Additionally, there is a call for more cross-cultural research to validate the applicability of positive psychology interventions globally and to integrate culturally relevant practices and perspectives. Accessibility Accessibility issues are a significant concern within positive psychology. Interventions and practices derived from positive psychology may not be equally accessible to all populations, particularly those from marginalized or lower socioeconomic backgrounds. There is evidence that socioeconomic factors can impact the effectiveness and availability of positive psychology interventions, potentially exacerbating existing inequalities. To address these gaps, there is a growing emphasis on developing and implementing positive psychology practices that are affordable and accessible to diverse communities, including those with limited resources. Future Directions Positive psychology continues to evolve with ongoing research and practical applications expanding its scope and impact. Future directions in positive psychology focus on integrating emerging insights, addressing criticism, and exploring new areas of application to enhance well-being and human flourishing. Cross-cultural research is essential to explore how positive psychology practices perform across different cultural settings and to refine interventions for global application. Some researchers have written that addressing criticisms related to methodological rigor, theoretical clarity, and practical applicability should be a priority, with a focus on enhancing the robustness of positive psychology research methodologies, improving the replicability of findings, and addressing the critiques related to the movement's focus and applicability. 
See also Unconditional positive regard Needs and Motives (Henry Murray) External links University of Pennsylvania, Authentic Happiness, website of Martin Seligman The Karma of Happiness: A Buddhist Monk Looks at Positive Psychology by Thanissaro Bhikkhu
Physiology
Physiology is the scientific study of functions and mechanisms in a living system. As a subdiscipline of biology, physiology focuses on how organisms, organ systems, individual organs, cells, and biomolecules carry out chemical and physical functions in a living system. According to the classes of organisms studied, the field can be divided into medical physiology, animal physiology, plant physiology, cell physiology, and comparative physiology. Central to physiological functioning are biophysical and biochemical processes, homeostatic control mechanisms, and communication between cells. Physiological state is the condition of normal function. In contrast, pathological state refers to abnormal conditions, including human diseases. The Nobel Prize in Physiology or Medicine is awarded by the Nobel Assembly at the Karolinska Institute for exceptional scientific achievements in physiology related to the field of medicine. Foundations Because physiology focuses on the functions and mechanisms of living organisms at all levels, from the molecular and cellular level to the level of whole organisms and populations, its foundations span a range of key disciplines: Anatomy is the study of the structure and organization of living organisms, from the microscopic level of cells and tissues to the macroscopic level of organs and systems. Anatomical knowledge is important in physiology because the structure and function of an organism strongly constrain one another. Biochemistry is the study of the chemical processes and substances that occur within living organisms. Knowledge of biochemistry provides the foundation for understanding cellular and molecular processes that are essential to the functioning of organisms. Biophysics is the study of the physical properties of living organisms and their interactions with their environment. It helps to explain how organisms sense and respond to different stimuli, such as light, sound, and temperature, and how they maintain homeostasis, or a stable internal environment. Genetics is the study of heredity and the variation of traits within and between populations. It provides insights into the genetic basis of physiological processes and the ways in which genes interact with the environment to influence an organism's phenotype. Evolutionary biology is the study of the processes that have led to the diversity of life on Earth. It helps to explain the origin and adaptive significance of physiological processes and the ways in which organisms have evolved to cope with their environment. Subdisciplines There are many ways to categorize the subdisciplines of physiology: based on the taxa studied: human physiology, animal physiology, plant physiology, microbial physiology, viral physiology based on the level of organization: cell physiology, molecular physiology, systems physiology, organismal physiology, ecological physiology, integrative physiology based on the process that causes physiological variation: developmental physiology, environmental physiology, evolutionary physiology based on the ultimate goals of the research: applied physiology (e.g., medical physiology), non-applied physiology (e.g., comparative physiology) Subdisciplines by level of organisation Cell physiology Although there are differences between animal, plant, and microbial cells, the basic physiological functions of cells can be divided into the processes of cell division, cell signaling, cell growth, and cell metabolism.
Subdisciplines by taxa Plant physiology Plant physiology is a subdiscipline of botany concerned with the functioning of plants. Closely related fields include plant morphology, plant ecology, phytochemistry, cell biology, genetics, biophysics, and molecular biology. Fundamental processes of plant physiology include photosynthesis, respiration, plant nutrition, tropisms, nastic movements, photoperiodism, photomorphogenesis, circadian rhythms, seed germination, dormancy, and stomata function and transpiration. Absorption of water by roots, production of food in the leaves, and growth of shoots towards light are examples of plant physiology. Animal physiology Human physiology Human physiology is the study of how the human body's systems and functions work together to maintain a stable internal environment. It includes the study of the nervous, endocrine, cardiovascular, respiratory, digestive, and urinary systems, as well as cellular and exercise physiology. Understanding human physiology is essential for diagnosing and treating health conditions and promoting overall wellbeing. It seeks to understand the mechanisms that work to keep the human body alive and functioning, through scientific enquiry into the nature of mechanical, physical, and biochemical functions of humans, their organs, and the cells of which they are composed. The principal focus of physiology is at the level of organs and of systems within systems. The endocrine and nervous systems play major roles in the reception and transmission of signals that integrate function in animals. Homeostasis is a major aspect with regard to such interactions within plants as well as animals. Integration, a biological basis of the study of physiology, refers to the overlap of many functions of the systems of the human body, as well as of their accompanying forms. It is achieved through communication that occurs in a variety of ways, both electrical and chemical. Changes in physiology can impact the mental functions of individuals. Examples of this would be the effects of certain medications or toxic levels of substances. Change in behavior as a result of these substances is often used to assess the health of individuals. Much of the foundation of knowledge in human physiology was provided by animal experimentation. Due to the frequent connection between form and function, physiology and anatomy are intrinsically linked and are studied in tandem as part of a medical curriculum. Subdisciplines by research objective Comparative physiology Involving evolutionary physiology and environmental physiology, comparative physiology considers the diversity of functional characteristics across organisms. History The classical era The study of human physiology as a medical field originates in classical Greece, at the time of Hippocrates (late 5th century BC). Outside of Western tradition, early forms of physiology or anatomy can be reconstructed as having been present at around the same time in China, India and elsewhere. Hippocrates incorporated the theory of humorism, which consisted of four basic substances: earth, water, air and fire. Each substance is known for having a corresponding humor: black bile, phlegm, blood, and yellow bile, respectively. Hippocrates also noted some emotional connections to the four humors, on which Galen would later expand. The critical thinking of Aristotle and his emphasis on the relationship between structure and function marked the beginning of physiology in Ancient Greece.
Like Hippocrates, Aristotle subscribed to the humoral theory of disease, which also consisted of four primary qualities in life: hot, cold, wet and dry. Galen (c. 130–200 AD) was the first to use experiments to probe the functions of the body. Unlike Hippocrates, Galen argued that humoral imbalances can be located in specific organs, as well as in the body as a whole. His modification of this theory better equipped doctors to make more precise diagnoses. Galen also built on Hippocrates' idea that emotions were tied to the humors, and added the notion of temperaments: sanguine corresponds with blood; phlegmatic is tied to phlegm; choleric is connected to yellow bile; and melancholic corresponds with black bile. Galen also saw the human body as consisting of three connected systems: the brain and nerves, which are responsible for thoughts and sensations; the heart and arteries, which give life; and the liver and veins, which can be attributed to nutrition and growth. Galen was also the founder of experimental physiology, and for the next 1,400 years Galenic physiology was a powerful and influential tool in medicine. Early modern period Jean Fernel (1497–1558), a French physician, introduced the term "physiology". Galen, Ibn al-Nafis, Michael Servetus, Realdo Colombo, Amato Lusitano and William Harvey are credited with making important discoveries about the circulation of the blood. In the 1610s, Santorio Santorio was the first to use a device to measure the pulse rate (the pulsilogium), and a thermoscope to measure temperature. In 1791 Luigi Galvani described the role of electricity in the nerves of dissected frogs. In 1811, César Julien Jean Legallois studied respiration through animal dissection and lesions and found the center of respiration in the medulla oblongata. In the same year, Charles Bell finished work on what would later become known as the Bell–Magendie law, which compared functional differences between dorsal and ventral roots of the spinal cord. In 1824, François Magendie described the sensory roots and produced the first evidence of the cerebellum's role in equilibration to complete the Bell–Magendie law. In the 1820s, the French physiologist Henri Milne-Edwards introduced the notion of physiological division of labor, which made it possible to "compare and study living things as if they were machines created by the industry of man." Inspired by the work of Adam Smith, Milne-Edwards wrote that the "body of all living beings, whether animal or plant, resembles a factory ... where the organs, comparable to workers, work incessantly to produce the phenomena that constitute the life of the individual." In more differentiated organisms, the functional labor could be apportioned between different instruments or systems (which he called appareils). In 1858, Joseph Lister studied the causes of blood coagulation and of the inflammation that followed injuries and surgical wounds. He later discovered and implemented antiseptics in the operating room, and as a result, decreased the death rate from surgery by a substantial amount. The Physiological Society was founded in London in 1876 as a dining club. The American Physiological Society (APS) is a nonprofit organization that was founded in 1887. The Society is "devoted to fostering education, scientific research, and dissemination of information in the physiological sciences." In 1891, Ivan Pavlov performed research on "conditional responses" that involved dogs' saliva production in response to a bell and visual stimuli.
In the 19th century, physiological knowledge began to accumulate at a rapid rate, in particular with the appearance in 1838 of the cell theory of Matthias Schleiden and Theodor Schwann, which radically stated that organisms are made up of units called cells. Claude Bernard's (1813–1878) further discoveries ultimately led to his concept of milieu intérieur (internal environment), which would later be taken up and championed as "homeostasis" by American physiologist Walter B. Cannon in 1929. By homeostasis, Cannon meant "the maintenance of steady states in the body and the physiological processes through which they are regulated", that is, the body's ability to regulate its internal environment. William Beaumont was the first American to put physiology to practical use. Nineteenth-century physiologists such as Michael Foster, Max Verworn, and Alfred Binet, based on Haeckel's ideas, elaborated what came to be called "general physiology", a unified science of life based on cell actions, which was renamed cell biology in the 20th century. Late modern period In the 20th century, biologists became interested in how organisms other than human beings function, eventually spawning the fields of comparative physiology and ecophysiology. Major figures in these fields include Knut Schmidt-Nielsen and George Bartholomew. Most recently, evolutionary physiology has become a distinct subdiscipline. In 1920, August Krogh won the Nobel Prize for discovering how blood flow is regulated in capillaries. In 1954, Andrew Huxley and Hugh Huxley, alongside their research team, discovered the sliding filaments in skeletal muscle, the basis of what is known today as the sliding filament theory. Recently, there have been intense debates about the vitality of physiology as a discipline (Is it dead or alive?). If physiology is perhaps less visible nowadays than during the golden age of the 19th century, it is in large part because the field has given birth to some of the most active domains of today's biological sciences, such as neuroscience, endocrinology, and immunology. Furthermore, physiology is still often seen as an integrative discipline, which can put together into a coherent framework data coming from various different domains. Notable physiologists Women in physiology Initially, women were largely excluded from official involvement in any physiological society. The American Physiological Society, for example, was founded in 1887 and included only men in its ranks. In 1902, the American Physiological Society elected Ida Hyde as the first female member of the society. Hyde, a representative of the American Association of University Women and a global advocate for gender equality in education, attempted to promote gender equality in every aspect of science and medicine. Soon thereafter, in 1913, J.S. Haldane proposed that women be allowed to formally join The Physiological Society, which had been founded in 1876. On 3 July 1915, six women were officially admitted: Florence Buchanan, Winifred Cullis, Ruth Skelton, Sarah C. M. Sowton, Constance Leetham Terry, and Enid M. Tribe. The centenary of the election of women was celebrated in 2015 with the publication of the book "Women Physiologists: Centenary Celebrations And Beyond For The Physiological Society." Prominent women physiologists include: Bodil Schmidt-Nielsen, the first woman president of the American Physiological Society in 1975.
Gerty Cori, along with her husband Carl Cori, received the Nobel Prize in Physiology or Medicine in 1947 for their discovery of the course of the catalytic conversion of glycogen, including the phosphate-containing form of glucose (glucose 1-phosphate, the Cori ester) and its role in metabolic mechanisms for energy production. Moreover, they discovered the Cori cycle, also known as the Lactic acid cycle, which describes how muscle tissue converts glycogen into lactic acid via lactic acid fermentation, and how the liver then converts that lactate back into glucose. Barbara McClintock was awarded the 1983 Nobel Prize in Physiology or Medicine for the discovery of genetic transposition. McClintock is the only woman to have received an unshared Nobel Prize in Physiology or Medicine. Gertrude Elion, along with George Hitchings and Sir James Black, received the Nobel Prize for Physiology or Medicine in 1988 for their development of drugs employed in the treatment of several major diseases, such as leukemia, some autoimmune disorders, gout, malaria, and herpes virus infections. Linda B. Buck, along with Richard Axel, received the Nobel Prize in Physiology or Medicine in 2004 for their discovery of odorant receptors and the complex organization of the olfactory system. Françoise Barré-Sinoussi, along with Luc Montagnier, received the Nobel Prize in Physiology or Medicine in 2008 for their work on the identification of the Human Immunodeficiency Virus (HIV), the cause of Acquired Immunodeficiency Syndrome (AIDS). Elizabeth Blackburn, along with Carol W. Greider and Jack W. Szostak, was awarded the 2009 Nobel Prize for Physiology or Medicine for the discovery of the genetic composition and function of telomeres and the enzyme called telomerase. See also Outline of physiology Biochemistry Biophysics Cytoarchitecture Defense physiology Ecophysiology Exercise physiology Fish physiology Insect physiology Human body Molecular biology Metabolome Neurophysiology Pathophysiology Pharmacology Physiome American Physiological Society International Union of Physiological Sciences The Physiological Society Brazilian Society of Physiology
External links physiologyINFO.org – public information site sponsored by the American Physiological Society
Usability
Usability can be described as the capacity of a system to provide a condition for its users to perform the tasks safely, effectively, and efficiently while enjoying the experience. In software engineering, usability is the degree to which software can be used by specified consumers to achieve quantified objectives with effectiveness, efficiency, and satisfaction in a quantified context of use. The object of use can be a software application, website, book, tool, machine, process, vehicle, or anything a human interacts with. A usability study may be conducted as a primary job function by a usability analyst or as a secondary job function by designers, technical writers, marketing personnel, and others. It is widely used in consumer electronics, communication, and knowledge transfer objects (such as a cookbook, a document or online help) and mechanical objects such as a door handle or a hammer. Usability includes methods of measuring usability, such as needs analysis and the study of the principles behind an object's perceived efficiency or elegance. In human-computer interaction and computer science, usability studies the elegance and clarity with which the interaction with a computer program or a web site (web usability) is designed. Usability considers user satisfaction and utility as quality components, and aims to improve user experience through iterative design. Introduction The primary notion of usability is that an object designed with a generalized users' psychology and physiology in mind is, for example: More efficient to use—takes less time to accomplish a particular task Easier to learn—operation can be learned by observing the object More satisfying to use Complex computer systems find their way into everyday life, and at the same time the market is saturated with competing brands. This has made usability more popular and widely recognized in recent years, as companies see the benefits of researching and developing their products with user-oriented methods instead of technology-oriented methods. By understanding and researching the interaction between product and user, the usability expert can also provide insight that is unattainable by traditional company-oriented market research. For example, after observing and interviewing users, the usability expert may identify needed functionality or design flaws that were not anticipated. A method called contextual inquiry does this in the naturally occurring context of the users' own environment. In the user-centered design paradigm, the product is designed with its intended users in mind at all times. In the user-driven or participatory design paradigm, some of the users become actual or de facto members of the design team. The term user friendly is often used as a synonym for usable, though it may also refer to accessibility. Usability describes the quality of user experience across websites, software, products, and environments. There is no consensus about the relation of the terms ergonomics (or human factors) and usability. Some think of usability as the software specialization of the larger topic of ergonomics. Others view these topics as tangential, with ergonomics focusing on physiological matters (e.g., turning a door handle) and usability focusing on psychological matters (e.g., recognizing that a door can be opened by turning its handle). Usability is also important in website development (web usability).
According to Jakob Nielsen, "Studies of user behavior on the Web find a low tolerance for difficult designs or slow sites. People don't want to wait. And they don't want to learn how to use a home page. There's no such thing as a training class or a manual for a Web site. People have to be able to grasp the functioning of the site immediately after scanning the home page—for a few seconds at most." Otherwise, most casual users simply leave the site and browse or shop elsewhere. Usability can also include the concept of prototypicality, which is how much a particular thing conforms to the expected shared norm; for instance, in website design, users prefer sites that conform to recognised design norms. Definition ISO defines usability as "The extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use." The word "usability" also refers to methods for improving ease-of-use during the design process. Usability consultant Jakob Nielsen and computer science professor Ben Shneiderman have written (separately) about a framework of system acceptability, where usability is a part of "usefulness" and is composed of: Learnability: How easy is it for users to accomplish basic tasks the first time they encounter the design? Efficiency: Once users have learned the design, how quickly can they perform tasks? Memorability: When users return to the design after a period of not using it, how easily can they re-establish proficiency? Errors: How many errors do users make, how severe are these errors, and how easily can they recover from the errors? Satisfaction: How pleasant is it to use the design? Usability is often associated with the functionalities of the product (cf. ISO definition, above), in addition to being viewed solely as a characteristic of the user interface (cf. framework of system acceptability, also above, which separates usefulness into usability and utility). For example, in the context of mainstream consumer products, an automobile lacking a reverse gear could be considered unusable according to the former view, and lacking in utility according to the latter view. When evaluating user interfaces for usability, the definition can be as simple as "the perception of a target user of the effectiveness (fit for purpose) and efficiency (work or time required to use) of the interface". Each component may be measured subjectively against criteria, e.g., Principles of User Interface Design, to provide a metric, often expressed as a percentage. It is important to distinguish between usability testing and usability engineering. Usability testing is the measurement of ease of use of a product or piece of software. In contrast, usability engineering (UE) is the research and design process that ensures a product with good usability. Usability is a non-functional requirement. As with other non-functional requirements, usability cannot be directly measured but must be quantified by means of indirect measures or attributes such as, for example, the number of reported problems with ease-of-use of a system. Intuitive interaction or intuitive use The term intuitive is often listed as a desirable trait in usable interfaces, sometimes used as a synonym for learnable. In the past, Jef Raskin discouraged using this term in user interface design, claiming that easy-to-use interfaces are often easy because of the user's exposure to previous similar systems, thus the term 'familiar' should be preferred.
As an example: Two vertical lines "||" on media player buttons do not intuitively mean "pause"—they do so by convention. This association between intuitive use and familiarity has since been empirically demonstrated in multiple studies by a range of researchers across the world, and intuitive interaction is accepted in the research community as being use of an interface based on past experience with similar interfaces or with something else, often not fully conscious, and sometimes involving a feeling of "magic", since the source of the knowledge itself may not be consciously available to the user. Researchers have also investigated intuitive interaction for older people, people living with dementia, and children. Some have argued that aiming for "intuitive" interfaces (based on reusing existing skills with interaction systems) could lead designers to discard a better design solution only because it would require a novel approach and to stick with boring designs. However, applying familiar features to a new interface has been shown not to result in boring design if designers use creative approaches rather than simple copying. The throwaway remark that "the only intuitive interface is the nipple; everything else is learned" is still occasionally mentioned, although breastfeeding mothers and lactation consultants point out that it is inaccurate: nursing in fact requires learning on both sides. In 1992, Bruce Tognazzini even denied the existence of "intuitive" interfaces, since such interfaces must be able to intuit, i.e., "perceive the patterns of the user's behavior and draw inferences." Instead, he advocated the term "intuitable," i.e., "that users could intuit the workings of an application by seeing it and using it". However, the term intuitive interaction has become well accepted in the research community over the past 20 or so years and, although not perfect, it should probably be accepted and used. ISO standards ISO/TR 16982:2002 standard ISO/TR 16982:2002 ("Ergonomics of human-system interaction—Usability methods supporting human-centered design") is an International Organization for Standardization (ISO) standard that provides information on human-centered usability methods that can be used for design and evaluation. It details the advantages, disadvantages, and other factors relevant to using each usability method. It explains the implications of the stage of the life cycle and the individual project characteristics for the selection of usability methods and provides examples of usability methods in context. The main users of ISO/TR 16982:2002 are project managers. It therefore addresses technical human factors and ergonomics issues only to the extent necessary to allow managers to understand their relevance and importance in the design process as a whole. The guidance in ISO/TR 16982:2002 can be tailored for specific design situations by using the lists of issues characterizing the context of use of the product to be delivered. Selection of appropriate usability methods should also take account of the relevant life-cycle process. ISO/TR 16982:2002 is restricted to methods that are widely used by usability specialists and project managers. It does not specify the details of how to implement or carry out the usability methods described. ISO 9241 standard ISO 9241 is a multi-part standard that covers a number of aspects of people working with computers.
Although originally titled Ergonomic requirements for office work with visual display terminals (VDTs), it has been retitled to the more generic Ergonomics of Human System Interaction. As part of this change, ISO is renumbering some parts of the standard so that it can cover more topics, e.g. tactile and haptic interaction. The first part to be renumbered was part 10 in 2006, now part 110. IEC 62366 IEC 62366-1:2015 + COR1:2016 & IEC/TR 62366-2 provide guidance on usability engineering specific to a medical device. Designing for usability Any system or device designed for use by people should be easy to use, easy to learn, easy to remember (the instructions), and helpful to users. John Gould and Clayton Lewis recommend that designers striving for usability follow these three design principles Early focus on end users and the tasks they need the system/device to do Empirical measurement using quantitative or qualitative measures Iterative design, in which the designers work in a series of stages, improving the design each time Early focus on users and tasks The design team should be user-driven and it should be in direct contact with potential users. Several evaluation methods, including personas, cognitive modeling, inspection, inquiry, prototyping, and testing methods may contribute to understanding potential users and their perceptions of how well the product or process works. Usability considerations, such as who the users are and their experience with similar systems must be examined. As part of understanding users, this knowledge must "...be played against the tasks that the users will be expected to perform." This includes the analysis of what tasks the users will perform, which are most important, and what decisions the users will make while using your system. Designers must understand how cognitive and emotional characteristics of users will relate to a proposed system. One way to stress the importance of these issues in the designers' minds is to use personas, which are made-up representative users. See below for further discussion of personas. Another more expensive but more insightful method is to have a panel of potential users work closely with the design team from the early stages. Empirical measurement Test the system early on, and test the system on real users using behavioral measurements. This includes testing the system for both learnability and usability. (See Evaluation Methods). It is important in this stage to use quantitative usability specifications such as time and errors to complete tasks and number of users to test, as well as examine performance and attitudes of the users testing the system. Finally, "reviewing or demonstrating" a system before the user tests it can result in misleading results. The emphasis of empirical measurement is on measurement, both informal and formal, which can be carried out through a variety of evaluation methods. Iterative design Iterative design is a design methodology based on a cyclic process of prototyping, testing, analyzing, and refining a product or process. Based on the results of testing the most recent iteration of a design, changes and refinements are made. This process is intended to ultimately improve the quality and functionality of a design. In iterative design, interaction with the designed system is used as a form of research for informing and evolving a project, as successive versions, or iterations of a design are implemented. 
The key requirements for Iterative Design are: identification of required changes, an ability to make changes, and a willingness to make changes. When a problem is encountered, there is no set method to determine the correct solution. Rather, there are empirical methods that can be used during system development or after the system is delivered, though the latter is usually a less opportune time. Ultimately, iterative design works towards meeting goals such as making the system user friendly, easy to use, easy to operate, simple, etc. Evaluation methods There are a variety of usability evaluation methods. Certain methods use data from users, while others rely on usability experts. There are usability evaluation methods for all stages of design and development, from product definition to final design modifications. When choosing a method, consider cost, time constraints, and appropriateness. For a brief overview of methods, see Comparison of usability evaluation methods or continue reading below. Usability methods can be further classified into the subcategories below. Cognitive modeling methods Cognitive modeling involves creating a computational model to estimate how long it takes people to perform a given task. Models are based on psychological principles and experimental studies to determine times for cognitive processing and motor movements. Cognitive models can be used to improve user interfaces or predict problem errors and pitfalls during the design process. A few examples of cognitive models include: Parallel design With parallel design, several people create an initial design from the same set of requirements. Each person works independently, and when finished, shares concepts with the group. The design team considers each solution, and each designer uses the best ideas to further improve their own solution. This process helps generate many different, diverse ideas, and ensures that the best ideas from each design are integrated into the final concept. This process can be repeated several times until the team is satisfied with the final concept. GOMS GOMS stands for goals, operators, methods, and selection rules. It is a family of techniques that analyzes the user complexity of interactive systems. Goals are what the user must accomplish. An operator is an action performed in pursuit of a goal. A method is a sequence of operators that accomplish a goal. Selection rules specify which method satisfies a given goal, based on context. Human processor model Sometimes it is useful to break a task down and analyze each individual aspect separately. This helps the tester locate specific areas for improvement. To do this, it is necessary to understand how the human brain processes information. The model human processor divides processing among perceptual, cognitive, and motor processors, each with associated memories. Many studies have been done to estimate the cycle times, decay times, and capacities of each of these processors, and published estimates exist for a typical younger adult; these values are affected by variables such as subject age, aptitudes, ability, and the surrounding environment. Long-term memory, for example, is believed to have an effectively unlimited capacity and decay time. Keystroke level modeling Keystroke level modeling is essentially a less comprehensive version of GOMS that makes simplifying assumptions in order to reduce calculation time and complexity (a worked sketch appears below, after the overview of inspection methods). Inspection methods These usability evaluation methods involve observation of users by an experimenter, or the testing and evaluation of a program by an expert reviewer. They provide more quantitative data as tasks can be timed and recorded.
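To make the cognitive-modeling idea above concrete, the following is a minimal sketch of a keystroke-level estimate in Python. The operator times are the commonly cited textbook approximations associated with Card, Moran, and Newell; the task breakdown itself is hypothetical and would need to be calibrated against a real interface.

```python
# Minimal keystroke-level model (KLM) sketch: estimate task time by summing
# standard operator times. The operator values are commonly cited textbook
# approximations (in seconds); the task sequence below is hypothetical.

OPERATOR_TIMES = {
    "K": 0.28,  # keystroke or button press (average-skill typist)
    "P": 1.10,  # point with a mouse to a target on screen
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental act of preparation or decision
    "B": 0.10,  # mouse-button press or release
}

def estimate_task_time(operators):
    """Return the predicted completion time in seconds for a sequence of operators."""
    return sum(OPERATOR_TIMES[op] for op in operators)

# Hypothetical task: move hand to mouse, think, point at a field, click,
# move hand back to keyboard, think, then type a five-character word.
task = ["H", "M", "P", "B", "B", "H", "M"] + ["K"] * 5

if __name__ == "__main__":
    print(f"Estimated time: {estimate_task_time(task):.2f} s")
```

Summing the operator times for a candidate interaction sequence gives a rough prediction that can be compared across competing designs before any user testing takes place.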
Card sorts Card sorting is a way to involve users in grouping information for a website's usability review. Participants in a card sorting session are asked to organize the content from a Web site in a way that makes sense to them. Participants review items from a Web site and then group these items into categories. Card sorting helps to learn how users think about the content and how they would organize the information on the Web site. Card sorting helps to build the structure for a Web site, decide what to put on the home page, and label the home page categories. It also helps to ensure that information is organized on the site in a way that is logical to users. Tree tests Tree testing is a way to evaluate the effectiveness of a website's top-down organization. Participants are given "find it" tasks, then asked to drill down through successive text lists of topics and subtopics to find a suitable answer. Tree testing evaluates the findability and labeling of topics in a site, separate from its navigation controls or visual design. Ethnography Ethnographic analysis is derived from anthropology. Field observations are taken at a site of a possible user, which track the artifacts of work such as Post-It notes, items on the desktop, shortcuts, and items in trash bins. These observations also gather the sequence of work and interruptions that determine the user's typical day. Heuristic evaluation Heuristic evaluation is a usability engineering method for finding and assessing usability problems in a user interface design as part of an iterative design process. It involves having a small set of evaluators examine the interface and use recognized usability principles (the "heuristics"). It is the most popular of the usability inspection methods, as it is quick, cheap, and easy. Heuristic evaluation was developed to aid in the design of computer user interfaces. It relies on expert reviewers to discover usability problems and then categorize and rate them by a set of principles (heuristics). It is widely used based on its speed and cost-effectiveness. Jakob Nielsen's list of ten heuristics is the most commonly used in industry. These are ten general principles for user interface design. They are called "heuristics" because they are more in the nature of rules of thumb than specific usability guidelines. Visibility of system status: The system should always keep users informed about what is going on, through appropriate feedback within reasonable time. Match between system and the real world: The system should speak the users' language, with words, phrases and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order. User control and freedom: Users often choose system functions by mistake and will need a clearly marked "emergency exit" to leave the unwanted state without having to go through an extended dialogue. Support undo and redo. Consistency and standards: Users should not have to wonder whether different words, situations, or actions mean the same thing. Follow platform conventions. Error prevention: Even better than good error messages is a careful design that prevents a problem from occurring in the first place. Either eliminate error-prone conditions or check for them and present users with a confirmation option before they commit to the action. Recognition rather than recall: Minimize the user's memory load by making objects, actions, and options visible.
The user should not have to remember information from one part of the dialogue to another. Instructions for use of the system should be visible or easily retrievable whenever appropriate. Flexibility and efficiency of use: Accelerators—unseen by the novice user—may often speed up the interaction for the expert user such that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions. Aesthetic and minimalist design: Dialogues should not contain information that is irrelevant or rarely needed. Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility. Help users recognize, diagnose, and recover from errors: Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution. Help and documentation: Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the user's task, list concrete steps to be carried out, and not be too large. Thus, by determining which guidelines are violated, the usability of a device can be determined. Usability inspection Usability inspection is a review of a system based on a set of guidelines. The review is conducted by a group of experts who are deeply familiar with the concepts of usability in design. The experts focus on a list of areas in design that have been shown to be troublesome for users. Pluralistic inspection Pluralistic inspections are meetings where users, developers, and human factors people meet together to discuss and evaluate a task scenario step by step. The more people who inspect the scenario for problems, the higher the probability of finding them. In addition, the more interaction in the team, the faster the usability issues are resolved. Consistency inspection In consistency inspection, expert designers review products or projects to ensure consistency across multiple products, checking whether each does things in the same way as their own designs. Activity Analysis Activity analysis is a usability method used in preliminary stages of development to get a sense of the situation. It involves an investigator observing users as they work in the field. Also referred to as user observation, it is useful for specifying user requirements and studying currently used tasks and subtasks. The data collected are qualitative and useful for defining the problem. It should be used when you wish to frame what is needed, or "What do we want to know?" Inquiry methods The following usability evaluation methods involve collecting qualitative data from users. Although the data collected is subjective, it provides valuable information on what the user wants. Task analysis Task analysis means learning about users' goals and users' ways of working. Task analysis can also mean figuring out what more specific tasks users must do to meet those goals and what steps they must take to accomplish those tasks. Along with user and task analysis, a third analysis is often used: understanding users' environments (physical, social, cultural, and technological environments). Focus groups A focus group is a focused discussion where a moderator leads a group of participants through a set of questions on a particular topic. Although typically used as a marketing tool, focus groups are sometimes used to evaluate usability.
In the product definition stage, a group of 6 to 10 users is gathered to discuss what they desire in a product. An experienced focus group facilitator is hired to guide the discussion to areas of interest for the developers. Focus groups are typically videotaped to help get verbatim quotes, and clips are often used to summarize opinions. The data gathered is not usually quantitative, but can help get an idea of a target group's opinion. Questionnaires/surveys Surveys have the advantages of being inexpensive, requiring no testing equipment, and reflecting the users' opinions. When written carefully and given to actual users who have experience with the product and knowledge of design, surveys provide useful feedback on the strong and weak areas of the usability of a design. This is a very common method and often does not appear to be a survey, but just a warranty card. Prototyping methods It is often very difficult for designers to conduct usability tests with the exact system being designed. Cost constraints, size, and design constraints usually lead the designer to create a prototype of the system. Instead of creating the complete final system, the designer may test different sections of the system, thus making several small models of each component of the system. Prototyping is both an attitude and an output, as it is a process for generating and reflecting on tangible ideas by allowing failure to occur early. Prototyping helps people to see what could be, to communicate a shared vision, and to give shape to the future. The types of usability prototypes may vary from paper models, index cards, and hand-drawn models to storyboards. Prototypes can be modified quickly, are often faster and easier to create with less time invested by designers, and are more amenable to design changes; however, they are sometimes not an adequate representation of the whole system, are often not durable, and testing results may not parallel those of the actual system. The Tool Kit Approach The tool kit approach provides a wide library of methods that use traditional programming languages, and it is primarily intended for computer programmers. The code created for testing in the tool kit approach can be used in the final product. However, to get the highest benefit from the tool, the user must be an expert programmer. The Parts Kit Approach The two elements of this approach are a parts library and a method for identifying the connections between the parts. This approach can be used by almost anyone and it is a great asset for designers with repetitive tasks. Animation Language Metaphor This approach is a combination of the tool kit approach and the parts kit approach. Both the dialogue designers and the programmers are able to interact with this prototyping tool. Rapid prototyping Rapid prototyping is a method used in early stages of development to validate and refine the usability of a system. It can be used to quickly and cheaply evaluate user-interface designs without the need for an expensive working model. This can help remove hesitation to change the design, since it is implemented before any real programming begins. One such method of rapid prototyping is paper prototyping. Testing methods These usability evaluation methods involve testing of subjects for the most quantitative data. Usually recorded on video, they provide task completion time and allow for observation of attitude. Regardless of how carefully a system is designed, all theories must be tested using usability tests.
Usability tests involve typical users using the system (or product) in a realistic environment (see simulation). Observation of the user's behavior, emotions, and difficulties while performing different tasks often identifies areas of improvement for the system. Metrics While conducting usability tests, designers must decide what they are going to measure, that is, the usability metrics. These metrics are often variable and change in conjunction with the scope and goals of the project. The number of subjects being tested can also affect usability metrics, as it is often easier to focus on specific demographics. Qualitative design phases, such as general usability (can the task be accomplished?), and user satisfaction are also typically done with smaller groups of subjects. Using inexpensive prototypes on small user groups provides more detailed information, because of the more interactive atmosphere, and the designer's ability to focus more on the individual user. As the designs become more complex, the testing must become more formalized. Testing equipment will become more sophisticated and testing metrics become more quantitative. With a more refined prototype, designers often test effectiveness, efficiency, and subjective satisfaction, by asking the user to complete various tasks. These categories are measured by the percent that complete the task, how long it takes to complete the tasks, ratios of success to failure to complete the task, time spent on errors, the number of errors, rating scales of satisfaction, the number of times the user seems frustrated, etc. (a minimal computational sketch of such metrics appears at the end of this section). Additional observations of the users give designers insight on navigation difficulties, controls, conceptual models, etc. The ultimate goal of analyzing these metrics is to find/create a prototype design that users like and use to successfully perform given tasks. After conducting usability tests, it is important for a designer to record what was observed, in addition to why such behavior occurred, and to modify the model according to the results. Often it is quite difficult to distinguish between the source of the design errors and what the user did wrong. However, effective usability tests will not generate a solution to the problems, but provide modified design guidelines for continued testing. Remote usability testing Remote usability testing (also known as unmoderated or asynchronous usability testing) involves the use of a specially modified online survey, allowing the quantification of user testing studies by providing the ability to generate large sample sizes, or a deep qualitative analysis without the need for dedicated facilities. Additionally, this style of user testing also provides an opportunity to segment feedback by demographic, attitudinal and behavioral type. The tests are carried out in the user's own environment (rather than labs) helping further simulate real-life scenario testing. This approach also provides a vehicle to easily solicit feedback from users in remote areas. There are two types: quantitative and qualitative. Quantitative studies use large sample sizes and task-based surveys. These types of studies are useful for validating suspected usability issues. Qualitative studies are best used as exploratory research, with small sample sizes but frequent, even daily, iterations.
Qualitative studies usually allow for observing respondents' screens and verbal think-aloud commentary (Screen Recording Video, SRV), and for a richer level of insight may also include the webcam view of the respondent (Video-in-Video, ViV, sometimes referred to as Picture-in-Picture, PiP). Remote usability testing for mobile devices The growth in mobile and associated platforms and services (e.g., mobile gaming experienced 20x growth in 2010–2012) has generated a need for unmoderated remote usability testing on mobile devices, both for websites and especially for app interactions. One methodology consists of shipping cameras and special camera holding fixtures to dedicated testers, and having them record the screens of the mobile smart-phone or tablet device, usually using an HD camera. A drawback of this approach is that the finger movements of the respondent can obscure the view of the screen, in addition to the bias and logistical issues inherent in shipping special hardware to selected respondents. A newer approach uses a wireless projection of the mobile device screen onto the computer desktop screen of the respondent, who can then be recorded through their webcam, allowing a combined Video-in-Video view of the participant and the screen interactions to be viewed simultaneously while incorporating the verbal think-aloud commentary of the respondents. Thinking aloud The Think aloud protocol is a method of gathering data that is used in both usability and psychology studies. It involves getting a user to verbalize their thought processes (i.e. expressing their opinions, thoughts, anticipations, and actions) as they perform a task or set of tasks. As a widespread method of usability testing, think aloud provides the researchers with the ability to discover what users really think during task performance and completion. Often an instructor is present to prompt the user into being more vocal as they work. Similar to the Subjects-in-Tandem method, it is useful in pinpointing problems and is relatively simple to set up. Additionally, it can provide insight into the user's attitude, which can not usually be discerned from a survey or questionnaire. RITE method Rapid Iterative Testing and Evaluation (RITE) is an iterative usability method similar to traditional "discount" usability testing. The tester and team must define a target population for testing, schedule participants to come into the lab, decide on how the users' behaviors will be measured, construct a test script, and have participants engage in a verbal protocol (e.g., think aloud). However, it differs from these methods in that it advocates that changes to the user interface are made as soon as a problem is identified and a solution is clear. Sometimes this can occur after observing as few as one participant. Once the data for a participant has been collected, the usability engineer and team decide if they will be making any changes to the prototype prior to the next participant. The changed interface is then tested with the remaining users. Subjects-in-tandem or co-discovery Subjects-in-tandem (also called co-discovery) is the pairing of subjects in a usability test to gather important information on the ease of use of a product. Subjects tend to discuss the tasks they have to accomplish out loud and through these discussions observers learn where the problem areas of a design are.
To encourage co-operative problem-solving between the two subjects, and the attendant discussions leading to it, the tests can be designed to make the subjects dependent on each other by assigning them complementary areas of responsibility (e.g., for testing of software, one subject may be put in charge of the mouse and the other of the keyboard). Component-based usability testing Component-based usability testing is an approach which aims to test the usability of elementary units of an interaction system, referred to as interaction components. The approach includes component-specific quantitative measures based on user interaction recorded in log files, and component-based usability questionnaires. Other methods Cognitive walkthrough Cognitive walkthrough is a method of evaluating the user interaction of a working prototype or final product. It is used to evaluate the system's ease of learning. Cognitive walkthrough is useful to understand the user's thought processes and decision making when interacting with a system, especially for first-time or infrequent users. Benchmarking Benchmarking creates standardized test materials for a specific type of design. Four key characteristics are considered when establishing a benchmark: time to do the core task, time to fix errors, time to learn applications, and the functionality of the system. Once there is a benchmark, other designs can be compared to it to determine the usability of the system. Many of the common objectives of usability studies, such as trying to understand user behavior or exploring alternative designs, must be put aside. Unlike many other usability methods or types of lab studies, benchmark studies more closely resemble true experimental psychology lab studies, with greater attention to detail on methodology, study protocol and data analysis. Meta-analysis Meta-analysis is a statistical procedure to combine results across studies to integrate the findings. The term was coined in 1976 to describe a quantitative literature review. This type of evaluation is very powerful for determining the usability of a device because it combines multiple studies to provide very accurate quantitative support. Persona Personas are fictitious characters created to represent a site or product's different user types and their associated demographics and technographics. Alan Cooper introduced the concept of using personas as a part of interactive design in 1998 in his book The Inmates Are Running the Asylum, but had used this concept since as early as 1975. Personas are a usability evaluation method that can be used at various design stages. The most typical time to create personas is at the beginning of designing so that designers have a tangible idea of who the users of their product will be. Personas are the archetypes that represent actual groups of users and their needs, which can be a general description of person, context, or usage scenario. This technique turns marketing data on target user population into a few physical concepts of users to create empathy among the design team, with the final aim of tailoring a product more closely to how the personas will use it. To gather the marketing data that personas require, several tools can be used, including online surveys, web analytics, customer feedback forms, usability tests, and interviews with customer-service representatives.
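To make the quantitative measures discussed under Metrics and Testing methods concrete, here is a minimal sketch of how headline usability metrics might be tallied from per-participant session records. The record structure, field names, and numbers are invented for illustration; in practice they would come from interaction logs, observer notes, or questionnaire scores.

```python
# Minimal sketch: summarize common usability-test metrics from per-participant
# session records. The record structure and numbers here are hypothetical.
from statistics import mean

sessions = [
    # completed, time on task (s), error count, satisfaction rating (1-7)
    {"completed": True,  "time_s": 84.0,  "errors": 1, "satisfaction": 6},
    {"completed": True,  "time_s": 132.5, "errors": 3, "satisfaction": 4},
    {"completed": False, "time_s": 210.0, "errors": 5, "satisfaction": 2},
    {"completed": True,  "time_s": 95.0,  "errors": 0, "satisfaction": 7},
]

def summarize(records):
    """Return completion rate, mean time on task (successes only), errors, satisfaction."""
    completed = [r for r in records if r["completed"]]
    return {
        "completion_rate": len(completed) / len(records),             # task success
        "mean_time_on_task_s": mean(r["time_s"] for r in completed),  # efficiency
        "mean_errors": mean(r["errors"] for r in records),            # error rate
        "mean_satisfaction": mean(r["satisfaction"] for r in records),
    }

if __name__ == "__main__":
    for name, value in summarize(sessions).items():
        print(f"{name}: {value:.2f}")
```

A summary like this is only a starting point: the same records would normally be segmented by demographic or task, and interpreted alongside the qualitative observations described above.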
Benefits The key benefits of usability are: Higher revenues through increased sales Increased user efficiency and user satisfaction Reduced development costs Reduced support costs Corporate integration An increase in usability generally positively affects several facets of a company's output quality. In particular, the benefits fall into several common areas: Increased productivity Decreased training and support costs Increased sales and revenues Reduced development time and costs Reduced maintenance costs Increased customer satisfaction Increased usability in the workplace fosters several responses from employees: "Workers who enjoy their work do it better, stay longer in the face of temptation, and contribute ideas and enthusiasm to the evolution of enhanced productivity." To create standards, companies often implement experimental design techniques that create baseline levels. Areas of concern in an office environment include (though are not necessarily limited to): Working posture Design of workstation furniture Screen displays Input devices Organization issues Office environment Software interface By working to improve said factors, corporations can achieve their goals of increased output at lower costs, while potentially creating optimal levels of customer satisfaction. There are numerous reasons why each of these factors correlates to overall improvement. For example, making software user interfaces easier to understand reduces the need for extensive training. The improved interface tends to lower the time needed to perform tasks, and so would both raise the productivity levels for employees and reduce development time (and thus costs). Each of the aforementioned factors are not mutually exclusive; rather they should be understood to work in conjunction to form the overall workplace environment. In the 2010s, usability is recognized as an important software quality attribute, earning its place among more traditional attributes such as performance, robustness and aesthetic appearance. Various academic programs focus on usability. Several usability consultancy companies have emerged, and traditional consultancy and design firms offer similar services. There is some resistance to integrating usability work in organisations. Usability is seen as a vague concept, it is difficult to measure and other areas are prioritised when IT projects run out of time or money. Professional development Usability practitioners are sometimes trained as industrial engineers, psychologists, kinesiologists, systems design engineers, or with a degree in information architecture, information or library science, or Human-Computer Interaction (HCI). More often though they are people who are trained in specific applied fields who have taken on a usability focus within their organization. Anyone who aims to make tools easier to use and more effective for their desired function within the context of work or everyday living can benefit from studying usability principles and guidelines. For those seeking to extend their training, the User Experience Professionals' Association offers online resources, reference lists, courses, conferences, and local chapter meetings. The UXPA also sponsors World Usability Day each November. Related professional organizations include the Human Factors and Ergonomics Society (HFES) and the Association for Computing Machinery's special interest groups in Computer Human Interaction (SIGCHI), Design of Communication (SIGDOC) and Computer Graphics and Interactive Techniques (SIGGRAPH). 
The Society for Technical Communication also has a special interest group on Usability and User Experience (UUX). It publishes a quarterly newsletter called Usability Interface.
See also
Accessibility
Chief experience officer (CXO)
Design for All (inclusion)
Experience design
Fitts's law
Form follows function
Gemba or customer visit
GOMS
Gotcha (programming)
GUI
Human factors
Information architecture
Interaction design
Interactive systems engineering
Internationalization
Learnability
List of human-computer interaction topics
List of system quality attributes
Machine-Readable Documents
Natural mapping (interface design)
Non-functional requirement
RITE method
System Usability Scale
Universal usability
Usability goals
Usability testing
Usability engineering
User experience
User experience design
Web usability
World Usability Day
References
Further reading
R. G. Bias and D. J. Mayhew (eds) (2005), Cost-Justifying Usability: An Update for the Internet Age, Morgan Kaufmann
Donald A. Norman (2013), The Design of Everyday Things, Basic Books
Donald A. Norman (2004), Emotional Design: Why We Love (or Hate) Everyday Things, Basic Books
Jakob Nielsen (1994), Usability Engineering, Morgan Kaufmann Publishers
Jakob Nielsen (1994), Usability Inspection Methods, John Wiley & Sons
Ben Shneiderman (1980), Software Psychology
External links
Usability.gov
Human–computer interaction Technical communication Information architecture Software quality
Ontology
Ontology is the philosophical study of being. As one of the most fundamental concepts, being encompasses all of reality and every entity within it. To articulate the basic structure of being, ontology examines what all entities have in common and how they are divided into fundamental classes, known as categories. An influential distinction is between particular and universal entities. Particulars are unique, non-repeatable entities, like the person Socrates. Universals are general, repeatable entities, like the color green. Another contrast is between concrete objects existing in space and time, like a tree, and abstract objects existing outside space and time, like the number 7. Systems of categories aim to provide a comprehensive inventory of reality, employing categories such as substance, property, relation, state of affairs, and event. Ontologists disagree about which entities exist on the most basic level. Platonic realism asserts that universals have objective existence. Conceptualism says that universals only exist in the mind while nominalism denies their existence. There are similar disputes about mathematical objects, unobservable objects assumed by scientific theories, and moral facts. Materialism says that, fundamentally, there is only matter while dualism asserts that mind and matter are independent principles. According to some ontologists, there are no objective answers to ontological questions but only perspectives shaped by different linguistic practices. Ontology uses diverse methods of inquiry. They include the analysis of concepts and experience, the use of intuitions and thought experiments, and the integration of findings from natural science. Applied ontology employs ontological theories and principles to study entities belonging to a specific area. It is of particular relevance to information and computer science, which develop conceptual frameworks of limited domains. These frameworks are used to store information in a structured way, such as a college database tracking academic activities. Ontology is closely related to metaphysics and relevant to the fields of logic, theology, and anthropology. The origins of ontology lie in the ancient period with speculations about the nature of being and the source of the universe, including ancient Indian, Chinese, and Greek philosophy. In the modern period, philosophers conceived ontology as a distinct academic discipline and coined its name. Definition Ontology is the study of being. It is the branch of philosophy that investigates the nature of existence, the features all entities have in common, and how they are divided into basic categories of being. It aims to discover the foundational building blocks of the world and characterize reality as a whole in its most general aspects. In this regard, ontology contrasts with individual sciences like biology and astronomy, which restrict themselves to a limited domain of entities, such as living entities and celestial phenomena. In some contexts, the term ontology refers not to the general study of being but to a specific ontological theory within this discipline. It can also mean a conceptual scheme or inventory of a particular domain. Ontology is closely related to metaphysics but the exact relation of these two disciplines is disputed. According to a traditionally influential characterization, metaphysics is the study of fundamental reality in the widest sense while ontology is the subdiscipline of metaphysics that restricts itself to the most general features of reality. 
This view sees ontology as general metaphysics, which is to be distinguished from special metaphysics focused on more specific subject matters, like God, mind, and value. A different conception understands ontology as a preliminary discipline that provides a complete inventory of reality while metaphysics examines the features and structure of the entities in this inventory. Another conception says that metaphysics is about real being while ontology examines possible being or the concept of being. It is not universally accepted that there is a clear boundary between metaphysics and ontology. Some philosophers use both terms as synonyms. The word ontology has its roots in the ancient Greek terms ὄν (on, meaning "being") and λόγος (logos, meaning "study" or "discourse"), literally, "the study of being". The ancient Greeks did not use the term ontology, which was coined by philosophers in the 17th century.
Basic concepts
Being
Being, or existence, is the main topic of ontology. It is one of the most general and fundamental concepts, encompassing the whole of reality and every entity within it. In its widest sense, being only contrasts with non-being or nothingness. It is controversial whether a more substantial analysis of the concept or meaning of being is possible. One proposal understands being as a property possessed by every entity. Critics of this view argue that an entity without being cannot have any properties, meaning that being cannot be a property since properties presuppose being. A different suggestion says that all beings share a set of essential features. According to the Eleatic principle, "power is the mark of being", meaning that only entities with a causal influence truly exist. According to a controversial proposal by philosopher George Berkeley, all existence is mental, expressed in his slogan "to be is to be perceived". Depending on the context, the term being is sometimes used with a more limited meaning to refer only to certain aspects of reality. In one sense, being is unchanging and permanent and is distinguished from becoming, which implies change. Another contrast is between being, as what truly exists, and phenomena, as what merely appears to exist. In some contexts, being expresses the fact that something is while essence expresses its qualities or what it is like. Ontologists often divide being into fundamental classes or highest kinds, called categories of being. Proposed categories include substance, property, relation, state of affairs, and event. They can be used to provide systems of categories, which offer a comprehensive inventory of reality in which every entity belongs to exactly one category. Some philosophers, like Aristotle, say that entities belonging to different categories exist in distinct ways. Others, like John Duns Scotus, insist that there are no differences in the mode of being, meaning that everything exists in the same way. A related dispute is whether some entities have a higher degree of being than others, an idea already found in Plato's work. The more common view in contemporary philosophy is that a thing either exists or does not, with no intermediary states or degrees. The relation between being and non-being is a frequent topic in ontology. Influential issues include the status of nonexistent objects and why there is something rather than nothing.
Particulars and universals
A central distinction in ontology is between particular and universal entities. Particulars, also called individuals, are unique, non-repeatable entities, like Socrates, the Taj Mahal, and Mars.
Universals are general, repeatable entities, like the color green, the form circularity, and the virtue courage. Universals express aspects or features shared by particulars. For example, Mount Everest and Mount Fuji are particulars characterized by the universal mountain. Universals can take the form of properties or relations. Properties express what entities are like. They are features or qualities possessed by an entity. Properties are often divided into essential and accidental properties. A property is essential if an entity must have it; it is accidental if the entity can exist without it. For instance, having three sides is an essential property of a triangle while being red is an accidental property. Relations are ways how two or more entities stand to one another. Unlike properties, they apply to several entities and characterize them as a group. For example, being a city is a property while being east of is a relation, as in "Kathmandu is a city" and "Kathmandu is east of New Delhi". Relations are often divided into internal and external relations. Internal relations depend only on the properties of the objects they connect, like the relation of resemblance. External relations express characteristics that go beyond what the connected objects are like, such as spatial relations. Substances play an important role in the history of ontology as the particular entities that underlie and support properties and relations. They are often considered the fundamental building blocks of reality that can exist on their own, while entities like properties and relations cannot exist without substances. Substances persist through changes as they acquire or lose properties. For example, when a tomato ripens, it loses the property green and acquires the property red. States of affairs are complex particular entities that have several other entities as their components. The state of affairs "Socrates is wise" has two components: the individual Socrates and the property wise. States of affairs that correspond to reality are called facts. Facts are truthmakers of statements, meaning that whether a statement is true or false depends on the underlying facts. Events are particular entities that occur in time, like the fall of the Berlin Wall and the first moon landing. They usually involve some kind of change, like the lawn becoming dry. In some cases, no change occurs, like the lawn staying wet. Complex events, also called processes, are composed of a sequence of events. Concrete and abstract objects Concrete objects are entities that exist in space and time, such as a tree, a car, and a planet. They have causal powers and can affect each other, like when a car hits a tree and both are deformed in the process. Abstract objects, by contrast, are outside space and time, such as the number 7 and the set of integers. They lack causal powers and do not undergo changes. It is controversial whether or in what sense abstract objects exist and how people can know about them. Concrete objects encountered in everyday life are complex entities composed of various parts. For example, a book is made up of two covers and pages between them. Each of these components is itself constituted of smaller parts, like molecules, atoms, and elementary particles. Mereology studies the relation between parts and wholes. One position in mereology says that every collection of entities forms a whole. 
According to a different view, this is only the case for collections that fulfill certain requirements, for instance, that the entities in the collection touch one another. The problem of material constitution asks whether or in what sense a whole should be considered a new object in addition to the collection of parts composing it. Abstract objects are closely related to fictional and intentional objects. Fictional objects are entities invented in works of fiction. They can be things, like the One Ring in J. R. R. Tolkien's book series The Lord of the Rings, and people, like the Monkey King in the novel Journey to the West. Some philosophers say that fictional objects are one type of abstract object, existing outside space and time. Others understand them as artifacts that are created as the works of fiction are written. Intentional objects are entities that exist within mental states, like perceptions, beliefs, and desires. For example, if a person thinks about the Loch Ness Monster then the Loch Ness Monster is the intentional object of this thought. People can think about existing and non-existing objects, making it difficult to assess the ontological status of intentional objects. Other concepts Ontological dependence is a relation between entities. An entity depends ontologically on another entity if the first entity cannot exist without the second entity. For instance, the surface of an apple cannot exist without the apple. An entity is ontologically independent if it does not depend on anything else, meaning that it is fundamental and can exist on its own. Ontological dependence plays a central role in ontology and its attempt to describe reality on its most fundamental level. It is closely related to metaphysical grounding, which is the relation between a ground and facts it explains. An ontological commitment of a person or a theory is an entity that exists according to them. For instance, a person who believes in God has an ontological commitment to God. Ontological commitments can be used to analyze which ontologies people explicitly defend or implicitly assume. They play a central role in contemporary metaphysics when trying to decide between competing theories. For example, the Quine–Putnam indispensability argument defends mathematical Platonism, asserting that numbers exist because the best scientific theories are ontologically committed to numbers. Possibility and necessity are further topics in ontology. Possibility describes what can be the case, as in "it is possible that extraterrestrial life exists". Necessity describes what must be the case, as in "it is necessary that three plus two equals five". Possibility and necessity contrast with actuality, which describes what is the case, as in "Doha is the capital of Qatar". Ontologists often use the concept of possible worlds to analyze possibility and necessity. A possible world is a complete and consistent way how things could have been. For example, Haruki Murakami was born in 1949 in the actual world but there are possible worlds in which he was born at a different date. Using this idea, possible world semantics says that a sentence is possibly true if it is true in at least one possible world. A sentence is necessarily true if it is true in all possible worlds. In ontology, identity means that two things are the same. Philosophers distinguish between qualitative and numerical identity. Two entities are qualitatively identical if they have exactly the same features, such as perfect identical twins. 
This is also called exact similarity and indiscernibility. Numerical identity, by contrast, means that there is only a single entity. For example, if Fatima is the mother of Leila and Hugo then Leila's mother is numerically identical to Hugo's mother. Another distinction is between synchronic and diachronic identity. Synchronic identity relates an entity to itself at the same time. Diachronic identity relates an entity to itself at different times, as in "the woman who bore Leila three years ago is the same woman who bore Hugo this year". Branches There are different and sometimes overlapping ways to divide ontology into branches. Pure ontology focuses on the most abstract topics associated with the concept and nature of being. It is not restricted to a specific domain of entities and studies existence and the structure of reality as a whole. Pure ontology contrasts with applied ontology, also called domain ontology. Applied ontology examines the application of ontological theories and principles to specific disciplines and domains, often in the field of science. It considers ontological problems in regard to specific entities such as matter, mind, numbers, God, and cultural artifacts. Social ontology, a major subfield of applied ontology, studies social kinds, like money, gender, society, and language. It aims to determine the nature and essential features of these concepts while also examining their mode of existence. According to a common view, social kinds are useful constructions to describe the complexities of social life. This means that they are not pure fictions but, at the same time, lack the objective or mind-independent reality of natural phenomena like elementary particles, lions, and stars. In the fields of computer science, information science, and knowledge representation, applied ontology is interested in the development of formal frameworks to encode and store information about a limited domain of entities in a structured way. A related application in genetics is Gene Ontology, which is a comprehensive framework for the standardized representation of gene-related information across species and databases. Formal ontology is the study of objects in general while focusing on their abstract structures and features. It divides objects into different categories based on the forms they exemplify. Formal ontologists often rely on the tools of formal logic to express their findings in an abstract and general manner. Formal ontology contrasts with material ontology, which distinguishes between different areas of objects and examines the features characteristic of a specific area. Examples are ideal spatial beings in the area of geometry and living beings in the area of biology. Descriptive ontology aims to articulate the conceptual scheme underlying how people ordinarily think about the world. Prescriptive ontology departs from common conceptions of the structure of reality and seeks to formulate a new and better conceptualization. Another contrast is between analytic and speculative ontology. Analytic ontology examines the types and categories of being to determine what kinds of things could exist and what features they would have. Speculative ontology aims to determine which entities actually exist, for example, whether there are numbers or whether time is an illusion. Metaontology studies the underlying concepts, assumptions, and methods of ontology. 
Unlike other forms of ontology, it does not ask "what exists" but "what does it mean for something to exist" and "how can people determine what exists". It is closely related to fundamental ontology, an approach developed by philosopher Martin Heidegger that seeks to uncover the meaning of being. Schools of thought Realism and anti-realism The term realism is used for various theories that affirm that some kind of phenomenon is real or has mind-independent existence. Ontological realism is the view that there are objective facts about what exists and what the nature and categories of being are. Ontological realists do not make claims about what those facts are, for example, whether elementary particles exist. They merely state that there are mind-independent facts that determine which ontological theories are true. This idea is denied by ontological anti-realists, also called ontological deflationists, who say that there are no substantive facts one way or the other. According to philosopher Rudolf Carnap, for example, ontological statements are relative to language and depend on the ontological framework of the speaker. This means that there are no framework-independent ontological facts since different frameworks provide different views while there is no objectively right or wrong framework. In a more narrow sense, realism refers to the existence of certain types of entities. Realists about universals say that universals have mind-independent existence. According to Platonic realists, universals exist not only independent of the mind but also independent of particular objects that exemplify them. This means that the universal red could exist by itself even if there were no red objects in the world. Aristotelian realism, also called moderate realism, rejects this idea and says that universals only exist as long as there are objects that exemplify them. Conceptualism, by contrast, is a form of anti-realism, stating that universals only exist in the mind as concepts that people use to understand and categorize the world. Nominalists defend a strong form of anti-realism by saying that universals have no existence. This means that the world is entirely composed of particular objects. Mathematical realism, a closely related view in the philosophy of mathematics, says that mathematical facts exist independently of human language, thought, and practices and are discovered rather than invented. According to mathematical Platonism, this is the case because of the existence of mathematical objects, like numbers and sets. Mathematical Platonists say that mathematical objects are as real as physical objects, like atoms and stars, even though they are not accessible to empirical observation. Influential forms of mathematical anti-realism include conventionalism, which says that mathematical theories are trivially true simply by how mathematical terms are defined, and game formalism, which understands mathematics not as a theory of reality but as a game governed by rules of string manipulation. Modal realism is the theory that in addition to the actual world, there are countless possible worlds as real and concrete as the actual world. The primary difference is that the actual world is inhabited by us while other possible worlds are inhabited by our counterparts. Modal anti-realists reject this view and argue that possible worlds do not have concrete reality but exist in a different sense, for example, as abstract or fictional objects. 
Scientific realists say that the scientific description of the world is an accurate representation of reality. Scientific realism is of particular relevance with regard to things that cannot be directly observed by humans but are assumed to exist by scientific theories, like electrons, forces, and laws of nature. Scientific anti-realism says that scientific theories are not descriptions of reality but instruments to predict observations and the outcomes of experiments. Moral realists claim that there exist mind-independent moral facts. According to them, there are objective principles that determine which behavior is morally right. Moral anti-realists either claim that moral principles are subjective and differ between persons and cultures, a position known as moral relativism, or outright deny the existence of moral facts, a view referred to as moral nihilism.
By number of categories
Monocategorical theories say that there is only one fundamental category, meaning that every single entity belongs to the same universal class. For example, some forms of nominalism state that only concrete particulars exist while some forms of bundle theory state that only properties exist. Polycategorical theories, by contrast, hold that there is more than one basic category, meaning that entities are divided into two or more fundamental classes. They take the form of systems of categories, which list the highest genera of being to provide a comprehensive inventory of everything. The closely related discussion between monism and dualism is about the most fundamental types that make up reality. According to monism, there is only one kind of thing or substance on the most basic level. Materialism is an influential monist view; it says that everything is material. This means that mental phenomena, such as beliefs, emotions, and consciousness, either do not exist or exist as aspects of matter, like brain states. Idealists take the converse perspective, arguing that everything is mental. They may understand physical phenomena, like rocks, trees, and planets, as ideas or perceptions of conscious minds. Neutral monism occupies a middle ground by saying that both mind and matter are derivative phenomena. Dualists state that mind and matter exist as independent principles, either as distinct substances or different types of properties. In a slightly different sense, monism contrasts with pluralism as a view not about the number of basic types but the number of entities. In this sense, monism is the controversial position that only a single all-encompassing entity exists in all of reality. Pluralism is more commonly accepted and says that several distinct entities exist.
By fundamental categories
The historically influential substance-attribute ontology is a polycategorical theory. It says that reality is at its most fundamental level made up of unanalyzable substances that are characterized by universals, such as the properties an individual substance has or relations that exist between substances. The closely related substratum theory says that each concrete object is made up of properties and a substratum. The difference is that the substratum is not characterized by properties: it is a featureless or bare particular that merely supports the properties. Various alternative ontological theories have been proposed that deny the role of substances as the foundational building blocks of reality. Stuff ontologies say that the world is not populated by distinct entities but by continuous stuff that fills space.
This stuff may take various forms and is often conceived as infinitely divisible. According to process ontology, processes or events are the fundamental entities. This view usually emphasizes that nothing in reality is static, meaning that being is dynamic and characterized by constant change. Bundle theories state that there are no regular objects but only bundles of co-present properties. For example, a lemon may be understood as a bundle that includes the properties yellow, sour, and round. According to traditional bundle theory, the bundled properties are universals, meaning that the same property may belong to several different bundles. According to trope bundle theory, properties are particular entities that belong to a single bundle. Some ontologies focus not on distinct objects but on interrelatedness. According to relationalism, all of reality is relational at its most fundamental level. Ontic structural realism agrees with this basic idea and focuses on how these relations form complex structures. Some structural realists state that there is nothing but relations, meaning that individual objects do not exist. Others say that individual objects exist but depend on the structures in which they participate. Fact ontologies present a different approach by focusing on how entities belonging to different categories come together to constitute the world. Facts, also known as states of affairs, are complex entities; for example, the fact that the Earth is a planet consists of the particular object the Earth and the property being a planet. Fact ontologies state that facts are the fundamental constituents of reality, meaning that objects, properties, and relations cannot exist on their own and only form part of reality to the extent that they participate in facts. In the history of philosophy, various ontological theories based on several fundamental categories have been proposed. One of the first theories of categories was suggested by Aristotle, whose system includes ten categories: substance, quantity, quality, relation, place, date, posture, state, action, and passion. An early influential system of categories in Indian philosophy, first proposed in the Vaisheshika school, distinguishes between six categories: substance, quality, motion, universal, individuator, and inherence. Immanuel Kant's transcendental idealism includes a system of twelve categories, which Kant saw as pure concepts of understanding. They are subdivided into four classes: quantity, quality, relation, and modality. In more recent philosophy, theories of categories were developed by C. S. Peirce, Edmund Husserl, Samuel Alexander, Roderick Chisholm, and E. J. Lowe. Others The dispute between constituent and relational ontologies concerns the internal structure of concrete particular objects. Constituent ontologies say that objects have an internal structure with properties as their component parts. Bundle theories are an example of this position: they state that objects are bundles of properties. This view is rejected by relational ontologies, which say that objects have no internal structure, meaning that properties do not inhere in them but are externally related to them. According to one analogy, objects are like pin-cushions and properties are pins that can be stuck to objects and removed again without becoming a real part of objects. Relational ontologies are common in certain forms of nominalism that reject the existence of universal properties. 
Hierarchical ontologies state that the world is organized into levels. Entities on all levels are real but low-level entities are more fundamental than high-level entities. This means that they can exist without high-level entities while high-level entities cannot exist without low-level entities. One hierarchical ontology says that elementary particles are more fundamental than the macroscopic objects they compose, like chairs and tables. Other hierarchical theories assert that substances are more fundamental than their properties and that nature is more fundamental than culture. Flat ontologies, by contrast, deny that any entity has a privileged status, meaning that all entities exist on the same level. For them, the main question is only whether something exists rather than identifying the level at which it exists. The ontological theories of endurantism and perdurantism aim to explain how material objects persist through time. Endurantism is the view that material objects are three-dimensional entities that travel through time while being fully present in each moment. They remain the same even when they gain or lose properties as they change. Perdurantism is the view that material objects are four-dimensional entities that extend not just through space but also through time. This means that they are composed of temporal parts and, at any moment, only one part of them is present but not the others. According to perdurantists, change means that an earlier part exhibits different qualities than a later part. When a tree loses its leaves, for instance, there is an earlier temporal part with leaves and a later temporal part without leaves. Differential ontology is a poststructuralist approach interested in the relation between the concepts of identity and difference. It says that traditional ontology sees identity as the more basic term by first characterizing things in terms of their essential features and then elaborating differences based on this conception. Differential ontologists, by contrast, privilege difference and say that the identity of a thing is a secondary determination that depends on how this thing differs from other things. Object-oriented ontology belongs to the school of speculative realism and examines the nature and role of objects. It sees objects as the fundamental building blocks of reality. As a flat ontology, it denies that some entities have a more fundamental form of existence than others. It uses this idea to argue that objects exist independently of human thought and perception. Methods Methods of ontology are ways of conducting ontological inquiry and deciding between competing theories. There is no single standard method; the diverse approaches are studied by metaontology. Conceptual analysis is a method to understand ontological concepts and clarify their meaning. It proceeds by analyzing their component parts and the necessary and sufficient conditions under which a concept applies to an entity. This information can help ontologists decide whether a certain type of entity, such as numbers, exists. Eidetic variation is a related method in phenomenological ontology that aims to identify the essential features of different types of objects. Phenomenologists start by imagining an example of the investigated type. They proceed by varying the imagined features to determine which ones cannot be changed, meaning they are essential. The transcendental method begins with a simple observation that a certain entity exists. 
In the following step, it studies the ontological repercussions of this observation by examining how it is possible or which conditions are required for this entity to exist. Another approach is based on intuitions in the form of non-inferential impressions about the correctness of general principles. These principles can be used as the foundation on which an ontological system is built and expanded using deductive reasoning. A further intuition-based method relies on thought experiments to evoke new intuitions. This happens by imagining a situation relevant to an ontological issue and then employing counterfactual thinking to assess the consequences of this situation. For example, some ontologists examine the relation between mind and matter by imagining creatures identical to humans but without consciousness. Naturalistic methods rely on the insights of the natural sciences to determine what exists. According to an influential approach by Willard Van Orman Quine, ontology can be conducted by analyzing the ontological commitments of scientific theories. This method is based on the idea that scientific theories provide the most reliable description of reality and that their power can be harnessed by investigating the ontological assumptions underlying them. Principles of theory choice offer guidelines for assessing the advantages and disadvantages of ontological theories rather than guiding their construction. The principle of Ockham's razor says that simple theories are preferable. A theory can be simple in different respects, for example, by using very few basic types or by describing the world with a small number of fundamental entities. Ontologists are also interested in the explanatory power of theories and give preference to theories that can explain many observations. A further factor is how close a theory is to common sense. Some ontologists use this principle as an argument against theories that are very different from how ordinary people think about the issue. In applied ontology, ontological engineering is the process of creating and refining conceptual models of specific domains. Developing a new ontology from scratch involves various preparatory steps, such as delineating the scope of the domain one intends to model and specifying the purpose and use cases of the ontology. Once the foundational concepts within the area have been identified, ontology engineers proceed by defining them and characterizing the relations between them. This is usually done in a formal language to ensure precision and, in some cases, automatic computability. In the following review phase, the validity of the ontology is assessed using test data. Various more specific instructions for how to carry out the different steps have been suggested. They include the Cyc method, Grüninger and Fox's methodology, and so-called METHONTOLOGY. In some cases, it is feasible to adapt a pre-existing ontology to fit a specific domain and purpose rather than creating a new one from scratch.
Related fields
Ontology overlaps with many disciplines, including logic, the study of correct reasoning. Ontologists often employ logical systems to express their insights, specifically in the field of formal ontology. Of particular interest to them is the existential quantifier, which is used to express what exists. In first-order logic, for example, the formula ∃x Dog(x) states that dogs exist. Some philosophers study ontology by examining the structure of thought and language, saying that they reflect the structure of being.
Doubts about the accuracy of natural language have led some ontologists to seek a new formal language, termed ontologese, for a better representation of the fundamental structure of reality. Ontologies are often used in information science to provide a conceptual scheme or inventory of a specific domain, making it possible to classify objects and formally represent information about them. This is of specific interest to computer science, which builds databases to store this information and defines computational processes to automatically transform and use it. For instance, to encode and store information about clients and employees in a database, an organization may use an ontology with categories such as person, company, address, and name. In some cases, it is necessary to exchange information belonging to different domains or to integrate databases using distinct ontologies. This can be achieved with the help of upper ontologies, which are not limited to one specific domain. They use general categories that apply to most or all domains, like Suggested Upper Merged Ontology and Basic Formal Ontology. Similar applications of ontology are found in various fields seeking to manage extensive information within a structured framework. Protein Ontology is a formal framework for the standardized representation of protein-related entities and their relationships. Gene Ontology and Sequence Ontology serve a similar purpose in the field of genetics. Environment Ontology is a knowledge representation focused on ecosystems and environmental processes. Friend of a Friend provides a conceptual framework to represent relations between people and their interests and activities. The topic of ontology has received increased attention in anthropology since the 1990s, sometimes termed the "ontological turn". This type of inquiry is focused on how people from different cultures experience and understand the nature of being. Specific interest has been given to the ontological outlook of Indigenous people and how it differs from a Western perspective. As an example of this contrast, it has been argued that various indigenous communities ascribe intentionality to non-human entities, like plants, forests, or rivers. This outlook is known as animism and is also found in Native American ontologies, which emphasize the interconnectedness of all living entities and the importance of balance and harmony with nature. Ontology is closely related to theology and its interest in the existence of God as an ultimate entity. The ontological argument, first proposed by Anselm of Canterbury, attempts to prove the existence of the divine. It defines God as the greatest conceivable being. From this definition it concludes that God must exist since God would not be the greatest conceivable being if God lacked existence. Another overlap in the two disciplines is found in ontological theories that use God or an ultimate being as the foundational principle of reality. Heidegger criticized this approach, terming it ontotheology. History The roots of ontology in ancient philosophy are speculations about the nature of being and the source of the universe. Discussions of the essence of reality are found in the Upanishads, ancient Indian scriptures dating from as early as 700 BCE. They say that the universe has a divine foundation and discuss in what sense ultimate reality is one or many. 
Samkhya, the first orthodox school of Indian philosophy, formulated an atheist dualist ontology based on the Upanishads, identifying pure consciousness and matter as its two foundational principles. The later Vaisheshika school proposed a comprehensive system of categories. In ancient China, Laozi's (6th century BCE) Taoism examines the underlying order of the universe, known as Tao, and how this order is shaped by the interaction of two basic forces, yin and yang. The philosophical movement of Xuanxue emerged in the 3rd century CE and explored the relation between being and non-being. Starting in the 6th century BCE, Presocratic philosophers in ancient Greece aimed to provide rational explanations of the universe. They suggested that a first principle, such as water or fire, is the primal source of all things. Parmenides (c. 515–450 BCE) is sometimes considered the founder of ontology because of his explicit discussion of the concepts of being and non-being. Inspired by Presocratic philosophy, Plato (427–347 BCE) developed his theory of forms. It distinguishes between unchangeable perfect forms and matter, which has a lower degree of existence and imitates the forms. Aristotle (384–322 BCE) suggested an elaborate system of categories that introduced the concept of substance as the primary kind of being. The school of Neoplatonism arose in the 3rd century CE and proposed an ineffable source of everything, called the One, which is more basic than being itself. The problem of universals was an influential topic in medieval ontology. Boethius (477–524 CE) suggested that universals can exist not only in matter but also in the mind. This view inspired Peter Abelard (1079–1142 CE), who proposed that universals exist only in the mind. Thomas Aquinas (1224–1274 CE) developed and refined fundamental ontological distinctions, such as the contrast between existence and essence, between substance and accidents, and between matter and form. He also discussed the transcendentals, which are the most general properties or modes of being. John Duns Scotus (1266–1308) argued that all entities, including God, exist in the same way and that each entity has a unique essence, called haecceity. William of Ockham (c. 1287–1347 CE) proposed that one can decide between competing ontological theories by assessing which one uses the smallest number of elements, a principle known as Ockham's razor. In Arabic-Persian philosophy, Avicenna (980–1037 CE) combined ontology with theology. He identified God as a necessary being that is the source of everything else, which only has contingent existence. In 8th-century Indian philosophy, the school of Advaita Vedanta emerged. It says that only a single all-encompassing entity exists, stating that the impression of a plurality of distinct entities is an illusion. Starting in the 13th century CE, the Navya-Nyāya school built on Vaisheshika ontology with a particular focus on the problem of non-existence and negation. 9th-century China saw the emergence of Neo-Confucianism, which developed the idea that a rational principle, known as li, is the ground of being and order of the cosmos. René Descartes (1596–1650) formulated a dualist ontology at the beginning of the modern period. It distinguishes between mind and matter as distinct substances that causally interact. Rejecting Descartes's dualism, Baruch Spinoza (1632–1677) proposed a monist ontology according to which there is only a single entity that is identical to God and nature. 
Gottfried Wilhelm Leibniz (1646–1716), by contrast, said that the universe is made up of many simple substances, which are synchronized but do not interact with one another. John Locke (1632–1704) proposed his substratum theory, which says that each object has a featureless substratum that supports the object's properties. Christian Wolff (1679–1754) was influential in establishing ontology as a distinct discipline, delimiting its scope from other forms of metaphysical inquiry. George Berkeley (1685–1753) developed an idealist ontology according to which material objects are ideas perceived by minds. Immanuel Kant (1724–1804) rejected the idea that humans can have direct knowledge of independently existing things and their nature, limiting knowledge to the field of appearances. For Kant, ontology does not study external things but provides a system of pure concepts of understanding. Influenced by Kant's philosophy, Georg Wilhelm Friedrich Hegel (1770–1831) linked ontology and logic. He said that being and thought are identical and examined their foundational structures. Arthur Schopenhauer (1788–1860) rejected Hegel's philosophy and proposed that the world is an expression of a blind and irrational will. Francis Herbert Bradley (1846–1924) saw absolute spirit as the ultimate and all-encompassing reality while denying that there are any external relations. At the beginning of the 20th century, Edmund Husserl (1859–1938) developed phenomenology and employed its method, the description of experience, to address ontological problems. This idea inspired his student Martin Heidegger (1889–1976) to clarify the meaning of being by exploring the mode of human existence. Jean-Paul Sartre responded to Heidegger's philosophy by examining the relation between being and nothingness from the perspective of human existence, freedom, and consciousness. Based on the phenomenological method, Nicolai Hartmann (1882–1950) developed a complex hierarchical ontology that divides reality into four levels: inanimate, biological, psychological, and spiritual. Alexius Meinong (1853–1920) articulated a controversial ontological theory that includes nonexistent objects as part of being. Arguing against this theory, Bertrand Russell (1872–1970) formulated a fact ontology known as logical atomism. This idea was further refined by the early Ludwig Wittgenstein (1889–1951) and inspired D. M. Armstrong's (1926–2014) ontology. Alfred North Whitehead (1861–1947), by contrast, developed a process ontology. Rudolf Carnap (1891–1970) questioned the objectivity of ontological theories by claiming that what exists depends on one's linguistic framework. He had a strong influence on Willard Van Orman Quine (1908–2000), who analyzed the ontological commitments of scientific theories to solve ontological problems. Quine's student David Lewis (1941–2001) formulated the position of modal realism, which says that possible worlds are as real and concrete as the actual world. Since the end of the 20th century, interest in applied ontology has risen in computer and information science with the development of conceptual frameworks for specific domains. See also References Notes Citations Sources External links
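The applied-ontology discussion earlier in this article mentions that an organization might encode information about clients and employees using categories such as person, company, address, and name. The Python sketch below shows one way such a small domain ontology could be represented with typed categories and relations; the class names, instances, and relation names are illustrative assumptions rather than any standard vocabulary, and a production system would more likely express the scheme in a dedicated ontology language such as OWL so it can be checked by reasoners and aligned with upper ontologies.

```python
# Minimal sketch of a domain ontology for a client/employee database.
# Categories (Person, Company, Address) and relations (works_for, located_at)
# are illustrative assumptions, not a standard upper ontology.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Address:
    street: str
    city: str

@dataclass
class Company:
    name: str
    located_at: Address          # relation from Company to Address

@dataclass
class Person:
    name: str
    works_for: Optional[Company] = None  # relation from Person to Company

# Instances populating the ontology
hq = Address(street="1 Example Way", city="Springfield")
acme = Company(name="Acme Ltd", located_at=hq)
alice = Person(name="Alice", works_for=acme)

def employer_city(person: Person) -> Optional[str]:
    """Traverse the ontology's relations: Person -> works_for -> located_at."""
    if person.works_for is None:
        return None
    return person.works_for.located_at.city

print(employer_city(alice))  # prints "Springfield"
```

The point of fixing the categories and relations up front is that queries like the one above can traverse the structure reliably, and that two databases built on the same (or a mapped) ontology can exchange information without ambiguity.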
Human services
Human services is an interdisciplinary field of study with the objective of meeting human needs through an applied knowledge base, focusing on prevention as well as remediation of problems, and maintaining a commitment to improving the overall quality of life of service populations. The process involves the study of social technologies (practice methods, models, and theories), service technologies (programs, organizations, and systems), and scientific innovations designed to ameliorate problems and enhance the quality of life of individuals, families, and communities, improving the delivery of service with better coordination, accessibility, and accountability. The mission of human services is to promote a practice that involves simultaneously working at all levels of society (a whole-person approach) in the process of promoting the autonomy of individuals or groups, making informal or formal human services systems more efficient and effective, and advocating for positive social change within society. Human services practitioners strive to advance the autonomy of service users through civic engagement, education, health promotion, and social change at all levels of society. Practitioners also engage in advocacy so that human systems remain accessible, integrated, efficient, and effective. Human services academic programs are widely available at colleges and universities, which award degrees at the associate, baccalaureate, and graduate levels. Human services programs exist in countries around the world.
History
United States
Human services has its roots in charitable activities of religious and civic organizations that date back to the Colonial period. However, the academic discipline of human services did not start until the 1960s. At that time, a group of college academics started the new human services movement and began to promote the adoption of a new ideology about human service delivery and professionalism among traditional helping disciplines. The movement's major goal was to make service delivery more efficient, effective, and humane. The other goals dealt with the reeducation of traditional helping professionals to have a greater appreciation of the individual as a whole person (humanistic psychology) and to be accountable to the communities they serve (postmodernism). Furthermore, professionals would learn to take responsibility at all levels of government, use systems approaches to consider human problems, and be involved in progressive social change. Traditional academic programs such as education, nursing, social work, law, and medicine were resistant to the new human services movement's ideology because it appeared to challenge their professional status. Changing the traditional concept of professionalism involved rethinking consumer control and the distribution of power. The new movement also called on human service professionals to work for social change. It was proposed that reducing monopolistic control by professionals could result in democratization of knowledge, thus leading those professionals to counteract dominant establishments and advocate on behalf of their clients and communities. The movement also hoped that human service delivery systems would become integrated, comprehensive, and more accessible, which would make them more humane for service users. Ultimately, the resistance from traditional helping professions served as the impetus for a group of educators in higher education to start the new academic discipline of human services.
Some maintain that the human services discipline has a concrete identity as a profession that supplements and complements other traditional professions. Yet other professionals and scholars have not agreed upon an authoritative definition for human services.
Academic programs
United States
Development
Chenault and Burnford argued that human services programs must inform and train students at the graduate or postgraduate level if human services hoped to be considered a professional discipline. A progressive graduate human services program was established by Audrey Cohen (1931–1996), who was considered an innovative educator for her time. The Audrey Cohen College of Human Services, now called the Metropolitan College of New York, offered one of the first graduate programs in 1974. In the same time period, Springfield College in Massachusetts became a major force in preserving human services as an academic discipline. Currently, Springfield College is one of the oldest and largest human services programs in the United States. Manpower studies in the 1960s and 70s had shown that there would be a shortage of helping professionals in an array of service delivery areas. In turn, some educators proposed that the training of nonprofessionals (e.g., mental health technicians) could bridge this looming personnel shortage. One of the earliest educational initiatives to develop undergraduate curricula was undertaken by the Southern Regional Education Board (SREB), which was funded by the National Institute on Health. Professionals of the SREB Undergraduate Social Welfare Manpower Project helped colleges develop new social welfare programs, which later became known as human services. Some believed community college human services programs were the most expedient way to train paraprofessionals for direct service jobs in areas such as mental health. Currently, a large percentage of human services programs are run at the community college level. The development of community college human services programs was supported with government funding that was earmarked for the federal new careers initiatives. In turn, the federally funded New Careers Program was created to produce a nonprofessional career track for economically disadvantaged, underemployed, and unemployed adults as a strategy to eradicate poverty within society and to end a critical shortage of health-care personnel. Graduates from these programs successfully acquired employment as paraprofessionals, but there were limitations to their upward mobility within social service agencies because they lacked a graduate or professional degree.
Current programs
Currently, there are academic programs in human services at the associate, baccalaureate, and graduate levels. There are approximately 600 human services programs throughout the United States. An online directory of human services programs lists many (but not all) of the programs state by state in conjunction with their accreditation status from the Council for Standards in Human Services Education (CSHSE). The CSHSE offers accreditation for human services programs in higher education. The accreditation process is voluntary and labor-intensive; it is designed to assure the quality, consistency, and relevance of human service education through research-based standards and a peer-review process. According to the CSHSE's webpage, there are only 43 accredited human services programs in the United States.
Human services curricula are based on an interdisciplinary knowledge foundation that allows students to consider practical solutions from multiple disciplinary perspectives. Across the curriculum, human services students are often taught to view human problems from a socioecological perspective (developed by Urie Bronfenbrenner) that involves viewing human strengths and problems as interconnected to a family unit, community, and society. This perspective is considered a "whole-person perspective". Overall, undergraduate programs prepare students to be human services generalists, master's programs prepare students to be human services administrators, and doctoral programs prepare students to be researcher-analysts and college-level educators. Research in this field focuses on an array of topics that deal with direct service issues, case management, organizational change, management of human service organizations, advocacy, community organizing, community development, social welfare policy, service integration, multiculturalism, integration of technology, poverty issues, social justice, development, and social change strategies.
Certification and continuing education
United States
The Center for Credentialing & Education (CCE) conceptualized the Human Services-Board Certified Practitioner (HS-BCP) credential with the assistance of the National Organization for Human Services (NOHS) and the Council for Standards in Human Service Education (CSHSE). The credential was created for human services practitioners seeking to advance their careers by acquiring independent verification of their practical knowledge and educational background. Graduates from human services programs can obtain a Human Services Board Certified Practitioner (HS-BCP) credential offered by the Center for Credentialing & Education (CCE). The HS-BCP certification ensures that human services practitioners offer quality services, are competent service providers, are committed to high standards, and adhere to the NOHS Ethical Standards of Human Service Professionals; it also helps solidify the professional identity of human services practitioners.
HS-BCP experience requirements for the certification: HS-BCP applicants must meet post-graduation experience requirements to be eligible to take the examination. However, graduates of a CSHSE-accredited degree program may sit for the HS-BCP exam without verifying their human services work experience. Otherwise, experience requirements for candidates not from a CSHSE-accredited program are as follows: an associate degree requires three years of post-degree experience, including a minimum of 4,500 hours; a bachelor's degree requires two years of post-degree experience, including a minimum of 3,000 hours; a master's or doctorate requires one year of post-degree experience, including a minimum of 1,500 hours. The HS-BCP exam is designed to verify a candidate's human services knowledge. The exam was created as a collaborative effort of human services subject-matter experts and normed on a population of professionals in the field. The HS-BCP exam covers the following areas: assessment, treatment planning, and outcome evaluation; theoretical orientation/interventions; case management, professional practice, and ethics; and administration, program development/evaluation, and supervision.
Tools and methodology
There are numerous different tools and methods utilized in human services.
Tools and methodology There are numerous tools and methods utilized in human services. For example, qualitative and quantitative surveys are administered to define community problems that need addressing. These surveys can narrow down what service is needed, who would receive it, for how long, and where the problem is concentrated. Additional necessary skills include strong communication and professional coordination, since networking is crucial for obtaining and transporting resources to areas of need. A lack of these skills could lead to harmful consequences, as a community's needs would not be adequately met. Furthermore, research is a key component of the successful conduct of human services. Both theoretical and empirical research are required of anyone pursuing a career in human services, because uninformed practice can leave communities in confusion and disarray, thus perpetuating the very problem that was supposed to be resolved. In relation to social work, a professional must be unbiased and patient, because they will be working closely with a vast and diverse population who are often in extremely dire situations. Allowing personal beliefs to bleed into one's human services practice could negatively impact the quality of services or limit the scope of potential outreach. Employment outlook United States Currently, the three major employment roles played by human services graduates include providing direct service, performing administrative work, and working in the community. According to the Occupational Outlook Handbook, published by the US Department of Labor, the employment of human service assistants was projected to grow by 34% through 2016, faster than the average for all occupations. There are several different occupations for individuals with post-secondary degrees. Specialization is crucial when applying for a human services career, because many different occupations and skill sets fall under the broad scope of human services, especially those related to social work. This is because many different types of people require different types of aid. For example, a child would need special attention compared to an adult and would visit a professional who is trained to work directly with younger people. Furthermore, an alcoholic or addict would specifically need a professional rehabilitation counselor, while a victim of a natural disaster would need a crisis support worker for immediate assistance. Other examples of human services fields include, but are not limited to: criminology, community service, housing, health, therapy, and sociology. Professional organizations North America There are several professional human services organizations for professionals, educators, and students to join across North America. United States The National Organization for Human Services (NOHS) is a professional organization open to educators, professionals, and students interested in current issues in the field of human services. NOHS sponsors an annual conference in different parts of the United States. In addition, there are four independent human services regional organizations: (a) Mid-Atlantic Consortium for Human Services, (b) Midwest Organization for Human Services, (c) New England Organization for Human Service, and (d) Northwest Human Services Association. All of the regional organizations are also open to educators, professionals, and students, and each holds an annual conference at different locations throughout its region, such as universities or other institutions. 
Human services special interest groups also exist within the American Society for Public Administration (ASPA) and the American Educational Research Association (AERA). The ASPA subsection is named the Section on Health and Human Services Administration; its purpose is to foster the development of knowledge, understanding, and practice in the fields of health and human services administration and to promote professional growth and communication among academics and practitioners in these fields. The fields of health and human services administration share a common and unique focus on improving the quality of life through client-centered policies and service transactions. The AERA special interest group is named Education, Health and Human Service Linkages. Its purpose is to create a community of researchers and practitioners interested in developing knowledge about comprehensive school health, school-linked services, and initiatives that support children and their families. This subgroup also focuses on interpersonal collaboration, integration of services, and interdisciplinary approaches. The group's interests encompass the interrelated policy, practice, and research challenges involved in creating viable linkages among these three distinct areas. The American Public Human Services Association (APHSA) is a nonprofit organization that pursues distinction in health and human services by working with policymakers, supporting state and local agencies, and working with partners to promote innovative, integrative, and efficient solutions in health and human services policy and practice. APHSA has individual and student memberships. Canada The Canadian Institute for Human Services is an advocacy, education and action-research organization for the advancement of health equity, progressive education and social innovation. The institute collaborates with researchers, field practitioners, community organizations, and socially conscious companies, along with various levels of government and educational institutions, to ensure the Canadian health and human services sector remains accountable to the greater good of Canadian civil society rather than short-term professional, business or economic gains. See also References Further reading Brager, G., & Holloway, S. (1978). Changing human service organizations: Politics and practice. New York, NY: The Free Press. Bronfenbrenner, U. (2005). Making human beings human: Bioecological perspectives on human development. Thousand Oaks, CA: Sage Publications. Cimbala, P.A., & Miller, R.M. (1999). The Freedmen's Bureau and Reconstruction. New York, NY: Fordham University Press. Colman, P. (2007). Breaking the chains: The crusade of Dorothea Lynde Dix. New York, NY: ASJA Press. De Tocqueville, A. (2006). Democracy in America (G. Lawrence, Trans.). New York, NY: Harper Perennial Modern Classic (Original work published 1832). Friedman, L. J. (2003). Giving and caring in early America 1601-1861. In L.J. Friedman, & M.D. McGarvie, Charity, philanthropy, and civility in American history (pp. 23–48). Cambridge, UK: Cambridge University Press. Hasenfeld, Y. (1992). The nature of human service organizations. In Y. Hasenfeld, Human services as complex organizations (pp. 3–23). Newbury Park, CA: Sage Publications. Marshall, J. (2011). The life of George Washington. Fresno, CA: Edwards Publishing House. Nellis, E.G., & Decker, A.D. (2001). The eighteenth-century records of the Boston overseers of the poor. Charlottesville, VA: University of Virginia Press. Neukrug, E. (2016). 
Theory, practice, and trends in human services: An introduction (6th ed.). Belmont, CA: Cengage. Slack, P. (1995). The English Poor Law, 1531-1782. Cambridge, UK: Cambridge University Press. Trattner, W.I. (1999). From Poor Law to welfare state: A history of social welfare in America. New York, NY: The Free Press. Academic disciplines Community building Human sciences
0.78304
0.994612
0.778821
Psychopathy
Psychopathy, or psychopathic personality, is a personality construct characterized by impaired empathy and remorse, in combination with traits of boldness, disinhibition, and egocentrism. These traits are often masked by superficial charm and immunity to stress, which create an outward appearance of apparent normalcy. Hervey M. Cleckley, an American psychiatrist, influenced the initial diagnostic criteria for antisocial personality reaction/disturbance in the Diagnostic and Statistical Manual of Mental Disorders (DSM), as did American psychologist George E. Partridge. The DSM and International Classification of Diseases (ICD) subsequently introduced the diagnoses of antisocial personality disorder (ASPD) and dissocial personality disorder (DPD) respectively, stating that these diagnoses have been referred to (or include what is referred to) as psychopathy or sociopathy. The creation of ASPD and DPD was driven by the fact that many of the classic traits of psychopathy were impossible to measure objectively. Canadian psychologist Robert D. Hare later re-popularized the construct of psychopathy in criminology with his Psychopathy Checklist. Although no psychiatric or psychological organization has sanctioned a diagnosis titled "psychopathy", assessments of psychopathic characteristics are widely used in criminal justice settings in some nations and may have important consequences for individuals. The study of psychopathy is an active field of research. The term is also used by the general public, popular press, and in fictional portrayals. While the abbreviated term "psycho" is often employed in common usage in general media along with "crazy", "insane", and "mentally ill", there is a categorical difference between psychosis and psychopathy. History Etymology The word psychopathy is a joining of the Greek words psyche "soul" and pathos "suffering, feeling". The first documented use is from 1847 in Germany as psychopatisch, and the noun psychopath has been traced to 1885. In medicine, patho- has a more specific meaning of disease (Thus pathology has meant the study of disease since 1610, and psychopathology has meant the study of mental disorder in general since 1847. A sense of "a subject of pathology, morbid, excessive" is attested from 1845, including the phrase pathological liar from 1891 in the medical literature). The term psychopathy initially had a very general meaning referring to all sorts of mental disorders and social aberrations, popularised from 1891 in Germany by Koch's concept of "psychopathic inferiority". Some medical dictionaries still define psychopathy in both a narrow and broad sense, such as MedlinePlus from the U.S. National Library of Medicine. On the other hand, Stedman's Medical Dictionary defines "psychopath" only as a "former designation" for a person with an antisocial type of personality disorder. The term psychosis was also used in Germany from 1841, originally in a very general sense. The suffix -ωσις (-osis) meant in this case "abnormal condition". This term or its adjective psychotic would come to refer to the more severe mental disturbances and then specifically to mental states or disorders characterized by hallucinations, delusions or in some other sense markedly out of touch with reality. The slang term psycho has been traced to a shortening of the adjective psychopathic from 1936, and from 1942 as a shortening of the noun psychopath, but it is also used as shorthand for psychotic or crazed. 
The media usually uses the term psychopath to designate any criminal whose offenses are particularly abhorrent and unnatural, but that is not its original or general psychiatric meaning. Sociopathy The word element socio- has been commonly used in compound words since around 1880. The term sociopathy may have been first introduced in 1909 in Germany by biological psychiatrist Karl Birnbaum and in 1930 in the US by educational psychologist George E. Partridge, as an alternative to the concept of psychopathy. It was used to indicate that the defining feature is violation of social norms, or antisocial behavior, and may be social or biological in origin. The terms sociopathy and psychopathy were once used interchangeably in relation to antisocial personality disorder, though this usage is outdated in medicine and psychiatry. Psychopathy, however, is a highly popular construct in the psychology literature. Furthermore, the DSM-5 introduced the dimensional model of personality disorders in Section III, which includes a specifier for psychopathic traits. According to the DSM, psychopathy is not a standalone diagnosis, but the authors attempted to measure "psychopathic traits" via a specifier. In one study, the "Psychopathic Features Specifier" was modeled on Factor 1 of the Psychopathic Personality Inventory, known as Fearless Dominance. To some, this is evidence that psychopathy is not a more extreme version of ASPD but rather an emergent compound trait that manifests when antisocial personality disorder is present in combination with high levels of Fearless Dominance (or Boldness, as it is known in the triarchic model). Analyses showed that this Section III ASPD greatly outperformed Section II ASPD in predicting scores on Hare's (2003) Psychopathy Checklist-Revised. Section III ASPD, including the 'Psychopathic Traits Specifier', can be seen on page 765 of the DSM-5 or page 885 of the DSM-5-TR. The term is used in various ways in contemporary usage. Robert Hare stated in the popular science book Snakes in Suits that sociopathy and psychopathy are often used interchangeably, but in some cases the term sociopathy is preferred because it is less likely than psychopathy to be confused with psychosis, whereas in other cases the two terms may be used with different meanings that reflect the user's views on its origins and determinants. Hare contended that the term sociopathy is preferred by those who see the causes as due to social factors and early environment, and the term psychopathy is preferred by those who believe that there are psychological, biological, and genetic factors involved in addition to environmental factors. Hare also provides his own definitions: he describes psychopathy as involving a lack of empathy or sense of morality, whereas sociopathy involves only a sense of right and wrong that differs from that of the average person. Precursors Ancient writings that have been connected to psychopathic traits include Deuteronomy and a description of an unscrupulous man by the Greek philosopher Theophrastus around 300 BC. The concept of psychopathy has been indirectly connected to the early 19th century work of Pinel (1801; "mania without delirium") and Prichard (1835; "moral insanity"), although historians have largely discredited the idea of a direct equivalence. 
Psychopathy originally described any illness of the mind, but found its application to a narrow subset of mental conditions when it was used toward the end of the 19th century by the German psychiatrist Julius Koch (1891) to describe various behavioral and moral dysfunction in the absence of an obvious mental illness or intellectual disability. He applied the term psychopathic inferiority to various chronic conditions and character disorders, and his work would influence the later conception of the personality disorder. The term psychopathic came to be used to describe a diverse range of dysfunctional or antisocial behavior and mental and sexual deviances, including at the time homosexuality. It was often used to imply an underlying "constitutional" or genetic origin. Disparate early descriptions likely set the stage for modern controversies about the definition of psychopathy. 20th century An influential figure in shaping modern American conceptualizations of psychopathy was American psychiatrist Hervey Cleckley. In his classic monograph, The Mask of Sanity (1941), Cleckley drew on a small series of vivid case studies of psychiatric patients at a Veterans Administration hospital in Georgia to provide a description for psychopathy. Cleckley used the metaphor of the "mask" to refer to the tendency of psychopaths to appear confident, personable, and well-adjusted compared to most psychiatric patients, while revealing underlying pathology through their actions over time. Cleckley formulated sixteen criteria for psychopathy. The Scottish psychiatrist David Henderson had also been influential in Europe from 1939 in narrowing the diagnosis. The diagnostic category of sociopathic personality in early editions of the Diagnostic and Statistical Manual (DSM) had some key similarities to Cleckley's ideas, though in 1980 when renamed Antisocial Personality Disorder some of the underlying personality assumptions were removed. In 1980, Canadian psychologist Robert D. Hare introduced an alternative measure, the "Psychopathy Checklist" (PCL) based largely on Cleckley's criteria, which was revised in 1991 (PCL-R), and is the most widely used measure of psychopathy. There are also several self-report tests, with the Psychopathic Personality Inventory (PPI) used more often among these in contemporary adult research. Famous individuals have sometimes been diagnosed, albeit at a distance, as psychopaths. As one example out of many possible from history, in a 1972 version of a secret report originally prepared for the Office of Strategic Services in 1943, and which may have been intended to be used as propaganda, non-medical psychoanalyst Walter C. Langer suggested Adolf Hitler was probably a psychopath. However, others have not drawn this conclusion; clinical forensic psychologist Glenn Walters argues that Hitler's actions do not warrant a diagnosis of psychopathy as, although he showed several characteristics of criminality, he was not always egocentric, callously disregarding of feelings or lacking impulse control, and there is no proof he could not learn from mistakes. Definition Concepts There are multiple conceptualizations of psychopathy, including Cleckleyan psychopathy (Hervey Cleckley's conception entailing bold, disinhibited behavior, and "feckless disregard") and criminal psychopathy (a meaner, more aggressive and disinhibited conception explicitly entailing persistent and sometimes serious criminal behavior). 
The latter conceptualization is typically used as the modern clinical concept and assessed by the Psychopathy Checklist. The label "psychopath" may have implications and stigma related to decisions about punishment severity for criminal acts, medical treatment, civil commitments, etc. Efforts have therefore been made to clarify the meaning of the term. It has been suggested that those who share the same emotional deficiencies and psychopathic features, but are properly socialized, should not be designated as 'psychopaths'. The triarchic model suggests that different conceptions of psychopathy emphasize three observable characteristics to various degrees. Analyses have been made with respect to the applicability of measurement tools such as the Psychopathy Checklist (PCL, PCL-R) and Psychopathic Personality Inventory (PPI) to this model. Boldness. Low fear including stress-tolerance, toleration of unfamiliarity and danger, and high self-confidence and social assertiveness. The PCL-R measures this relatively poorly and mainly through Facet 1 of Factor 1. Similar to PPI fearless dominance. May correspond to differences in the amygdala and other neurological systems associated with fear. Disinhibition. Poor impulse control including problems with planning and foresight, lacking affect and urge control, demand for immediate gratification, and poor behavioral restraints. Similar to PCL-R Factor 2 and PPI impulsive antisociality. May correspond to impairments in frontal lobe systems that are involved in such control. Meanness. Lacking empathy and close attachments with others, disdain of close attachments, use of cruelty to gain empowerment, exploitative tendencies, defiance of authority, and destructive excitement seeking. The PCL-R in general is related to this but in particular some elements in Factor 1. Similar to PPI, but also includes elements of subscales in impulsive antisociality. Psychopathy has been conceptualized as a hybrid condition marked by a paradoxical combination of superficial charm, poise, emotional resilience, and venturesomeness on the outside but deep-seated affective disturbances and impulse control deficits on the inside. From this perspective, psychopathy is at least in part characterized by psychologically adaptive traits. Furthermore, according to this view, psychopathy may be linked to at least some interpersonally successful outcomes, such as effective leadership, business accomplishments, and heroism. Measurement An early and influential analysis from Harris and colleagues indicated that a discrete category, or taxon, may underlie PCL-R psychopathy, allowing it to be measured and analyzed. However, this was only found for the behavioral Factor 2 items they identified, child problem behaviors; adult criminal behavior did not support the existence of a taxon. Marcus, John, and Edens more recently performed a series of statistical analyses on PPI scores and concluded that psychopathy may best be conceptualized as having a "dimensional latent structure" like depression. Marcus et al. repeated the study on a larger sample of prisoners, using the PCL-R and seeking to rule out other experimental or statistical issues that may have produced the previously different findings. They again found that the psychopathy measurements do not appear to be identifying a discrete type (a taxon). 
They suggest that while for legal or other practical purposes an arbitrary cut-off point on trait scores might be used, there is actually no clear scientific evidence for an objective point of difference by which to label some people "psychopaths"; in other words, a "psychopath" may be more accurately described as someone who is "relatively psychopathic". The PCL-R was developed for research, not clinical forensic diagnosis, and even for research purposes to improve understanding of the underlying issues, it is necessary to examine dimensions of personality in general rather than only a constellation of traits. The PCL-R test has been used to determine "true" or primary psychopaths (individuals that score a 30 or higher on the PCL-R test). Primary psychopaths are distinguished from secondary psychopaths, and contrast with those who are legitimately considered antisocial. Personality dimensions Studies have linked psychopathy to alternative dimensions such as antagonism (high), conscientiousness (low) and anxiousness (low). Psychopathy has also been linked to high psychoticism—a theorized dimension referring to tough, aggressive or hostile tendencies. Aspects of this that appear associated with psychopathy are lack of socialization and responsibility, impulsivity, sensation-seeking (in some cases), and aggression. Otto Kernberg, from a particular psychoanalytic perspective, believed psychopathy should be considered as part of a spectrum of pathological narcissism, that would range from narcissistic personality on the low end, malignant narcissism in the middle, and psychopathy at the high end. Psychopathy, narcissism and Machiavellianism, three personality traits that are together referred to as the dark triad, share certain characteristics, such as a callous-manipulative interpersonal style. The dark tetrad refers to these traits with the addition of sadism. Several psychologists have asserted that subclinical psychopathy and Machiavellianism are more or less interchangeable. There is a subscale on the Psychopathic Personality Inventory (PPI) dubbed "Machiavellian Egocentricity". Delroy Paulhus has asserted that the difference that most miss is that while both are characterized by manipulativeness and unemotionality, psychopaths tend to be more reckless. One study asserted that "the ability to adapt, reappraise and reassess a situation may be key factors differentiating Machiavellianism from psychopathy, for example". Psychopathy and machiavellianism were also correlated similarly in responses to affective stimuli, and both are negatively correlated with recognition of facial emotions. Many have suggested merging the dark triad traits (especially Machiavellianism and psychopathy) into one construct, given empirical studies which show immense overlap. Criticism of current conceptions The current conceptions of psychopathy have been criticized for being poorly conceptualized, highly subjective, and encompassing a wide variety of underlying disorders. Dorothy Otnow Lewis has written: Half of the Hare Psychopathy Checklist consists of symptoms of mania, hypomania, and frontal-lobe dysfunction, which frequently results in underlying disorders being dismissed. Hare's conception of psychopathy has also been criticized for being reductionist, dismissive, tautological, and ignorant of context as well as the dynamic nature of human behavior. Some have called for rejection of the concept altogether, due to its vague, subjective and judgmental nature that makes it prone to misuse. 
A systematic review determined that the PCL is weakly predictive of criminal behavior but not of lack of conscience or of treatment and rehabilitation outcomes. These findings contradict widespread beliefs amongst professionals in forensics. Psychopathic individuals do not show regret or remorse. This was thought to be due to an inability to generate this emotion in response to negative outcomes. However, in 2016, people with antisocial personality disorder and dissocial personality disorder were found to experience regret, but they did not use that regret to guide their behavioral choices. The problem was not a lack of regret but a difficulty in thinking through a range of potential actions and estimating their outcome values. In an experiment published in March 2007, University of Southern California neuroscientist Antonio R. Damasio and his colleagues showed that subjects with damage to the ventromedial prefrontal cortex lack the ability to empathically feel their way to moral answers, and that when confronted with moral dilemmas, these brain-damaged patients coldly came up with "end-justifies-the-means" answers, leading Damasio to conclude that the point was not that they reached immoral conclusions, but that when they were confronted by a difficult issue – in this case, whether to shoot down a passenger plane hijacked by terrorists before it hits a major city – these patients appear to reach decisions without the anguish that afflicts those with typically functioning brains. According to Adrian Raine, a clinical neuroscientist also at the University of Southern California, one of this study's implications is that society may have to rethink how it judges immoral people: "Psychopaths often feel no empathy or remorse. Without that awareness, people relying exclusively on reasoning seem to find it harder to sort their way through moral thickets. Does that mean they should be held to different standards of accountability?" Signs and symptoms Socially, psychopathy typically involves extensive callous and manipulative self-serving behaviors with no regard for others, and often is associated with repeated delinquency, crime and violence. Mentally, impairments in processes related to affect and cognition, particularly socially related mental processes, have also been found. Developmentally, symptoms of psychopathy have been identified in young children with conduct disorder, which suggests at least a partial constitutional factor influencing its development. Primary features Disagreement exists over which features should be considered part of psychopathy, with researchers identifying around 40 traits supposedly indicative of the construct, though the following characteristics are almost universally considered central. Core traits Cooke and Michie (2001) proposed a three-factor model of the Psychopathy Checklist-Revised which has seen widespread application in other measures (e.g. Youth Psychopathic Traits Inventory, Antisocial Process Screening Device). Arrogant and deceitful interpersonal style: impression management or superficial charm, inflated and grandiose sense of self-worth, pathological lying/deceit, and manipulation for personal gain. Deficient affective experience: lack of remorse or guilt, shallow affect (coldness and unemotionality), callousness and lack of empathy, and failure to accept responsibility for own actions. 
Impulsive and irresponsible lifestyle: impulsivity, sensation-seeking and risk-taking, irresponsible and unreliable behavior, financially parasitic lifestyle and lack of realistic, long-term goals. Low anxiety and fearlessness Cleckley's (1941) original description of psychopathy included the absence of nervousness and neurotic disorders, and later theorists referred to psychopaths as fearless or thick-skinned. While it is often claimed that the PCL-R does not include low anxiety or fearlessness, such features do contribute to the scoring of the Facet 1 (interpersonal) items, mainly through self-assurance, unrealistic optimism, brazenness and imperturbability. Indeed, while self-report studies have been inconsistent using the two-factor model of the PCL-R, studies which separate Factor 1 into interpersonal and affective facets more regularly show modest associations between Facet 1 and low anxiety, boldness and fearless dominance (especially items assessing glibness/charm and grandiosity). When both psychopathy and low anxiety/boldness are measured using interviews, the interpersonal and affective facets are both associated with fearlessness and a lack of internalizing disorders. The importance of low anxiety/fearlessness to psychopathy has historically been underscored through behavioral and physiological studies showing diminished responses to threatening stimuli (interpersonal and affective facets both contributing). However, it is not known whether this is reflected in a reduced experience of state fear or whether it reflects impaired detection of and response to threat-related stimuli. Moreover, such deficits in threat responding are known to be reduced or even abolished when attention is focused on the threatening stimuli. Offending Criminality In terms of simple correlations, the PCL-R manual states that an average score of 22.1 has been found in North American prisoner samples, and that 20.5% scored 30 or higher. An analysis of prisoner samples from outside North America found a somewhat lower average value of 17.5. Studies have found that psychopathy scores correlated with repeated imprisonment, detention in higher security, disciplinary infractions, and substance misuse. Psychopathy, as measured with the PCL-R in institutional settings, shows small to moderate effect sizes in meta-analyses for institutional misbehavior, postrelease crime, and postrelease violent crime, with similar effects for the three outcomes. Individual studies give similar results for adult offenders, forensic psychiatric samples, community samples, and youth. The PCL-R is poorer at predicting sexual re-offending. This small to moderate effect appears to be due largely to the scale items that assess impulsive behaviors and past criminal history, which are well-established but very general risk factors. The aspects of core personality often held to be distinctively psychopathic generally show little or no predictive link to crime by themselves. For example, Factor 1 of the PCL-R and Fearless dominance of the PPI-R have smaller or no relationship to crime, including violent crime. In contrast, Factor 2 and Impulsive antisociality of the PPI-R are associated more strongly with criminality. Factor 2 has a relationship of similar strength to that of the PCL-R as a whole. 
The antisocial facet of the PCL-R is still predictive of future violence after controlling for past criminal behavior, which, together with results regarding the PPI-R (which by design does not include past criminal behavior), suggests that impulsive behavior is an independent risk factor. Thus, the concept of psychopathy may perform poorly when used as a general theory of crime. Violence Studies have suggested a strong correlation between psychopathy scores and violence, and the PCL-R emphasizes features that are somewhat predictive of violent behavior. Researchers, however, have noted that psychopathy is dissociable from and not synonymous with violence. It has been suggested that psychopathy is associated with "instrumental aggression", also known as predatory, proactive, or "cold blooded" aggression, a form of aggression characterized by reduced emotion and conducted with a goal differing from but facilitated by the commission of harm. One conclusion in this regard was made by a 2002 study of homicide offenders, which reported that the homicides committed by offenders with psychopathy were almost always (93.3%) primarily instrumental, significantly more than the proportion (48.4%) of those committed by non-psychopathic offenders, with the instrumentality of the homicide also correlated with the total PCL-R score of the offender as well as their scores on the Factor 1 "interpersonal-affective" dimension. However, contrary to equating this with killing exclusively "in cold blood", more than a third of the homicides committed by psychopathic offenders involved some component of emotional reactivity as well. In any case, FBI profilers indicate that serious victim injury is generally an emotional offense, and some research supports this, at least with regard to sexual offending. One study has found more serious offending by non-psychopathic offenders on average than by offenders with psychopathy (e.g. more homicides versus more armed robbery and property offenses), and another found that the Affective facet of the PCL-R predicted reduced offense seriousness. Studies on perpetrators of domestic violence find that abusers have high rates of psychopathy, with the prevalence estimated to be around 15-30%. Furthermore, the commission of domestic violence is correlated with Factor 1 of the PCL-R, which describes the emotional deficits and the callous and exploitative interpersonal style found in psychopathy. The prevalence of psychopathy among domestic abusers indicates that the core characteristics of psychopathy, such as callousness, remorselessness, and a lack of close interpersonal bonds, predispose those with psychopathy to committing domestic abuse, and suggests that the domestic abuses committed by these individuals are callously perpetrated (i.e. instrumentally aggressive) rather than a case of emotional aggression and therefore may not be amenable to the types of psychosocial interventions commonly given to domestic abuse perpetrators. Some clinicians suggest that assessment of the construct of psychopathy does not necessarily add value to violence risk assessment. A large systematic review and meta-regression found that the PCL performed the poorest out of nine tools for predicting violence. In addition, studies conducted by the authors or translators of violence prediction measures, including the PCL, show on average more positive results than those conducted by more independent investigators. 
There are several other risk assessment instruments which can predict further crime with an accuracy similar to the PCL-R and some of these are considerably easier, quicker, and less expensive to administer. This may even be done automatically by a computer simply based on data such as age, gender, number of previous convictions and age of first conviction. Some of these assessments may also identify treatment change and goals, identify quick changes that may help short-term management, identify more specific kinds of violence that may be at risk, and may have established specific probabilities of offending for specific scores. Nonetheless, the PCL-R may continue to be popular for risk assessment because of its pioneering role and the large amount of research done using it. The Federal Bureau of Investigation reports that psychopathic behavior is consistent with traits common to some serial killers, including sensation seeking, a lack of remorse or guilt, impulsivity, the need for control, and predatory behavior. It has also been found that the homicide victims of psychopathic offenders were disproportionately female in comparison to the more equitable gender distribution of victims of non-psychopathic offenders. Sexual offending Psychopathy has been associated with commission of sexual crime, with some researchers arguing that it is correlated with a preference for violent sexual behavior. A 2011 study of conditional releases for Canadian male federal offenders found that psychopathy was related to more violent and non-violent offences but not more sexual offences. For child molesters, psychopathy was associated with more offences. A study on the relationship between psychopathy scores and types of aggression in a sample of sexual murderers, in which 84.2% of the sample had PCL-R scores above 20 and 47.4% above 30, found that 82.4% of those with scores above 30 had engaged in sadistic violence (defined as enjoyment indicated by self-report or evidence) compared to 52.6% of those with scores below 30, and total PCL-R and Factor 1 scores correlated significantly with sadistic violence. Despite this, it is reported that offenders with psychopathy (both sexual and non-sexual offenders) are about 2.5 times more likely to be granted conditional release compared to non-psychopathic offenders. Hildebrand and colleagues (2004) have uncovered an interaction between psychopathy and deviant sexual interests, wherein those high in psychopathy who also endorsed deviant sexual interests were more likely to recidivate sexually. A subsequent meta-analysis has consolidated such a result. In considering the issue of possible reunification of some sex offenders into homes with a non-offending parent and children, it has been advised that any sex offender with a significant criminal history should be assessed on the PCL-R, and if they score 18 or higher, then they should be excluded from any consideration of being placed in a home with children under any circumstances. There is, however, increasing concern that PCL scores are too inconsistent between different examiners, including in its use to evaluate sex offenders. Other offending The possibility of psychopathy has been associated with organized crime, economic crime and war crimes. Terrorists are sometimes considered psychopathic, and comparisons may be drawn with traits such as antisocial violence, a selfish world view that precludes the welfare of others, a lack of remorse or guilt, and blame externalization. 
However, John Horgan, author of The Psychology of Terrorism, argues that such comparisons could also then be drawn more widely: for example, to soldiers in wars. Coordinated terrorist activity requires organization, loyalty and ideological fanaticism, often to the extreme of sacrificing oneself for an ideological cause. Traits such as a self-centered disposition, unreliability, poor behavioral controls, and unusual behaviors may disadvantage psychopathic individuals in, or preclude them from, conducting organized terrorism. It may be that a significant portion of people with psychopathy are socially successful and tend to express their antisocial behavior through more covert avenues such as social manipulation or white-collar crime. Such individuals are sometimes referred to as "successful psychopaths", and may not necessarily always have extensive histories of traditional antisocial behavior as is characteristic of traditional psychopathy. Childhood and adolescent precursors The PCL:YV is an adaptation of the PCL-R for individuals aged 13–18 years. It is, like the PCL-R, done by a trained rater based on an interview and an examination of criminal and other records. The "Antisocial Process Screening Device" (APSD) is also an adaptation of the PCL-R. It can be administered by parents or teachers for individuals aged 6–13 years. High psychopathy scores for both juveniles (as measured with these instruments) and adults (as measured with the PCL-R and other measurement tools) have similar associations with other variables, including similar ability in predicting violence and criminality. Juvenile psychopathy may also be associated with more negative emotionality such as anger, hostility, anxiety, and depression. Psychopathic traits in youth typically comprise three factors: callous/unemotional, narcissism, and impulsivity/irresponsibility. There is a positive correlation between early negative life events at ages 0–4 and the emotion-based aspects of psychopathy. There are moderate to high correlations between psychopathy rankings from late childhood to early adolescence. The correlations are considerably lower from early- or mid-adolescence to adulthood. In one study most of the similarities were on the Impulsive- and Antisocial-Behavior scales. Of those adolescents who scored in the top 5% of psychopathy scores at age 13, less than one third (29%) were classified as psychopathic at age 24. Some recent studies have also found poorer ability at predicting long-term, adult offending. Conduct disorder Conduct disorder is diagnosed based on a prolonged pattern of antisocial behavior in childhood and/or adolescence, and may be seen as a precursor to ASPD. Some researchers have speculated that there are two subtypes of conduct disorder which mark dual developmental pathways to adult psychopathy. The DSM allows differentiating between childhood onset before age 10 and adolescent onset at age 10 and later. Childhood onset is argued to be more due to a personality disorder caused by neurological deficits interacting with an adverse environment. For many, but not all, childhood onset is associated with what is in Terrie Moffitt's developmental theory of crime referred to as "life-course-persistent" antisocial behavior as well as poorer health and economic status. Adolescent onset is argued to more typically be associated with short-term antisocial behavior. 
It has been suggested that the combination of early-onset conduct disorder and ADHD may be associated with life-course-persistent antisocial behaviors as well as psychopathy. There is evidence that this combination is more aggressive and antisocial than those with conduct disorder alone. However, it is not a particularly distinct group since the vast majority of young children with conduct disorder also have ADHD. Some evidence indicates that this group has deficits in behavioral inhibition, similar to that of adults with psychopathy. They may not be more likely than those with conduct disorder alone to have the interpersonal/affective features and the deficits in emotional processing characteristic of adults with psychopathy. Proponents of different types/dimensions of psychopathy have seen this type as possibly corresponding to adult secondary psychopathy and increased disinhibition in the triarchic model. The DSM-5 includes a specifier for those with conduct disorder who also display a callous, unemotional interpersonal style across multiple settings and relationships. The specifier is based on research which suggests that those with conduct disorder who also meet criteria for the specifier tend to have a more severe form of the disorder with an earlier onset as well as a different response to treatment. Proponents of different types/dimensions of psychopathy have seen this as possibly corresponding to adult primary psychopathy and increased boldness and/or meanness in the triarchic model. Mental traits Cognition Dysfunctions in the prefrontal cortex and amygdala regions of the brain have been associated with specific learning impairments in psychopathy. Damage to the ventromedial prefrontal cortex, which regulates the activity in the amygdala, leads to common characteristics in psychopathic individuals. Since the 1980s, scientists have linked traumatic brain injury, including damage to these regions, with violent and psychopathic behavior. Patients with damage in such areas resembled "psychopathic individuals" whose brains were incapable of acquiring social and moral knowledge; those who acquired damage as children may have trouble conceptualizing social or moral reasoning, while those with adult-acquired damage may be aware of proper social and moral conduct but be unable to behave appropriately. Dysfunctions in the amygdala and ventromedial prefrontal cortex may also impair stimulus-reinforced learning in psychopaths, whether punishment-based or reward-based. People scoring 25 or higher in the PCL-R, with an associated history of violent behavior, appear to have significantly reduced mean microstructural integrity in their uncinate fasciculus, the white matter connecting the amygdala and orbitofrontal cortex. There is evidence from DT-MRI of breakdowns in the white matter connections between these two important areas. Although some studies have suggested inverse relationships between psychopathy and intelligence, including with regards to verbal IQ, Hare and Neumann state that a large literature demonstrates at most only a weak association between psychopathy and IQ, noting that the early pioneer Cleckley included good intelligence in his checklist due to selection bias (since many of his patients were "well educated and from middle-class or upper-class backgrounds") and that "there is no obvious theoretical reason why the disorder described by Cleckley or other clinicians should be related to intelligence; some psychopaths are bright, others less so". 
Studies also indicate that different aspects of the definition of psychopathy (e.g. interpersonal, affective (emotion), behavioral and lifestyle components) can show different links to intelligence, and the result can depend on the type of intelligence assessment (e.g. verbal, creative, practical, analytical). Emotion recognition and empathy A large body of research suggests that psychopathy is associated with atypical responses to distress cues from other people, more precisely an impaired emotional empathy in the recognition of, and response to, facial expressions, body gestures and vocal tones of fear, sadness, pain and happiness. This impaired recognition and reduced autonomic responsiveness might be partly accounted for by a decreased activation of the fusiform and extrastriate cortical regions. The underlying biological substrates for processing expressions of happiness are functionally intact in psychopaths, although less responsive than those of controls. The neuroimaging literature is unclear as to whether deficits are specific to particular emotions such as fear. The overall pattern of results across studies indicates that people diagnosed with psychopathy demonstrate reduced activity in relevant areas of the brain across MRI, fMRI, aMRI, PET, and SPECT studies. Research has also shown that an approximately 18% smaller amygdala contributes to a significantly reduced emotional experience of fear and sadness, among other negative emotions, which may be why psychopathic individuals have lower empathy. Some recent fMRI studies have reported that emotion perception deficits in psychopathy are pervasive across emotions (positive and negative). Studies on children with psychopathic tendencies have also shown such associations. Meta-analyses have also found evidence of impairments in both vocal and facial emotional recognition for several emotions (i.e., not only fear and sadness) in both adults and children/adolescents. Moral judgment Psychopathy has been associated with amorality, an absence of, indifference towards, or disregard for moral beliefs. There are few firm data on patterns of moral judgment. Studies of developmental level (sophistication) of moral reasoning found all possible results: lower, higher or the same as non-psychopaths. Studies that compared judgments of personal moral transgressions versus judgments of breaking conventional rules or laws found that psychopaths rated them as equally severe, whereas non-psychopaths rated the rule-breaking as less severe. A study comparing judgments of whether personal or impersonal harm would be endorsed in order to achieve the rationally maximum (utilitarian) amount of welfare found no significant differences between subjects high and low in psychopathy. However, a further study using the same tests found that prisoners scoring high on the PCL were more likely to endorse impersonal harm or rule violations than non-psychopathic controls were. The psychopathic offenders who scored low in anxiety were also more willing to endorse personal harm on average. When assessing accidents, in which one person harmed another unintentionally, psychopaths judged such actions to be more morally permissible. This result has been considered a reflection of psychopaths' failure to appreciate the emotional aspect of the victim's harmful experience. Cause Behavioral genetic studies have identified potential genetic and non-genetic contributors to psychopathy, including influences on brain function. 
Proponents of the triarchic model believe that psychopathy results from the interaction of genetic predispositions and an adverse environment. What is adverse may differ depending on the underlying predisposition: for example, it is hypothesized that persons having high boldness may respond poorly to punishment but may respond better to rewards and secure attachments. Genetic Genetically informed studies of the personality characteristics typical of individuals with psychopathy have found moderate genetic (as well as non-genetic) influences. On the PPI, fearless dominance and impulsive antisociality were similarly influenced by genetic factors and uncorrelated with each other. Genetic factors may generally influence the development of psychopathy while environmental factors affect the specific expression of the traits that predominate. A study on a large group of children found more than 60% heritability for "callous-unemotional traits" and that conduct disorder among children with these traits has a higher heritability than among children without these traits. Environment A study by Farrington of a sample of London males followed between ages 8 and 48 examined which factors predicted a score of 10 or more on the PCL:SV at age 48. The strongest factors included having a convicted parent, being physically neglected, low involvement of the father with the boy, low family income, and coming from a disrupted family. Other significant factors included poor supervision, abuse, harsh discipline, large family size, delinquent sibling, young mother, depressed mother, low social class, and poor housing. There has also been an association between psychopathy and detrimental treatment by peers. However, it is difficult to determine the extent of an environmental influence on the development of psychopathy because of evidence of its strong heritability. Brain injury Researchers have linked head injuries with psychopathy and violence. Since the 1980s, scientists have associated traumatic brain injury, such as damage to the prefrontal cortex, including the orbitofrontal cortex, with psychopathic behavior and a deficient ability to make morally and socially acceptable decisions, a condition that has been termed "acquired sociopathy", or "pseudopsychopathy". Individuals with damage to the area of the prefrontal cortex known as the ventromedial prefrontal cortex show remarkable similarities to diagnosed psychopathic individuals, displaying reduced autonomic response to emotional stimuli, deficits in aversive conditioning, similar preferences in moral and economic decision making, and diminished empathy and social emotions like guilt or shame. These emotional and moral impairments may be especially severe when the brain injury occurs at a young age. Children with early damage in the prefrontal cortex may never fully develop social or moral reasoning and become "psychopathic individuals ... characterized by high levels of aggression and antisocial behavior performed without guilt or empathy for their victims". Additionally, damage to the amygdala may impair the ability of the prefrontal cortex to interpret feedback from the limbic system, which could result in uninhibited signals that manifest in violent and aggressive behavior. Childhood trauma Other theories Evolutionary explanations Psychopathy is associated with several adverse life outcomes as well as increased risk of disability and death due to factors such as violence, accidents, homicides, and suicides. 
This, in combination with the evidence for genetic influences, is evolutionarily puzzling and may suggest that there are compensating evolutionary advantages, and researchers within evolutionary psychology have proposed several evolutionary explanations. According to one hypothesis, some traits associated with psychopathy may be socially adaptive, and psychopathy may be a frequency-dependent, socially parasitic strategy, which may work as long as there is a large population of altruistic and trusting individuals, relative to the population of psychopathic individuals, to be exploited. It is also suggested that some traits associated with psychopathy such as early, promiscuous, adulterous, and coercive sexuality may increase reproductive success. Robert Hare has stated that many psychopathic males have a pattern of mating with and quickly abandoning women, and thereby have a high fertility rate, resulting in children that may inherit a predisposition to psychopathy. Criticism includes that it may be better to look at the contributing personality factors rather than treat psychopathy as a unitary concept due to poor testability. Furthermore, if psychopathy is caused by the combined effects of a very large number of adverse mutations then each mutation may have such a small effect that it escapes natural selection. The personality is thought to be influenced by a very large number of genes and may be disrupted by random mutations, and psychopathy may instead be a product of a high mutation load. Psychopathy has alternatively been suggested to be a spandrel, a byproduct, or side-effect, of the evolution of adaptive traits rather than an adaptation in itself. Mechanisms Psychological Some laboratory research demonstrates correlations between psychopathy and atypical responses to aversive stimuli, including weak conditioning to painful stimuli and poor learning of avoiding responses that cause punishment, as well as low reactivity in the autonomic nervous system as measured with skin conductance while waiting for a painful stimulus but not when the stimulus occurs. While it has been argued that the reward system functions normally, some studies have also found reduced reactivity to pleasurable stimuli. According to the response modulation hypothesis, psychopathic individuals have also had difficulty switching from an ongoing action despite environmental cues signaling a need to do so. This may explain the difficulty responding to punishment, although it is unclear if it can explain findings such as deficient conditioning. There may be methodological issues regarding the research. While establishing a range of idiosyncrasies on average in linguistic and affective processing under certain conditions, this research program has not confirmed a common pathology of psychopathy. Neurological Thanks to advancing MRI studies, experts are able to visualize specific brain differences and abnormalities of individuals with psychopathy in areas that control emotions, social interactions, ethics, morality, regret, impulsivity and conscience within the brain. Blair, a researcher who pioneered research into psychopathic tendencies stated, "With regard to psychopathy, we have clear indications regarding why the pathology gives rise to the emotional and behavioral disturbance and important insights into the neural systems implicated in this pathology". 
Dadds et al. remark that, despite a rapidly advancing neuroscience of empathy, little is known about the developmental underpinnings of the psychopathic disconnect between affective and cognitive empathy. A 2008 review by Weber et al. suggested that psychopathy is sometimes associated with brain abnormalities in prefrontal-temporo-limbic regions that are involved in emotional and learning processes, among others. Neuroimaging studies have found structural and functional differences between those scoring high and low on the PCL-R, with a 2011 review by Skeem et al. stating that these differences lie "most notably in the amygdala, hippocampus and parahippocampal gyri, anterior and posterior cingulate cortex, striatum, insula, and frontal and temporal cortex". The amygdala and frontal areas have been suggested as particularly important. People scoring 25 or higher in the PCL-R, with an associated history of violent behavior, appear on average to have significantly reduced microstructural integrity in the white matter connecting the amygdala and orbitofrontal cortex (such as the uncinate fasciculus). The evidence suggested that the degree of abnormality was significantly related to the degree of psychopathy and may explain the offending behaviors. Furthermore, changes in the amygdala have been associated with "callous-unemotional" traits in children. However, the amygdala has also been associated with positive emotions, and there have been inconsistent results in the studies in particular areas, which may be due to methodological issues. Others have cast doubt on the amygdala as important for psychopathy, with one meta-analysis suggesting that most studies on the amygdala and psychopathy find no effect and that studies finding a negative effect (that psychopaths display less amygdala activity) have lower statistical power. Some of these findings are consistent with other research and theories. For example, in a neuroimaging study of how individuals with psychopathy respond to emotional words, widespread differences in activation patterns have been shown across the temporal lobe when psychopathic criminals were compared to "normal" volunteers, which is consistent with views in clinical psychology. Additionally, the notion of psychopathy being characterized by low fear is consistent with findings of abnormalities in the amygdala, since deficits in aversive conditioning and instrumental learning are thought to result from amygdala dysfunction, potentially compounded by orbitofrontal cortex dysfunction, although the specific reasons are unknown. Considerable research has documented the presence of the two subtypes of primary and secondary psychopathy. Proponents of the primary-secondary psychopathy distinction and triarchic model argue that there are neurological differences between these subgroups of psychopathy which support their views. For instance, the boldness factor in the triarchic model is argued to be associated with reduced activity in the amygdala during fearful or aversive stimuli and reduced startle response, while the disinhibition factor is argued to be associated with impairment of frontal lobe tasks. There is evidence that boldness and disinhibition are genetically distinguishable. Biochemical High levels of testosterone combined with low levels of cortisol and/or serotonin have been theorized as contributing factors. 
Testosterone is "associated with approach-related behavior, reward sensitivity, and fear reduction", and injecting testosterone "shift[s] the balance from punishment to reward sensitivity", decreases fearfulness, and increases "responding to angry faces". Some studies have found that high testosterone levels are associated with antisocial and aggressive behaviors, yet other research suggests that testosterone alone does not cause aggression but increases dominance-seeking. It is unclear from the available studies whether psychopathy correlates with high testosterone levels. However, a few studies have found that disruption of serotonin neurotransmission disrupts cortisol reactivity to a stress-inducing speech task. Thus, dysregulation of serotonin in the brain may contribute to the low cortisol levels observed in psychopathy. Cortisol increases withdrawal behavior and sensitivity to punishment and aversive conditioning, which are abnormally low in individuals with psychopathy and may underlie their impaired aversion learning and disinhibited behavior. High testosterone levels combined with low serotonin levels are associated with "impulsive and highly negative reactions", and may increase violent aggression when an individual is provoked or becomes frustrated. Several animal studies note the role of serotonergic functioning in impulsive aggression and antisocial behavior. However, some studies on animal and human subjects have suggested that the emotional-interpersonal traits and predatory aggression of psychopathy, in contrast to impulsive and reactive aggression, are related to increased serotonergic functioning. A study by Dolan and Anderson on the relationship between serotonin and psychopathic traits in a sample of personality-disordered offenders found that serotonin functioning, as measured by prolactin response, while inversely associated with impulsive and antisocial traits, was positively correlated with arrogant and deceitful traits and, to a lesser extent, with callous and remorseless traits. Bariş Yildirim theorizes that the 5-HTTLPR "long" allele, which is generally regarded as protective against internalizing disorders, may interact with other serotonergic genes to create a hyper-regulation and dampening of affective processes that results in psychopathy's emotional impairments. Furthermore, the combination of the 5-HTTLPR long allele and high testosterone levels has been found to result in a reduced response to threat, as measured by cortisol reactivity, which mirrors the fear deficits found in those with psychopathy. Studies have suggested other correlations. Psychopathy was associated in two studies with an increased ratio of HVA (a dopamine metabolite) to 5-HIAA (a serotonin metabolite). Studies have found that individuals with traits meeting the criteria for psychopathy show a greater dopamine response to potential "rewards" such as monetary promises or taking drugs such as amphetamines. This has been theoretically linked to increased impulsivity. A 2010 British study found that a large 2D:4D digit ratio, an indication of high prenatal estrogen exposure, was a "positive correlate of psychopathy in females, and a positive correlate of callous affect (psychopathy sub-scale) in males". Findings have also shown monoamine oxidase A to affect the predictive ability of the PCL-R. Monoamine oxidases (MAOs) are enzymes involved in the breakdown of neurotransmitters such as serotonin and dopamine and are therefore capable of influencing feelings, mood, and behavior.
Findings suggest that further research is needed in this area.
Diagnosis
Tools
Psychopathy Checklist
Psychopathy is most commonly assessed with the Psychopathy Checklist, Revised (PCL-R), created by Robert D. Hare based on Cleckley's criteria from the 1940s, criminological concepts such as those of William and Joan McCord, and his own research on criminals and incarcerated offenders in Canada. The PCL-R is widely used and is referred to by some as the "gold standard" for assessing psychopathy. There are nonetheless numerous criticisms of the PCL-R as a theoretical tool and in real-world usage.
Psychopathic Personality Inventory
Unlike the PCL, the Psychopathic Personality Inventory (PPI) was developed to comprehensively index personality traits without explicitly referring to antisocial or criminal behaviors themselves. It is a self-report scale that was developed originally for non-clinical samples (e.g. university students) rather than prisoners, though it may be used with the latter. It was revised in 2005 to become the PPI-R and now comprises 154 items organized into eight subscales. The item scores have been found to group into two overarching and largely separate factors (unlike the PCL-R factors), Fearless-Dominance and Impulsive Antisociality, plus a third factor, Coldheartedness, which is largely dependent on scores on the other two. Factor 1 is associated with social efficacy, while Factor 2 is associated with maladaptive tendencies. A person may score at different levels on the different factors, but the overall score indicates the extent of psychopathic personality.
Triarchic Psychopathy Measure
The Triarchic Psychopathy Measure (TriPM) is a 58-item self-report assessment that measures psychopathy along the three traits identified in the triarchic model: boldness, meanness, and disinhibition. Each trait is measured on a separate subscale, and the subscale scores are summed to give a total psychopathy score. The TriPM includes various components of other measures for assessing psychopathy, including meanness and disinhibition patterns within the psychopathic personality. However, there are differing approaches to the measurement of the boldness construct, which is used to highlight the social and interpersonal implications of the psychopathic personality.
DSM and ICD
There are currently two widely established systems for classifying mental disorders: the International Classification of Diseases (ICD), produced by the World Health Organization (WHO), and the Diagnostic and Statistical Manual of Mental Disorders (DSM), produced by the American Psychiatric Association (APA). Both list categories of disorders thought to be distinct types, and have deliberately converged their codes in recent revisions so that the manuals are often broadly comparable, although significant differences remain. The first edition of the DSM in 1952 had a section on sociopathic personality disturbances, then a general term that included such things as homosexuality and alcoholism as well as an "antisocial reaction" and "dyssocial reaction". The latter two eventually became antisocial personality disorder (ASPD) in the DSM and dissocial personality disorder in the ICD. Both manuals have stated that their diagnoses have been referred to, or include what is referred to, as psychopathy or sociopathy, although neither diagnostic manual has ever included a disorder officially titled as such.
Other tools
There are some traditional personality tests that contain subscales relating to psychopathy, though they assess relatively non-specific tendencies towards antisocial or criminal behavior. These include the Minnesota Multiphasic Personality Inventory (Psychopathic Deviate scale), the California Psychological Inventory (Socialization scale), and the Millon Clinical Multiaxial Inventory Antisocial Personality Disorder scale. There are also the Levenson Self-Report Psychopathy Scale (LSRP) and the Hare Self-Report Psychopathy Scale (HSRP), but in terms of self-report tests, the PPI/PPI-R has become more widely used than either of these in modern psychopathy research on adults.
Comorbidity
Studies suggest strong comorbidity between psychopathy and antisocial personality disorder. Among numerous studies, positive correlations have also been reported between psychopathy and histrionic, narcissistic, borderline, paranoid, and schizoid personality disorders, as well as panic and obsessive–compulsive disorders, but not neurotic disorders in general, schizophrenia, or depression. Factor 1 and the boldness scale of psychopathy measurements are associated with narcissism and histrionic personality disorder. This is attributed to a psychopath's cognitive and affective egocentrism. However, while a narcissistic individual might view themselves as confident, they may still seek attention from others to validate their self-worth, whereas a psychopathic individual usually lacks such ambitions. Attention deficit hyperactivity disorder (ADHD) is known to be highly comorbid with conduct disorder (a theorized precursor to ASPD), and may also co-occur with psychopathic tendencies. This may be explained in part by deficits in executive function. Anxiety disorders often co-occur with ASPD, and contrary to assumptions, psychopathy can sometimes be marked by anxiety; this appears to be related to items from Factor 2 but not Factor 1 of the PCL-R. Psychopathy is also associated with substance use disorders. Michael Fitzgerald suggested overlaps between (primary) psychopathy and Asperger syndrome in terms of fearlessness, planning of acts, empathy deficits, callous behaviour, and sometimes superficial charisma. Studies investigating similarities and differences between psychopathy and autism indicate that autism and psychopathy are not part of the same construct. Rather, both conditions might co-occur in some individuals. Recent studies indicate that some individuals with an autism diagnosis also show callous and unemotional traits (a risk factor for developing psychopathy), but that these traits are less strongly associated with conduct problems in this group. Likewise, some people with Asperger syndrome have shown correlations with the "unemotional" factor and the "behavioural dyscontrol" factor of psychopathy, but not the "interpersonal" factor. It has been suggested that psychopathy may be comorbid with several conditions other than these, but limited work on comorbidity has been carried out. This may be partly due to difficulties in using inpatient groups from certain institutions to assess comorbidity, owing to the likelihood of some bias in sample selection.
Sex differences
Research on psychopathy has largely been done on men, and the PCL-R was developed using mainly male criminal samples, raising the question of how well the results apply to women. Men score higher than women on both the PCL-R and the PPI and on both of their main scales. The differences tend to be somewhat larger on the interpersonal-affective scale than on the antisocial scale.
Most but not all studies have found a broadly similar factor structure for men and women. Many associations with other personality traits are similar, although in one study the antisocial factor was more strongly related to impulsivity in men and more strongly related to openness to experience in women. It has been suggested that psychopathy in men manifests more as an antisocial pattern while in women it manifests more as a histrionic pattern. Studies on this have shown mixed results. PCL-R scores may be somewhat less predictive of violence and recidivism in women. On the other hand, psychopathy may have a stronger relationship with suicide and possibly internalizing symptoms in women. A suggestion is that psychopathy manifests more as externalizing behaviors in men and more as internalizing behaviors in women. Furthermore, one study suggested substantial gender differences in the etiology of psychopathy: for girls, 75% of the variance in severe callous and unemotional traits was attributable to environmental factors and none to genetic factors, while in boys this pattern was reversed. Studies have also found that women in prison score significantly lower on psychopathy than men, with one study reporting that only 11 percent of violent female prisoners met the psychopathy criteria, compared with 31 percent of violent males. Other studies have also indicated that highly psychopathic women are rare in forensic settings.
Management
Clinical
Psychopathy has often been considered untreatable. Its unique characteristics make it among the most refractory of personality disorders, a class of mental illnesses that are already traditionally considered difficult to treat. People with psychopathy are generally unmotivated to seek treatment for their condition, and can be uncooperative in therapy. Attempts to treat psychopathy with the current tools available to psychiatry have been disappointing. Harris and Rice's Handbook of Psychopathy says that there is currently little evidence for a cure or effective treatment for psychopathy; as yet, no pharmacological therapies are known to alleviate, or have been trialed for alleviating, the emotional, interpersonal, and moral deficits of psychopathy, and patients with psychopathy who undergo psychotherapy might gain the skills to become more adept at manipulating and deceiving others and be more likely to commit crime. Some studies suggest that punishment and behavior modification techniques are ineffective at modifying the behavior of psychopathic individuals, as they are insensitive to punishment or threat. These failures have led to a widely pessimistic view of its treatment prospects, a view exacerbated by the limited research conducted on psychopathy compared to the efforts committed to other mental illnesses, which makes it more difficult to gain the understanding of this condition that is necessary to develop effective therapies. Although the core character deficits of highly psychopathic individuals are likely to be resistant to currently available treatment methods, the antisocial and criminal behavior associated with the condition may be more amenable to management, and such management is the main aim of therapy programs in correctional settings.
It has been suggested that the treatments most likely to be effective at reducing overt antisocial and criminal behavior are those that focus on self-interest, emphasizing the tangible, material value of prosocial behavior, with interventions that develop the skills to obtain what the patient wants out of life in prosocial rather than antisocial ways. To this end, various therapies have been tried with the aim of reducing the criminal activity of incarcerated offenders with psychopathy, with mixed success. As psychopathic individuals are insensitive to sanction, reward-based management, in which small privileges are granted in exchange for good behavior, has been suggested and used to manage their behavior in institutional settings. Psychiatric medications, including antipsychotic, antidepressant, or mood-stabilizing medications, may also alleviate conditions that sometimes co-occur with psychopathy or symptoms such as aggression or impulsivity, although none have yet been approved by the FDA for this purpose. For example, a study found that the antipsychotic clozapine may be effective in reducing various behavioral dysfunctions in a sample of high-security hospital inpatients with antisocial personality disorder and psychopathic traits. However, research into the pharmacological treatment of psychopathy and the related condition antisocial personality disorder is minimal, with much of the knowledge in this area being extrapolated from what is known about pharmacology in other mental disorders.
Legal
The PCL-R, the PCL:SV, and the PCL:YV are highly regarded and widely used in criminal justice settings, particularly in North America. They may be used for risk assessment and for assessing treatment potential, and may inform decisions regarding bail, sentencing, prison placement, parole, and whether a youth should be tried as a juvenile or as an adult. There have been several criticisms of their use in legal settings. They include the general criticisms of the PCL-R, the availability of other risk assessment tools which may have advantages, and the excessive pessimism surrounding the prognosis and treatment possibilities of those who are diagnosed with psychopathy. The interrater reliability of the PCL-R can be high when it is used carefully in research but tends to be poor in applied settings. In particular, Factor 1 items are somewhat subjective. In one study of sexually violent predator cases, the PCL-R scores given by prosecution experts were consistently higher than those given by defense experts. The scoring may also be influenced by other differences between raters. In one study it was estimated that, of the PCL-R variance, about 45% was due to true offender differences, 20% was due to which side the rater testified for, and 30% was due to other rater differences. To aid a criminal investigation, certain interrogation approaches may be used to exploit and leverage the personality traits of suspects thought to have psychopathy and make them more likely to divulge information.
United Kingdom
The PCL-R score cut-off for a label of psychopathy is 25 out of 40 in the United Kingdom, instead of 30 as it is in the United States.
In the United Kingdom, "psychopathic disorder" was legally defined in the Mental Health Act 1983 (MHA 1983) as "a persistent disorder or disability of mind (whether or not including significant impairment of intelligence) which results in abnormally aggressive or seriously irresponsible conduct on the part of the person concerned". This term was intended to reflect the presence of a personality disorder for the purposes of the conditions for detention under the Mental Health Act 1983. Amendments to the MHA 1983 within the Mental Health Act 2007 abolished the term "psychopathic disorder", with all conditions for detention (e.g. mental illness, personality disorder, etc.) encompassed by the generic term "mental disorder". In England and Wales, the diagnosis of dissocial personality disorder is grounds for detention in secure psychiatric hospitals under the Mental Health Act if the individual has committed serious crimes, but since such individuals are disruptive to other patients and not responsive to usual treatment methods, this alternative to traditional incarceration is often not used.
United States
"Sexual psychopath" laws
Starting in the 1930s, before some modern concepts of psychopathy were developed, "sexual psychopath" laws, the term referring broadly to mental illness, were introduced by some states, and by the mid-1960s more than half of the states had such laws. Sexual offenses were considered to be caused by underlying mental illnesses, and it was thought that sex offenders should be treated, in agreement with the general rehabilitative trends of the time. Courts committed sex offenders to a mental health facility for community protection and treatment. Starting in 1970, many of these laws were modified or abolished in favor of more traditional responses such as imprisonment, owing to criticism that the "sexual psychopath" concept lacked scientific evidence, that treatment was ineffective, and that predictions of future offending were dubious. There was also a series of cases in which persons who had been treated and released committed new sexual offenses. Starting in the 1990s, several states have passed sexually dangerous person laws providing for registration, housing restrictions, public notification, mandatory reporting by health care professionals, and civil commitment, which permits indefinite confinement after a sentence has been completed. Psychopathy measurements may be used in the confinement decision process.
Prognosis
The prognosis for psychopathy in forensic and clinical settings is quite poor, with some studies reporting that treatment may worsen the antisocial aspects of psychopathy as measured by recidivism rates. It is noted, however, that one of the frequently cited studies finding increased criminal recidivism after treatment, a 2011 retrospective study of a treatment program in the 1960s, had several serious methodological problems and likely would not be approved of today. However, some relatively rigorous quasi-experimental studies using more modern treatment methods have found improvements in reducing future violent and other criminal behavior, regardless of PCL-R scores, although none were randomized controlled trials. Various other studies have found improvements in risk factors for crime such as substance abuse. No study has yet examined whether the personality traits that form the core character disturbances of psychopathy could be changed by such treatments.
Frequency
A 2008 study using the PCL:SV found that 1.2% of a US sample scored 13 or more out of 24, indicating "potential psychopathy". The scores correlated significantly with violence, alcohol use, and lower intelligence. A 2009 British study by Coid et al., also using the PCL:SV, reported a community prevalence of 0.6% scoring 13 or more. However, if the cut-off had been set at the recommended score of 18 or more, the prevalence would have been closer to 0.1%. The scores correlated with younger age, male gender, suicide attempts, violence, imprisonment, homelessness, drug dependence, personality disorders (histrionic, borderline and antisocial), and panic and obsessive–compulsive disorders. Psychopathy has a much higher prevalence in the convicted and incarcerated population, where an estimated 15–25% of prisoners are thought to qualify for the diagnosis. A study on a sample of inmates in the UK found that 7.7% of the inmates interviewed met the PCL-R cut-off of 30 for a diagnosis of psychopathy. A study on a sample of inmates in Iran using the PCL:SV found a prevalence of 23% scoring 18 or more. A study by Nathan Brooks from Bond University found that around one in five corporate bosses displays clinically significant psychopathic traits, a proportion similar to that among prisoners.
Society and culture
In the workplace
There is limited research on psychopathy in the general work populace, in part because the PCL-R includes antisocial behavior as a significant core factor (obtaining a PCL-R score above the threshold is unlikely without significant scores on the antisocial-lifestyle factor) and does not include positive adjustment characteristics, and because most researchers have studied psychopathy in incarcerated criminals, a relatively accessible population of research subjects. However, psychologists Fritzon and Board, in their study comparing the incidence of personality disorders in business executives against criminals detained in a mental hospital, found that the profiles of some senior business managers contained significant elements of personality disorders, including those referred to as the "emotional components", or interpersonal-affective traits, of psychopathy. Factors such as boldness, disinhibition, and meanness as defined in the triarchic model, in combination with other advantages such as a favorable upbringing and high intelligence, are thought to correlate with stress immunity and stability, and may contribute to this particular expression. Such individuals are sometimes referred to as "successful psychopaths" or "corporate psychopaths", and they may not always have the extensive histories of criminal or antisocial behavior characteristic of the traditional conceptualization of psychopathy. Robert Hare claims that the prevalence of psychopathic traits is higher in the business world than in the general population, reporting that while about 1% of the general population meet the clinical criteria for psychopathy, figures of around 3–4% have been cited for more senior positions in business. Hare considers newspaper tycoon Robert Maxwell to have been a strong candidate as a "corporate psychopath".
Academics on this subject believe that although psychopathy is manifested in only a small percentage of workplace staff, it is more common at higher levels of corporate organizations, and its negative effects (for example, increased bullying, conflict, stress, staff turnover, absenteeism, and reduced productivity) often cause a ripple effect throughout an organization, setting the tone for an entire corporate culture. Employees with the disorder are self-serving opportunists and may disadvantage their own organizations to further their own interests. They may be charming to staff above their level in the workplace hierarchy, aiding their ascent through the organization, but abusive to staff below their level, and can do enormous damage when they are positioned in senior management roles. Psychopathy as measured by the PCL-R is associated with lower performance appraisals among corporate professionals. The psychologist Oliver James identifies psychopathy as one of the dark triad traits in the workplace, the others being narcissism and Machiavellianism, which, like psychopathy, can have negative consequences. According to a study from the University of Notre Dame published in the Journal of Business Ethics, psychopaths have a natural advantage in workplaces overrun by abusive supervision and are more likely to thrive under abusive bosses, being more resistant to stress, including interpersonal abuse, and having less of a need for positive relationships than others.
In fiction
Characters with psychopathy or sociopathy are among the most notorious figures in film and literature, but their characterizations may only vaguely or partly relate to the concept of psychopathy as it is defined in psychiatry, criminology, and research. A character may be identified as having psychopathy within the fictional work itself, by its creators, or from the opinions of audiences and critics, and may be based on undefined popular stereotypes of psychopathy. Characters with psychopathic traits have appeared in Greek and Roman mythology, Bible stories, and some of Shakespeare's works. Such characters are often portrayed in an exaggerated fashion and typically in the role of a villain or antihero, where the general characteristics and stereotypes associated with psychopathy are useful for facilitating conflict and danger. Because definitions, criteria, and popular conceptions have varied throughout the concept's history and continue to change, many characters described as psychopathic in notable works at the time of publication may no longer fit the current definition and conception of psychopathy. There are several archetypal images of psychopathy in both lay and professional accounts which only partly overlap and can involve contradictory traits: the charming con artist, the deranged serial killer and mass murderer, the callous and scheming businessperson, and the chronic low-level offender and juvenile delinquent. The public concept reflects some combination of fear of a mythical bogeyman, the disgust and intrigue surrounding evil, and fascination and sometimes perhaps envy of people who might appear to go through life without attachments and unencumbered by guilt, anguish, or insecurity.
See also
Collective narcissism
Moral psychology
Serial rapist
Violence and autism
References
Bibliography
Black, Will (2014). Psychopathic Cultures and Toxic Empires. Edinburgh: Frontline Noir.
Blair, J. et al. (2005). The Psychopath: Emotion and the Brain. Malden, MA: Blackwell Publishing.
Dutton, K. (2012). The Wisdom of Psychopaths (e-book).
Häkkänen-Nyholm, H. & Nyholm, J-O. (2012). Psychopathy and Law: A Practitioner's Guide. Chichester: John Wiley & Sons.
Oakley, Barbara (2007). Evil Genes: Why Rome Fell, Hitler Rose, Enron Failed, and My Sister Stole My Mother's Boyfriend. Amherst, NY: Prometheus Books.
Stone, Michael H., M.D. & Brucato, Gary, Ph.D. The New Evil: Understanding the Emergence of Modern Violent Crime. Amherst, N.Y.: Prometheus Books.
Thiessen, W. (2012). Slip-ups and the Dangerous Mind: Seeing Through and Living Beyond the Psychopath.
Thimble, Michael H., F.R.C.P., F.R.C. Psych. Psychopathology of Frontal Lobe Syndromes.
External links
Handbook of Psychopathy, 2nd Edition (2018) on Google Books
The Mask of Sanity, 5th Edition, PDF of Hervey Cleckley's book, 1988
Without Conscience, official web site of Robert Hare
PhilPapers: Psychopathy
Understanding The Psychopath: Key Definitions & Research, Psychopathy Is website
The Paradox of Psychopathy, Psychiatric Times, 2007 (nb: inconsistent access)
Into the Mind of a Killer, Nature, 2001
What Psychopaths Teach Us about How to Succeed, Scientific American, October 2012
"When Your Child is a Psychopath" in The Atlantic
Misanthropy
Misanthropy is the general hatred, dislike, or distrust of the human species, human behavior, or human nature. A misanthrope or misanthropist is someone who holds such views or feelings. Misanthropy involves a negative evaluative attitude toward humanity that is based on humankind's flaws. Misanthropes hold that these flaws characterize all or at least the greater majority of human beings. They claim that there is no easy way to rectify them short of a complete transformation of the dominant way of life. Various types of misanthropy are distinguished in the academic literature based on what attitude is involved, at whom it is directed, and how it is expressed. Either emotions or theoretical judgments can serve as the foundation of the attitude. It can be directed toward all humans without exception or exclude a few idealized people. In this regard, some misanthropes condemn themselves while others consider themselves superior to everyone else. Misanthropy is sometimes associated with a destructive outlook aiming to hurt other people or an attempt to flee society. Other types of misanthropic stances include activism by trying to improve humanity, quietism in the form of resignation, and humor mocking the absurdity of the human condition. The negative misanthropic outlook is based on different types of human flaws. Moral flaws and unethical decisions are often seen as the foundational factor. They include cruelty, selfishness, injustice, greed, and indifference to the suffering of others. They may result in harm to humans and animals, such as genocides and factory farming of livestock. Other flaws include intellectual flaws, like dogmatism and cognitive biases, as well as aesthetic flaws concerning ugliness and lack of sensitivity to beauty. Many debates in the academic literature discuss whether misanthropy is a valid viewpoint and what its implications are. Proponents of misanthropy usually point to human flaws and the harm they have caused as a sufficient reason for condemning humanity. Critics have responded to this line of thought by claiming that severe flaws concern only a few extreme cases, like mentally ill perpetrators, but not humanity at large. Another objection is based on the claim that humans also have virtues besides their flaws and that a balanced evaluation might be overall positive. A further criticism rejects misanthropy because of its association with hatred, which may lead to violence, and because it may make people friendless and unhappy. Defenders of misanthropy have responded by claiming that this applies only to some forms of misanthropy but not to misanthropy in general. A related issue concerns the question of the psychological and social factors that cause people to become misanthropes. They include socio-economic inequality, living under an authoritarian regime, and undergoing personal disappointments in life. Misanthropy is relevant in various disciplines. It has been discussed and exemplified by philosophers throughout history, like Heraclitus, Diogenes, Thomas Hobbes, Jean-Jacques Rousseau, Arthur Schopenhauer, and Friedrich Nietzsche. Misanthropic outlooks form part of some religious teachings discussing the deep flaws of human beings, like the Christian doctrine of original sin. Misanthropic perspectives and characters are also found in literature and popular culture. They include William Shakespeare's portrayal of Timon of Athens, Molière's play The Misanthrope, and Gulliver's Travels by Jonathan Swift. 
Misanthropy is closely related to but not identical to philosophical pessimism. Some misanthropes promote antinatalism, the view that humans should abstain from procreation. Definition Misanthropy is traditionally defined as hatred or dislike of humankind. The word originated in the 17th century and has its roots in the Greek words μῖσος mīsos 'hatred' and ἄνθρωπος ānthropos 'man, human'. In contemporary philosophy, the term is usually understood in a wider sense as a negative evaluation of humanity as a whole based on humanity's vices and flaws. This negative evaluation can express itself in various forms, hatred being only one of them. In this sense, misanthropy has a cognitive component based on a negative assessment of humanity and is not just a blind rejection. Misanthropy is usually contrasted with philanthropy, which refers to the love of humankind and is linked to efforts to increase human well-being, for example, through good will, charitable aid, and donations. Both terms have a range of meanings and do not necessarily contradict each other. In this regard, the same person may be a misanthrope in one sense and a philanthrope in another sense. One central aspect of all forms of misanthropy is that their target is not local but ubiquitous. This means that the negative attitude is not just directed at some individual persons or groups but at humanity as a whole. In this regard, misanthropy is different from other forms of negative discriminatory attitudes directed at a particular group of people. This distinguishes it from the intolerance exemplified by misogynists, misandrists, and racists, which hold a negative attitude toward women, men, or certain races. According to literature theorist Andrew Gibson, misanthropy does not need to be universal in the sense that a person literally dislikes every human being. Instead, it depends on the person's horizon. For instance, a villager who loathes every other villager without exception is a misanthrope if their horizon is limited to only this village. Both misanthropes and their critics agree that negative features and failings are not equally distributed, i.e. that the vices and bad traits are exemplified much more strongly in some than in others. But for misanthropy, the negative assessment of humanity is not based on a few extreme and outstanding cases: it is a condemnation of humanity as a whole that is not just directed at exceptionally bad individuals but includes regular people as well. Because of this focus on the ordinary, it is sometimes held that these flaws are obvious and trivial but people may ignore them due to intellectual flaws. Some see the flaws as part of human nature as such. Others also base their view on non-essential flaws, i.e. what humanity has come to be. This includes flaws seen as symptoms of modern civilization in general. Nevertheless, both groups agree that the relevant flaws are "entrenched". This means that there is either no or no easy way to rectify them and nothing short of a complete transformation of the dominant way of life would be required if that is possible at all. Types Various types of misanthropy are distinguished in the academic literature. They are based on what attitude is involved, how it is expressed, and whether the misanthropes include themselves in their negative assessment. The differences between them often matter for assessing the arguments for and against misanthropy. An early categorization suggested by Immanuel Kant distinguishes between positive and negative misanthropes. 
Positive misanthropes are active enemies of humanity. They wish harm to other people and undertake attempts to hurt them in one form or another. Negative misanthropy, by contrast, is a form of peaceful anthropophobia that leads people to isolate themselves. They may wish others well despite seeing serious flaws in them and prefer to not involve themselves in the social context of humanity. Kant associates negative misanthropy with moral disappointment due to previous negative experiences with others. Another distinction focuses on whether the misanthropic condemnation of humanity is only directed at other people or at everyone including oneself. In this regard, self-inclusive misanthropes are consistent in their attitude by including themselves in their negative assessment. This type is contrasted with self-aggrandizing misanthropes, who either implicitly or explicitly exclude themselves from the general condemnation and see themselves instead as superior to everyone else. In this regard, it may be accompanied by an exaggerated sense of self-worth and self-importance. According to literature theorist Joseph Harris, the self-aggrandizing type is more common. He states that this outlook seems to undermine its own position by constituting a form of hypocrisy. A closely related categorization developed by Irving Babbitt distinguishes misanthropes based on whether they allow exceptions in their negative assessment. In this regard, misanthropes of the naked intellect regard humanity as a whole as hopeless. Tender misanthropes exclude a few idealized people from their negative evaluation. Babbitt cites Rousseau and his fondness for natural uncivilized man as an example of tender misanthropy and contrasts it with Jonathan Swift's thorough dismissal of all of humanity. A further way to categorize forms of misanthropy is in relation to the type of attitude involved toward humanity. In this regard, philosopher Toby Svoboda distinguishes the attitudes of dislike, hate, contempt, and judgment. A misanthrope based on dislike harbors a distaste in the form of negative feelings toward other people. Misanthropy focusing on hatred involves an intense form of dislike. It includes the additional component of wishing ill upon others and at times trying to realize this wish. In the case of contempt, the attitude is not based on feelings and emotions but on a more theoretical outlook. It leads misanthropes to see other people as worthless and look down on them while excluding themselves from this assessment. If the misanthropic attitude has its foundation in judgment, it is also theoretical but does not distinguish between self and others. It is the view that humanity is in general bad without implying that the misanthrope is in any way better than the rest. According to Svoboda, only misanthropy based on judgment constitutes a serious philosophical position. He holds that misanthropy focusing on contempt is biased against other people while misanthropy in the form of dislike and hate is difficult to assess since these emotional attitudes often do not respond to objective evidence. Misanthropic forms of life Misanthropy is usually not restricted to a theoretical opinion but involves an evaluative attitude that calls for a practical response. It can express itself in different forms of life. They come with different dominant emotions and practical consequences for how to lead one's life. 
These responses to misanthropy are sometimes presented through simplified archetypes that may be too crude to accurately capture the mental life of any single person. Instead, they aim to portray common attitudes among groups of misanthropes. The two responses most commonly linked to misanthropy involve either destruction or fleeing from society. The destructive misanthrope is said to be driven by a hatred of humankind and aims at tearing it down, with violence if necessary. For the fugitive misanthrope, fear is the dominant emotion and leads the misanthrope to seek a secluded place in order to avoid the corrupting contact with civilization and humanity as much as possible. The contemporary misanthropic literature has also identified further less-known types of misanthropic lifestyles. The activist misanthrope is driven by hope despite their negative appraisal of humanity. This hope is a form of meliorism based on the idea that it is possible and feasible for humanity to transform itself and the activist tries to realize this ideal. A weaker version of this approach is to try to improve the world incrementally to avoid some of the worst outcomes without the hope of fully solving the basic problem. Activist misanthropes differ from quietist misanthropes, who take a pessimistic approach toward what the person can do for bringing about a transformation or significant improvements. In contrast to the more drastic reactions of the other responses mentioned, they resign themselves to quiet acceptance and small-scale avoidance. A further approach is focused on humor based on mockery and ridicule at the absurdity of the human condition. An example is that humans hurt each other and risk future self-destruction for trivial concerns like a marginal increase in profit. This way, humor can act both as a mirror to portray the terrible truth of the situation and as its palliative at the same time. Forms of human flaws A core aspect of misanthropy is that its negative attitude toward humanity is based on human flaws. Various misanthropes have provided extensive lists of flaws, including cruelty, greed, selfishness, wastefulness, dogmatism, self-deception, and insensitivity to beauty. These flaws can be categorized in many ways. It is often held that moral flaws constitute the most serious case. Other flaws discussed in the contemporary literature include intellectual flaws, aesthetic flaws, and spiritual flaws. Moral flaws are usually understood as tendencies to violate moral norms or as mistaken attitudes toward what is the good. They include cruelty, indifference to the suffering of others, selfishness, moral laziness, cowardice, injustice, greed, and ingratitude. The harm done because of these flaws can be divided into three categories: harm done directly to humans, harm done directly to other animals, and harm done indirectly to both humans and other animals by harming the environment. Examples of these categories include the Holocaust, factory farming of livestock, and pollution causing climate change. In this regard, it is not just relevant that human beings cause these forms of harm but also that they are morally responsible for them. This is based on the idea that they can understand the consequences of their actions and could act differently. However, they decide not to, for example, because they ignore the long-term well-being of others in order to get short-term personal benefits. Intellectual flaws concern cognitive capacities. 
They can be defined as what leads to false beliefs, what obstructs knowledge, or what violates the demands of rationality. They include intellectual vices, like arrogance, wishful thinking, and dogmatism. Further examples are stupidity, gullibility, and cognitive biases, like the confirmation bias, the self-serving bias, the hindsight bias, and the anchoring bias. Intellectual flaws can work in tandem with all kinds of vices: they may deceive someone about having a vice. This prevents the affected person from addressing it and improving themselves, for instance, by being mindless and failing to recognize it. They also include forms of self-deceit, wilful ignorance, and being in denial about something. Similar considerations have prompted some traditions to see intellectual failings, like ignorance, as the root of all evil. Aesthetic flaws are usually not given the same importance as moral and intellectual flaws, but they also carry some weight for misanthropic considerations. These flaws relate to beauty and ugliness. They concern ugly aspects of human life itself, like defecation and aging. Other examples are ugliness caused by human activities, like pollution and litter, and inappropriate attitudes toward aesthetic aspects, like being insensitive to beauty. Causes Various psychological and social factors have been identified in the academic literature as possible causes of misanthropic sentiments. The individual factors by themselves may not be able to fully explain misanthropy but can show instead how it becomes more likely. For example, disappointments and disillusionments in life can cause a person to adopt a misanthropic outlook. In this regard, the more idealistic and optimistic the person initially was, the stronger this reversal and the following negative outlook tend to be. This type of psychological explanation is found as early as Plato's Phaedo. In it, Socrates considers a person who trusts and admires someone without knowing them sufficiently well. He argues that misanthropy may arise if it is discovered later that the admired person has serious flaws. In this case, the initial attitude is reversed and universalized to apply to all others, leading to general distrust and contempt toward other humans. Socrates argues that this becomes more likely if the admired person is a close friend and if it happens more than once. This form of misanthropy may be accompanied by a feeling of moral superiority in which the misanthrope considers themselves to be better than everyone else. Other types of negative personal experiences in life may have a similar effect. Andrew Gibson uses this line of thought to explain why some philosophers became misanthropes. He uses the example of Thomas Hobbes to explain how a politically unstable environment and the frequent wars can foster a misanthropic attitude. Regarding Arthur Schopenhauer, he states that being forced to flee one's home at an early age and never finding a place to call home afterward can have a similar effect. Another psychological factor concerns negative attitudes toward the human body, especially in the form of general revulsion from sexuality. Besides the psychological causes, some wider social circumstances may also play a role. Generally speaking, the more negative the circumstances are, the more likely misanthropy becomes. For instance, according to political scientist Eric M. Uslaner, socio-economic inequality in the form of unfair distribution of wealth increases the tendency to adopt a misanthropic perspective. 
This has to do with the fact that inequality tends to undermine trust in the government and others. Uslaner suggests that it may be possible to overcome or reduce this source of misanthropy by implementing policies that build trust and promote a more equal distribution of wealth. The political regime is another relevant factor. This specifically concerns authoritarian regimes using all means available to repress their population and stay in power. For example, it has been argued that the severe forms of repression of the Ancien Régime in the late 17th century made it more likely for people to adopt a misanthropic outlook because their freedom was denied. Democracy may have the opposite effect since it allows more personal freedom due to its more optimistic outlook on human nature. Empirical studies often use questions related to trust in other people to measure misanthropy. This concerns specifically whether the person believes that others would be fair and helpful. In an empirical study on misanthropy in American society, Tom W. Smith concludes that factors responsible for an increased misanthropic outlook are low socioeconomic status, being from racial and ethnic minorities, and having experienced recent negative events in one's life. In regard to religion, misanthropy is higher for people who do not attend church and for fundamentalists. Some factors seem to play no significant role, like gender, having undergone a divorce, and never having been married. Another study by Morris Rosenberg finds that misanthropy is linked to certain political outlooks. They include being skeptical about free speech and a tendency to support authoritarian policies. This concerns, for example, tendencies to suppress political and religious liberties. Arguments Various discussions in the academic literature concern the question of whether misanthropy is an accurate assessment of humanity and what the consequences of adopting it are. Many proponents of misanthropy focus on human flaws together with examples of when they exercise their negative influences. They argue that these flaws are so severe that misanthropy is an appropriate response. Special importance in this regard is usually given to moral faults. This is based on the idea that humans do not merely cause a great deal of suffering and destruction but are also morally responsible for them. The reason is that they are intelligent enough to understand the consequences of their actions and could potentially make balanced long-term decisions instead of focusing on personal short-term gains. Proponents of misanthropy sometimes focus on extreme individual manifestations of human flaws, like mass killings ordered by dictators. Others emphasize that the problem is not limited to a few cases, for example, that many ordinary people are complicit in their manifestation by supporting the political leaders committing them. A closely related argument is to claim that the underlying flaws are there in everyone, even if they reach their most extreme manifestation only in a few. Another approach is to focus not on the grand extreme cases but on the ordinary small-scale manifestations of human flaws in everyday life, such as lying, cheating, breaking promises, and being ungrateful. Some arguments for misanthropy focus not only on general tendencies but on actual damage caused by humans in the past. This concerns, for instance, damages done to the ecosystem, like ecological catastrophes resulting in mass extinctions. 
Criticism Various theorists have criticized misanthropy. Some opponents acknowledge that there are extreme individual manifestations of human flaws, like mentally ill perpetrators, but claim that these cases do not reflect humanity at large and cannot justify the misanthropic attitude. For instance, while there are cases of extreme human brutality, like the mass killings committed by dictators and their forces, listing such cases is not sufficient for condemning humanity at large. Some critics of misanthropy acknowledge that humans have various flaws but state that they present just one side of humanity while evaluative attitudes should take all sides into account. This line of thought is based on the idea that humans possess equally important virtues that make up for their shortcomings. For example, accounts that focus only on the great wars, cruelties, and tragedies in human history ignore its positive achievements in the sciences, arts, and humanities. Another explanation given by critics is that the negative assessment should not be directed at humanity but at some social forces. These forces can include capitalism, racism, religious fundamentalism, or imperialism. Supporters of this argument would adopt an opposition to one of these social forces rather than a misanthropic opposition to humanity. Some objections to misanthropy are based not on whether this attitude appropriately reflects the negative value of humanity but on the costs of accepting such a position. The costs can affect both the individual misanthrope and the society at large. This is especially relevant if misanthropy is linked to hatred, which may turn easily into violence against social institutions and other humans and may result in harm. Misanthropy may also deprive the person of most pleasures by making them miserable and friendless. Another form of criticism focuses more on the theoretical level and claims that misanthropy is an inconsistent and self-contradictory position. An example of this inconsistency is the misanthrope's tendency to denounce the social world while still being engaged in it and being unable to fully leave it behind. This criticism applies specifically to misanthropes who exclude themselves from the negative evaluation and look down on others with contempt from an arrogant position of inflated ego but it may not apply to all types of misanthropy. A closely related objection is based on the claim that misanthropy is an unnatural attitude and should therefore be seen as an aberration or a pathological case. In various disciplines History of philosophy Misanthropy has been discussed and exemplified by philosophers throughout history. One of the earliest cases was the pre-Socratic philosopher Heraclitus. He is often characterized as a solitary person who is not fond of social interactions with others. A central factor to his negative outlook on human beings was their lack of comprehension of the true nature of reality. This concerns especially cases in which they remain in a state of ignorance despite having received a thorough explanation of the issue in question. Another early discussion is found in Plato's Phaedo, where misanthropy is characterized as the result of frustrated expectations and excessively naïve optimism. Various reflections on misanthropy are also found in the cynic school of philosophy. There it is argued, for instance, that humans keep on reproducing and multiplying the evils they are attempting to flee. 
An example given by the first-century philosopher Dio Chrysostom is that humans move to cities to defend themselves against outsiders but this process thwarts their initial goal by leading to even more violence due to high crime rates within the city. Diogenes is a well-known cynic misanthrope. He saw other people as hypocritical and superficial. He openly rejected all kinds of societal norms and values, often provoking others by consciously breaking conventions and behaving rudely. Thomas Hobbes is an example of misanthropy in early modern philosophy. His negative outlook on humanity is reflected in many of his works. For him, humans are egoistic and violent: they act according to their self-interest and are willing to pursue their goals at the expense of others. In their natural state, this leads to a never-ending war in which "every man to every man ... is an enemy". He saw the establishment of an authoritative state characterized by the strict enforcement of laws to maintain order as the only way to tame the violent human nature and avoid perpetual war. A further type of misanthropy is found in Jean-Jacques Rousseau. He idealizes the harmony and simplicity found in nature and contrasts them with the confusion and disorder found in humanity, especially in the form of society and institutions. For instance, he claims that "Man is born free; and everywhere he is in chains". This negative outlook was also reflected in his lifestyle: he lived solitary and preferred to be with plants rather than humans. Arthur Schopenhauer is often mentioned as a prime example of misanthropy. According to him, everything in the world, including humans and their activities, is an expression of one underlying will. This will is blind, which causes it to continuously engage in futile struggles. On the level of human life, this "presents itself as a continual deception" since it is driven by pointless desires. They are mostly egoistic and often result in injustice and suffering to others. Once they are satisfied, they only give rise to new pointless desires and more suffering. In this regard, Schopenhauer dismisses most things that are typically considered precious or meaningful in human life, like romantic love, individuality, and liberty. He holds that the best response to the human condition is a form of asceticism by denying the expression of the will. This is only found in rare humans and "the dull majority of men" does not live up to this ideal. Friedrich Nietzsche, who was strongly influenced by Schopenhauer, is also often cited as an example of misanthropy. He saw man as a decadent and "sick animal" that shows no progress over other animals. He even expressed a negative attitude toward apes since they are more similar to human beings than other animals, for example, with regard to cruelty. For Nietzsche, a noteworthy flaw of human beings is their tendency to create and enforce systems of moral rules that favor weak people and suppress true greatness. He held that the human being is something to be overcome and used the term Übermensch to describe an ideal individual who has transcended traditional moral and societal norms. Religion Some misanthropic views are also found in religious teachings. In Christianity, for instance, this is linked to the sinful nature of humans and the widespread manifestation of sin in everyday life. Common forms of sin are discussed in terms of the seven deadly sins. 
Examples are an excessive sense of self-importance in the form of pride and strong sexual cravings constituting lust. They also include greed for material possessions and envy of the possessions of others. According to the doctrine of original sin, this flaw is found in every human being, since human nature is tainted by sin from birth, inherited from Adam and Eve's rebellion against God's authority. John Calvin's theology of total depravity has been described by some theologians as misanthropic. Misanthropic perspectives can also be discerned in various Buddhist teachings. For example, the Buddha had a negative outlook on the widespread flaws of human beings, including lust, hatred, delusion, sorrow, and despair. These flaws are identified with some form of craving or attachment (taṇhā) and cause suffering (dukkha). Buddhists hold that it is possible to overcome these failings in the process of achieving Buddhahood or enlightenment due to an innate Buddha nature. However, some Indian traditions regard this as a rare achievement within a single lifetime and hold that most human beings carry these deep flaws with them throughout their lives and into the next through the law of karma, whereas East Asian Mahayana traditions such as Chan, Zen, and Pure Land practice hold that sudden enlightenment is achievable in one lifetime. Nevertheless, there are also many religious teachings opposed to misanthropy, such as the emphasis on kindness and helping others. In Christianity, this is found in the concept of agape, which involves selfless and unconditional love in the form of compassion and a willingness to help others. Buddhists see the practice of loving kindness (metta) as a central aspect that implies a positive intention of compassion and the expression of kindness toward all sentient beings.
Literature and popular culture
Many examples of misanthropy are also found in literature and popular culture. Timon of Athens by William Shakespeare is a famous portrayal of the life of the Ancient Greek Timon, who is widely known for his extreme misanthropic attitude. Shakespeare depicts him as a wealthy and generous gentleman. However, he becomes disillusioned with his ungrateful friends and humanity at large. In this way, his initial philanthropy turns into an unrestrained hatred of humanity, which prompts him to leave society in order to live in a forest. Molière's play The Misanthrope is another famous example. Its protagonist, Alceste, has a low opinion of the people around him. He tends to focus on their flaws and openly criticizes them for their superficiality, insincerity, and hypocrisy. He rejects most social conventions and thereby often offends others, for example, by refusing to engage in social niceties like polite small talk. The author Jonathan Swift had a reputation for being misanthropic. In some statements, he openly declares that he hates and detests "that animal called man". Misanthropy is also found in many of his works. An example is Gulliver's Travels, which tells the adventures of the protagonist Gulliver, who journeys to various places, like an island inhabited by tiny people and a land ruled by intelligent horses. Through these experiences of the contrast between humans and other species, he comes to see the deep flaws of humanity more and more clearly, leading him to develop a revulsion toward other human beings.
Ebenezer Scrooge from Charles Dickens's A Christmas Carol is an often-cited example of misanthropy. He is described as a cold-hearted, solitary miser who detests Christmas. He is greedy, selfish, and has no regard for the well-being of others. Other writers associated with misanthropy include Gustave Flaubert and Philip Larkin. The Joker from the DC Universe is an example of misanthropy in popular culture. He is one of the main antagonists of Batman and acts as an agent of chaos. He believes that people are selfish, cruel, irrational, and hypocritical. He is usually portrayed as a sociopath with a twisted sense of humor who uses violent means to expose and bring down organized society. Related concepts Philosophical pessimism Misanthropy is closely related but not identical to philosophical pessimism. Philosophical pessimism is the view that life is not worth living or that the world is a bad place, for example, because it is meaningless and full of suffering. This view is exemplified by Arthur Schopenhauer and Philipp Mainländer. Philosophical pessimism is often accompanied by misanthropy if the proponent holds that humanity is also bad and partially responsible for the negative value of the world. However, the two views do not require each other and can be held separately. A non-misanthropic pessimist may hold, for instance, that humans are just victims of a terrible world but not to blame for it. Eco-misanthropists, by contrast, may claim that the world and its nature are valuable but that humanity exerts a negative and destructive influence. Antinatalism and human extinction Antinatalism is the view that coming into existence is bad and that humans have a duty to abstain from procreation. A central argument for antinatalism is called the misanthropic argument. It sees the deep flaws of humans and their tendency to cause harm as a reason for avoiding the creation of more humans. These harms include wars, genocides, factory farming, and damages done to the environment. This argument contrasts with philanthropic arguments, which focus on the future suffering of the human about to come into existence. They argue that the only way to avoid their future suffering is to prevent them from being born. The Voluntary Human Extinction Movement and the Church of Euthanasia are well-known examples of social movements in favor of antinatalism and human extinction. Antinatalism is commonly endorsed by misanthropic thinkers but there are also many other ways that could lead to the extinction of the human species. This field is still relatively speculative but various suggestions have been made about threats to the long-term survival of the human species, like nuclear wars, self-replicating nanorobots, or super-pathogens. Such cases are usually seen as terrible scenarios and dangerous threats but misanthropes may instead interpret them as reasons for hope because the abhorrent age of humanity in history may soon come to an end. A similar sentiment is expressed by Bertrand Russell. He states in relation to the existence of human life on earth and its misdeeds that they are "a passing nightmare; in time the earth will become again incapable of supporting life, and peace will return." Human exceptionalism and deep ecology Human exceptionalism is the claim that human beings have unique importance and are exceptional compared to all other species. It is often based on the claim that they stand out because of their special capacities, like intelligence, rationality, and autonomy. 
In religious contexts, it is frequently explained in relation to a unique role that God foresaw for them or that they were created in God's image. Human exceptionalism is usually combined with the claim that human well-being matters more than the well-being of other species. This line of thought can be used to draw various ethical conclusions. One is the claim that humans have the right to rule the planet and impose their will on other species. Another is that inflicting harm on other species may be morally acceptable if it is done with the purpose of promoting human well-being and excellence. Generally speaking, the position of human exceptionalism is at odds with misanthropy in relation to the value of humanity. But this is not necessarily the case and it may be possible to hold both positions at the same time. One way to do this is to claim that humanity is exceptional because of a few rare individuals but that the average person is bad. Another approach is to hold that human beings are exceptional in a negative sense: given their destructive and harmful history, they are much worse than any other species. Theorists in the field of deep ecology are also often critical of human exceptionalism and tend to favor a misanthropic perspective. Deep ecology is a philosophical and social movement that stresses the inherent value of nature and advocates a radical change in human behavior toward nature. Various theorists have criticized deep ecology based on the claim that it is misanthropic by privileging other species over humans. For example, the deep ecology movement Earth First! faced severe criticism when they praised the AIDS epidemic in Africa as a solution to the problem of human overpopulation in their newsletter.
See also
Asociality – lack of motivation to engage in social interaction
Antihumanism – rejection of humanism
Antisocial personality disorder
Cosmicism
Emotional isolation
Hatred (video game)
Nihilism
Social alienation
Need
A need is dissatisfaction at a point of time and in a given context. Needs are distinguished from wants. In the case of a need, a deficiency causes a clear adverse outcome: a dysfunction or death. In other words, a need is something required for a safe, stable and healthy life (e.g. air, water, food, land, shelter) while a want is a desire, wish or aspiration. When needs or wants are backed by purchasing power, they have the potential to become economic demands. Basic needs such as air, water, food and protection from environmental dangers are necessary for an organism to live. In addition to basic needs, humans also have needs of a social or societal nature such as the human need for purpose, to socialize, to belong to a family or community or other group. Needs can be objective and physical, such as the need for food, or psychical and subjective, such as the need for self-esteem. Understanding both kinds of "unmet needs" is improved by considering the social context of their not being fulfilled. Needs and wants are a matter of interest in, and form a common substrate for, the fields of philosophy, biology, psychology, social science, economics, marketing and politics. Psychological definition To most psychologists, need is a psychological feature that arouses an organism to action toward a goal, giving purpose and direction to behavior. The most widely known academic model of needs was proposed by psychologist Abraham Maslow in his hierarchy of needs in 1943. His theory proposed that people have a hierarchy of psychological needs, which range from basic physiological or lower order needs such as food, water and safety (e.g. shelter) through to the higher order needs such as self-actualization. People tend to spend most of their resources (time, energy and finances) attempting to satisfy these basic before the higher order needs of belonging, esteem and self-actualization become meaningful. Maslow's approach is a generalised model for understanding human motivations in a wide variety of contexts but must be adapted for specific contexts. While intuitively appealing, Maslow's model has been difficult to operationalize experimentally. It was developed further by Clayton Alderfer. The academic study of needs, which was at its zenith in the 1950s, receives less attention among psychologists today. One exception involves Richard Sennett's work on the importance of respect. One difficulty with a psychological theory of needs is that conceptions of "need" may vary radically among different cultures or among different parts of the same society. For a psychological theory of human need, one found compatible with the Doyal/Gough Theory, see self-determination theory. Doyal and Gough's definition A second view of need is presented in the work of political economy professor Ian Gough, who has published on the subject of human needs in the context of social assistance provided by the welfare state. Together with medical ethics professor Len Doyal, he published A Theory of Human Need in 1991. Their view goes beyond the emphasis on psychology: it might be said that an individual's needs represent "the costs of being human" within society. A person who does not have their needs fulfilled—i.e., a "needy" person—will function poorly in society. In the view of Gough and Doyal, every person has an objective interest in avoiding serious harm that prevents that person from endeavoring to attain their vision of what is good, regardless of what exactly that may be. 
That endeavour requires a capacity to participate in the societal setting in which the individual lives. More specifically, every person needs to possess both physical health and personal autonomy. The latter involves the capacity to make informed choices about what should be done and how to implement it. This requires mental health, cognitive skills, and opportunities to participate in society's activities and collective decision-making. How are such needs satisfied? Doyal and Gough point to twelve broad categories of "intermediate needs" that define how the needs for physical health and personal autonomy are fulfilled:
Adequate nutritious food and water
Adequate protective housing
A safe work environment
A supply of clothing
A safe physical environment
Appropriate health care
Security in childhood
Meaningful primary relations with others
Physical security
Economic security
Safe birth control and child-bearing
Appropriate basic and cross-cultural education
How are the details of needs satisfaction determined? The authors point to rational identification of needs, using up-to-date scientific knowledge; consideration of the actual experiences of individuals in their everyday lives; and democratic decision-making. The satisfaction of human needs cannot be imposed "from above". This theory may be compared to the capability approach developed by Amartya Sen and Martha Nussbaum. Individuals with more internal "assets" or "capacities" (e.g., education, mental health, physical strength, etc.) have more capabilities (i.e., more available choices, more positive freedom). They are thus more able to escape or avoid poverty. Those individuals who possess more capabilities fulfill more of their needs. Pending publication in 2015 in the Cambridge Journal of Economics of the final version of this work, Gough discussed the Doyal/Gough theory in a working paper available online. Other views The concept of intellectual need has been studied in education, as well as in social work, where an Oxford Bibliographies Online: Social Work entry on Human Need reviewed the literature as of 2008 on human need from a variety of disciplines. Also see the 2008 and pending 2015 entries on Human Needs: Overview in the Encyclopedia of Social Work. In his 1844 Paris Manuscripts, Karl Marx famously defined humans as "creatures of need" or "needy creatures" who experienced suffering in the process of learning and working to meet their needs. These needs were both physical needs as well as moral, emotional and intellectual needs. According to Marx, human development is characterized by the fact that in the process of meeting their needs, humans develop new needs, implying that at least to some extent they make and remake their own nature. This idea is discussed in more detail by the Hungarian philosopher Ágnes Heller in A Theory of Need in Marx (London: Allison and Busby, 1976). Political economy professor Michael Lebowitz has developed the Marxian interpretation of needs further in two editions of his book Beyond Capital. 
Professor György Márkus systematised Marx's ideas about needs as follows: humans are different from other animals because their vital activity, work, is mediated by the satisfaction of needs (the human being is a being who manufactures tools in order to produce other tools or satisfiers of needs), which makes a human being a universal natural being capable of turning the whole of nature into the object of his/her needs and his/her activity, who develops his/her needs and abilities (essential human forces) and develops himself/herself as a historical-universal being. Work generates the breach of the animal subject-object fusion, thus creating the possibility of human consciousness and self-consciousness, which tend toward universality (the universal conscious being). A human being's condition as a social being is given by work, but not only by work, as it is not possible to live like a human being without a relationship with others: work is social because human beings work for each other with means and abilities produced by prior generations. Human beings are also free entities able to accomplish, during their lifetime, the objective possibilities generated by social evolution, on the basis of their conscious decisions. Freedom should be understood in both a negative sense (freedom to decide and to establish relationships) and a positive sense (dominion over natural forces and development of human creativity) of the essential human forces. To sum up, the essential interrelated traits of human beings are: a) work is their vital activity; b) human beings are conscious beings; c) human beings are social beings; d) human beings tend toward universality, which manifests in the three previous traits and makes human beings natural-historical-universal, social-universal and universal conscious entities; and e) human beings are free. In his texts about what he calls "moral economics", professor Julio Boltvinik Kalinka asserts that the ideas set out by David Wiggins about needs are correct but insufficient: needs are of a normative nature but they are also factual. These "gross ethical concepts" (as stated by Hilary Putnam) should also include an evaluation. Ross Fitzgerald's criticism of Maslow's ideas rejects the concept of objective human needs and uses instead the concept of preferences. Marshall Rosenberg's model of Compassionate Communication, also known as Nonviolent Communication (NVC), makes the distinction between universal human needs (what sustains and motivates human life) and specific strategies used to meet these needs. Feelings are seen as neither good nor bad, right nor wrong, but as indicators of when human needs are met or unmet. In contrast to Maslow, Rosenberg's model does not place needs in a hierarchy. Rosenberg's model supports people in developing awareness of feelings as indicators of what needs are alive within them and others, moment by moment, and in foregrounding those needs, so as to make it more likely that two or more people will arrive at mutually agreed strategies to meet the needs of all parties. Rosenberg diagrams this sequence in part like this: Observations > Feelings > Needs > Requests, where identifying needs is the most significant part of the process. People also talk about the needs of a community or organisation. Such needs might include demand for a particular type of business, for a certain government program or entity, or for individuals with particular skills. This is an example of metonymy in language and presents the logical problem of reification.
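The four-step sequence above can be made concrete with a small illustration. The following Python sketch is not drawn from Rosenberg's writings or from any NVC software; the class, its field names, and the example values are invented here purely to show the Observations > Feelings > Needs > Requests ordering as data, with the needs component as the pivotal element.

# Illustrative sketch only: the NVCStatement class and its fields are invented
# for this example and do not come from Rosenberg's work or any NVC tool.
from dataclasses import dataclass

@dataclass
class NVCStatement:
    observation: str  # what was concretely seen or heard, stated without evaluation
    feeling: str      # the emotion, treated as an indicator of a met or unmet need
    need: str         # the universal need inferred from the feeling (the pivotal step)
    request: str      # a concrete, negotiable request addressed to the other person

    def as_sentence(self) -> str:
        # Renders the four components in the order Rosenberg's sequence suggests.
        return (f"When I notice {self.observation}, I feel {self.feeling}, "
                f"because I need {self.need}. Would you be willing to {self.request}?")

# Hypothetical example values, chosen only to show the ordering of the steps.
example = NVCStatement(
    observation="that the report was sent without my section",
    feeling="disappointed",
    need="inclusion and collaboration",
    request="check in with me before the next report goes out",
)
print(example.as_sentence())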
Medical needs. In clinical medical practice, it may be difficult to distinguish between treatment a patient needs, treatment that may be desirable, and treatment that could be deemed frivolous. At one end of this spectrum, for example, any practising clinician would accept that a child with fulminating meningococcal meningitis needs rapid access to medical care. At the other end, rarely could a young, healthy woman be deemed to need breast augmentation. Numerous surgical procedures fall between these extremes; this is particularly so in ageing Western populations, where there is an ever-increasing prevalence of painful but not life-threatening disorders, typified by the ageing spine.
See also
Consumer behaviour
Human rights
Homeostasis
Information needs
Maslow's hierarchy of needs
Mental health
Murray's system of needs
Needs assessment
Need theory (McClelland)
Nonviolent Communication
Poverty
Simple living
Want
Well-being
Physical geography
Physical geography (also known as physiography) is one of the three main branches of geography. Physical geography is the branch of natural science which deals with the processes and patterns in the natural environment such as the atmosphere, hydrosphere, biosphere, and geosphere. This focus is in contrast with the branch of human geography, which focuses on the built environment, and technical geography, which focuses on using, studying, and creating tools to obtain, analyze, interpret, and understand spatial information. The three branches have significant overlap, however. Sub-branches Physical geography can be divided into several branches or related fields, as follows: Geomorphology is concerned with understanding the surface of the Earth and the processes by which it is shaped, both at the present as well as in the past. Geomorphology as a field has several sub-fields that deal with the specific landforms of various environments, e.g. desert geomorphology and fluvial geomorphology; however, these sub-fields are united by the core processes which cause them, mainly tectonic or climatic processes. Geomorphology seeks to understand landform history and dynamics, and predict future changes through a combination of field observation, physical experiment, and numerical modeling (Geomorphometry). Early studies in geomorphology are the foundation for pedology, one of two main branches of soil science. Hydrology is predominantly concerned with the amounts and quality of water moving and accumulating on the land surface and in the soils and rocks near the surface and is typified by the hydrological cycle. Thus the field encompasses water in rivers, lakes, aquifers and to an extent glaciers, in which the field examines the process and dynamics involved in these bodies of water. Hydrology has historically had an important connection with engineering and has thus developed a largely quantitative method in its research; however, it does have an earth science side that embraces the systems approach. Similar to most fields of physical geography it has sub-fields that examine the specific bodies of water or their interaction with other spheres e.g. limnology and ecohydrology. Glaciology is the study of glaciers and ice sheets, or more commonly the cryosphere or ice and phenomena that involve ice. Glaciology groups the latter (ice sheets) as continental glaciers and the former (glaciers) as alpine glaciers. Although research in the areas is similar to research undertaken into both the dynamics of ice sheets and glaciers, the former tends to be concerned with the interaction of ice sheets with the present climate and the latter with the impact of glaciers on the landscape. Glaciology also has a vast array of sub-fields examining the factors and processes involved in ice sheets and glaciers e.g. snow hydrology and glacial geology. Biogeography is the science which deals with geographic patterns of species distribution and the processes that result in these patterns. Biogeography emerged as a field of study as a result of the work of Alfred Russel Wallace, although the field prior to the late twentieth century had largely been viewed as historic in its outlook and descriptive in its approach. The main stimulus for the field since its founding has been that of evolution, plate tectonics and the theory of island biogeography. The field can largely be divided into five sub-fields: island biogeography, paleobiogeography, phylogeography, zoogeography and phytogeography. 
Climatology is the study of the climate, scientifically defined as weather conditions averaged over a long period of time. Climatology examines both the nature of micro (local) and macro (global) climates and the natural and anthropogenic influences on them. The field is also sub-divided largely into the climates of various regions and the study of specific phenomena or time periods e.g. tropical cyclone rainfall climatology and paleoclimatology. Soil geography deals with the distribution of soils across the terrain. This discipline, between geography and soil science, is fundamental to both physical geography and pedology. Pedology is the study of soils in their natural environment. It deals with pedogenesis, soil morphology, and soil classification. Soil geography studies the spatial distribution of soils as it relates to topography, climate (water, air, temperature), soil life (micro-organisms, plants, animals) and mineral materials within soils (biogeochemical cycles). Palaeogeography is a cross-disciplinary study that examines the preserved material in the stratigraphic record to determine the distribution of the continents through geologic time. Almost all the evidence for the positions of the continents comes from geology in the form of fossils or paleomagnetism. The use of these data has resulted in evidence for continental drift, plate tectonics, and supercontinents. This, in turn, has supported palaeogeographic theories such as the Wilson cycle. Coastal geography is the study of the dynamic interface between the ocean and the land, incorporating both the physical geography (i.e. coastal geomorphology, geology, and oceanography) and the human geography of the coast. It involves an understanding of coastal weathering processes, particularly wave action, sediment movement and weathering, and also the ways in which humans interact with the coast. Coastal geography, although predominantly geomorphological in its research, is not just concerned with coastal landforms, but also the causes and influences of sea level change. Oceanography is the branch of physical geography that studies the Earth's oceans and seas. It covers a wide range of topics, including marine organisms and ecosystem dynamics (biological oceanography); ocean currents, waves, and geophysical fluid dynamics (physical oceanography); plate tectonics and the geology of the sea floor (geological oceanography); and fluxes of various chemical substances and physical properties within the ocean and across its boundaries (chemical oceanography). These diverse topics reflect multiple disciplines that oceanographers blend to further knowledge of the world ocean and understanding of processes within it. Quaternary science is an interdisciplinary field of study focusing on the Quaternary period, which encompasses the last 2.6 million years. The field studies the last ice age and the recent interglacial, the Holocene, and uses proxy evidence to reconstruct the past environments during this period to infer the climatic and environmental changes that have occurred. Landscape ecology is a sub-discipline of ecology and geography that addresses how spatial variation in the landscape affects ecological processes such as the distribution and flow of energy, materials, and individuals in the environment (which, in turn, may influence the distribution of landscape "elements" themselves such as hedgerows). The field was largely founded by the German geographer Carl Troll. 
Landscape ecology typically deals with problems in an applied and holistic context. The main difference between biogeography and landscape ecology is that the latter is concerned with how flows of energy and material are changed and with their impacts on the landscape, whereas the former is concerned with the spatial patterns of species and chemical cycles. Geomatics is the field of gathering, storing, processing, and delivering geographic information, or spatially referenced information. Geomatics includes geodesy (the scientific discipline that deals with the measurement and representation of the earth, its gravitational field, and other geodynamic phenomena, such as crustal motion, oceanic tides, and polar motion), cartography, geographical information science (GIS) and remote sensing (the short or large-scale acquisition of information of an object or phenomenon, by the use of either recording or real-time sensing devices that are not in physical or intimate contact with the object). Environmental geography is a branch of geography that analyzes the spatial aspects of interactions between humans and the natural world. The branch bridges the divide between human and physical geography and thus requires an understanding of the dynamics of geology, meteorology, hydrology, biogeography, and geomorphology, as well as the ways in which human societies conceptualize the environment. The branch was previously more visible in research than it is at present, with theories such as environmental determinism linking society with the environment; it has largely become the domain of environmental management and the study of anthropogenic influences. Journals and literature Geography and earth science journals communicate and document the results of research carried out in universities and various other research institutions. Most journals cover a specific field and publish research within that field; however, unlike human geographers, physical geographers tend to publish in inter-disciplinary journals rather than predominantly in geography journals. The research is normally expressed in the form of a scientific paper. Additionally, textbooks and other books communicate research to laypeople, although these tend to focus on environmental issues or cultural dilemmas. Historical evolution of the discipline From the birth of geography as a science during the Greek classical period until the late nineteenth century, with the birth of anthropogeography (human geography), geography was almost exclusively a natural science: the study of the location and descriptive gazetteer of all places of the known world. Several of the best-known works of this long period can be cited as examples, from Strabo (Geography), Eratosthenes (Geographika) and Dionysius Periegetes (Periegesis Oiceumene) in the Ancient Age to the Summa de Geografía of Martín Fernández de Enciso from the early sixteenth century, which described the New World for the first time, and Alexander von Humboldt's Kosmos in the nineteenth century, in which geography is regarded as a physical and natural science. During the eighteenth and nineteenth centuries, a controversy exported from geology, between supporters of James Hutton (the uniformitarianism thesis) and Georges Cuvier (catastrophism), strongly influenced the field of geography, because geography at this time was a natural science. 
Two historical events during the nineteenth century had a great effect on the further development of physical geography. The first was the European colonial expansion in Asia, Africa, Australia and even America in search of raw materials required by industries during the Industrial Revolution. This fostered the creation of geography departments in the universities of the colonial powers and the birth and development of national geographical societies, thus giving rise to the process identified by Horacio Capel as the institutionalization of geography. The exploration of Siberia is an example. In the mid-eighteenth century, many geographers were sent to perform geographical surveys in the area of Arctic Siberia. Among them was Mikhail Lomonosov, who is considered the patriarch of Russian geography. In the mid-1750s Lomonosov began working in the Department of Geography of the Academy of Sciences to conduct research in Siberia. His studies showed the organic origin of soil and developed a comprehensive law on the movement of ice, thereby founding a new branch of geography: glaciology. In 1755, Moscow University was founded on his initiative, and there he promoted the study of geography and the training of geographers. In 1758 he was appointed director of the Department of Geography of the Academy of Sciences, a post from which he would develop a working methodology for geographical surveys that guided the most important long expeditions and geographical studies in Russia. The contributions of the Russian school became more frequent through his disciples, and in the nineteenth century the school produced great geographers such as Vasily Dokuchaev, who carried out works of great importance such as the "principle of comprehensive analysis of the territory" and "Russian Chernozem". In the latter, he introduced the geographical concept of soil, as distinct from a simple geological stratum, and thus founded a new geographic area of study: pedology. Climatology also received a strong boost from the Russian school through Wladimir Köppen, whose main contribution, his climate classification, is still valid today. This great geographer also contributed to paleogeography through his work "The Climates of the Geological Past", and he is considered the father of paleoclimatology. Other Russian geographers who made great contributions to the discipline in this period were NM Sibirtsev, Pyotr Semyonov, K.D. Glinka and Neustrayev, among others. The second important process was the theory of evolution proposed by Darwin in mid-century, which decisively influenced the work of Friedrich Ratzel (who had academic training as a zoologist and was a follower of Darwin's ideas) and gave an important impetus to the development of biogeography. Another major event in the late nineteenth and early twentieth centuries took place in the United States. William Morris Davis not only made important contributions to the establishment of the discipline in his country but also revolutionized the field by developing the cycle of erosion theory, which he proposed as a paradigm for geography in general, although it actually served as a paradigm for physical geography. His theory explained that mountains and other landforms are shaped by factors that are manifested cyclically. He explained that the cycle begins with the uplift of the relief by geological processes (faults, volcanism, tectonic upheaval, etc.). Factors such as rivers and runoff then begin to create V-shaped valleys between the mountains (the stage called "youth"). During this first stage, the terrain is steeper and more irregular. 
Over time, the streams carve wider valleys ("maturity") and then begin to meander, leaving only gently rolling hills ("senescence"). Finally, everything is reduced to a flat plain at the lowest elevation possible (called the "base level"). This plain was called by Davis a "peneplain", meaning "almost a plain". Then river rejuvenation occurs: there is another uplift of the mountains and the cycle continues. Although Davis's theory is not entirely accurate, it was absolutely revolutionary and unique in its time and helped to modernize geography and to create the subfield of geomorphology. Its implications prompted a myriad of research in various branches of physical geography. In the case of paleogeography, this theory provided a model for understanding the evolution of the landscape. For hydrology, glaciology, and climatology, it provided a boost by prompting the study of how geographic factors shape the landscape and affect the cycle. The bulk of the work of William Morris Davis led to the development of a new branch of physical geography: geomorphology, whose contents until then did not differ from the rest of geography. Shortly afterwards this branch underwent major development. Some of his disciples made significant contributions to various branches of physical geography, such as Curtis Marbut and his invaluable legacy for pedology, as well as Mark Jefferson and Isaiah Bowman, among others.
Notable physical geographers
Eratosthenes (276–194 BC), who invented the discipline of geography. He made the first known reliable estimation of the Earth's size. He is considered the father of mathematical geography and geodesy.
Ptolemy (c. 90 – c. 168), who compiled Greek and Roman knowledge to produce the book Geographia.
Abū Rayhān Bīrūnī (973–1048 AD), considered the father of geodesy.
Ibn Sina (Avicenna, 980–1037), who formulated the law of superposition and the concept of uniformitarianism in Kitāb al-Šifāʾ (also called The Book of Healing).
Muhammad al-Idrisi (Dreses, 1100), who drew the Tabula Rogeriana, the most accurate world map in pre-modern times.
Piri Reis (1465 – c. 1554), whose Piri Reis map is the oldest surviving world map to include the Americas and possibly Antarctica.
Gerardus Mercator (1512–1594), an innovative cartographer and originator of the Mercator projection.
Bernhardus Varenius (1622–1650), who wrote the important work General Geography (1650), the first overview of geography and a foundation of modern geography.
Mikhail Lomonosov (1711–1765), father of Russian geography and founder of the study of glaciology.
Alexander von Humboldt (1769–1859), considered the father of modern geography. He published Cosmos and founded the study of biogeography.
Arnold Henry Guyot (1807–1884), who noted the structure of glaciers and advanced the understanding of glacial motion, especially in fast ice flow.
Louis Agassiz (1807–1873), the author of a glacial theory which disputed the notion of a steady-cooling Earth.
Alfred Russel Wallace (1823–1913), founder of modern biogeography and the Wallace line.
Vasily Dokuchaev (1840–1903), patriarch of Russian geography and founder of pedology.
Wladimir Peter Köppen (1846–1940), developer of the most important climate classification and founder of paleoclimatology.
William Morris Davis (1850–1934), father of American geography, founder of geomorphology and developer of the geographical cycle theory.
John Francon Williams FRGS (1854–1911), who wrote the seminal work Geography of the Oceans, published in 1881.
Walther Penck (1888–1923), proponent of the cycle of erosion and the simultaneous occurrence of uplift and denudation. 
Sir Ernest Shackleton (1874–1922), Antarctic explorer during the Heroic Age of Antarctic Exploration.
Robert E. Horton (1875–1945), founder of modern hydrology and of concepts such as infiltration capacity and overland flow.
J Harlen Bretz (1882–1981), pioneer of research into the shaping of landscapes by catastrophic floods, most notably the Bretz (Missoula) floods.
Luis García Sáinz (1894–1965), pioneer of physical geography in Spain.
Willi Dansgaard (1922–2011), palaeoclimatologist and quaternary scientist, instrumental in the use of oxygen-isotope dating and co-identifier of Dansgaard-Oeschger events.
Hans Oeschger (1927–1998), palaeoclimatologist and pioneer in ice core research, co-identifier of Dansgaard-Oeschger events.
Richard Chorley (1927–2002), a key contributor to the quantitative revolution and the use of systems theory in geography.
Sir Nicholas Shackleton (1937–2006), who demonstrated that oscillations in climate over the past few million years could be correlated with variations in the orbital and positional relationship between the Earth and the Sun.
See also
Areography
Atmosphere of Earth
Concepts and Techniques in Modern Geography
Earth system science
Environmental science
Environmental studies
Geographic information science
Geographic information system
Geophysics
Geostatistics
Global Positioning System
Planetary science
Physiographic regions of the world
Selenography
Technical geography
Further reading
Pidwirny, Michael. (2014). Glossary of Terms for Physical Geography. Planet Earth Publishing, Kelowna, Canada. Available on Google Play.
Pidwirny, Michael. (2014). Understanding Physical Geography. Planet Earth Publishing, Kelowna, Canada. Available on Google Play.
Reynolds, Stephen J. et al. (2015). Exploring Physical Geography. [A Visual Textbook, Featuring more than 2500 Photographs & Illustrations]. McGraw-Hill Education, New York.
External links
Physiography by T.H. Huxley, 1878, full text, physical geography of the Thames River Basin
Fundamentals of Physical Geography, 2nd Edition, by M. Pidwirny, 2006, full text
Physical Geography for Students and Teachers, UK National Grid For Learning
Cognitive development
Cognitive development is a field of study in neuroscience and psychology focusing on a child's development in terms of information processing, conceptual resources, perceptual skill, language learning, and other aspects of the developed adult brain and cognitive psychology. Qualitative differences between how a child processes their waking experience and how an adult processes their waking experience are acknowledged (such as object permanence, the understanding of logical relations, and cause-effect reasoning in school-age children). Cognitive development is defined as the emergence of the ability to consciously cognize, understand, and articulate their understanding in adult terms. Cognitive development is how a person perceives, thinks, and gains understanding of their world through the relations of genetic and learning factors. There are four stages to cognitive information development. They are, reasoning, intelligence, language, and memory. These stages start when the baby is about 18 months old, they play with toys, listen to their parents speak, they watch TV, anything that catches their attention helps build their cognitive development. Jean Piaget was a major force establishing this field, forming his "theory of cognitive development". Piaget proposed four stages of cognitive development: the sensorimotor, preoperational, concrete operational, and formal operational period. Many of Piaget's theoretical claims have since fallen out of favor. His description of the most prominent changes in cognition with age, is generally still accepted today (e.g., how early perception moves from being dependent on concrete, external actions. Later, abstract understanding of observable aspects of reality can be captured; leading to the discovery of underlying abstract rules and principles, usually starting in adolescence) In recent years, however, alternative models have been advanced, including information-processing theory, neo-Piagetian theories of cognitive development, which aim to integrate Piaget's ideas with more recent models and concepts in developmental and cognitive science, theoretical cognitive neuroscience, and social-constructivist approaches. Another such model of cognitive development is Bronfenbrenner's Ecological Systems Theory. A major controversy in cognitive development has been "nature versus nurture", i.e., the question if cognitive development is mainly determined by an individual's innate qualities ("nature"), or by their personal experiences ("nurture"). However, it is now recognized by most experts that this is a false dichotomy: there is overwhelming evidence from biological and behavioral sciences that from the earliest points in development, gene activity interacts with events and experiences in the environment. While naturalists are convinced of the power of genetic mechanisms, knowledge from different disciplines, such as Comparative psychology, Molecular biology, and Neuroscience, shows arguments for an ecological component in launching cognition (see the section "The beginning of cognition" below). History Jean Piaget is inexorably linked to cognitive development as he was the first to systematically study developmental processes. Despite being the first to develop a systemic study of cognitive development, Piaget was not the first to theorize about cognitive development. Jean-Jacques Rousseau wrote Emile, or On Education in 1762. He discusses childhood development as happening in three stages. 
In the first stage, up to age 12, the child is guided by their emotions and impulses. In the second stage, ages 12–16, the child's reason starts to develop. In the third and final stage, age 16 and up, the child develops into an adult. James Sully wrote several books on childhood development, including Studies of Childhood in 1895 and Children's Ways in 1897. He used a detailed observational study method with the children. Contemporary research in child development actually repeats observations and observational methods summarized by Sully in Studies of Childhood, such as the mirror technique. Sigmund Freud developed the theory of psychosexual development, which indicates children must pass through several stages as they develop their cognitive skills. Maria Montessori began her career working with mentally disabled children in 1897, then conducted observation and experimental research in elementary schools. She wrote The Discovery of the Child in 1950 which developed the Montessori method of education. She discussed four planes of development: birth to 6 years, 6 to 12, 12 to 18, and 18 to 24. The Montessori method now has three developmentally-meaningful age groups: 2–2.5 years, 2.5–6, and 6–12. She was working on human behavior in older children but only published lecture notes on the subject. Arnold Gesell was the creator of the maturational theory of development. Gesell said that development occurs due to biological hereditary features such as genetics and children will reach developmental milestones when they are ready to do so in a predictable sequence. Because of his theory of development, he devised a developmental scale that is used today called the Gesell Developmental Schedule (GDS) that provides parents, teachers, doctors, and other pertinent people with an overview of where an infant or child falls on the developmental spectrum. Erik Erikson was a neo-Freudian who focused on how children develop personality and identity. Although a contemporary of Freud, there is a larger focus on social experiences that occur across the lifespan, as opposed to childhood exclusively, that contribute to how personality and identity emerge. His framework uses eight systematic stages that all children must pass through. Urie Bronfenbrenner devised the ecological systems theory, which identifies various levels of a child's environment. The primary focus of this theory focuses on the quality and context of a child's environment. Bronfenbrenner suggested that as a child grows older, their interaction between the various levels of their environment grows more complex due to cognitive abilities expanding. Lawrence Kohlberg wrote the theory of stages of moral development, which extended Piaget's findings of cognitive development and showed that they continue through the lifespan. Kohlberg's six stages follow Piaget's constructivist requirements in that those stages can not be skipped and it is very rare to regress in stages. Notable works: Moral Stages and Moralization: The Cognitive-Development Approach (1976) and Essays on Moral Development (1981) Lev Vygotsky's theory is based on social learning as the most important aspect of cognitive development. In Vygotsky's theory, adults are very important for young children's development. They help children learn through mediation, which is modeling and explaining concepts. Together, adults and children master concepts of their culture and activities. Vygotsky believed we get our complex mental activities through social learning. 
A significant part of Vygotsky's theory is based on the zone of proximal development, which he believes is when the most effective learning takes place. The zone of proximal development is what a child cannot accomplish alone but can accomplish with the help of an MKO (more knowledgeable other). Vygotsky also believed culture is a very important part of cognitive development such as the language, writing and counting system used in that culture. Another aspect of Vygotsky's theory is private speech. Private speech is when a person talks to themselves in order to help themselves problem solve. Scaffolding or providing support to a child and then slowly removing support and allowing the child to do more on their own over time is also an aspect of Vygotsky's theory. Beginning of Cognition In cognitive development, the essential issue in beginning cognition is how the nervous system grasps perception and shapes intentionality in the sensorimotor stage (or before) when organisms only demonstrate simple reflexes (see articles perception, cognition, binding problem, multi sensory integration). The significance of this knowledge is that the mode to cognize at the stage without communication and abstract thinking, being a pre-requisite of social reality formation, determines the development of everything from cooperative interactions and knowledge assimilation to moral identity and cultural evolution that provides building societies (see also Social cognition and Collective behaviour). The contemporary academic discussion on a controversy in cognitive development (whether cognitive development is mainly determined by an individual's innate qualities or personal experiences) is still in progress. Many influential scientists argue that the genetic code is no more than a rule of causal specificity based on the fact that cells use nucleic acids as templates for the primary structure of proteins. However, it is unacceptable to say that DNA contains the information for phenotypic design. The epigenetic approach to human psychological development – that cascading phenotypic effects are not encoded directly in the genes – contrasts sharply with many so-called nativist approaches. Opponents of innate knowledge discuss four problems in appearance of the perception of objects. The binding problem – According to cognitive psychologist Anne Treisman, the binding problem can be divided into three separate problems. (1) How are relevant elements that should be related as a whole selected and separated from elements that belong to other objects, ideas, or events? (2) How is the binding encoded so it can be transferred to other brain systems and used? (3) How are the correct relationships between related elements within the same object defined? This problem is also connected to the problem of multisensory integration in perception. The perception stability problem – According to research professor of Liepaja University Igor Val Danilov, newborns and infants cannot capture the same picture of the environment as adults because of their immature sensory systems. They cannot sense environmental stimuli from social phenomena to the same extent as adults. The outcomes of processing similar sensory stimuli in immature and mature organisms differ. The corresponding holistic representations of objects can hardly occur in these organisms. The excitatory inputs problem – According to the received view in cognitive sciences, cognition develops due to experience-dependent neuronal plasticity, e.g.,. 
Neuronal plasticity refers to the capacity of the nervous system to modify itself, functionally and structurally, in response to experience and injury. However, the structural organization of excitatory inputs supporting spike-timing-dependent plasticity remains unknown. How is the relation between a specific sensory stimulus and the appropriate structural organization of the excitatory inputs in specific neurons formed? The problem of Morphogenesis – Cell actions during an embryo formation, including shape changes, cell contact remodeling, cell migration, cell division, and cell extrusion, need control over cell mechanics. This complex dynamical process is associated with protrusive, contractile, and adhesive forces and hydrostatic pressure, as well as material properties of cells that dictate how cells respond to active stresses. Precise coordination of all cells is a necessary condition. Moreover, such a complex dynamical process likely requires clear parameters of the final biological structure – the complete developmental program with a template for accomplishing it. Collinet and Lecuit (2021) pose a question: what forces or mechanisms at the cellular level manage four very general classes of tissue deformation, namely tissue folding and invagination, tissue flow and extension, tissue hollowing, and, finally, tissue branching? They challenge the nativists' notion that shape is fully encoded and determined by genes: how are cell mechanics and associated cell behaviors robustly organized in space and time during tissue morphogenesis? They argue that not only gene expression and the resulting biochemical cues but also mechanics and geometry act as sources of morphogenetic information to ultimately define the time and length scales of the cell behaviors driving morphogenesis. Thus, it is not only the interaction of gene activity with events and experiences in the environment that contributes to the formation of tissues in morphogenesis. Because the nervous system structures operate over everything that makes us human, the formation of neural tissues in a certain way is essential for shaping cognitive functions. According to research professor Igor Val Danilov, such a complex process of shaping the determined structure of the nervous system requires a complete developmental program with a template for accomplishing the final biological structure of the nervous system. Indeed, because even processes of the cell coupling for shaping a nervous system during embryonal development challenge the naturalistic approach, how the nervous system grasps perception and shapes intentionality (independently, i.e., without any template) seems even more complicated. So, the fact that gene activity interacts with events and experiences in the environment (as noted above) may not fully explain the integrative complexity of intentionality-perception development for beginning cognitive development. Nowadays, the Shared intentionality hypothesis is the only one that attempts to explain neurophysiological processes at the beginning of cognitive development at different levels of interaction, from interpersonal dynamics to neuronal interactions. It also solves the above noted problems. Professor of psychology Michael Tomasello hypothesised that social bonds between children and caregivers would gradually increase through the essential motive force of shared intentionality beginning from birth. 
The notion of Shared intentionality, introduced by Michael Tomasello, was developed by Research Professor Igor Val Danilov, expanding it to the intrauterine period. The Shared intentionality approach also points out that "an innate sensitivity to specific patterns of information" mentioned in the section "Speculated core systems of cognition" is also the outcome of Shared intentionality with caregivers, who obviously participated in the experiments. Jean Piaget Jean Piaget was the first psychologist and philosopher to brand this type of study as "cognitive development". Other researchers, in multiple disciplines, had studied development in children before, but Piaget is often credited as being the first one to make a systematic study of cognitive development and gave it its name. His main contribution is the stage theory of child cognitive development. He also published his observational studies of cognition in children, and created a series of simple tests to reveal different cognitive abilities in children. Piaget believed that people move through stages of development that allow them to think in new, more complex ways. Criticism Many of Piaget's claims have fallen out of favor. For example, he claimed that young children cannot conserve numbers. However, further experiments showed that children did not really understand what was being asked of them. When the experiment is done with candies, and the children are asked which set they want rather than having to tell an adult which is more, they show no confusion about which group has more items. Piaget argues that the child cannot conserve numbers if they do not understand one-to-one correspondence. Piaget's theory of cognitive development ends at the formal operational stage that is usually developed in early adulthood. It does not take into account later stages of adult cognitive development as described by, for example, Harvard University professor Robert Kegan. Additionally, Piaget largely ignores the effects of social and cultural upbringing on stages of development because he only examined children from western societies. This matters as certain societies and cultures have different early childhood experiences. For example, individuals in nomadic tribes struggle with number counting and object counting. Certain cultures have specific activities and events that are common at a younger age which can affect aspects such as object permeance. This indicates that children from different societies may achieve a stage like the formal operational stage while in other societies, children at the exact same age remain in the concrete operational stage. Stages Sensorimotor stage Piaget believed that infants entered a sensorimotor stage which lasts from birth until age 2. In this stage, individuals use their senses to investigate and interact with their environment. Through this they develop coordination between the sensory input and motor responses. Piaget also theorized that this stage ended with the acquisition of object permanence and the emergence of symbolic thought. This view collapsed in the 1980s when research was put out showing that infants as young as five months are able to represent out-of-sight objects, as well their properties, such as number and rigidity. Preoperational stage Piaget believed that children entered a preoperational stage from roughly age 2 until age 7. This stage involves the development of symbolic thought (which manifests in children’s increased ability to ‘play pretend’). 
This stage involves language acquisition, but also the inability to understand complex logic or to manipulate information. Subsequent work suggesting that preschoolers were indeed capable of taking others' perspectives into account and of reasoning about abstract relationships, including causal relationships, marked the demise of this aspect of stage theory as well. Concrete operational stage Piaget believed that the concrete operational stage spanned roughly from age 6 through age 12. This stage is marked by the development and achievement of skills such as conservation, classification, seriation, and spatial reasoning. Work suggesting that much younger children reason about abstract ideas, including kinds, logical operators, and causal relationships, rendered this aspect of stage theory obsolete. Formal operational stage Piaget believed that the formal operational stage spans roughly from age 12 through adulthood, and is marked by the ability to apply mental operations to abstract ideas. Erik Erikson Erikson worked in the Freudian tradition but, unlike Freud, focused on biological, psychological, and social factors in human development. Each stage is rooted in some kind of competence, or perceived ability to do things. Each stage is defined by two conflicting psychological tendencies, and the traits that develop in a stage depend on how much of each tendency is experienced. There are virtues that develop in healthy circumstances and maladaptations that develop in unhealthy circumstances. The framework consists of eight stages. While the conflicting tendencies may appear to be good versus bad, they can be considered as a balance in which most healthy individuals experience some of each. Stage 1-Infancy- Trust Versus Mistrust A baby has very little ability to do anything for itself. As such, infants develop according to whether they learn to trust or distrust the world around them. The virtue that arises during this stage is hope and the maladaptation is withdrawal. Stage 2-Early Childhood- Autonomy Versus Shame As a child starts to explore the world, the conflict they experience is autonomy, a feeling of being able to do things themselves, versus shame or doubt, a feeling of being unable to do things themselves and fear of making mistakes. The virtue that arises during this period is will, suggesting control over one's actions. The maladaptation for this stage is compulsion, or lack of control over one's actions. Stage 3-Play Age- Initiative Versus Guilt As a child grows beyond the stage of autonomy versus shame, they experience the conflict of initiative versus guilt: initiative, or having the ability to act in a situation, against guilt, or feeling bad about one's actions and feeling incapable of acting. The virtue that develops in this stage is purpose and the maladaptation is inhibition. Stage 4-School Age- Industry Versus Inferiority As a child's awareness of their effect on the world around them grows, they come to the conflict of industry and inferiority: industry, meaning the ability and willingness to proactively interact with the world around them, and inferiority, meaning incapability, or perceived incapability, to interact with the world. The virtue that is learned in this stage is competence and the maladaptation is inertia, or passivity. Stage 5-Adolescence- Identity Versus Identity Confusion As a child grows into adolescence, their ability to interact with the world begins to intertwine with their perceptions of who they are, and they find themselves in a conflict between identity and identity confusion. 
Identity means knowledge of who they are and the development of their own sense of right and wrong; identity confusion means confusion over who they are and over what right and wrong mean to them. The virtue that develops is fidelity and the maladaptation is repudiation.
Stage 6 - Young Adulthood: Intimacy versus Isolation
During young adulthood, people look for belonging in a small number of close relationships. Intimacy means finding very close relationships with other people, and isolation is the lack of such connection. The virtue that can arise from this stage is love and the maladaptation is distantiation.
Stage 7 - Adulthood: Generativity versus Stagnation
In this stage of life, people find that, along with accomplishing personal goals, they either give to the next generation, whether as a mentor or a parent, or they turn toward themselves and keep their distance from others. The virtue that arises in this stage is caring and the maladaptation is rejectivity.
Stage 8 - Old Age: Integrity versus Despair
Those in the twilight of their lives look back and either are satisfied with their life's work or feel great regret. This satisfaction or regret is a large part of their identity at the end of their lives. The virtue that develops is wisdom and the maladaptation is disdain.
Current Theories of Cognitive Development
Core Knowledge Theory
Core knowledge theorists propose that infants are born with an innate sensitivity to certain kinds of information; empiricists instead study how these skills may be learned in such a short time. The debate is over whether these systems are learned by general-purpose learning devices or by domain-specific cognition. Moreover, many modern cognitive developmental psychologists, recognizing that the term "innate" does not square with modern knowledge about epigenesis, neurobiological development, or learning, favor a non-nativist framework. Researchers who discuss "core systems" often speculate about differences in thinking and learning between proposed domains. Research suggests that children have an innate sensitivity to specific patterns of information, referred to as core domains. The discussion of "core knowledge" theory focuses on a few main systems, including agents, objects, numbers, and navigation.
Agents
It is speculated that part of infants' core knowledge lies in their ability to abstractly represent actors. Agents are actors, human or otherwise, who process events and situations and select actions based on goals and beliefs. Children expect the actions of agents to be goal-directed and efficient, and they understand that actions have costs, such as time, energy, or effort. Importantly, children are able to differentiate between actors and inanimate objects, demonstrating a deeper understanding of the concept of an agent.
Objects
Within the theorized systems, infants' core knowledge of objects has been one of the most extensively studied. These studies suggest that young infants appear to have an early expectation of object solidity, namely understanding that objects cannot pass through one another. Similarly, they demonstrate an awareness of object continuity, expecting objects to move on continuous paths rather than teleporting or discontinuously changing their locations. They also expect objects to follow the laws of gravity.
Numbers
Evidence suggests that humans utilize two core systems for number representation: approximate representations and precise representations. The approximate number system helps to capture the relationship between quantities by estimating numerical magnitudes. This system becomes more precise with age.
The second system helps to precisely track small sets of individual objects (limited to around three for infants) and to accurately represent those numerical quantities.
Place
Very young children appear to have some skill in navigation. This basic ability to infer the direction and distance of unseen locations develops in ways that are not entirely clear. However, there is some evidence that it involves the development of complex language skills between 3 and 5 years. There is also evidence that this skill depends importantly on visual experience, because congenitally blind individuals have been found to have impaired abilities to infer new paths between familiar locations.
One of the original nativist versus empiricist debates was over depth perception. There is some evidence that children less than 72 hours old can perceive such complex things as biological motion. However, it is unclear how visual experience in the first few days contributes to this perception. There are far more elaborate aspects of visual perception that develop during infancy and beyond.
Shared Intentionality
This approach integrates Externalism (a group of positions in the philosophy of mind: embodied cognition, embodied embedded cognition, enactivism, extended mind, and situated cognition) with the Empiricist idea that cognition begins only with learning in the environment. According to the Externalist approach, communicative symbols are encoded into the local topological properties of neuronal maps, which reflect a dynamical action pattern. The sensorimotor neuronal network enables pairing a relevant cue with a particular symbol saved in the sensorimotor structures and processes, which reveals embodied meanings. In this sense, the Shared intentionality theory does not contradict the Core Knowledge Theory but rather complements it.
Based on evidence of child cognitive development, experimental data from research on child behavior in the prenatal period, and advances in inter-brain neuroscience research, Igor Val Danilov, a research professor at Liepaja University, introduced the notion of non-local neuronal coupling of the maternal and fetal neuronal networks. The term non-local neuronal coupling refers to pre-perceptual communication in which one biological system copies relevant ecological dynamics from another, with both dwelling in the same environmental context. The naive actor (the fetus) replicates information from the experienced agent (the mother) due to the synchronization of intrinsic processes of these dynamic systems (embodied information). This non-local neuronal coupling succeeds due to a low-frequency oscillator (the mother's heartbeat) that coordinates relevant local neuronal networks in specific subsystems of the two organisms, which already exhibit gamma activity (similar embodied information in both). The cooperative neuronal activity registered in inter-brain research, and the so-called mirror neurons, are probably manifestations of this non-local neuronal coupling. In this manner, the experienced agent conveys information about an actual cognitive event, in one direction, to an organism still at the simple-reflexes stage of cognitive development, without interacting through sensory signals. Because ordinary sensory communication between the mother and fetus is not possible, non-local neuronal coupling is held to mediate environmental learning early in cognition.
The notion of non-local neuronal coupling fills a gap in knowledge, both in the Core Knowledge Theory and in the group of Externalist positions, about the very beginning of cognition, a gap also highlighted by the binding problem, the perception stability problem, the excitatory inputs problem, and the problem of morphogenesis. The nervous system of the young organism at the prenatal stage of development cannot, on its own, resolve the complexity of developing intentionality and perception at the beginning of cognitive development. Whether for the innate sensitivity to specific patterns of information (core domains, in the Core Knowledge Theory) or for pairing a relevant cue with a particular symbol saved in the sensorimotor structures (embodied information, in Externalism), an organism capable only of reflex responses would have to distinguish the relevant stimulus (an informative cue) from an environment filled with a cacophony of stimuli: electromagnetic waves, chemical interactions, and pressure fluctuations. The notion of non-local neuronal coupling explains the neurophysiological processes of Shared intentionality at the cellular level that give rise, in young organisms, to the innate sensitivity and/or embodied meanings during cognition. The Shared intentionality approach shows how, at different levels of interaction, from interpersonal dynamics to neuronal coupling, collaborative interaction emerges in mother-child pairs for sharing the essential sensory stimulus of an actual cognitive event. Finally, research has already shown that the magnitude of Shared intentionality can be assessed by emulating the mother-fetus communication model in dyads of mothers and children aged 2 to 10 years.
Key Topics of Study in Cognitive Development
Language Acquisition
A major, well-studied process and consequence of cognitive development is language acquisition. The traditional view was that this is the result of deterministic, human-specific genetic structures and processes. Other traditions, however, have emphasized the role of social experience in language learning. The relation of gene activity, experience, and language development is now recognized as incredibly complex and difficult to specify. Language development is sometimes separated into the learning of phonology (the systematic organization of sounds), morphology (the structure of linguistic units: root words, affixes, parts of speech, intonation, etc.), syntax (the rules of grammar within sentence structure), semantics (the study of meaning), and discourse or pragmatics (the relation between sentences). However, all of these aspects of language knowledge, which the linguist Noam Chomsky originally posited to be autonomous or separate, are now recognized to interact in complex ways.
It was not until 1962 that bilingualism was accepted as a contributing factor to cognitive development. A number of studies have shown how bilingualism contributes to the executive function of the brain, the main center at which cognitive development happens. According to Bialystok in "Bilingualism and the Development of Executive Function: The Role of Attention", children who are bilingual have to actively filter between the two languages to select the one they need to use, which in turn strengthens development in that center.
Other theories
Whorf's hypothesis
While working as a student of Edward Sapir, Benjamin Lee Whorf posited that a person's thinking depends on the structure and content of their social group's language. Per Whorf, language determines our thoughts and perceptions. For example, it used to be thought that the Greeks, who wrote from left to right, thought differently from the Egyptians, who wrote from right to left. Whorf's theory was so strict that he believed that if a word is absent from a language, then the individual is unaware of the object's existence. This idea is dramatized in George Orwell's Nineteen Eighty-Four, in which the ruling party slowly eliminates words from the citizens' vocabulary so that they are incapable of realizing what they are missing. The Whorfian hypothesis fails to recognize that people can still be aware of a concept or item even though they lack efficient coding to quickly identify the target information.
Quine's bootstrapping hypothesis
Willard Van Orman Quine argued that there are innate conceptual biases that enable the acquisition of language, concepts, and beliefs. Quine's theory follows nativist philosophical traditions, such as that of the European rationalist philosophers, for example Immanuel Kant.
Neo-Piagetian theories
Neo-Piagetian theories of cognitive development emphasize the role of information-processing mechanisms in cognitive development, such as attention control and working memory. They suggest that progression along Piagetian stages or other levels of cognitive development is a function of the strengthening of these control mechanisms and of capacity growth within the stages themselves.
Neuroscience
During development, especially in the first few years of life, children show interesting patterns of neural development and a high degree of neuroplasticity. Neuroplasticity, as explained by the World Health Organization, can be summed up in three points: any adaptive mechanism used by the nervous system to repair itself after injury; any means by which the nervous system can repair individually damaged central circuits; and any means by which the capacity of the central nervous system can adapt to new physiological conditions and environments. The relation of brain development and cognitive development is extremely complex and, since the 1990s, has been a growing area of research.
Cognitive development and motor development may also be closely interrelated. When a person experiences a neurodevelopmental disorder and their cognitive development is disturbed, adverse effects are often seen in motor development as well. The cerebellum, the part of the brain most responsible for motor skills, has been shown to have significant importance in cognitive functions, in the same way that the prefrontal cortex has important roles not only in cognitive abilities but also in the development of motor skills. To support this, there is evidence of close co-activation of the neocerebellum and the dorsolateral prefrontal cortex in functional neuroimaging, as well as of abnormalities seen in both the cerebellum and the prefrontal cortex in the same developmental disorders. In this way, motor development and cognitive development are closely interrelated, and neither can operate at full capacity when the other is impaired or delayed.
Cultural influences
From cultural psychologists' point of view, minds and culture shape each other. In other words, culture can influence brain structures, which then influence our interpretation of the culture.
The following examples reveal cultural variations in neural responses:
Figure-line task
Behavioral research has shown that one's strength in independent tasks (tasks focused on influencing others or oneself) or interdependent tasks (tasks in which one changes one's own behavior to favor others) differs based on cultural context. In general, East Asian cultures are more interdependent whereas Western cultures are more independent. Hedden et al. assessed functional magnetic resonance imaging (fMRI) responses of East Asians and Americans while they performed independent (absolute) or interdependent (relative) tasks. The study showed that participants used regions of the brain associated with attentional control when they had to perform culturally incongruent tasks. In other words, the neural paths used for the same task were different for Americans and East Asians.
Transcultural neuroimaging studies
Recent transcultural neuroimaging studies have demonstrated that one's cultural background can influence the neural activity that underlies both high-level (for example, social cognition) and low-level (for example, perception) cognitive functions. Studies have demonstrated that groups that come from different cultures, or that have been exposed to culturally different stimuli, show differences in neural activity. For example, differences were found in the activity of the premotor cortex during mental calculation and in that of the ventromedial prefrontal cortex (VMPFC) during trait judgments of one's mother among people with different cultural backgrounds. In conclusion, since differences were found in both high-level and low-level cognition, one can assume that our brain's activity is strongly and, at least in part, constitutionally shaped by its sociocultural context.
Understanding of others' intentions
Kobayashi et al. compared American English monolingual and Japanese-English bilingual children's brain responses when understanding others' intentions through false-belief story and cartoon tasks. They found universal activation of the bilateral ventromedial prefrontal cortex in theory-of-mind tasks. However, American children showed greater activity in the left inferior frontal gyrus during the tasks, whereas Japanese children had greater activity in the right inferior frontal gyrus during the Japanese theory-of-mind tasks. In conclusion, these examples suggest that the brain's neural activities are not universal but are culture dependent.
In underrepresented groups
Deaf and hard-of-hearing
Being deaf or hard-of-hearing has been noted to impact cognitive development, because hearing loss affects social development, language acquisition, and how the surrounding culture reacts to a deaf child. Cognitive development, in terms of academic achievement, reading development, language development, performance on standardized measures of intelligence, visual-spatial and memory skills, development of conceptual skills, and neuropsychological function, depends on the child's primary language of communication, either American Sign Language or English, as well as on whether the child is able to communicate and use that communication modality as a language. There is some research pointing to deficits in the development of theory of mind in children who are deaf and hard-of-hearing, which may be due to a lack of early conversational experience. Other research points to lower scores on the Wechsler Intelligence Scale for Children, especially on the Verbal Comprehension Index, due to differences in cultural knowledge acquisition.
Transgender people
Since the 2010s there has been an increase in research into how transgender people fit into cognitive development theory. At the earliest, transgender children can begin to socially transition during identity exploration. In 2015, Kristina Olson and colleagues studied transgender youth in comparison with their cisgender siblings and with unrelated cisgender children. The children completed the Implicit Association Test (IAT), a reaction-time measure of automatic associations, used in this study to gauge a child's gender identification. The transgender children's results corresponded to their expressed gender, and their behaviors also related to their results; for instance, the transgender boys enjoyed foods and activities typically associated with and enjoyed by cisgender boys. The article reports that the researchers found the children were not confused, deceptive, or oppositional about their gender identity, and that they responded with actions typical of that identity.
References
Further reading
Klausmeier, Herbert J. & Allen, Patricia S. Cognitive Development of Children and Youth: A Longitudinal Study. 1978. pp. 3-5, 83, 91-93, 95-96.
McShane, John. Cognitive Development: An Information Processing Approach. 1991. pp. 22-24, 140-141, 156-157.
Begley, Sharon (1996). Your Child's Brain. Newsweek.
Cherry, Kendra (2012). Erikson's Theory of Psychosocial Development. Psychosocial Development in Infancy and Early Childhood.
Freud, Lisa (2010). Developmental Cognitive Psychology, Behavioral Neuroscience, and Psychobiology Program. Eunice Kennedy Shriver National Institute of Child Health and Human Development.
Davies, Kevin (2001). Nature vs. Nurture Revisited. NOVA.
Interdisciplinarity
Interdisciplinarity or interdisciplinary studies involves the combination of multiple academic disciplines into one activity (e.g., a research project). It draws knowledge from several fields such as sociology, anthropology, psychology, and economics. It is related to an interdiscipline or an interdisciplinary field, which is an organizational unit that crosses traditional boundaries between academic disciplines or schools of thought, as new needs and professions emerge. Large engineering teams are usually interdisciplinary, as a power station, a mobile phone, or another such project requires the melding of several specialties. However, the term "interdisciplinary" is sometimes confined to academic settings.
The term interdisciplinary is applied within education and training pedagogies to describe studies that use the methods and insights of several established disciplines or traditional fields of study. Interdisciplinarity involves researchers, students, and teachers in the goals of connecting and integrating several academic schools of thought, professions, or technologies, along with their specific perspectives, in the pursuit of a common task. The epidemiology of HIV/AIDS or global warming, for example, requires an understanding of diverse disciplines to solve complex problems. Interdisciplinarity may be applied where the subject is felt to have been neglected or even misrepresented in the traditional disciplinary structure of research institutions, for example, women's studies or ethnic area studies. Interdisciplinarity can likewise be applied to complex subjects that can only be understood by combining the perspectives of two or more fields.
The adjective interdisciplinary is most often used in educational circles when researchers from two or more disciplines pool their approaches and modify them so that they are better suited to the problem at hand, including the case of the team-taught course where students are required to understand a given subject in terms of multiple traditional disciplines. Interdisciplinary education fosters cognitive flexibility and prepares students to tackle complex, real-world problems by integrating knowledge from multiple fields. This approach emphasizes active learning, critical thinking, and problem-solving skills, equipping students with the adaptability needed in an increasingly interconnected world. For example, the subject of land use may appear differently when examined by different disciplines, for instance, biology, chemistry, economics, geography, and politics.
Development
Although "interdisciplinary" and "interdisciplinarity" are frequently viewed as twentieth-century terms, the concept has historical antecedents, most notably in Greek philosophy. Julie Thompson Klein attests that "the roots of the concepts lie in a number of ideas that resonate through modern discourse—the ideas of a unified science, general knowledge, synthesis and the integration of knowledge", while Giles Gunn says that Greek historians and dramatists took elements from other realms of knowledge (such as medicine or philosophy) to further understand their own material. The building of Roman roads required men who understood surveying, materials science, logistics, and several other disciplines. Any broad-minded humanist project involves interdisciplinarity, and history shows many such cases, such as seventeenth-century Leibniz's task of creating a system of universal justice, which required linguistics, economics, management, ethics, legal philosophy, politics, and even sinology.
Interdisciplinary programs sometimes arise from a shared conviction that the traditional disciplines are unable or unwilling to address an important problem. For example, social science disciplines such as anthropology and sociology paid little attention to the social analysis of technology throughout most of the twentieth century. As a result, many social scientists with interests in technology have joined science, technology and society programs, which are typically staffed by scholars drawn from numerous disciplines. They may also arise from new research developments, such as nanotechnology, which cannot be addressed without combining the approaches of two or more disciplines. Examples include quantum information processing, an amalgamation of quantum physics and computer science, and bioinformatics, which combines molecular biology with computer science. Sustainable development as a research area deals with problems requiring analysis and synthesis across economic, social and environmental spheres, often an integration of multiple social and natural science disciplines. Interdisciplinary research is also key to the study of the health sciences, for example in studying optimal solutions to diseases. Some institutions of higher education offer accredited degree programs in Interdisciplinary Studies.
At another level, interdisciplinarity is seen as a remedy to the harmful effects of excessive specialization and isolation in information silos. On some views, however, interdisciplinarity is entirely indebted to those who specialize in one field of study; that is, without specialists, interdisciplinarians would have no information and no leading experts to consult. Others place the focus of interdisciplinarity on the need to transcend disciplines, viewing excessive specialization as problematic both epistemologically and politically. When interdisciplinary collaboration or research results in new solutions to problems, much information is given back to the various disciplines involved. Therefore, both disciplinarians and interdisciplinarians may be seen as standing in a complementary relation to one another.
Barriers
Because most participants in interdisciplinary ventures were trained in traditional disciplines, they must learn to appreciate differing perspectives and methods. For example, a discipline that places more emphasis on quantitative rigor may produce practitioners who are more scientific in their training than others; in turn, colleagues in "softer" disciplines may associate quantitative approaches with difficulty in grasping the broader dimensions of a problem and with lower rigor in theoretical and qualitative argumentation. An interdisciplinary program may not succeed if its members remain stuck in their disciplines (and in disciplinary attitudes). Those who lack experience in interdisciplinary collaboration may also not fully appreciate the intellectual contribution of colleagues from other disciplines. From the disciplinary perspective, however, much interdisciplinary work may be seen as "soft", lacking in rigor, or ideologically motivated; these beliefs place barriers in the career paths of those who choose interdisciplinary work. For example, interdisciplinary grant applications are often refereed by peer reviewers drawn from established disciplines; as a result, interdisciplinary researchers may experience difficulty getting funding for their research.
In addition, untenured researchers know that, when they seek promotion and tenure, it is likely that some of the evaluators will lack commitment to interdisciplinarity. They may fear that making a commitment to interdisciplinary research will increase the risk of being denied tenure. Interdisciplinary programs may also fail if they are not given sufficient autonomy. For example, interdisciplinary faculty are usually recruited to a joint appointment, with responsibilities in both an interdisciplinary program (such as women's studies) and a traditional discipline (such as history). If the traditional discipline makes the tenure decisions, new interdisciplinary faculty will be hesitant to commit themselves fully to interdisciplinary work. Other barriers include the generally disciplinary orientation of most scholarly journals, leading to the perception, if not the fact, that interdisciplinary research is hard to publish. In addition, since traditional budgetary practices at most universities channel resources through the disciplines, it becomes difficult to account for a given scholar or teacher's salary and time. During periods of budgetary contraction, the natural tendency to serve the primary constituency (i.e., students majoring in the traditional discipline) makes resources scarce for teaching and research comparatively far from the center of the discipline as traditionally understood. For these same reasons, the introduction of new interdisciplinary programs is often resisted because it is perceived as competition for diminishing funds.
Due to these and other barriers, interdisciplinary research areas are strongly motivated to become disciplines themselves. If they succeed, they can establish their own research funding programs and make their own tenure and promotion decisions. In so doing, they lower the risk of entry. Examples of former interdisciplinary research areas that have become disciplines, many of them named for their parent disciplines, include neuroscience, cybernetics, biochemistry and biomedical engineering. These new fields are occasionally referred to as "interdisciplines". On the other hand, even though interdisciplinary activities are now a focus of attention for institutions promoting learning and teaching, as well as for organizational and social entities concerned with education, in practice they face complex barriers, serious challenges and criticism. The most important obstacles and challenges faced by interdisciplinary activities in the past two decades can be divided into "professional", "organizational", and "cultural" obstacles.
Interdisciplinary studies and studies of interdisciplinarity
An initial distinction should be made between interdisciplinary studies, which can be found spread across the academy today, and the study of interdisciplinarity, which involves a much smaller group of researchers. The former is instantiated in thousands of research centers across the US and the world. The latter has one US organization, the Association for Interdisciplinary Studies (founded in 1979), and two international organizations, the International Network of Inter- and Transdisciplinarity (founded in 2010) and the Philosophy of/as Interdisciplinarity Network (founded in 2009).
The US research institute devoted to the theory and practice of interdisciplinarity, the Center for the Study of Interdisciplinarity at the University of North Texas, was founded in 2008 but closed on 1 September 2014 as the result of administrative decisions at the University of North Texas.
An interdisciplinary study is an academic program or process seeking to synthesize broad perspectives, knowledge, skills, interconnections, and epistemology in an educational setting. Interdisciplinary programs may be founded in order to facilitate the study of subjects which have some coherence, but which cannot be adequately understood from a single disciplinary perspective (for example, women's studies or medieval studies). More rarely, and at a more advanced level, interdisciplinarity may itself become the focus of study, in a critique of institutionalized disciplines' ways of segmenting knowledge.
In contrast, studies of interdisciplinarity raise to self-consciousness questions about how interdisciplinarity works, the nature and history of disciplinarity, and the future of knowledge in post-industrial society. Researchers at the Center for the Study of Interdisciplinarity have made the distinction between philosophy 'of' and 'as' interdisciplinarity, the former identifying a new, discrete area within philosophy that raises epistemological and metaphysical questions about the status of interdisciplinary thinking, and the latter pointing toward a philosophical practice that is sometimes called 'field philosophy'.
Perhaps the most common complaint regarding interdisciplinary programs, by supporters and detractors alike, is the lack of synthesis; that is, students are provided with multiple disciplinary perspectives but are not given effective guidance in resolving the conflicts and achieving a coherent view of the subject. Others have argued that the very idea of synthesis or integration of disciplines presupposes questionable politico-epistemic commitments. Critics of interdisciplinary programs feel that the ambition is simply unrealistic, given the knowledge and intellectual maturity of all but the exceptional undergraduate; some defenders concede the difficulty, but insist that cultivating interdisciplinarity as a habit of mind, even at that level, is both possible and essential to the education of informed and engaged citizens and leaders capable of analyzing, evaluating, and synthesizing information from multiple sources in order to render reasoned decisions.
While much has been written on the philosophy and promise of interdisciplinarity in academic programs and professional practice, social scientists are increasingly interrogating academic discourses on interdisciplinarity, as well as how interdisciplinarity actually works, and does not, in practice. Some have shown, for example, that some interdisciplinary enterprises that aim to serve society can produce deleterious outcomes for which no one can be held to account.
Politics of interdisciplinary studies
Since 1998, there has been an ascendancy in the value placed on interdisciplinary research and teaching and a growth in the number of bachelor's degrees awarded at U.S. universities classified as multi- or interdisciplinary studies. The number of interdisciplinary bachelor's degrees awarded annually rose from 7,000 in 1973 to 30,000 a year by 2005, according to data from the National Center for Education Statistics (NCES). In addition, educational leaders from the Boyer Commission to Carnegie's president Vartan Gregorian to Alan I. Leshner, CEO of the American Association for the Advancement of Science, have advocated for interdisciplinary rather than disciplinary approaches to problem-solving in the 21st century. This has been echoed by federal funding agencies, particularly the National Institutes of Health under the direction of Elias Zerhouni, who has advocated that grant proposals be framed more as interdisciplinary collaborative projects than as single-researcher, single-discipline ones.
At the same time, many thriving, longstanding bachelor's programs in interdisciplinary studies, some in existence for 30 or more years, have been closed down in spite of healthy enrollment. Examples include Arizona International (formerly part of the University of Arizona), the School of Interdisciplinary Studies at Miami University, and the Department of Interdisciplinary Studies at Wayne State University; others, such as the Department of Interdisciplinary Studies at Appalachian State University and George Mason University's New Century College, have been cut back. Stuart Henry has seen this trend as part of the hegemony of the disciplines in their attempt to recolonize the experimental knowledge production of otherwise marginalized fields of inquiry, a reaction he attributes to threat perceptions based on the ascendancy of interdisciplinary studies over traditional academia.
Examples
Communication science: Communication studies takes up theories, models, concepts, etc. from other, independent disciplines such as sociology, political science and economics and thus decisively develops them further.
Environmental science: Environmental science is an interdisciplinary earth science aimed at addressing environmental issues such as global warming and pollution, and involves the use of a wide range of scientific disciplines including geology, chemistry, physics, ecology, and oceanography. Faculty members of environmental programs often collaborate in interdisciplinary teams to solve complex global environmental problems. Those who study areas of environmental policy such as environmental law, sustainability, and environmental justice may also seek knowledge in the environmental sciences to better develop their expertise and understanding in their fields.
Knowledge management: The knowledge management discipline exists as a cluster of divergent schools of thought under an overarching knowledge management umbrella, building on works in computer science, economics, human resource management, information systems, organizational behavior, philosophy, psychology, and strategic management.
Liberal arts education: A select realm of disciplines that cut across the humanities, social sciences, and hard sciences, initially intended to provide a well-rounded education. Several graduate programs exist in some form of Master of Arts in Liberal Studies to continue to offer this interdisciplinary course of study.
Materials science: A field that combines the scientific and engineering aspects of materials, particularly solids. It covers the design, discovery and application of new materials by incorporating elements of physics, chemistry, and engineering.
Permaculture: A holistic design science that provides a framework for making design decisions in any sphere of human endeavor, but especially in land use and resource security.
Provenance research: Interdisciplinary research comes into play when clarifying the path of artworks into public and private art collections, and also in relation to human remains in natural history collections.
Sports science: Sport science is an interdisciplinary science that researches the problems and phenomena in the field of sport and movement in cooperation with a number of other sciences, such as sociology, ethics, biology, medicine, biomechanics or pedagogy.
Transport sciences: The transport sciences deal with the relevant problems and events of the world of transport and cooperate with the specialised legal, ecological, technical, psychological or pedagogical disciplines in working out the changes of place of people, goods and messages that characterise transport. (Hendrik Ammoser and Mirko Hoppe: Glossary of Transport and Transport Sciences, published in the series Discussion Papers from the Institute of Economics and Transport, Technische Universität Dresden, 2006.)
Venture research: Venture research is an interdisciplinary research area located in the human sciences that deals with the conscious entering into and experiencing of borderline situations. For this purpose, the findings of evolutionary theory, cultural anthropology, the social sciences, behavioral research, differential psychology, ethics or pedagogy are cooperatively processed and evaluated. (Siegbert A. Warwitz: Vom Sinn des Wagens. Why people take on dangerous challenges. In: German Alpine Association (ed.): Berg 2006. Tyrolia, Munich-Innsbruck-Bolzano, pp. 96-111.)
Historical examples
There are many examples of a particular idea arising in different disciplines at almost the same time. One case is the shift from the approach of focusing on "specialized segments of attention" (adopting one particular perspective) to the idea of "instant sensory awareness of the whole", an attention to the "total field", a "sense of the whole pattern, of form and function as a unity", an "integral idea of structure and configuration". This happened in painting (with cubism), physics, poetry, communication and educational theory. According to Marshall McLuhan, this paradigm shift was due to the passage from an era shaped by mechanization, which brought sequentiality, to an era shaped by the instant speed of electricity, which brought simultaneity.
Efforts to simplify and defend the concept
An article in the Social Science Journal attempts to provide a simple, common-sense definition of interdisciplinarity, bypassing the difficulties of defining that concept and obviating the need for such related concepts as transdisciplinarity, pluridisciplinarity, and multidisciplinarity. On this account, the interdisciplinary richness of any two instances of knowledge, research, or education can be ranked by weighing four variables: the number of disciplines involved, the "distance" between them, the novelty of any particular combination, and their extent of integration.
Interdisciplinary knowledge and research are important because:
"Creativity often requires interdisciplinary knowledge.
Immigrants often make important contributions to their new field.
Disciplinarians often commit errors which can be best detected by people familiar with two or more disciplines.
Some worthwhile topics of research fall in the interstices among the traditional disciplines.
Many intellectual, social, and practical problems require interdisciplinary approaches.
Interdisciplinary knowledge and research serve to remind us of the unity-of-knowledge ideal.
Interdisciplinarians enjoy greater flexibility in their research.
More so than narrow disciplinarians, interdisciplinarians often treat themselves to the intellectual equivalent of traveling in new lands.
Interdisciplinarians may help breach communication gaps in the modern academy, thereby helping to mobilize its enormous intellectual resources in the cause of greater social rationality and justice.
By bridging fragmented disciplines, interdisciplinarians might play a role in the defense of academic freedom."
See also
Commensurability (philosophy of science)
Double degree
Encyclopedism
Holism
Holism in science
Integrative learning
Interdiscipline
Interdisciplinary arts
Interdisciplinary teaching
Interprofessional education
Meta-functional expertise
Methodology
Polymath
Science of team science
Social ecological model
Science and technology studies (STS)
Synoptic philosophy
Systems theory
Thematic learning
Periodic table of human sciences in Tinbergen's four questions
Transdisciplinarity
References
Further reading
Association for Interdisciplinary Studies
Center for the Study of Interdisciplinarity
Centre for Interdisciplinary Research in the Arts (University of Manchester)
College for Interdisciplinary Studies, University of British Columbia, Vancouver, British Columbia, Canada
Frank, Roberta: "'Interdisciplinarity': The First Half Century", Issues in Integrative Studies 6 (1988): 139-151.
Frodeman, R., Klein, J.T., and Mitcham, C. Oxford Handbook of Interdisciplinarity. Oxford University Press, 2010.
The Evergreen State College, Olympia, Washington
Gram Vikas (2007) Annual Report, p. 19.
Hang Seng Centre for Cognitive Studies
Indiresan, P.V. (1990) Managing Development: Decentralisation, Geographical Socialism and Urban Replication. India: Sage
Interdisciplinary Arts Department, Columbia College Chicago
Interdisciplinarity and tenure
Interdisciplinary Studies Project, Harvard University School of Education, Project Zero
Klein, Julie Thompson (1996) Crossing Boundaries: Knowledge, Disciplinarities, and Interdisciplinarities (University Press of Virginia)
Klein, Julie Thompson (2006) "Resources for interdisciplinary studies", Change (March/April), 52-58
Klein, Julie Thompson and Thorsten Philipp (2023), "Interdisciplinarity", in Handbook Transdisciplinary Learning, eds. Thorsten Philipp and Tobias Schmohl, 195-204. Bielefeld: transcript. doi: 10.14361/9783839463475-021
Kockelmans, Joseph J., editor (1979) Interdisciplinarity and Higher Education, The Pennsylvania State University Press
Yifang Ma, Roberta Sinatra, Michael Szell, Interdisciplinarity: A Nobel Opportunity, November 2018
Medicus, Gerhard: Being Human – Bridging the Gap between the Sciences of Body and Mind. Berlin: VWB, 2017
Moran, Joe (2002). Interdisciplinarity.
Morson, Gary Saul and Morton O. Schapiro (2017). Cents and Sensibility: What Economics Can Learn from the Humanities. (Princeton University Press)
NYU Gallatin School of Individualized Study, New York, NY
Poverty Action Lab
Rhoten, D. (2003). A multi-method analysis of the social and technical conditions for interdisciplinary collaboration.
School of Social Ecology at the University of California, Irvine
Siskin, L.S. & Little, J.W. (1995). The Subjects in Question. Teachers College Press. (On the departmental organization of high schools and efforts to change it.)
Stiglitz, Joseph (2002) Globalisation and its Discontents, W.W. Norton and Company
Sumner, A. and M. Tribe (2008) International Development Studies: Theories and Methods in Research and Practice, London: Sage
Thorbecke, Eric (2006) "The Evolution of the Development Doctrine, 1950-2005". UNU-WIDER Research Paper No. 2006/155. United Nations University, World Institute for Development Economics Research
Trans- & inter-disciplinary science approaches – a guide to online resources on integration and trans- and inter-disciplinary approaches
Truman State University's Interdisciplinary Studies Program
Peter Weingart and Nico Stehr, eds. 2000. Practicing Interdisciplinarity (University of Toronto Press)
External links
Association for Interdisciplinary Studies
National Science Foundation Workshop Report: Interdisciplinary Collaboration in Innovative Science and Engineering Fields
Rethinking Interdisciplinarity online conference, organized by the Institut Nicod, CNRS, Paris
Center for the Study of Interdisciplinarity at the University of North Texas
Labyrinthe. Atelier interdisciplinaire, a journal (in French), with a special issue on La Fin des Disciplines?
Rupkatha Journal on Interdisciplinary Studies in Humanities: An Online Open Access E-Journal, publishing articles on a number of areas
Article about interdisciplinary modeling (in French with an English abstract)
Wolf, Dieter. Unity of Knowledge, an interdisciplinary project
Soka University of America has no disciplinary departments and emphasizes interdisciplinary concentrations in the Humanities, Social and Behavioral Sciences, International Studies, and Environmental Studies.
SystemsX.ch – The Swiss Initiative in Systems Biology
Tackling Your Inner 5-Year-Old: Saving the world requires an interdisciplinary perspective
Therapist
A therapist is a person who offers any kind of therapy. Therapists are trained professionals in fields such as psychology, social work, and counseling, and they help individuals with a variety of mental and physical issues.
Meaning
Therapist refers to a trained professional engaged in providing any kind of treatment or rehabilitation services.
Reasons
Therapists can help in addressing a range of issues, including:
anxiety
behavioral issues
depression
managing life changes
eating disorders
loneliness
grief
self-esteem
negative thinking
chronic illness management
sleep disorders
gender or sexuality
relationships
social issues
stress
addiction
thoughts of suicide or self-harm
trauma
Types
The following are various types of therapists:
addiction therapists
art therapists
child therapists
massage therapists
marriage and family therapists
music therapists
occupational therapists
physical therapists
psychotherapists
yoga therapists
Psychotherapists come from varied fields and include psychologists, social workers, psychiatric nurses, and psychiatrists.
Specialization
Therapists commonly specialize in the following areas:
behavioral disorders
community mental health
schooling and career
rehabilitation
substance abuse
autism and autism awareness
Education
A therapist or licensed counselor is required to pass a state licensure exam and is generally expected to hold a master's degree and to have completed an internship under a practicing supervisor. Some counselors holding a bachelor's degree also practice under the guidance of a licensed therapist or psychologist. Some counselors also pursue additional training, for example in art therapy or addiction prevention.
Benefits
The following are benefits of consulting a therapist:
improvement of physical and mental health
awareness of one's thoughts and their effects on behavior
a practical understanding of the relationship between thoughts and actions
friendly support and understanding
in-depth insight into one's experiences and behaviors
self-awareness
improved social relationships
new skills for managing stress
the chance to work through issues such as fears and worries with a neutral person
References
Behavioral neuroscience
Behavioral neuroscience, also known as biological psychology, biopsychology, or psychobiology, is part of the broad, interdisciplinary field of neuroscience, with its primary focus being the biological and neural mechanisms underlying behavior. Cognitive neuroscience is similar to behavioral neuroscience in that both fields study the neurobiological functions related to psychology, as in experiences and behaviors. Behavioral neuroscientists examine the biological bases of behavior through research that involves neuroanatomical substrates, environmental and genetic factors, effects of lesions and electrical stimulation, developmental processes, recording of electrical activity, neurotransmitters, hormonal influences, chemical components, and the effects of drugs. Important topics of consideration for neuroscientific research in behavior include learning and memory, sensory processes, motivation and emotion, as well as the genetic and molecular substrates concerning the biological bases of behavior.
History
Behavioral neuroscience as a scientific discipline emerged from a variety of scientific and philosophical traditions in the 18th and 19th centuries. René Descartes proposed physical models to explain animal as well as human behavior. Descartes suggested that the pineal gland, a midline unpaired structure in the brain of many organisms, was the point of contact between mind and body. Descartes also elaborated a theory in which the pneumatics of bodily fluids could explain reflexes and other motor behavior. This theory was inspired by moving statues in a garden in Paris. Other philosophers also helped give birth to psychology. One of the earliest textbooks in the new field, The Principles of Psychology by William James, argues that the scientific study of psychology should be grounded in an understanding of biology.
The emergence of psychology and behavioral neuroscience as legitimate sciences can be traced to the emergence of physiology from anatomy, particularly neuroanatomy. Physiologists conducted experiments on living organisms, a practice that was distrusted by the dominant anatomists of the 18th and 19th centuries. The influential work of Claude Bernard, Charles Bell, and William Harvey helped to convince the scientific community that reliable data could be obtained from living subjects.
Even before the 18th and 19th centuries, however, behavioral neuroscience was beginning to take form, as far back as 1700 B.C. The question that seems to continually arise is: what is the connection between the mind and the body? The debate is formally referred to as the mind-body problem. There are two major schools of thought that attempt to resolve the mind-body problem: monism and dualism. Plato and Aristotle are two of the several philosophers who participated in this debate. Plato believed that the brain was where all mental thought and processes happened. In contrast, Aristotle believed the brain served the purpose of cooling down the emotions derived from the heart. The mind-body problem was a stepping stone toward attempting to understand the connection between the mind and the body.
Another debate arose about localization of function, or functional specialization, versus equipotentiality, which played a significant role in the development of behavioral neuroscience. As a result of research on localization of function, many famous figures within psychology have come to various conclusions.
Wilder Penfield, working with Theodore Rasmussen, was able to develop a map of the cerebral cortex through studying epileptic patients. Research on localization of function has led behavioral neuroscientists to a better understanding of which parts of the brain control behavior. This is best exemplified by the case study of Phineas Gage.
The term "psychobiology" has been used in a variety of contexts, emphasizing the importance of biology, the discipline that studies organic, neural and cellular modifications in behavior, plasticity in neuroscience, and biological diseases in all aspects; in addition, biology focuses on and analyzes behavior, and all the subjects it is concerned with, from a scientific point of view. In this context, psychology serves as a complementary, but important, discipline in the neurobiological sciences; its role in these questions is that of a social tool that backs up the main or strongest biological science. The term "psychobiology" was first used in its modern sense by Knight Dunlap in his book An Outline of Psychobiology (1914). Dunlap was also the founder and editor-in-chief of the journal Psychobiology. In the announcement of that journal, Dunlap writes that the journal will publish research "...bearing on the interconnection of mental and physiological functions", which describes the field of behavioral neuroscience even in its modern sense.
Relationship to other fields of psychology and biology
In many cases, humans may serve as experimental subjects in behavioral neuroscience experiments; however, a great deal of the experimental literature in behavioral neuroscience comes from the study of non-human species, most frequently rats, mice, and monkeys. As a result, a critical assumption in behavioral neuroscience is that organisms share biological and behavioral similarities, enough to permit extrapolations across species. This allies behavioral neuroscience closely with comparative psychology, ethology, evolutionary biology, and neurobiology. Behavioral neuroscience also has paradigmatic and methodological similarities to neuropsychology, which relies heavily on the study of the behavior of humans with nervous system dysfunction (i.e., a non-experimentally based biological manipulation). Synonyms for behavioral neuroscience include biopsychology, biological psychology, and psychobiology. Physiological psychology is a subfield of behavioral neuroscience, with an appropriately narrower definition.
Research methods
The distinguishing characteristic of a behavioral neuroscience experiment is that either the independent variable of the experiment is biological, or some dependent variable is biological. In other words, the nervous system of the organism under study is permanently or temporarily altered, or some aspect of the nervous system is measured (usually to be related to a behavioral variable).
Disabling or decreasing neural function
Lesions – A classic method in which a brain region of interest is naturally or intentionally destroyed to observe any resulting changes, such as degraded or enhanced performance on some behavioral measure. Lesions can be placed with relatively high accuracy thanks to a variety of brain "atlases" which provide maps of brain regions in 3-dimensional stereotactic coordinates.
Surgical lesions – Neural tissue is destroyed by removing it surgically.
Electrolytic lesions – Neural tissue is destroyed through the application of electrical shock trauma.
Chemical lesions – Neural tissue is destroyed by the infusion of a neurotoxin.
Temporary lesions – Neural tissue is temporarily disabled by cooling or by the use of anesthetics such as tetrodotoxin.
Transcranial magnetic stimulation – A newer technique, usually used with human subjects, in which a magnetic coil applied to the scalp causes unsystematic electrical activity in nearby cortical neurons, which can be experimentally analyzed as a functional lesion.
Synthetic ligand injection – A receptor activated solely by a synthetic ligand (RASSL), or Designer Receptor Exclusively Activated by Designer Drugs (DREADD), permits spatial and temporal control of G protein signaling in vivo. These systems utilize G protein-coupled receptors (GPCRs) engineered to respond exclusively to synthetic small-molecule ligands, such as clozapine N-oxide (CNO), and not to their natural ligand(s). RASSLs represent a GPCR-based chemogenetic tool. Upon activation, these synthetic ligands can decrease neural function through G-protein signaling, for example via potassium currents that attenuate neural activity.
Optogenetic inhibition – A light-activated inhibitory protein is expressed in cells of interest. Powerful millisecond-timescale neuronal inhibition is instigated upon stimulation by the appropriate frequency of light delivered via fiber optics or implanted LEDs in the case of vertebrates, or via external illumination for small, sufficiently translucent invertebrates. Bacterial halorhodopsins and proton pumps are the two classes of proteins used for inhibitory optogenetics, achieving inhibition by increasing cytoplasmic levels of halides or decreasing the cytoplasmic concentration of protons, respectively.
Enhancing neural function
Electrical stimulation – A classic method in which neural activity is enhanced by application of a small electric current (too small to cause significant cell death).
Psychopharmacological manipulations – A chemical receptor antagonist induces changes in neural activity by interfering with neurotransmission. Antagonists can be delivered systemically (such as by intravenous injection) or locally (intracerebrally), during a surgical procedure, into the ventricles or into specific brain structures. For example, the NMDA antagonist AP5 has been shown to inhibit the initiation of long-term potentiation of excitatory synaptic transmission (in rodent fear conditioning), which is believed to be a vital mechanism in learning and memory.
Synthetic ligand injection – Likewise, Gq-DREADDs can be used to modulate cellular function by activation of neurons in brain regions such as the hippocampus. This activation results in the amplification of γ-rhythms, which increases motor activity.
Transcranial magnetic stimulation – In some cases (for example, studies of motor cortex), this technique can be analyzed as having a stimulatory effect (rather than as a functional lesion).
Optogenetic excitation – A light-activated excitatory protein is expressed in select cells. Channelrhodopsin-2 (ChR2), a light-activated cation channel, was the first bacterial opsin shown to excite neurons in response to light, though a number of new excitatory optogenetic tools have now been generated by improving and imparting novel properties to ChR2.
Measuring neural activity
Optical techniques – Optical methods for recording neuronal activity rely on indicators that modify the optical properties of neurons in response to the cellular events associated with action potentials or neurotransmitter release.
Voltage-sensitive dyes (VSDs) were among the earliest methods for optically detecting neuronal activity. VSDs change their fluorescent properties in response to a voltage change across the neuron's membrane, rendering both sub-threshold and supra-threshold (action potential) electrical activity detectable. Genetically encoded voltage-sensitive fluorescent proteins have also been developed. Calcium imaging relies on dyes or genetically encoded proteins that fluoresce upon binding to the calcium that is transiently present during an action potential. Synapto-pHluorin is a technique that relies on a fusion protein combining a synaptic vesicle membrane protein and a pH-sensitive fluorescent protein. Upon synaptic vesicle release, the chimeric protein is exposed to the higher pH of the synaptic cleft, causing a measurable change in fluorescence. Single-unit recording – A method whereby an electrode is introduced into the brain of a living animal to detect electrical activity generated by the neurons adjacent to the electrode tip. Normally this is performed with sedated animals, but sometimes it is performed on awake animals engaged in a behavioral event, such as a thirsty rat whisking a particular sandpaper grade previously paired with water, in order to measure the corresponding patterns of neuronal firing at the decision point. Multielectrode recording – The use of a bundle of fine electrodes to record the simultaneous activity of up to hundreds of neurons. Functional magnetic resonance imaging – fMRI, a technique most frequently applied to human subjects, in which changes in cerebral blood flow can be detected in an MRI apparatus and are taken to indicate relative activity of larger-scale brain regions (i.e., on the order of hundreds of thousands of neurons). Positron emission tomography – PET is a 3-D nuclear medicine examination that detects pairs of photons produced when positrons emitted by injected radioisotopes, such as fluorine-18, annihilate with electrons in tissue. PET imaging can reveal pathological processes before anatomic changes appear, making it important for detecting, diagnosing and characterising many pathologies. Electroencephalography – EEG, and the derivative technique of event-related potentials, in which scalp electrodes monitor the average activity of neurons in the cortex (again, used most frequently with human subjects). Recording systems use different types of electrodes, such as needle electrodes and saline-based electrodes. EEG allows for the investigation of mental disorders, sleep disorders and physiology, and it can monitor brain development and cognitive engagement. Functional neuroanatomy – A more complex counterpart of phrenology. The expression of some anatomical marker is taken to reflect neural activity. For example, the expression of immediate early genes is thought to be caused by vigorous neural activity. Likewise, the injection of 2-deoxyglucose prior to some behavioral task can be followed by anatomical localization of that chemical; it is taken up by neurons that are electrically active. Magnetoencephalography – MEG shows the functioning of the human brain through the measurement of electromagnetic activity. Measuring the magnetic fields created by the electric currents flowing within neurons identifies brain activity associated with various human functions in real time, with millimeter spatial accuracy. Clinicians can noninvasively obtain data to help them assess neurological disorders and plan surgical treatments. 
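The logic behind several of these recording techniques can be made concrete with a small, purely illustrative sketch. The code below assumes nothing beyond synthetic data: the sampling rate, noise level, spike waveform and threshold rule are all invented for illustration rather than taken from any particular recording system. It mimics the core step of single-unit recording, namely detecting action potentials in a noisy voltage trace by threshold crossing and summarizing them as a firing rate.

```python
# Illustrative only: build a synthetic "extracellular" trace with injected spikes,
# detect spikes by threshold crossing, and estimate a firing rate.
import numpy as np

rng = np.random.default_rng(0)

fs = 20_000                      # sampling rate in Hz (assumed)
duration = 2.0                   # seconds of simulated recording
n = int(fs * duration)

# Background noise plus stereotyped spike waveforms at random times.
trace = rng.normal(0.0, 10.0, size=n)                    # noise, in arbitrary microvolts
spike_times = rng.uniform(0.05, duration - 0.05, size=30)
spike_shape = -80.0 * np.exp(-0.5 * (np.arange(-20, 21) / 5.0) ** 2)  # ~1 ms negative dip
for st in spike_times:
    i = int(st * fs)
    trace[i - 20:i + 21] += spike_shape

# Detect spikes: threshold at k robust standard deviations below the median.
k = 5.0
sigma = np.median(np.abs(trace - np.median(trace))) / 0.6745   # robust noise estimate
threshold = np.median(trace) - k * sigma
below = trace < threshold
crossings = np.flatnonzero(below[1:] & ~below[:-1]) + 1        # first sample of each event

firing_rate = len(crossings) / duration
print(f"Detected {len(crossings)} spikes (~{firing_rate:.1f} Hz) "
      f"against {len(spike_times)} injected events.")
```

Real pipelines go further, most notably by spike sorting, i.e., clustering the detected waveforms so that spikes can be attributed to the individual neurons near the electrode tip.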
Genetic techniques QTL mapping – The influence of a gene on some behavior can be statistically inferred by studying inbred strains of some species, most commonly mice. The recent sequencing of the genomes of many species, most notably mice, has facilitated this technique. Selective breeding – Organisms, often mice, may be bred selectively among inbred strains to create a recombinant congenic strain. This might be done to isolate an experimentally interesting stretch of DNA derived from one strain on the background genome of another strain, allowing stronger inferences about the role of that stretch of DNA. Genetic engineering – The genome may also be experimentally manipulated; for example, knockout mice can be engineered to lack a particular gene, or a gene may be expressed in a strain that does not normally carry it (a 'transgenic'). Advanced techniques may also permit the expression or suppression of a gene to be switched by injection of some regulating chemical. Quantifying behavior Markerless pose estimation – The advancement of computer vision techniques in recent years has allowed precise quantification of animal movements without needing to fit physical markers onto the subject. On high-speed video captured in a behavioral assay, keypoints on the subject can be extracted frame by frame, which is often useful to analyze in tandem with neural recordings or manipulations. Analyses can be conducted on how keypoints (i.e. parts of the animal) move within different phases of a particular behavior (on a short timescale), or throughout an animal's behavioral repertoire (longer timescale). These keypoint changes can be compared with corresponding changes in neural activity. A machine learning approach can also be used to identify specific behaviors (e.g. forward walking, turning, grooming, courtship) and to quantify the dynamics of transitions between behaviors. Other research methods Computational models – Using a computer to formulate real-world problems and develop solutions. Although this approach originated in computer science, it has spread to other areas of study, including psychology. Computational models allow researchers in psychology to enhance their understanding of the functions and development of nervous systems. Examples include the modelling of neurons, networks and brain systems, and theoretical analysis (a minimal example of such a neuron model is sketched after this passage). Computational methods have a wide variety of roles, including clarifying experiments, testing hypotheses and generating new insights. These techniques play an increasing role in the advancement of biological psychology. Limitations and advantages Different manipulations have advantages and limitations. Neural tissue destroyed as a primary consequence of a surgery, electric shock or neurotoxin can confound the results, so that the physical trauma masks changes in the fundamental neurophysiological processes of interest. For example, when using an electrolytic probe to create a purposeful lesion in a distinct region of the rat brain, surrounding tissue can be affected: so a change in behavior exhibited by the experimental group post-surgery is to some degree a result of damage to surrounding neural tissue, rather than of the lesion of the targeted brain region itself. Most genetic manipulation techniques are also considered permanent, although temporary effects can be achieved with advances in genetic manipulation; for example, certain genes can now be switched on and off with diet. 
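As a toy illustration of the computational modelling approach mentioned under "Other research methods" above, the sketch below simulates a single leaky integrate-and-fire neuron, one of the simplest standard neuron models. It is not a reconstruction of any study described here; every constant (time step, membrane parameters, input current) is an assumption chosen only so that the example runs and fires.

```python
# Minimal leaky integrate-and-fire (LIF) neuron, integrated with forward Euler.
# A toy instance of the "modelling of neurons" mentioned above; all constants
# are illustrative, not fitted to any particular cell type or experiment.

dt = 0.1e-3          # time step: 0.1 ms
T = 0.5              # total simulated time: 500 ms
steps = int(T / dt)

tau_m = 20e-3        # membrane time constant (s)
v_rest = -70e-3      # resting potential (V)
v_reset = -65e-3     # reset potential after a spike (V)
v_thresh = -50e-3    # spike threshold (V)
r_m = 10e6           # membrane resistance (ohm)
i_ext = 2.1e-9       # constant injected current (A)

v = v_rest
spike_times = []
for step in range(steps):
    # Membrane equation: tau_m * dV/dt = -(V - V_rest) + R_m * I_ext
    dv = (-(v - v_rest) + r_m * i_ext) * (dt / tau_m)
    v += dv
    if v >= v_thresh:                 # threshold crossed: record a spike and reset
        spike_times.append(step * dt)
        v = v_reset

rate = len(spike_times) / T
print(f"LIF neuron fired {len(spike_times)} spikes in {T * 1000:.0f} ms (~{rate:.1f} Hz)")
```

Networks of such units, extended with synapses and plasticity rules, are the usual starting point for the network- and systems-level models referred to above.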
Pharmacological manipulations also allow blocking of certain neurotransmitters temporarily as the function returns to its previous state after the drug has been metabolized. Topic areas In general, behavioral neuroscientists study various neuronal and biological processes underlying behavior, though limited by the need to use nonhuman animals. As a result, the bulk of literature in behavioral neuroscience deals with experiences and mental processes that are shared across different animal models such as: Sensation and perception Motivated behavior (hunger, thirst, sex) Control of movement Learning and memory Sleep and biological rhythms Emotion However, with increasing technical sophistication and with the development of more precise noninvasive methods that can be applied to human subjects, behavioral neuroscientists are beginning to contribute to other classical topic areas of psychology, philosophy, and linguistics, such as: Language Reasoning and decision making Consciousness Behavioral neuroscience has also had a strong history of contributing to the understanding of medical disorders, including those that fall under the purview of clinical psychology and biological psychopathology (also known as abnormal psychology). Although animal models do not exist for all mental illnesses, the field has contributed important therapeutic data on a variety of conditions, including: Parkinson's disease, a degenerative disorder of the central nervous system that often impairs motor skills and speech. Huntington's disease, a rare inherited neurological disorder whose most obvious symptoms are abnormal body movements and a lack of coordination. It also affects a number of mental abilities and some aspects of personality. Alzheimer's disease, a neurodegenerative disease that, in its most common form, is found in people over the age of 65 and is characterized by progressive cognitive deterioration, together with declining activities of daily living and by neuropsychiatric symptoms or behavioral changes. Clinical depression, a common psychiatric disorder, characterized by a persistent lowering of mood, loss of interest in usual activities and diminished ability to experience pleasure. Schizophrenia, a psychiatric diagnosis that describes a mental illness characterized by impairments in the perception or expression of reality, most commonly manifesting as auditory hallucinations, paranoid or bizarre delusions or disorganized speech and thinking in the context of significant social or occupational dysfunction. Autism, a brain development disorder that impairs social interaction and communication, and causes restricted and repetitive behavior, all starting before a child is three years old. Anxiety, a physiological state characterized by cognitive, somatic, emotional, and behavioral components. These components combine to create the feelings that are typically recognized as fear, apprehension, or worry. Drug abuse, including alcoholism. Research on topic areas Cognition Behavioral neuroscientists conduct research on various cognitive processes through the use of different neuroimaging techniques. Examples of cognitive research might involve examination of neural correlates during emotional information processing, such as one study that analyzed the relationship between subjective affect and neural reactivity during sustained processing of positive (savoring) and negative (rumination) emotion. 
The aim of the study was to analyze whether repetitive positive thinking (seen as being beneficial) and repetitive negative thinking (significantly related to worse mental health) would have similar underlying neural mechanisms. Researchers found that the individuals who had a more intense positive affect during savoring were also the individuals who had a more intense negative affect during rumination. fMRI data showed similar activations in brain regions during both rumination and savoring, suggesting shared neural mechanisms between the two types of repetitive thinking. The results of the study therefore suggest that repetitive thinking about positive and negative emotions is similar both subjectively and mechanistically, pointing to shared neural mechanisms for the sustained emotional processing of positive and negative information. Awards Nobel Laureates The following Nobel Prize winners could reasonably be considered behavioral neuroscientists or neurobiologists. (This list omits winners who were almost exclusively neuroanatomists or neurophysiologists; i.e., those that did not measure behavioral or neurobiological variables.) Kavli Prize in Neuroscience Ann Graybiel (1942) Cornelia Bargmann (1961) Winfried Denk (1957) See also References External links Biological Psychology Links Theory of Biological Psychology (Documents No. 9 and 10 in English) IBRO (International Brain Research Organization) Neuropsychology Psychoneuroimmunology
0.783041
0.993695
0.778104
Self-cultivation
Self-cultivation or personal cultivation is the development of one's mind or capacities through one's own efforts. Self-cultivation is the cultivation, integration, and coordination of mind and body. Although self-cultivation may be practiced and implemented as a form of cognitive therapy in psychotherapy, it goes beyond healing and self-help to also encompass self-development, self-improvement and self-realisation. It is associated with attempts to go beyond and understand normal states of being, enhancing and polishing one's capacities and developing or uncovering innate human potential. Self-cultivation also alludes to philosophical models in Mohism, Confucianism, Taoism and other Chinese philosophies, as well as in Epicureanism, and is an essential component of well-established East Asian ethical values. Although this term applies to cultural traditions in both Confucianism and Taoism, the goals and aspirations of self-cultivation in these traditions differ greatly. Theoretical background Purposes and applications Self-cultivation is an essential component of the context of . It enhances individuality, personal growth, and human agency. Self-cultivation is a process that cultivates one's mind and body in an attempt to transcend ordinary habitual states of being, enhancing a person's coordination and integration of congruent thoughts, beliefs and actions. It aims to polish or enlighten a person's capacities and inborn potentials. Self-cultivation: cultural and philosophical psychotherapies Confucianism, Taoism, and Buddhism have adopted elements of doctrine from one another to form new branches and sects. Some of these have disseminated to East Asian regions including Taiwan, Japan, and Korea. Confucianism and the relational self Confucius believed that one's life is the continuation of one's parents' life. Therefore, followers of Confucianism teach their children in such a way that the younger generation is educated to cultivate themselves and live with a satisfactory level of self-discipline. Even though individuals see a clear-cut boundary between themselves and others, each person in a dyadic relationship is seen as embedded in a particular social network. By respecting the parents—the elder and the superior—a child is raised to be morally upright according to the expectations of others. This can be a social burden that causes stressful interpersonal relationships, and can cause disturbance and conflict. Taoism and the authentic self Taoism tends to focus on linking the body and mind to nature. Taoism advocates an authentic self that is free from legal, social, or political restrictions. It seeks to cultivate an individual's self by healing and emancipating them from the ethical bounds of human society. Taoism interprets the fortune or misfortune in one's life in terms of one's destiny, which is determined by the person's birth date and time. By avoiding the interference of personal desires and by relating everything to the system of the opposing elements of yin and yang, the cosmology of Taoism aims to keep individuals and everything else in harmonious balance. The explanation of self-cultivation in Taoism also corresponds to the equilibrium of the Five Transformative Phases (Wu Xing): metal, wood, water, fire, and earth. Buddhism and the non-self After the introduction of Buddhism to China, "spiritual self-cultivation" became one of the terms used to translate the Buddhist concept of . The ultimate life goal in Buddhism is nirvana. 
People are encouraged to practice self-cultivation by detaching themselves from their desires and egos, and by attaining a mindful awareness of the non-self. Chán and Zen Buddhist scholars emphasise that the key to self-cultivation is a "beginner's mind", which can allow the uncovering of the "luminous mind" and the realisation of innate Buddha-nature through the experience of sudden enlightenment. In Japan, the Buddhist practice is equated with the notion of or personal cultivation. Influences of self-cultivation on Chinese philosophy Confucian self-cultivation as a psychological process Self-cultivation in the Confucian tradition refers to keeping the balance between inner and outer selves, and between self and others. Self-cultivation in Chinese is an abbreviation of "", which literally translates to "rectifying one's mind and nurturing one's character (in particular through art, music and philosophy)". Confucianism embodies a metaphysics of self and develops a complex model of self-cultivation. The cohering key concept is 'intellectual intuition', explained as a direct insight into and cognition of present knowledge of reality, without bias from inference, discernment or logical reasoning. Confucianism takes as its foundation the incorporation, application and implementation of filial piety. Self-cultivation aims to achieve a harmonious society that depends on personal noble cultivation. The process entails the pursuit of moral perfection through knowledge and application. In the Analects of Confucius there are two types of persons. One is the "profound person" (, ), and the other is the "petty person" (, ). These two types are opposed to one another in terms of developed potential. Confucius takes something of a blank-slate perspective: "all human beings are alike at birth" (Analects 17.2), but eventually "the profound person understands what is moral. The petty person understands what is profitable" (4.16). The is the person who always manifests the quality of ("humaneness", "co-humanity" in an interdependent, hierarchical universe, "") in themselves and displays the quality of ("rightness", "righteousness") in their actions (4.5). Confucius highlights his fundamentally elitist, hierarchical model of relations by describing how the relates to their fellows: According to D. C. Lau, is an attribute of actions, and is an attribute of agents. There are conceptual links between , ("ritual propriety"), ("virtue"), and the . According to what is , the exerts the moral force, which is , and thus demonstrates . The following passages from the Analects point out the pathway towards self-cultivation that Confucius taught, with the ultimate goal of becoming the : In the first passage, "self-reflection" is explained as "Do not do to others what you do not desire for yourself" (15.24). Confucius considers it extremely important for one to realise the necessity of concern and empathy for others, which can be achieved by reflecting upon oneself. The deeply relational self can then respond to inner reflection with outer virtue. The second passage indicates the life-long timescale of the process of self-cultivation. It can begin during one's early teenage years and extend well into more mature age. The process includes the transformation of the individual, in which they realise that they should be able to distinguish and choose between what is right and what is desired. Self-cultivation, Confucius expects, is an essential philosophical process for one to become by maximising . 
Confucius does not suffer from the Cartesian "mind-body problem". In Confucianism there is no division between inner and outer self; thus the cumulative effect of Confucian self-cultivation is not limited to one's self or person, but extends rather to the social and even the cosmic. Cultural and Ethical Values involved Self-cultivation is one of the key principles of Confucianism, and may be considered the core of Chinese philosophy; the latter can be seen as the disciplined reflection on the insights of self-cultivation. Étienne Balazs asserted that all Chinese philosophy is social philosophy and that the idea of the group takes precedence over conceptions of the individual self, because the social dimension of the human condition features so prominently in the Chinese world of thought. Wing-Tsit Chan, by contrast, suggests a more comprehensive characterisation of Chinese philosophy as humanism: not a humanism that denies or slights a Supreme Power, but one that professes the unity of man and Heaven. Similar to the Western sense of guilt, the Chinese sense of shame plays an important role in Chinese ethics. Cultivation of self in East Asian philosophy of education To help students and the younger generation understand the meaning of being a person, philosophers (mostly scholars) tried to explain their definitions of self with various theoretical approaches. The legacy of the Chinese philosopher Confucius, among others (for example, Laozi, Zhuangzi, and Mencius), has provided a rich domain of Chinese philosophical heritage in East Asia. Firstly, the goal of education, and one's most noble goal in life, is to properly develop oneself in order to become a "profound person" (, ). Young people were taught that it was shameful to become a "petty person" (, ), as that was the exact opposite of the "sage" (, ). However, as both Confucian and Daoist philosophers adopted the term , there has been divergence that led to differences in educational concepts and practices. Besides Confucianism and Daoism, the Hundred Schools of Thought in ancient China also included Buddhist and other varieties of philosophy, each of which offered different thoughts on the ideal conception of self. In the modern era, some East Asian cultures have abandoned some of the archaic conceptions, or have replaced traditional humanistic education with a more common modern approach to self-cultivation that adapts to the influences of globalisation. Nevertheless, the East Asian descendants and followers of Confucius still consider the ideal human being essential for their lifelong education, with their cultural heritage deeply influenced by radical Confucian values. Modern practices The "self"-concept in Western culture The "self" concept in Western psychology originated from the views of a number of empiricists and rationalists. Hegel (1770–1831) established a view of self-consciousness in which, by observation, our subject-object consciousness stimulates our rationality and reasoning, which then guide human behaviour. Freud (1856–1939) developed a three-part model of the psyche comprising the id, the ego, and the superego. Freud's self-concept influenced Erikson (1902–1994), who emphasized self-identity crisis and self-development. Following Erikson, J. Marcia described the continuum of identity development and the nature of our self-identity. The concept of self-consciousness derives from self-esteem, self-regulation, and self-efficacy. 
Morita therapy Through case-based research, the Japanese psychologist Morita Masatake (1874–1938) introduced Morita therapy. It is based on Masatake's theory of consciousness and his four-stage therapeutic method, and is described as an ecological therapy method that focuses on . Morita therapy resembles the rational-emotive therapy of the American psychologist Albert Ellis, as well as existential and cognitive behavioral therapy. Naikan therapy ("", , self-reflection) is a Japanese psychotherapeutic method introduced and developed decades ago by the Japanese businessman and Buddhist monk (Jōdo Shinshū) Yoshimoto Ishin (1916–1988). Initially the therapy was used mostly in correctional settings; however, it has since been adapted to situational and psychoneurotic disorders. Similar to Morita therapy, it requires subordination to a carefully structured period of "retreat" that is compassionately supervised by the practitioner. In contrast to Morita therapy, it is shorter (seven days) and utilizes long, regulated periods of daily meditation in which introspection is directed toward the resolution of contemporary conflicts and problems. "In contrast to Western psychoanalytic psychotherapy, both and Morita tend to keep transference issues simplified and positive, while resistance is dealt with procedurally rather than interpretively." The theory of constructive living Based largely on adaptations of two Japanese structured methods of self-reflection, Naikan therapy and Morita therapy, constructive living is a Western approach to mental health education. Purpose-centered and response-oriented, constructive living (sometimes abbreviated as CL) focuses on mindfulness and the purposes of one's life. It is considered a process of action for approaching reality thoughtfully. It also emphasizes understanding oneself by recognizing the past and reflecting on how it bears upon the present. Constructive living highlights the importance of accepting the world we live in, as well as the emotions and feelings individuals have in particular situations. D. Reynolds, author of Constructive Living and director of the Constructive Living Center in Oregon, U.S.A., argues that before taking actions which may potentially bring positive changes, people are often held back by the belief that they must "deal with negative emotions first". According to Reynolds, the most crucial component of the process is not getting the mind right first; rather, one's mind and emotions are adjusted in the course of action and self-reflection, which implies that the behavioural change should take place beforehand. Epicurean meleta At the closing of his Letter to Menoeceus, Epicurus instructs his disciple to practice (meleta) "both by yourself and with others of like mind". The first field of practice shares semantic roots with and is related to the Hellenistic philosophical concept of "epimeleia heauton" (self-care), which involves methods of self-cultivation. In addition to the study of philosophy, this may include other techniques for living (techne biou) or technologies of the soul, like the visualizing technique known as "placing before the eyes", a cognitive therapy technique known as "relabeling", moral portraiture, and other didactic and ethical methods. We find examples of these techniques in Philodemus of Gadara, the poet Lucretius, and other Epicurean guides. Nietzsche's ethics of self-cultivation "If you incorporate this thought within you, amongst your other thoughts," he maintains, "it will transform you. 
If for everything you wish to do you begin by asking yourself: 'Am I certain I want to do this an infinite number of times?' this will become for you the greatest weight." (KSA 9:11 [143]) Nietzsche worked on the project of reviving self-cultivation as an ancient ethics. "I hate everything that merely instructs me without augmenting or directly invigorating my own activity" (HL 2:1). "It follows therefore that he must conceive eternal recurrence among other things as a practice that stimulates self-cultivation. In fact in one of his characteristically grandiose moments he identified it as 'the great cultivating thought' in the sense that it might weed out those too weak to bear the thought of living again (WP 1053). In a more tempered fashion, however, he framed the thought of recurrence as part of an ethics of self-cultivation and self-transformation." See also Self Neo-Confucianism Eastern philosophy References Bibliography Wang, H. Confucian Self-Cultivation and Daoist Personhood. Gramsci, A. (1992). Prison Notebooks, Vol. 2. New York, NY: Columbia University Press. Heidegger, M. (1969). Identity and Difference (J. Stambaugh, Trans., with an introduction). New York, NY: Harper & Row Publishers. Heidegger, M. (1977). The Question Concerning Technology and Other Essays (W. Lovitt, Trans., with an introduction). New York, NY: Harper Torchbooks. Heidegger, M. (1978). Letter on humanism. In D. F. Krell (Ed.), Basic Writings (2nd ed., pp. 213–265). London: Routledge. Huang, C.-C. (2010). Humanism in East Asian Confucian Contexts. Bielefeld: Transcript Verlag. Legge, J. (Trans.). (1861). Confucian Analects. The Chinese Classics, Volume 1 (D. Sturgeon, Ed.). Chinese Text Project. Retrieved 21 March 2017, from http://ctext.org/analects Wittgenstein, L. (1997). Philosophical Investigations (2nd ed.) (G. E. M. Anscombe, Trans.). Malden, MA: Blackwell. Wittgenstein, L. (2001). Tractatus Logico-Philosophicus (D. F. Pears & B. F. McGuinness, Trans.). New York, NY: Routledge. Yu, K. P. (2013). The hows and whys of the classics of filial piety 孝經的道與理 (Xiaojing de dao yu li). Hong Kong: InfoLink. External links Stanford Encyclopedia of Philosophy Entry: Confucius Interfaith Online: Confucianism Confucian Documents at the Internet Sacred Texts Archive. Oriental Philosophy, "Topic:Confucianism" Institutional China Confucian Philosophy China Confucian Religion China Confucian Temples China Kongzi Network Chinese philosophy Concepts in ethics Confucian ethics Taoist philosophy Taoist practices Buddhist practices Buddhist philosophy Psychotherapy Self-care Personal development Philosophy of life Concepts in Chinese philosophy
0.784533
0.991723
0.778039
Behavior
Behavior (American English) or behaviour (British English) is the range of actions and mannerisms made by individuals, organisms, systems or artificial entities in some environment. These systems can include other systems or organisms as well as the inanimate physical environment. It is the computed response of the system or organism to various stimuli or inputs, whether internal or external, conscious or subconscious, overt or covert, and voluntary or involuntary. Taking a behavior informatics perspective, a behavior consists of actor, operation, interactions, and their properties. This can be represented as a behavior vector. Models Biology Although disagreement exists as to how to precisely define behavior in a biological context, one common interpretation based on a meta-analysis of scientific literature states that "behavior is the internally coordinated responses (actions or inactions) of whole living organisms (individuals or groups) to internal or external stimuli". A broader definition of behavior, applicable to plants and other organisms, is similar to the concept of phenotypic plasticity. It describes behavior as a response to an event or environment change during the course of the lifetime of an individual, differing from other physiological or biochemical changes that occur more rapidly, and excluding changes that are a result of development (ontogeny). Behaviors can be either innate or learned from the environment. Behaviour can be regarded as any action of an organism that changes its relationship to its environment. Behavior provides outputs from the organism to the environment. Human behavior The endocrine system and the nervous system likely influence human behavior. Complexity in the behavior of an organism may be correlated to the complexity of its nervous system. Generally, organisms with more complex nervous systems have a greater capacity to learn new responses and thus adjust their behavior. Animal behavior Ethology is the scientific and objective study of animal behavior, usually with a focus on behavior under natural conditions, and viewing behavior as an evolutionarily adaptive trait. Behaviorism is a term that also describes the scientific and objective study of animal behavior, usually referring to measured responses to stimuli or trained behavioral responses in a laboratory context, without a particular emphasis on evolutionary adaptivity. Consumer behavior Consumer behavior involves the processes consumers go through, and reactions they have towards products or services. It has to do with consumption, and the processes consumers go through around purchasing and consuming goods and services. Consumers recognize needs or wants, and go through a process to satisfy these needs. Consumer behavior is the process they go through as customers, which includes types of products purchased, amount spent, frequency of purchases and what influences them to make the purchase decision or not. Circumstances that influence consumer behaviour are varied, with contributions from both internal and external factors. Internal factors include attitudes, needs, motives, preferences and perceptual processes, whilst external factors include marketing activities, social and economic factors, and cultural aspects. 
Doctor Lars Perner of the University of Southern California claims that there are also physical factors that influence consumer behavior; for example, if a consumer is hungry, then this physical feeling of hunger will influence them to go and purchase a sandwich to satisfy the hunger. Consumer decision making Lars Perner presents a model that outlines the decision-making process involved in consumer behaviour. The process begins with the identification of a problem, wherein the consumer acknowledges an unsatisfied need or desire. Subsequently, the consumer proceeds to seek information; for low-involvement products the search tends to rely on internal resources, retrieving alternatives from memory, whereas for high-involvement products the search is typically more extensive, involving activities like reviewing reports, reading reviews, or seeking recommendations from friends. The consumer will then evaluate his or her alternatives, comparing price and quality, making trade-offs between products, and narrowing down the choice by eliminating the less appealing products until one is left. After this has been identified, the consumer will purchase the product. Finally, the consumer will evaluate the purchase decision and the purchased product, bringing in factors such as value for money, quality of goods, and purchase experience. However, this logical process does not always unfold this way; people are emotional and irrational creatures. According to the psychologist Robert Cialdini, people make decisions with emotion and then justify them with logic. How the 4 Ps influence consumer behavior The marketing mix (4 Ps) is a marketing tool whose elements are price, promotion, product, and placement. Because business-to-consumer marketing has a significant impact on consumers, the four elements of the marketing mix (product, price, place, and promotion) exert a notable influence on consumer behavior. The price of a good or service is largely determined by the market, as businesses will set their prices to be similar to those of other businesses so as to remain competitive whilst making a profit. When market prices for a product are high, consumers will purchase less and use purchased goods for longer periods of time, meaning they are purchasing the product less often. Alternatively, when market prices for a product are low, consumers are more likely to purchase more of the product, and more often. The way that promotion influences consumer behavior has changed over time. In the past, large promotional campaigns and heavy advertising would convert into sales for a business, but nowadays businesses can have success on products with little or no advertising. This is due to the Internet, and in particular social media. They rely on word of mouth from consumers using social media, and as products trend online, sales increase as products effectively promote themselves. Thus, promotion by businesses does not necessarily result in consumer behavior trending towards purchasing products. The way that product influences consumer behavior is through consumer willingness to pay, and consumer preferences. This means that even if a company were to have a long history of products in the market, consumers will still pick a cheaper product over the company in question's product if it means they will pay less for something that is very similar. 
This is due to consumer willingness to pay, or their willingness to part with the money they have earned. The product also influences consumer behavior through customer preferences. For example, take Pepsi vs Coca-Cola: a Pepsi drinker is less likely to purchase Coca-Cola, even if it is cheaper and more convenient. This is due to the preference of the consumer; no matter how hard the opposing company tries, it will not be able to force the customer to change their mind. Product placement in the modern era has little influence on consumer behavior, due to the availability of goods online. If a customer can purchase a good from the comfort of their home instead of purchasing in-store, then the placement of products is not going to influence their purchase decision. In management Behavior outside of psychology includes organizational behavior. In management, behaviors are associated with desired or undesired focuses. Managers generally note what the desired outcome is, but behavioral patterns can take over. These patterns are the reference to how often the desired behavior actually occurs. Before a behavior actually occurs, antecedents focus on the stimuli that influence the behavior that is about to happen. After the behavior occurs, consequences fall into place. Consequences consist of rewards or punishments. Social behavior Social behavior is behavior among two or more organisms within the same species, and encompasses any behavior in which one member affects the other. This is due to an interaction among those members. Social behavior can be seen as similar to an exchange of goods, with the expectation that when one gives, one will receive the same. This behavior can be affected by both the qualities of the individual and the environmental (situational) factors. Therefore, social behavior arises as a result of an interaction between the two—the organism and its environment. This means that, in regards to humans, social behavior can be determined by both the individual characteristics of the person and the situation they are in. Behavior informatics Behavior informatics, also called behavior computing, explores behavior intelligence and behavior insights from the informatics and computing perspectives. Different from applied behavior analysis from the psychological perspective, BI builds computational theories, systems and tools to qualitatively and quantitatively model, represent, analyze, and manage behaviors of individuals, groups and/or organizations. Health Health behavior refers to a person's beliefs and actions regarding their health and well-being. Health behaviors are direct factors in maintaining a healthy lifestyle. Health behaviors are influenced by the social, cultural, and physical environments in which we live. They are shaped by individual choices and external constraints. Positive behaviors help promote health and prevent disease, while the opposite is true for risk behaviors. Health behaviors are early indicators of population health. Because of the time lag that often occurs between certain behaviors and the development of disease, these indicators may foreshadow the future burdens and benefits of health-risk and health-promoting behaviors. Correlates A variety of studies have examined the relationship between health behaviors and health outcomes (e.g., Blaxter 1990) and have demonstrated their role in both morbidity and mortality. 
These studies have identified seven features of lifestyle which were associated with lower morbidity and higher subsequent long-term survival (Belloc and Breslow 1972): Avoiding snacks Eating breakfast regularly Exercising regularly Maintaining a desirable body weight Moderate alcohol intake Not smoking Sleeping 7–8 hours per night Health behaviors affect individuals' quality of life by delaying the onset of chronic disease and extending active lifespan. Smoking, alcohol consumption, diet, gaps in primary care services and low screening uptake are all significant determinants of poor health, and changing such behaviors should lead to improved health. For example, in the US, Healthy People 2000 (United States Department of Health and Human Services) lists increased physical activity, changes in nutrition and reductions in tobacco, alcohol and drug use as important for health promotion and disease prevention. Treatment approach Any interventions are matched with the needs of each individual in an ethical and respectful manner. The health belief model encourages increasing individuals' perceived susceptibility to negative health outcomes and making individuals aware of the severity of such outcomes, for example through health promotion messages. In addition, the health belief model suggests the need to focus on the benefits of health behaviors and on the fact that barriers to action are easily overcome. The theory of planned behavior suggests using persuasive messages to tackle behavioral beliefs and increase readiness to perform a behavior, called intentions. The theory of planned behavior also advocates the need to tackle normative beliefs and control beliefs in any attempt to change behavior. Challenging normative beliefs is not enough; following through on an intention with self-efficacy, gained from an individual's mastery of problem solving and task completion, is important for bringing about positive change. Self-efficacy is often cemented through standard persuasive techniques. See also Applied behavior analysis Behavioral cusp Behavioral economics Behavioral genetics Behavioral sciences Cognitive bias Evolutionary physiology Experimental analysis of behavior Human sexual behavior Herd behavior Instinct Mere-measurement effect Motivation Normality (behavior) Organizational studies Radical behaviorism Reasoning Rebellion Social relation Theories of political behavior Work behavior References General Cao, L. (2014). Behavior Informatics: A New Perspective. IEEE Intelligent Systems (Trends and Controversies), 29(4): 62–80. Perner, L. (2008). Consumer behavior. University of Southern California, Marshall School of Business. Retrieved from http://www.consumerpsychologist.com/intro_Consumer_Behavior.html Further reading Bateson, P. (2017). Behaviour, Development and Evolution. Open Book Publishers, Cambridge. External links What is behavior? Baby don't ask me, don't ask me, no more at Earthling Nature. behaviorinformatics.org Links to review articles by Eric Turkheimer and co-authors on behavior research Links to IJCAI2013 tutorial on behavior informatics and computing
0.780109
0.997051
0.777808
Behavioral economics
Behavioral economics is the study of the psychological and cognitive factors involved in the decisions of individuals or institutions, and how these decisions deviate from those implied by traditional economic theory. Behavioral economics is primarily concerned with the bounds of rationality of economic agents. Behavioral models typically integrate insights from psychology, neuroscience and microeconomic theory. Behavioral economics began as a distinct field of study in the 1970s and 1980s, but can be traced back to 18th-century economists, such as Adam Smith, who deliberated how the economic behavior of individuals could be influenced by their desires. The status of behavioral economics as a subfield of economics is a fairly recent development; the breakthroughs that laid the foundation for it were published through the last three decades of the 20th century. Behavioral economics is still growing as a field, being used increasingly in research and in teaching. History Early classical economists included psychological reasoning in much of their writing, though psychology at the time was not a recognized field of study. In The Theory of Moral Sentiments, Adam Smith wrote on concepts later popularized by modern Behavioral Economic theory, such as loss aversion. Jeremy Bentham, a Utilitarian philosopher in the 1700s conceptualized utility as a product of psychology. Other economists who incorporated psychological explanations in their works included Francis Edgeworth, Vilfredo Pareto and Irving Fisher. A rejection and elimination of psychology from economics in the early 1900s brought on a period defined by a reliance on empiricism. There was a lack of confidence in hedonic theories, which saw pursuance of maximum benefit as an essential aspect in understanding human economic behavior. Hedonic analysis had shown little success in predicting human behavior, leading many to question its viability as a reliable source for prediction. There was also a fear among economists that the involvement of psychology in shaping economic models was inordinate and a departure from accepted principles. They feared that an increased emphasis on psychology would undermine the mathematic components of the field. To boost the ability of economics to predict accurately, economists started looking to tangible phenomena rather than theories based on human psychology. Psychology was seen as unreliable to many of these economists as it was a new field, not regarded as sufficiently scientific. Though a number of scholars expressed concern towards the positivism within economics, models of study dependent on psychological insights became rare. Economists instead conceptualized humans as purely rational and self-interested decision makers, illustrated in the concept of homo economicus. The resurgence of psychology within economics, which facilitated the expansion of behavioral economics, has been linked to the cognitive revolution. In the 1960s, cognitive psychology began to shed more light on the brain as an information processing device (in contrast to behaviorist models). Psychologists in this field, such as Ward Edwards, Amos Tversky and Daniel Kahneman began to compare their cognitive models of decision-making under risk and uncertainty to economic models of rational behavior. These developments spurred economists to reconsider how psychology could be applied to economic models and theories. Concurrently, the Expected utility hypothesis and discounted utility models began to gain acceptance. 
In challenging the accuracy of generic utility, these concepts established a practice foundational in behavioral economics: building on standard models by applying psychological knowledge. Mathematical psychology reflects a longstanding interest in preference transitivity and the measurement of utility. Development of Behavioral Economics In 2017, Niels Geiger, a lecturer in economics at the University of Hohenheim, conducted an investigation into the proliferation of behavioral economics. Geiger's research looked at studies that had quantified the frequency of references to terms specific to behavioral economics, and how often influential papers in behavioral economics were cited in economics journals. The quantitative study found that there was a significant spread of behavioral economics after Kahneman and Tversky's work, in the 1990s and into the 2000s. Bounded rationality Bounded rationality is the idea that when individuals make decisions, their rationality is limited by the tractability of the decision problem, their cognitive limitations and the time available. Herbert A. Simon proposed bounded rationality as an alternative basis for the mathematical modeling of decision-making. It complements "rationality as optimization", which views decision-making as a fully rational process of finding an optimal choice given the information available. Simon used the analogy of a pair of scissors, where one blade represents human cognitive limitations and the other the "structures of the environment", illustrating how minds compensate for limited resources by exploiting known structural regularity in the environment. Bounded rationality implies that humans take shortcuts that may lead to suboptimal decision-making. Behavioral economists engage in mapping the decision shortcuts that agents use in order to help increase the effectiveness of human decision-making. On this view, actors do not fully assess all available options, in order to save on search and deliberation costs; decisions are therefore not always made for the greatest self-reward, since only limited information is available, and agents instead settle for an acceptable solution. One approach, adopted by Richard M. Cyert and James G. March in their 1963 book A Behavioral Theory of the Firm, was to view firms as coalitions of groups whose targets were based on satisficing rather than optimizing behaviour. Another treatment of this idea comes from Cass Sunstein and Richard Thaler's Nudge. Sunstein and Thaler recommend that choice architectures be modified in light of human agents' bounded rationality. A widely cited proposal from Sunstein and Thaler urges that healthier food be placed at sight level in order to increase the likelihood that a person will opt for that choice instead of a less healthy option. Some critics of Nudge have argued that modifying choice architectures will lead to people becoming worse decision-makers. Prospect theory In 1979, Kahneman and Tversky published Prospect Theory: An Analysis of Decision under Risk, which used cognitive psychology to explain various divergences of economic decision making from neo-classical theory. Using prospect theory, Kahneman and Tversky derived three generalisations: gains are treated differently than losses, outcomes received with certainty are overweighed relative to uncertain outcomes, and the structure of the problem may affect choices. 
These arguments were supported in part by altering a survey question so that it was framed as averting losses rather than achieving gains, and the majority of respondents altered their answers accordingly. In essence, this showed that emotions such as fear of loss or greed can alter decisions, indicating the presence of an irrational decision-making process. Prospect theory has two stages: an editing stage and an evaluation stage. In the editing stage, risky situations are simplified using various heuristics. In the evaluation phase, risky alternatives are evaluated using various psychological principles that include: Reference dependence: When evaluating outcomes, the decision maker considers a "reference level". Outcomes are then compared to the reference point and classified as "gains" if greater than the reference point and "losses" if less than the reference point. Loss aversion: Losses are avoided more than equivalent gains are sought. In their 1992 paper, Kahneman and Tversky found the median coefficient of loss aversion to be about 2.25, i.e., losses hurt about 2.25 times more than equivalent gains reward. Non-linear probability weighting: Decision makers overweigh small probabilities and underweigh large probabilities—this gives rise to the inverse-S shaped "probability weighting function". Diminishing sensitivity to gains and losses: As the size of the gains and losses relative to the reference point increases in absolute value, the marginal effect on the decision maker's utility or satisfaction falls. In 1992, in the Journal of Risk and Uncertainty, Kahneman and Tversky gave a revised account of prospect theory that they called cumulative prospect theory. The new theory eliminated the editing phase of prospect theory and focused just on the evaluation phase. Its main feature was that it allowed for non-linear probability weighting in a cumulative manner, which was originally suggested in John Quiggin's rank-dependent utility theory. Psychological traits such as overconfidence, projection bias and the effects of limited attention are now part of the theory. Other developments include a conference at the University of Chicago, a special behavioral economics edition of the Quarterly Journal of Economics ("In Memory of Amos Tversky"), and Kahneman's 2002 Nobel Prize for having "integrated insights from psychological research into economic science, especially concerning human judgment and decision-making under uncertainty." A further argument of behavioural economics concerns the impact of individuals' cognitive limitations on the rationality of their decisions. Sloan first argued this in his paper 'Bounded Rationality', where he stated that our cognitive limitations are partly the consequence of our limited ability to foresee the future, hampering the rationality of decisions. Daniel Kahneman further expanded upon the effect that cognitive ability and processes have on decision making in his book Thinking, Fast and Slow. Kahneman delved into two forms of thought: fast thinking, which he considered "operates automatically and quickly, with little or no effort and no sense of voluntary control". Conversely, slow thinking is the deliberate allocation of cognitive ability, choice and concentration. Fast thinking utilises heuristics, decision-making shortcuts and rules of thumb that provide an immediate but often imperfect and sometimes irrational solution. 
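To make the evaluation-phase principles listed above concrete, the sketch below implements a prospect-theory-style value function and an inverse-S probability weighting function and applies them to a fair gamble. It uses the simple separable form (decision weight times value, summed over outcomes) rather than the full cumulative treatment, and the parameter values are commonly cited estimates from the cumulative prospect theory literature; treat both the form and the numbers as assumptions for illustration only.

```python
# Sketch of prospect theory's evaluation phase: a reference-dependent, loss-averse
# value function and an inverse-S probability weighting function. Parameter values
# (alpha, beta, lambda, gamma) are commonly cited estimates, used here as assumptions.

def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Subjective value of a gain or loss x measured relative to the reference point."""
    if x >= 0:
        return x ** alpha                 # diminishing sensitivity to gains
    return -lam * ((-x) ** beta)          # losses loom larger (loss aversion)

def weight(p, gamma=0.61):
    """Inverse-S weighting: overweights small probabilities, underweights large ones."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

# A gamble framed relative to the status quo: 50% chance to gain 100, 50% to lose 100.
prospect = [(0.5, 100.0), (0.5, -100.0)]
subjective = sum(weight(p) * value(x) for p, x in prospect)
expected = sum(p * x for p, x in prospect)

print(f"Expected value: {expected:.2f}")            # 0.00 -- an actuarially fair bet
print(f"Prospect-theory value: {subjective:.2f}")   # negative: losses hurt ~2.25x more
```

On this fair 50/50 bet the expected value is zero, but the prospect-theory value is clearly negative, which is one way of seeing why loss-averse decision makers routinely reject actuarially fair gambles.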
Kahneman proposed that these heuristic shortcuts give rise to a number of biases, such as hindsight bias, confirmation bias and outcome bias, among others. A key example of fast thinking and the resultant irrational decisions is the 2008 financial crisis. Nudge theory Nudge is a concept in behavioral science, political theory and economics which proposes positive reinforcement and indirect suggestions as ways to influence the behavior and decision making of groups or individuals—in other words, it is "a way to manipulate people's choices to lead them to make specific decisions". The first formulation of the term and associated principles was developed in cybernetics by James Wilk before 1995 and described by Brunel University academic D. J. Stewart as "the art of the nudge" (sometimes referred to as micronudges). It also drew on methodological influences from clinical psychotherapy tracing back to Gregory Bateson, including contributions from Milton Erickson, Watzlawick, Weakland and Fisch, and Bill O'Hanlon. In this variant, the nudge is a microtargeted design geared towards a specific group of people, irrespective of the scale of intended intervention. In 2008, Richard Thaler and Cass Sunstein's book Nudge: Improving Decisions About Health, Wealth, and Happiness brought nudge theory to prominence. It also gained a following among US and UK politicians, in the private sector and in public health. The authors refer to influencing behavior without coercion as libertarian paternalism and the influencers as choice architects. Thaler and Sunstein defined their concept as: Nudging techniques aim to capitalise on the judgemental heuristics of people. In other words, a nudge alters the environment so that when heuristic, or System 1, decision-making is used, the resulting choice will be the most positive or desired outcome. An example of such a nudge is switching the placement of junk food in a store, so that fruit and other healthy options are located next to the cash register, while junk food is relocated to another part of the store. In 2008, the United States appointed Sunstein, who helped develop the theory, as administrator of the Office of Information and Regulatory Affairs. Notable applications of nudge theory include the formation in 2010 of the British Behavioural Insights Team, often called the "Nudge Unit", at the British Cabinet Office, headed by David Halpern. In addition, the Penn Medicine Nudge Unit is the world's first behavioral design team embedded within a health system. Nudge theory has also been applied to business management and corporate culture, such as in relation to health, safety and environment (HSE) and human resources. Regarding its application to HSE, one of the primary goals of nudge is to achieve a "zero accident culture". Criticisms Cass Sunstein has responded to critiques at length in his The Ethics of Influence, making the case in favor of nudging against charges that nudges diminish autonomy, threaten dignity, violate liberties, or reduce welfare. Ethicists have debated this rigorously. These charges have been made by various participants in the debate, from Bovens to Goodwin. Wilkinson, for example, charges nudges with being manipulative, while others such as Yeung question their scientific credibility. Some, such as Hausman & Welch, have inquired whether nudging should be permissible on grounds of (distributive) justice; Lepenies & Malecka have questioned whether nudges are compatible with the rule of law. 
Similarly, legal scholars have discussed the role of nudges and the law. Behavioral economists such as Bob Sugden have pointed out that the underlying normative benchmark of nudging is still homo economicus, despite the proponents' claim to the contrary. It has been remarked that nudging is also a euphemism for psychological manipulation as practiced in social engineering. There exists an anticipation and, simultaneously, an implicit criticism of nudge theory in the works of Hungarian social psychologists who emphasize the active participation of the nudge's target (Ferenc Merei and Laszlo Garai). Concepts Behavioral economics aims to improve or overhaul traditional economic theory by studying failures in its assumptions that people are rational and selfish. Specifically, it studies the biases, tendencies and heuristics that shape people's economic decisions. It aids in determining whether people make good choices and whether they could be helped to make better choices. It can be applied both before and after a decision is made. Search heuristics Behavioral economics proposes search heuristics as an aid for evaluating options. This is motivated by the fact that it is costly to gain information about options, and the aim is to maximise the utility of searching for information. While no single heuristic fully explains the search process on its own, a combination of these heuristics may be used in the decision-making process. There are three primary search heuristics. Satisficing Satisficing is the idea that there is some minimum requirement from the search, and once that has been met, the search stops. After satisficing, a person may not have the optimal option (i.e. the one with the highest utility), but would have a "good enough" one. This heuristic may be problematic if the aspiration level is set at such a level that no products exist that could meet the requirements. Directed cognition Directed cognition is a search heuristic in which a person treats each opportunity to research information as their last. Rather than a contingent plan that indicates what will be done based on the results of each search, directed cognition considers only whether one more search should be conducted and which alternative should be researched. Elimination by aspects Whereas satisficing and directed cognition compare choices, elimination by aspects compares certain qualities. A person using the elimination-by-aspects heuristic first chooses the quality that they value most in what they are searching for and sets an aspiration level. This may be repeated to refine the search, i.e., identify the second most valued quality and set an aspiration level for it. Using this heuristic, options are eliminated as they fail to meet the minimum requirements of the chosen qualities. Heuristics and cognitive effects Besides searching, behavioral economists and psychologists have identified other heuristics and other cognitive effects that affect people's decision making. These include: Mental accounting Mental accounting refers to the propensity to allocate resources for specific purposes. Mental accounting is a behavioral bias that causes one to separate money into different categories, known as mental accounts, based either on the source or on the intended use of the money. Anchoring Anchoring describes when people have a mental reference point against which they compare results. 
Herd behavior
This is a relatively simple bias that reflects the tendency of people to mimic what everyone else is doing and follow the general consensus.

Framing effects
People tend to choose differently depending on how the options are presented to them. People tend to have little control over their susceptibility to the framing effect, as their choice-making process is often based on intuition.

Biases and fallacies
While heuristics are tactics or mental shortcuts that aid the decision-making process, people are also affected by a number of biases and fallacies. Behavioral economics identifies a number of these biases that negatively affect decision making, such as:

Present bias
Present bias reflects the human tendency to want rewards sooner. It describes people who are more likely to forego a greater payoff in the future in favour of receiving a smaller benefit sooner. An example of this is a smoker who is trying to quit. Although they know that they will suffer health consequences in the future, the immediate gain from the nicotine hit is more favourable to a person affected by present bias. People with present bias are commonly split into those who are aware of their present bias (sophisticated) and those who are not (naive).

Gambler's fallacy
The gambler's fallacy stems from the law of small numbers. It is the belief that an event that has occurred often in the past is less likely to occur in the future, despite the probability remaining constant. For example, if a coin has been flipped three times and turned up heads every single time, a person influenced by the gambler's fallacy would predict that the next flip ought to be tails because of the abnormal number of heads flipped so far, even though the probability of heads occurring is still 50%.

Hot hand fallacy
The hot hand fallacy is the opposite of the gambler's fallacy. It is the belief that an event that has occurred often in the past is more likely to occur again in the future, such that the streak will continue. This fallacy is particularly common within sports. For example, if a football team has consistently won its last few games, it is often said to be 'on form', and it is expected that the team will maintain its winning streak.

Narrative fallacy
Narrative fallacy refers to when people use narratives to connect the dots between random events in order to make sense of arbitrary information. The term stems from Nassim Taleb's book The Black Swan: The Impact of the Highly Improbable. The narrative fallacy can be problematic as it can lead individuals to posit false cause-and-effect relationships between events. For example, a startup may get funding because investors are swayed by a narrative that sounds plausible, rather than by a more reasoned analysis of the available evidence.

Loss aversion
Loss aversion refers to the tendency to place greater weight on losses than on equivalent gains. In other words, a loss reduces an individual's utility by more than an equal-sized gain increases it. As a result, people tend to place a far higher priority on avoiding losses than on making investment gains, and some investors might want a higher payout to compensate for losses. If the high payout is not likely, they might try to avoid losses altogether even if the investment's risk is acceptable from a rational standpoint.
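As a rough illustration of loss aversion, the sketch below implements a prospect-theory-style value function in Python. The curvature and loss-aversion parameters used here are the commonly cited Tversky–Kahneman estimates and are assumptions made only for this example, not figures taken from the text above.

```python
# A minimal sketch of a prospect-theory-style value function illustrating
# loss aversion. The parameters below are commonly cited Tversky-Kahneman
# estimates, used here purely for illustration.

def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Subjective value of a gain or loss x relative to a reference point.

    Gains are valued as x**alpha; losses are scaled by the loss-aversion
    coefficient lam, so a loss hurts more than an equal gain pleases.
    """
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** beta)

# An equal-sized gain and loss are not felt equally:
print(value(100))                 # roughly 57.5
print(value(-100))                # roughly -129.5 (the loss looms larger)
print(value(100) + value(-100))   # net negative, despite a "fair" gamble
```

The asymmetry between the two printed values is the sense in which "losses loom larger than gains" for an investor described in the paragraph above.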
Recency bias
Recency bias is the belief that a particular outcome is more probable simply because it has just occurred. For example, if the previous one or two flips were heads, a person affected by recency bias would continue to predict that heads will be flipped.

Confirmation bias
Confirmation bias is the tendency to prefer information consistent with one's beliefs and to discount evidence inconsistent with them.

Familiarity bias
Familiarity bias describes the tendency of people to return to what they know and are comfortable with. It discourages affected people from exploring new options and may limit their ability to find an optimal solution.

Status quo bias
Status quo bias describes the tendency of people to keep things as they are; it is a particular aversion to change in favor of remaining comfortable with what is known. Connected to this concept is the endowment effect, the theory that people value things more if they own them: they require more to give up an object than they would be willing to pay to acquire it.

Behavioral finance
Behavioral finance is the study of the influence of psychology on the behavior of investors and financial analysts. It assumes that investors are not always rational, have limits to their self-control, and are influenced by their own biases. For example, behavioral law and economics scholars studying the growth of financial firms' technological capabilities have attributed irrational consumer decisions to firms' use of decision science. Behavioral finance also includes the subsequent effects on the markets. It attempts to explain the reasoning patterns of investors and measures the influence of these patterns on investors' decision making. The central issue in behavioral finance is explaining why market participants make irrational systematic errors, contrary to the assumption of rational market participants. Such errors affect prices and returns, creating market inefficiencies.

Traditional finance
The accepted theories of finance are referred to as traditional finance. The foundation of traditional finance is associated with modern portfolio theory (MPT) and the efficient-market hypothesis (EMH). Modern portfolio theory is based on a stock or portfolio's expected return, its standard deviation, and its correlation with the other assets held within the portfolio. With these three concepts, an efficient portfolio can be created for any group of assets. An efficient portfolio is a group of assets that has the maximum expected return for a given amount of risk. The efficient-market hypothesis states that all public information is already reflected in a security's price. Proponents of the traditional theories believe that "investors should just own the entire market rather than attempting to outperform the market". Behavioral finance has emerged as an alternative to these theories of traditional finance, and the behavioral aspects of psychology and sociology are integral catalysts within this field of study.

Evolution
The foundations of behavioral finance can be traced back over 150 years. Several original books written in the 1800s and early 1900s marked the beginning of the behavioral finance school. Originally published in 1841, MacKay's Extraordinary Popular Delusions and the Madness of Crowds presents a chronological timeline of the various panics and schemes throughout history.
This work shows how group behavior applies to the financial markets of today. Le Bon's important work, The Crowd: A Study of the Popular Mind, discusses the role of "crowds" (also known as crowd psychology) and group behavior as they apply to the fields of behavioral finance, social psychology, sociology and history. Selden's 1912 book Psychology of The Stock Market was one of the first to apply the field of psychology directly to the stock market. This classic discusses the emotional and psychological forces at work on investors and traders in the financial markets. These three works, along with several others, form the foundation of applying psychology and sociology to the field of finance. Behavioral finance rests on an interdisciplinary approach that includes scholars from the social sciences and business schools. From the liberal arts perspective, this includes the fields of psychology, sociology, anthropology, economics and behavioral economics. On the business administration side, it covers areas such as management, marketing, finance, technology and accounting.

Critics contend that behavioral finance is more a collection of anomalies than a true branch of finance and that these anomalies are either quickly priced out of the market or explained by appealing to market microstructure arguments. However, individual cognitive biases are distinct from social biases; the former can be averaged out by the market, while the latter can create positive feedback loops that drive the market further and further from a "fair price" equilibrium. It has also been observed that the problem with the general area of behavioral finance is that it only serves as a complement to general economics. Similarly, for an anomaly to violate market efficiency, an investor must be able to trade against it and earn abnormal profits; this is not the case for many anomalies. A specific example of this criticism appears in some explanations of the equity premium puzzle. It is argued that the cause is entry barriers (both practical and psychological) and that the equity premium should fall as electronic resources open up the stock market to more traders. In response, others contend that most personal investment funds are managed through superannuation funds, minimizing the effect of these putative entry barriers. In addition, professional investors and fund managers seem to hold more bonds than one would expect given return differentials.

Quantitative behavioral finance
Quantitative behavioral finance uses mathematical and statistical methodology to understand behavioral biases. Some financial models used in money management and asset valuation, as well as more theoretical models, likewise incorporate behavioral finance parameters. Examples include:
Thaler's model of price reactions to information, with three phases (underreaction, adjustment, and overreaction), creating a price trend. (One characteristic of overreaction is that average returns following announcements of good news are lower than those following bad news. In other words, overreaction occurs if the market reacts too strongly or for too long to news, thus requiring an adjustment in the opposite direction. As a result, assets that outperform in one period are likely to underperform in the following period. This also applies to customers' irrational purchasing habits. A toy illustration of this pattern follows the list below.)
The stock image coefficient
Artificial financial market
Market microstructure
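The following is a toy Python sketch of the overreaction pattern described in the first example above: average next-period returns following good-news announcements are compared with those following bad-news announcements. The helper function and the small synthetic data set are assumptions introduced purely for illustration, not data from any study.

```python
# A toy sketch of the overreaction check described above: compare average
# next-period returns following good-news versus bad-news announcements.
# The data below are synthetic and purely illustrative.

def average_return_after(news, next_returns, kind):
    """Mean next-period return following announcements of the given kind."""
    selected = [r for n, r in zip(news, next_returns) if n == kind]
    return sum(selected) / len(selected)

news         = ["good", "bad", "good", "bad", "good", "bad"]
next_returns = [-0.01, 0.03, 0.00, 0.02, -0.02, 0.04]   # hypothetical values

after_good = average_return_after(news, next_returns, "good")
after_bad = average_return_after(news, next_returns, "bad")

# Under overreaction, returns after good news tend to be lower than after
# bad news, as prices correct back in the opposite direction.
print(after_good, after_bad, after_good < after_bad)
```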
Applied issues

Behavioral game theory
Behavioral game theory, invented by Colin Camerer, analyzes interactive strategic decisions and behavior using the methods of game theory, experimental economics, and experimental psychology. Experiments include testing deviations from typical simplifications of economic theory such as the independence axiom, and neglect of altruism, fairness, and framing effects. On the positive side, the method has been applied to interactive learning and social preferences. As a research program, the subject is a development of the last three decades.

Artificial intelligence
Many decisions are increasingly made either by human beings with the assistance of artificially intelligent machines or wholly by such machines. Tshilidzi Marwala and Evan Hurwitz, in their book, studied the utility of behavioral economics in such situations and concluded that these intelligent machines reduce the impact of bounded rational decision making. In particular, they observed that intelligent machines reduce the degree of information asymmetry in the market and improve decision making, thus making markets more rational. The use of AI machines in the market, in applications such as online trading and decision making, has changed major economic theories. Other theories on which AI has had an impact include rational choice, rational expectations, game theory, the Lewis turning point, portfolio optimization and counterfactual thinking.

Other areas of research
Other branches of behavioral economics enrich the model of the utility function without implying inconsistency in preferences. Ernst Fehr, Armin Falk, and Rabin studied fairness, inequity aversion and reciprocal altruism, weakening the neoclassical assumption of perfect selfishness; a brief sketch of an inequity-aversion utility appears at the end of this subsection. This work is particularly applicable to wage setting. The work on "intrinsic motivation" by Uri Gneezy and Aldo Rustichini and on "identity" by George Akerlof and Rachel Kranton assumes that agents derive utility from adopting personal and social norms in addition to conditional expected utility. According to Aggarwal, in addition to behavioral deviations from rational equilibrium, markets are also likely to suffer from lagged responses, search costs, externalities of the commons, and other frictions, making it difficult to disentangle behavioral effects in market behavior. "Conditional expected utility" is a form of reasoning in which the individual has an illusion of control and calculates the probabilities of external events, and hence their utility, as a function of their own action, even when they have no causal ability to affect those external events. Behavioral economics caught on among the general public with the success of books such as Dan Ariely's Predictably Irrational. Practitioners of the discipline have studied quasi-public policy topics such as broadband mapping. Applications of behavioral economics include the modeling of the consumer decision-making process for applications in artificial intelligence and machine learning.
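Below is a minimal Python sketch of a two-player inequity-aversion utility in the spirit of the fairness research mentioned above (commonly associated with Fehr and Schmidt). The payoffs and the envy and guilt parameters are hypothetical values chosen only for illustration.

```python
# A minimal sketch of a two-player Fehr-Schmidt-style inequity-aversion
# utility, illustrating the fairness research discussed above. The payoffs
# and the parameters alpha (envy) and beta (guilt) are illustrative assumptions.

def inequity_averse_utility(own, other, alpha=0.8, beta=0.4):
    """Own payoff minus penalties for disadvantageous and advantageous inequality."""
    envy = alpha * max(other - own, 0)    # being behind the other player hurts
    guilt = beta * max(own - other, 0)    # being ahead also carries a cost
    return own - envy - guilt

# A perfectly selfish agent would prefer a payoff of 8 over 6, but an
# inequity-averse agent can prefer the more equal split:
print(inequity_averse_utility(8, 2))   # 8 - 0.4 * 6 = 5.6
print(inequity_averse_utility(6, 6))   # 6.0, so the equal split is preferred
```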
The Silicon Valley–based start-up Singularities is using the AGM postulates proposed by Alchourrón, Gärdenfors, and Makinson—the formalization of the concepts of beliefs and change for rational entities—in a symbolic logic to create a "machine learning and deduction engine that uses the latest data science and big data algorithms in order to generate the content and conditional rules (counterfactuals) that capture customer's behaviors and beliefs." The University of Pennsylvania's Center for Health Incentives & Behavioral Economics (CHIBE) looks at how behavioral economics can improve health outcomes. CHIBE researchers have found evidence that many behavioral economics principles (incentives, patient and clinician nudges, gamification, loss aversion, and more) can be helpful in encouraging vaccine uptake, smoking cessation, medication adherence, and physical activity, for example. Applications of behavioral economics also exist in other disciplines, for example in the area of supply chain management.

Honors and awards

Nobel Prize

1978 – Herbert Simon
In 1978 Herbert Simon was awarded the Nobel Memorial Prize in Economic Sciences "for his pioneering research into the decision-making process within economic organizations". Simon earned his Bachelor of Arts and his Ph.D. in Political Science from the University of Chicago before going on to teach at Carnegie Tech. He was praised for his work on bounded rationality, a challenge to the assumption that humans are rational actors.

2002 – Daniel Kahneman and Vernon L. Smith
In 2002, psychologist Daniel Kahneman and economist Vernon L. Smith were awarded the Nobel Memorial Prize in Economic Sciences. Kahneman was awarded the prize "for having integrated insights from psychological research into economic science, especially concerning human judgment and decision-making under uncertainty", while Smith was awarded the prize "for having established laboratory experiments as a tool in empirical economic analysis, especially in the study of alternative market mechanisms."

2017 – Richard Thaler
In 2017, economist Richard Thaler was awarded the Nobel Memorial Prize in Economic Sciences for "his contributions to behavioral economics and his pioneering work in establishing that people are predictably irrational in ways that defy economic theory." Thaler was especially recognized for demonstrating inconsistencies in standard economic theory and for his formulation of mental accounting and libertarian paternalism.

Other awards

1999 – Andrei Shleifer
The work of Andrei Shleifer focused on behavioral finance and made observations on the limits of the efficient-market hypothesis. Shleifer received the 1999 John Bates Clark Medal from the American Economic Association for his work.

2001 – Matthew Rabin
Matthew Rabin received the "genius" award from the MacArthur Foundation in 2000. The American Economic Association chose Rabin as the recipient of the 2001 John Bates Clark Medal. Rabin's awards were given primarily on the basis of his work on fairness and reciprocity, and on present bias.

2003 – Sendhil Mullainathan
Sendhil Mullainathan was the youngest of the MacArthur Fellows chosen in 2002, receiving a fellowship grant of $500,000 in 2003. The MacArthur Foundation praised Mullainathan for his work combining economics and psychology. Mullainathan's research has focused on the salaries of executives on Wall Street; he has also looked at the implications of racial discrimination in markets in the United States.
Criticism
Two landmark papers in economic theory, both published in the Journal of Political Economy before the field of behavioral economics emerged, provide a justification for standard neoclassical economic analysis: Armen Alchian's 1950 paper "Uncertainty, Evolution, and Economic Theory" and Gary Becker's 1962 paper "Irrational Behavior and Economic Theory". Alchian's 1950 paper uses the logic of natural selection, the evolutionary landscape model, stochastic processes, probability theory, and several other lines of reasoning to justify many of the results derived from standard supply analysis (results usually obtained by assuming that firms maximize their profits, are certain about the future, and have accurate foresight) without having to assume any of those things. Becker's 1962 paper shows that downward-sloping market demand curves (the most important implication of the law of demand) do not actually require the assumption that consumers in that market are rational, as behavioral economists claim; they also follow from a wide variety of irrational behavior (a brief illustration appears at the end of this passage). The lines of reasoning used in these two papers have each been re-expressed and expanded upon in at least one other professional economic publication. Alchian's thesis of evolutionary economics via natural selection by way of environmental adoption is summarized, followed by an explicit exploration of its theoretical implications for behavioral economic theory and illustrated with examples from several industries including banking, hospitality, and transportation, in the 2014 paper "Uncertainty, Evolution, and Behavioral Economic Theory" by Manne and Zywicki. The argument made in Becker's 1962 paper, that a 'pure' increase in the (relative) price (or terms of trade) of good X must reduce the amount of X demanded in the market for good X, is explained in greater detail in chapters 4 (The Opportunity Set) and 5 (Substitution Effects) of Becker's graduate-level textbook Economic Theory, originally published in 1971 and largely a transcription of the lectures from his first-year PhD price theory course.

Besides these critical articles, critics of behavioral economics typically stress the rationality of economic agents. A fundamental critique is provided by Maialeh (2019), who argues that no behavioral research can establish an economic theory. Examples provided on this account include pillars of behavioral economics such as satisficing behavior and prospect theory, which are confronted from the neoclassical perspective of utility maximization and expected utility theory respectively. The author shows that behavioral findings are hardly generalizable and that they do not disprove typical mainstream axioms related to rational behavior. Others, such as the essayist and former trader Nassim Taleb, note that cognitive theories, such as prospect theory, are models of decision-making, not generalized economic behavior, and are only applicable to the sort of one-off decision problems presented to experiment participants or survey respondents. In the EconTalk episode in which Taleb made this point, he and the host, Russ Roberts, discussed the significance of Becker's 1962 paper as an argument against drawing implications from one-shot psychological experiments for market-level outcomes outside of laboratory settings, i.e. in the real world.
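As an illustration of the Becker-style argument above, here is a minimal Monte Carlo sketch in Python in which consumers choose entirely at random within their budget sets, yet average demand for good X still falls as its price rises. The income level, price grid, and uniform choice rule are assumptions made only for this sketch and are not Becker's exact construction.

```python
# A minimal Monte Carlo sketch of the point attributed to Becker (1962) above:
# consumers who choose *randomly* within their budget sets still generate a
# downward-sloping average demand curve. Income, prices, and the uniform
# choice rule are illustrative assumptions, not Becker's exact construction.
import random

def average_demand_for_x(price_x, income=100.0, consumers=10_000, seed=0):
    """Average quantity of good X bought by consumers who spend a uniformly
    random share of their income on X (no utility maximization involved)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(consumers):
        share = rng.random()                 # random budget share spent on X
        total += share * income / price_x    # quantity of X that share buys
    return total / consumers

for price in (1.0, 2.0, 4.0, 8.0):
    print(price, round(average_demand_for_x(price), 1))
# Average demand falls as the price rises, with no rationality assumed.
```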
Others argue that decision-making models, such as the endowment effect theory, that have been widely accepted by behavioral economists may be erroneously established as a consequence of poor experimental design practices that do not adequately control for subject misconceptions. Despite a great deal of rhetoric, no unified behavioral theory has yet been espoused: behavioral economists have proposed no alternative unified theory of their own to replace neoclassical economics. David Gal has argued that many of these issues stem from behavioral economics being too concerned with understanding how behavior deviates from standard economic models rather than with understanding why people behave the way they do. Understanding why behavior occurs is necessary for the creation of generalizable knowledge, the goal of science. He has referred to behavioral economics as a "triumph of marketing" and has particularly cited the example of loss aversion. Traditional economists are also skeptical of the experimental and survey-based techniques that behavioral economics uses extensively. Economists typically stress revealed preferences over stated preferences (from surveys) in the determination of economic value. Experiments and surveys are at risk of systemic biases, strategic behavior and lack of incentive compatibility. Some researchers point out that the participants in experiments conducted by behavioral economists are not representative enough and that drawing broad conclusions on the basis of such experiments is not possible. The acronym WEIRD has been coined to describe such study participants: those who come from Western, Educated, Industrialized, Rich, and Democratic societies.

Responses
Matthew Rabin dismisses these criticisms, countering that consistent results are typically obtained in multiple situations and geographies and can produce good theoretical insight. Behavioral economists have also responded to these criticisms by focusing on field studies rather than lab experiments. Some economists see a fundamental schism between experimental economics and behavioral economics, but prominent behavioral and experimental economists tend to share techniques and approaches in answering common questions. For example, behavioral economists are investigating neuroeconomics, which is entirely experimental and has not been verified in the field. The epistemological, ontological, and methodological components of behavioral economics are increasingly debated, in particular by historians of economics and economic methodologists. According to some researchers, when studying the mechanisms that form the basis of decision-making, especially financial decision-making, it is necessary to recognize that most decisions are made under stress, because "Stress is the nonspecific body response to any demands presented to it."

Related fields

Experimental economics
Experimental economics is the application of experimental methods, including statistical, econometric, and computational methods, to study economic questions. Data collected in experiments are used to estimate effect sizes, test the validity of economic theories, and illuminate market mechanisms.
Economic experiments usually use cash to motivate subjects, in order to mimic real-world incentives. Experiments are used to help understand how and why markets and other exchange systems function as they do. Experimental economics has also expanded to the study of institutions and the law (experimental law and economics). A fundamental aspect of the subject is the design of experiments. Experiments may be conducted in the field or in laboratory settings, and may examine individual or group behavior. Variants of the subject outside such formal confines include natural and quasi-natural experiments.

Neuroeconomics
Neuroeconomics is an interdisciplinary field that seeks to explain human decision making, the ability to process multiple alternatives and to follow a course of action. It studies how economic behavior can shape our understanding of the brain, and how neuroscientific discoveries can constrain and guide models of economics. It combines research methods from neuroscience, experimental and behavioral economics, and cognitive and social psychology. As research into decision-making behavior becomes increasingly computational, it has also incorporated new approaches from theoretical biology, computer science, and mathematics. Neuroeconomics studies decision making by using a combination of tools from these fields so as to avoid the shortcomings that arise from a single-perspective approach. In mainstream economics, expected utility (EU) and the concept of rational agents are still being used. Many economic behaviors are not fully explained by these models, such as heuristics and framing. Behavioral economics emerged to account for these anomalies by integrating social, cognitive, and emotional factors in understanding economic decisions. Neuroeconomics adds another layer by using neuroscientific methods to understand the interplay between economic behavior and neural mechanisms. By using tools from various fields, some scholars claim that neuroeconomics offers a more integrative way of understanding decision making.

Evolutionary psychology
An evolutionary psychology perspective holds that many of the perceived limitations in rational choice can be explained as being rational in the context of maximizing biological fitness in the ancestral environment, though not necessarily in the current one. Thus, when living at subsistence level, where a reduction of resources could result in death, it may have been rational to place a greater value on preventing losses than on obtaining gains. It may also explain behavioral differences between groups, such as males being less risk-averse than females, since males have more variable reproductive success than females. While unsuccessful risk-seeking may limit reproductive success for both sexes, males may potentially increase their reproductive success from successful risk-seeking much more than females can.

Notable people

Economics
George Akerlof
Werner De Bondt
Paul De Grauwe
Linda C. Babcock
Douglas Bernheim
Colin Camerer
Armin Falk
Urs Fischbacher
Tshilidzi Marwala
Susan E. Mayer
Ernst Fehr
Simon Gächter
Uri Gneezy
David Laibson
Louis Lévy-Garboua
John A. List
George Loewenstein
Sendhil Mullainathan
John Quiggin
Matthew Rabin
Reinhard Selten
Herbert A. Simon
Vernon L. Smith
Robert Sugden
Larry Summers
Richard Thaler
Abhijit Banerjee
Esther Duflo
Kevin Volpp
Katy Milkman

Finance
Malcolm Baker
Nicholas Barberis
Gunduz Caginalp
David Hirshleifer
Andrew Lo
Michael Mauboussin
Terrance Odean
Richard L. Peterson
Charles Plott
Robert Prechter
Hersh Shefrin
Robert Shiller
Andrei Shleifer
Robert Vishny

Psychology
George Ainslie
Dan Ariely
Ed Diener
Ward Edwards
Laszlo Garai
Djuradj Caranovic
Gerd Gigerenzer
Daniel Kahneman
Ariel Kalil
George Katona
Walter Mischel
Drazen Prelec
Eldar Shafir
Paul Slovic
John Staddon
Amos Tversky
Moran Cerf

See also
Adaptive market hypothesis
Animal Spirits (Keynes)
Behavioralism
Behavioral operations research
Behavioral Strategy
Big Five personality traits
Confirmation bias
Cultural economics
Culture change
Economic sociology
Emotional bias
Fuzzy-trace theory
Hindsight bias
Homo reciprocans
Important publications in behavioral economics
List of cognitive biases
Methodological individualism
Nudge theory
Observational techniques
Praxeology
Priority heuristic
Regret theory
Repugnancy costs
Socioeconomics
Socionomics

References
Citations
Sources

External links
The Behavioral Economics Guide
Overview of Behavioral Finance
The Institute of Behavioral Finance
Stirling Behavioural Science Blog, of the Stirling Behavioural Science Centre at University of Stirling
Society for the Advancement of Behavioural Economics
Behavioral Economics: Past, Present, Future – Colin F. Camerer and George Loewenstein
A History of Behavioural Finance / Economics in Published Research: 1944–1988
MSc Behavioural Economics, MSc in Behavioural Economics at the University of Essex
Behavioral Economics of Shipping Business
Psychological trauma
Psychological trauma (also known as mental trauma, psychiatric trauma, emotional damage, or psychotrauma) is an emotional response caused by severe distressing events that are outside the normal range of human experience. The event must be understood by the affected person as directly threatening the affected person or their loved ones, generally with death, severe bodily injury, or sexual violence; indirect exposure, such as watching television news, may be extremely distressing and can produce an involuntary and possibly overwhelming physiological stress response, but does not produce trauma per se. Examples of distressing events include violence, rape, and terrorist attacks. Short-term reactions such as psychological shock and psychological denial typically follow. Long-term reactions and effects include bipolar disorder, uncontrollable flashbacks, panic attacks, insomnia, nightmare disorder, difficulties with interpersonal relationships, and post-traumatic stress disorder (PTSD). Physical symptoms, including migraines, hyperventilation, hyperhidrosis, and nausea, often develop as well. As subjective experiences differ between individuals, people react to similar events differently. Most people who experience a potentially traumatic event do not become psychologically traumatized, though they may be distressed and experience suffering. Some will develop PTSD after exposure to a traumatic event, or series of events. This discrepancy in risk can be attributed to protective factors some individuals have that enable them to cope with difficult events, including temperamental and environmental factors such as resilience and willingness to seek help. Psychotraumatology is the study of psychological trauma.

Signs and symptoms
People who experience trauma often have problems and difficulties afterwards. The severity of these symptoms depends on the person, the types of trauma involved, and the support and treatment the person receives from others. The range of reactions to trauma can be wide and varied, and differs in severity from person to person. After a traumatic experience, a person may re-experience the trauma mentally and physically. For example, the sound of a motorcycle engine may cause intrusive thoughts or a sense of re-experiencing a traumatic experience that involved a similar sound, e.g. gunfire. Sometimes a benign stimulus (e.g. noise from a motorcycle) becomes connected in the mind with the traumatic experience. This process is called traumatic coupling. In this process, the benign stimulus becomes a trauma reminder, also called a trauma trigger, and can produce uncomfortable and even painful feelings. Re-experiencing can damage people's sense of safety, self, and self-efficacy, as well as their ability to regulate emotions and navigate relationships. They may turn to psychoactive drugs, including alcohol, to try to escape or dampen the feelings. These triggers cause flashbacks, which are dissociative experiences in which the person feels as though the events are recurring. Flashbacks can range from distraction to complete dissociation or loss of awareness of the current context. Re-experiencing symptoms is a sign that the body and mind are actively struggling to cope with the traumatic experience. Triggers and cues act as reminders of the trauma and can cause anxiety and other associated emotions. Often the person can be completely unaware of what these triggers are.
In many cases, this may lead a person with a traumatic disorder to engage in disruptive behaviors or self-destructive coping mechanisms, often without being fully aware of the nature or causes of their own actions. Panic attacks are an example of a psychosomatic response to such emotional triggers. Consequently, intense feelings of anger may frequently surface, sometimes in inappropriate or unexpected situations, as danger may always seem to be present due to re-experiencing past events. Upsetting memories such as images, thoughts, or flashbacks may haunt the person, and nightmares may be frequent. Insomnia may occur as lurking fears and insecurity keep the person vigilant and on the lookout for danger, both day and night. Disordered personal finances and debt are also common among trauma-affected people. Trauma not only causes changes in one's daily functioning but can also lead to morphological changes. Such epigenetic changes can be passed on to the next generation, making genetics one of the components of psychological trauma. However, some people are born with or later develop protective factors, such as genetics, that help lower their risk of psychological trauma. The person may not remember what actually happened, while emotions experienced during the trauma may be re-experienced without the person understanding why (see Repressed memory). This can lead to the traumatic events being constantly experienced as if they were happening in the present, preventing the subject from gaining perspective on the experience. This can produce a pattern of prolonged periods of acute arousal punctuated by periods of physical and mental exhaustion, and can lead to mental health disorders such as acute stress and anxiety disorder, prolonged grief disorder, somatic symptom disorder, conversion disorders, brief psychotic disorder, borderline personality disorder, and adjustment disorder. Obsessive-compulsive disorder is another mental health disorder with symptoms similar to those of psychological trauma, such as hypervigilance and intrusive thoughts. Research has indicated that individuals who have experienced a traumatic event have been known to use symptoms of obsessive-compulsive disorder, such as compulsive checking of safety, as a way to mitigate the symptoms associated with trauma. In time, emotional exhaustion may set in, leading to distraction, and clear thinking may be difficult or impossible. Emotional detachment, as well as dissociation or "numbing out", can frequently occur. Dissociating from the painful emotion includes numbing all emotion, and the person may seem emotionally flat, preoccupied, distant, or cold. Dissociation includes depersonalisation disorder, dissociative amnesia, dissociative fugue, dissociative identity disorder, etc. Exposure to and re-experiencing of trauma can cause neurophysiological changes such as slowed myelination, abnormalities in synaptic pruning, shrinking of the hippocampus, and cognitive and affective impairment. This is significant in brain-scan studies of higher-order function in children and youth who were in vulnerable environments. Some traumatized people may feel permanently damaged when trauma symptoms do not go away and they do not believe their situation will improve. This can lead to feelings of despair, transient paranoid ideation, loss of self-esteem, profound emptiness, suicidality, and, frequently, depression.
If important aspects of the person's self- and world-understanding have been violated, the person may call their own identity into question. Often, despite their best efforts, traumatized parents may have difficulty assisting their child with emotion regulation, attribution of meaning, and containment of post-traumatic fear in the wake of the child's traumatization, leading to adverse consequences for the child. In such instances, seeking counselling in appropriate mental health services is in the best interests of both the child and the parent(s). Trauma is hard for those who experience it to speak of. The event in question might recur to them in a dream or another medium, but it is rare for them to speak of it.

Causes

Situational trauma
Trauma can be caused by human-made, technological and natural disasters, including war, abuse, violence, vehicle collisions, and medical emergencies. An individual's response to psychological trauma can vary based on the type of trauma as well as socio-demographic and background factors. There are several behavioral responses commonly used towards stressors, including proactive, reactive, and passive responses. Proactive responses include attempts to address and correct a stressor before it has a noticeable effect on lifestyle. Reactive responses occur after the stress and possible trauma have occurred and are aimed more at correcting or minimizing the damage of a stressful event. A passive response is often characterized by emotional numbness or ignoring of a stressor. There is also a distinction between trauma induced by recent situations and long-term trauma which may have been buried in the unconscious from past situations such as child abuse. Trauma is sometimes overcome through healing; in some cases this can be achieved by recreating or revisiting the origin of the trauma under more psychologically safe circumstances, such as with a therapist. More recently, awareness of the consequences of climate change has been seen as a source of trauma as individuals contemplate future events as well as experience climate-change-related disasters. Emotional experiences within these contexts are increasing, and collective processing and engagement with these emotions can lead to increased resilience and post-traumatic growth, as well as a greater sense of belongingness. These outcomes are protective against the devastating impacts of psychological trauma.

Stress disorders
All psychological traumas originate from stress, a physiological response to an unpleasant stimulus. Long-term stress increases the risk of poor mental health and mental disorders, which can be attributed to the secretion of glucocorticoids over a long period of time. Such prolonged exposure causes many physiological dysfunctions, such as suppression of the immune system and increased blood pressure. Not only does it affect the body physiologically, but a morphological change in the hippocampus also takes place. Studies have shown that extreme stress early in life can disrupt normal development of the hippocampus and impact its functions in adulthood. Studies also show a correlation between the size of the hippocampus and one's susceptibility to stress disorders. In times of war, psychological trauma has been known as shell shock or combat stress reaction. Psychological trauma may cause an acute stress reaction, which may lead to post-traumatic stress disorder (PTSD).
PTSD emerged as the label for this condition after the Vietnam War, in which many veterans returned to their respective countries demoralized and sometimes addicted to psychoactive substances. The symptoms of PTSD must persist for at least one month for a diagnosis to be made. The symptoms of PTSD fall into four main categories: trauma (i.e. intense fear), reliving (i.e. flashbacks), avoidance behavior (i.e. emotional numbing), and hypervigilance (i.e. continuous scanning of the environment for danger). Research shows that about 60% of the US population report having experienced at least one traumatic symptom in their lives, but only a small proportion actually develop PTSD. There is a correlation between the risk of PTSD and whether or not the act was inflicted deliberately by the offender. Psychological trauma is treated with therapy and, if indicated, psychotropic medications. The term continuous posttraumatic stress disorder (CTSD) was introduced into the trauma literature by Gill Straker (1987). It was originally used by South African clinicians to describe the effects of exposure to frequent, high levels of violence, usually associated with civil conflict and political repression. The term is also applicable to the effects of exposure to contexts in which gang violence and crime are endemic, as well as to the effects of ongoing exposure to life threats in high-risk occupations such as police, fire, and emergency services. Confrontation with the sources of trauma plays a crucial role in the treatment process. While debriefing people immediately after a critical incident has not been shown to reduce the incidence of PTSD, coming alongside people experiencing trauma in a supportive way has become standard practice. The impact of PTSD on children is to a degree unknown, but education on coping mechanisms has been shown to improve the lives of children who have undergone a traumatic event.

Moral injury
Moral injury is distress, such as guilt or shame, following a moral transgression. There are many other definitions, some based on different models of causality. Moral injury is associated with post-traumatic stress disorder but is distinguished from it: moral injury is associated with guilt and shame, while PTSD is correlated with fear and anxiety.

Vicarious trauma
Normally, hearing about or seeing a recording of an event, even a distressing one, does not cause trauma; however, an exception is made in the diagnostic criteria for work-related exposures. Vicarious trauma affects workers who witness their clients' trauma. It is more likely to occur in situations where trauma-related work is the norm rather than the exception. Listening with empathy to clients generates feeling, and seeing oneself in clients' trauma may compound the risk of developing trauma symptoms. Trauma may also result if workers witness situations that happen in the course of their work (e.g. violence in the workplace or reviewing violent videotapes). Risk increases with exposure and with the absence of protective factors such as help-seeking and advance preparation of preventive strategies. Individuals who have a personal history of trauma are also at increased risk of developing vicarious trauma. Vicarious trauma can lead workers to develop more negative views of themselves, others, and the world as a whole, which can compromise their quality of life and their ability to work effectively.
Theoretical models

Shattered assumptions theory
Janoff-Bulman theorises that people generally hold three fundamental assumptions about the world, built and confirmed over years of experience: the world is benevolent, the world is meaningful, and I am worthy. According to shattered assumptions theory, some extreme events "shatter" an individual's worldviews by severely challenging and breaking these assumptions about the world and the self. Once a person has experienced such trauma, it is necessary for them to create new assumptions or modify the old ones in order to recover from the traumatic experience. On this view, the negative effects of the trauma are simply related to our worldviews, and if we repair these views, we will recover from the trauma.

In psychodynamics
Psychodynamic viewpoints are controversial, but have been shown to have utility therapeutically. French neurologist Jean-Martin Charcot argued in the 1890s that psychological trauma was the origin of all instances of the mental illness known as hysteria. Charcot's "traumatic hysteria" often manifested as paralysis that followed a physical trauma, typically years later, after what Charcot described as a period of "incubation". Sigmund Freud, Charcot's student and the father of psychoanalysis, examined the concept of psychological trauma throughout his career. Jean Laplanche has given a general description of Freud's understanding of trauma, which varied significantly over the course of Freud's career: "An event in the subject's life, defined by its intensity, by the subject's incapacity to respond adequately to it and by the upheaval and long-lasting effects that it brings about in the psychical organization". The French psychoanalyst Jacques Lacan claimed that what he called "The Real" had a traumatic quality external to symbolization. As an object of anxiety, Lacan maintained that The Real is "the essential object which isn't an object any longer, but this something faced with which all words cease and all categories fail, the object of anxiety par excellence". Fred Alford, citing the work of object relations theorist Donald Winnicott, uses the concept of the inner other, an internal representation of the social world with which one converses internally and which is generated through interactions with others. He posits that the inner other is damaged by trauma but can be repaired by conversations with others, such as therapists. He relates the concept of the inner other to the work of Albert Camus, viewing the inner other as that which removes the absurd. Alford notes how trauma damages trust in social relations due to fear of exploitation, and argues that culture and social relations can help people recover from trauma. Diana Fosha, a pioneer of the modern psychodynamic perspective, also argues that social relations can help people recover from trauma, but refers specifically to attachment theory and the attachment dynamic of the therapeutic relationship. Fosha argues that the sense of emotional safety and co-regulation that occurs in a psychodynamically oriented therapeutic relationship acts as the secure attachment necessary to allow a client to experience and process their trauma safely and effectively.

Diagnosis
As "trauma" adopted a more widely defined scope, traumatology as a field developed a more interdisciplinary approach. This is in part due to the field's diverse professional representation, including psychologists, medical professionals, and lawyers.
As a result, findings in this field are adapted for various applications, from individual psychiatric treatments to sociological large-scale trauma management. While the field has adopted a number of diverse methodological approaches, many pose their own limitations in practical application. The experience and outcomes of psychological trauma can be assessed in a number of ways. Within the context of a clinical interview, the risk of imminent danger to the self or others is important to address but is not the focus of assessment. In most cases, it will not be necessary to contact emergency services (e.g., medical, psychiatric, law enforcement) to ensure the individual's safety; members of the individual's social support network are much more critical. Understanding and accepting the psychological state of an individual is paramount. There are many misconceptions of what it means for a traumatized individual to be in psychological crisis. These are times when an individual is in inordinate amounts of pain and incapable of self-comfort. If treated humanely and respectfully, the individual is less likely to resort to self-harm. In these situations it is best to provide a supportive, caring environment and to communicate to the individual that no matter the circumstance, they will be taken seriously rather than being treated as delusional. It is vital for the assessor to understand that what is going on in the traumatized person's head is valid and real. If deemed appropriate, the assessing clinician may proceed by inquiring about both the traumatic event and the outcomes experienced (e.g., post-traumatic symptoms, dissociation, substance abuse, somatic symptoms, psychotic reactions). Such inquiry occurs within the context of established rapport and is completed in an empathic, sensitive, and supportive manner. The clinician may also inquire about possible relational disturbance, such as alertness to interpersonal danger, abandonment issues, and the need for self-protection via interpersonal control. Through discussion of interpersonal relationships, the clinician is better able to assess the individual's ability to enter and sustain a clinical relationship. During assessment, individuals may exhibit activation responses, in which reminders of the traumatic event trigger sudden feelings (e.g., distress, anxiety, anger), memories, or thoughts relating to the event. Because individuals may not yet be capable of managing this distress, it is necessary to determine how the event can be discussed in such a way that will not "retraumatize" the individual. It is also important to take note of such responses, as they may aid the clinician in determining the intensity and severity of possible post-traumatic stress as well as the ease with which responses are triggered. Further, it is important to note the presence of possible avoidance responses. Avoidance responses may involve the absence of expected activation or emotional reactivity as well as the use of avoidance mechanisms (e.g., substance use, effortful avoidance of cues associated with the event, dissociation). In addition to monitoring activation and avoidance responses, clinicians carefully observe the individual's strengths or difficulties with affect regulation (i.e., affect tolerance and affect modulation). Such difficulties may be evidenced by mood swings, brief yet intense depressive episodes, or self-mutilation.
The information gathered through observation of affect regulation will guide the clinician's decisions regarding the individual's readiness to partake in various therapeutic activities. Though assessment of psychological trauma may be conducted in an unstructured manner, it may also involve the use of a structured interview. Such interviews might include the Clinician-Administered PTSD Scale, the Acute Stress Disorder Interview, the Structured Interview for Disorders of Extreme Stress, the Structured Clinical Interview for DSM-IV Dissociative Disorders – Revised, and the Brief Interview for Posttraumatic Disorders. Lastly, assessment of psychological trauma might include the use of self-administered psychological tests. Individual scores on such tests are compared to normative data in order to determine how the individual's level of functioning compares to others in a sample representative of the general population. Psychological testing might include the use of generic tests (e.g., MMPI-2, MCMI-III, SCL-90-R) to assess non-trauma-specific symptoms as well as difficulties related to personality. In addition, psychological testing might include the use of trauma-specific tests to assess post-traumatic outcomes. Such tests might include the Posttraumatic Stress Diagnostic Scale, the Davidson Trauma Scale, the Detailed Assessment of Posttraumatic Stress, the Trauma Symptom Inventory, the Trauma Symptom Checklist for Children, the Traumatic Life Events Questionnaire, and the Trauma-Related Guilt Inventory. Children are assessed through activities and the therapeutic relationship; some of these activities are play genograms, sand worlds, coloring feelings, self and kinetic family drawings, symbol work, dramatic puppet play, storytelling, Briere's TSCC, etc.

Definition
The Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) defines trauma as the symptoms that occur following exposure to an event (i.e., a traumatic event) that involves actual or threatened death, serious injury, or sexual violence. This exposure could come in the form of experiencing or witnessing the event, or learning that an extremely violent or accidental event was experienced by a loved one. Trauma symptoms may come in the form of intrusive memories, dreams, or flashbacks; avoidance of reminders of the traumatic event; negative thoughts and feelings; or increased alertness or reactivity. Memories associated with trauma are typically explicit, coherent, and difficult to forget. Due to the complexity of the interaction between traumatic event occurrence and trauma symptomatology, a person's distress response to aversive details of a traumatic event may involve intense fear or helplessness, but ranges according to the context. In children, trauma symptoms can manifest in the form of disorganized or agitated behaviors. Trauma can be caused by a wide variety of events, but there are a few common aspects. There is frequently a violation of the person's core assumptions about the world and their human rights, putting the person in a state of extreme confusion and insecurity. This is seen when institutions depended upon for survival violate, humiliate, betray, or cause major losses or separations instead of evoking aspects like positive self-worth, safe boundaries and personal freedom. Psychologically traumatic experiences often involve physical trauma that threatens one's survival and sense of security.
Typical causes and dangers of psychological trauma include harassment; embarrassment; abandonment; abusive relationships; rejection; co-dependence; physical assault; sexual abuse; partner battery; employment discrimination; police brutality; judicial corruption and misconduct; bullying; paternalism; domestic violence; indoctrination; being the victim of an alcoholic parent; the threat or the witnessing of violence (particularly in childhood); life-threatening medical conditions; and medication-induced trauma. Catastrophic natural disasters such as earthquakes and volcanic eruptions; large-scale transportation accidents; house or domestic fires; motor vehicle collisions; mass interpersonal violence such as war; terrorist attacks or other mass victimization such as sex trafficking; and being taken hostage or kidnapped can also cause psychological trauma. Long-term exposure to situations such as extreme poverty, or to other forms of abuse such as verbal abuse, can exist independently of physical trauma but still generates psychological trauma. Some theories suggest childhood trauma can increase one's risk for mental disorders including post-traumatic stress disorder (PTSD), depression, and substance abuse. Childhood adversity is associated with neuroticism during adulthood. Parts of the brain in a growing child develop in a sequential and hierarchical order, from least complex to most complex. The brain's neurons change in response to constant external signals and stimulation, receiving and storing new information. This allows the brain to continually respond to its surroundings and promote survival. The five traditional senses (sight, hearing, taste, smell, and touch) contribute to the developing brain structure and its function. Infants and children begin to create internal representations of their external environment, and in particular of key attachment relationships, shortly after birth. Violent and victimizing attachment figures impact infants' and young children's internal representations. The more frequently a specific pattern of brain neurons is activated, the more permanent the internal representation associated with the pattern becomes. This causes sensitization in the brain towards the specific neural network. Because of this sensitization, the neural pattern can be activated by progressively weaker external stimuli. Of all forms of trauma, child abuse tends to have the most complications and the most long-term effects, because it occurs during the most sensitive and critical stages of psychological development. It could lead to violent behavior, possibly as extreme as serial murder. For example, Hickey's Trauma-Control Model suggests that "childhood trauma for serial murderers may serve as a triggering mechanism resulting in an individual's inability to cope with the stress of certain events." Often, psychological aspects of trauma are overlooked even by health professionals: "If clinicians fail to look through a trauma lens and to conceptualize client problems as related possibly to current or past trauma, they may fail to see that trauma victims, young and old, organize much of their lives around repetitive patterns of reliving and warding off traumatic memories, reminders, and affects." Biopsychosocial models offer a broader view of health problems than biomedical models.

Effects
Evidence suggests that a minority of people who experience severe trauma in adulthood will experience enduring personality change.
Personality changes include guilt, distrust, impulsiveness, aggression, avoidance, obsessive behaviour, emotional numbness, loss of interest, hopelessness and altered self-perception.

Treatment
A number of psychotherapy approaches have been designed with the treatment of trauma in mind, including EMDR, progressive counting, somatic experiencing, biofeedback, Internal Family Systems Therapy, sensorimotor psychotherapy, and Emotional Freedom Techniques (EFT). Trauma-informed care provides a framework for any person in any discipline or context to promote healing, or at least to avoid re-traumatizing. A 2018 systematic review provided moderate evidence that Eye Movement Desensitization and Reprocessing (EMDR) is effective in reducing PTSD and depression symptoms and increases the likelihood of patients no longer meeting the criteria for PTSD. There is a large body of empirical support for the use of cognitive behavioral therapy for the treatment of trauma-related symptoms, including post-traumatic stress disorder. Institute of Medicine guidelines identify cognitive behavioral therapies as the most effective treatments for PTSD. Two of these cognitive behavioral therapies, prolonged exposure and cognitive processing therapy, are being disseminated nationally by the Department of Veterans Affairs for the treatment of PTSD. A 2010 Cochrane review found that trauma-focused cognitive behavioral therapy was effective for individuals with acute traumatic stress symptoms when compared to a waiting list and supportive counseling. Seeking Safety is another type of cognitive behavioral therapy that focuses on learning safe coping skills for co-occurring PTSD and substance use problems. While some sources highlight Seeking Safety as effective, with strong research support, others have suggested that it did not lead to improvements beyond usual treatment. A review from 2014 showed that a combination of treatments involving dialectical behavior therapy (DBT), often used for borderline personality disorder, and exposure therapy is highly effective in treating psychological trauma. If, however, psychological trauma has caused dissociative disorders or complex PTSD, the trauma model approach (also known as phase-oriented treatment of structural dissociation) has been shown to work better than the simple cognitive approach. Studies funded by pharmaceutical companies have also shown that medications such as the newer antidepressants are effective when used in combination with other psychological approaches. At present, the selective serotonin reuptake inhibitor (SSRI) antidepressants sertraline (Zoloft) and paroxetine (Paxil) are the only medications approved by the Food and Drug Administration (FDA) in the United States to treat PTSD. Other options for pharmacotherapy include serotonin-norepinephrine reuptake inhibitor (SNRI) antidepressants and antipsychotic medications, though none have been FDA approved. Trauma therapy allows the processing of trauma-related memories and allows growth towards more adaptive psychological functioning. It helps to develop positive coping instead of negative coping and allows the individual to integrate upsetting, distressing material (thoughts, feelings and memories) and to resolve it internally. It also aids in the growth of personal skills like resilience, ego regulation and empathy. Processes involved in trauma therapy include:
Psychoeducation: information dissemination and education about vulnerabilities and adoptable coping mechanisms.
Emotional regulation: Identifying, countering, and discriminating thoughts and emotions, and grounding them by moving them from an internal construction to an external representation. Cognitive processing: Transforming negative perceptions and beliefs about self, others, and the environment into positive ones through cognitive reconsideration or re-framing. Trauma processing: Systematic desensitization, response activation and counter-conditioning, titrated extinction of emotional response, deconstructing disparity (emotional vs. reality state), and resolution of traumatic material (in theory, to a state in which triggers no longer produce harmful distress and the individual is able to express relief). Emotional processing: Reconstructing perceptions, beliefs, and erroneous expectations; habituating new life contexts for auto-activated trauma-related fears; and providing crisis cards with coded emotions and appropriate cognitions. (This stage is initiated only in the pre-termination phase, based on the clinical assessment and judgement of the mental health professional.) Experiential processing: Visualization of the achieved relief state and relaxation methods. A number of complementary approaches to trauma treatment have been investigated as well, including yoga and meditation. There has been recent interest in developing trauma-sensitive yoga practices, but the actual efficacy of yoga in reducing the effects of trauma needs more exploration. In health and social care settings, a trauma-informed approach means that care is underpinned by an understanding of trauma and its far-reaching implications. Trauma is widespread. For example, 26% of participants in the Adverse Childhood Experiences (ACEs) study were survivors of one ACE and 12.5% were survivors of four or more ACEs. A trauma-informed approach acknowledges the high rates of trauma and means that care providers treat every person as if they might be a survivor of trauma. Measurement of the effectiveness of a universal trauma-informed approach is in its early stages and is largely based on theory and epidemiology. Trauma-informed teaching practice is an educational approach for migrant children from war-torn countries, who have typically experienced complex trauma, and the number of such children entering Canadian schools has led some school jurisdictions to consider new classroom approaches to assist these pupils. Along with complex trauma, these students have often experienced interrupted schooling due to the migration process, and as a consequence may have limited literacy skills in their first language. One study of a Canadian secondary school classroom, as told through the journal entries of a student teacher, showed how Blaustein and Kinniburgh's ARC (attachment, regulation and competency) framework was used to support newly arrived refugee students from war zones. Tweedie et al. (2017) describe how key components of the ARC framework, such as establishing consistency in classroom routines, assisting students to identify and self-regulate emotional responses, and enabling student personal goal achievement, are practically applied in one classroom where students have experienced complex trauma. The authors encourage teachers and schools to avoid viewing such pupils through a deficit lens, and suggest ways schools can structure teaching and learning environments that take into account the extreme stresses these students have encountered. 
Society and culture Some people, and many self-help books, use the word trauma broadly, to refer to any unpleasant experience, even if the affected person has a psychologically healthy response to the experience. This imprecise language may promote the medicalization of normal human behaviors (e.g., grief after a death) and make discussions of psychological trauma more complex, but it might also encourage people to respond with compassion to the distress and suffering of others. External links The International Society for Traumatic Stress Studies (ISTSS) Trauma-Focused Cognitive Behavioral Therapy – Medical University of South Carolina National Child Traumatic Stress Network (NCTSN) Trauma Information Pages
Bottom–up and top–down design
Bottom–up and top–down are both strategies of information processing and ordering knowledge, used in a variety of fields including software, humanistic and scientific theories (see systemics), and management and organization. In practice they can be seen as a style of thinking, teaching, or leadership. A top–down approach (also known as stepwise design and stepwise refinement and in some cases used as a synonym of decomposition) is essentially the breaking down of a system to gain insight into its compositional subsystems in a reverse engineering fashion. In a top–down approach an overview of the system is formulated, specifying, but not detailing, any first-level subsystems. Each subsystem is then refined in yet greater detail, sometimes in many additional subsystem levels, until the entire specification is reduced to base elements. A top–down model is often specified with the assistance of black boxes, which makes it easier to manipulate. However black boxes may fail to clarify elementary mechanisms or be detailed enough to realistically validate the model. A top–down approach starts with the big picture, then breaks down into smaller segments. A bottom–up approach is the piecing together of systems to give rise to more complex systems, thus making the original systems subsystems of the emergent system. Bottom–up processing is a type of information processing based on incoming data from the environment to form a perception. From a cognitive psychology perspective, information enters the eyes in one direction (sensory input, or the "bottom"), and is then turned into an image by the brain that can be interpreted and recognized as a perception (output that is "built up" from processing to final cognition). In a bottom–up approach the individual base elements of the system are first specified in great detail. These elements are then linked together to form larger subsystems, which then in turn are linked, sometimes in many levels, until a complete top-level system is formed. This strategy often resembles a "seed" model, by which the beginnings are small but eventually grow in complexity and completeness. But "organic strategies" may result in a tangle of elements and subsystems, developed in isolation and subject to local optimization as opposed to meeting a global purpose. Product design and development During the development of new products, designers and engineers rely on both bottom–up and top–down approaches. The bottom–up approach is being used when off-the-shelf or existing components are selected and integrated into the product. An example includes selecting a particular fastener, such as a bolt, and designing the receiving components such that the fastener will fit properly. In a top–down approach, a custom fastener would be designed such that it would fit properly in the receiving components. For perspective, for a product with more restrictive requirements (such as weight, geometry, safety, environment), such as a spacesuit, a more top–down approach is taken and almost everything is custom designed. Computer science Software development Part of this section is from the Perl Design Patterns Book. In the software development process, the top–down and bottom–up approaches play a key role. Top–down approaches emphasize planning and a complete understanding of the system. It is inherent that no coding can begin until a sufficient level of detail has been reached in the design of at least some part of the system. 
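As a minimal sketch of this style (in Python, with hypothetical function names such as generate_report and load_orders chosen purely for illustration, not taken from any particular project), a top–down design can begin with a top-level procedure that fixes the overall flow while every lower-level piece is still a stub:

```python
# Top-down sketch: the top-level routine is written first and names the
# major functions it will need; each lower-level piece starts as a stub.
# All names here are hypothetical and exist only to illustrate the style.

def load_orders(path):
    """Stub: read raw order records from a file (to be refined later)."""
    raise NotImplementedError

def validate(orders):
    """Stub: discard malformed records (to be refined later)."""
    raise NotImplementedError

def summarize(orders):
    """Stub: aggregate totals per customer (to be refined later)."""
    raise NotImplementedError

def generate_report(path):
    """Top-level procedure: specifies the overall flow before any
    subsystem has been implemented in detail."""
    orders = load_orders(path)
    valid = validate(orders)
    return summarize(valid)
```

Each stub would then be refined in a later design pass, in the spirit of stepwise refinement.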
Top–down approaches are implemented by attaching stubs in place of the modules that have not yet been written. But these stubs delay testing of the ultimate functional units of a system until significant design is complete. Bottom–up emphasizes coding and early testing, which can begin as soon as the first module has been specified. But this approach runs the risk that modules may be coded without having a clear idea of how they link to other parts of the system, and that such linking may not be as easy as first thought. Re-usability of code is one of the main benefits of a bottom–up approach. Top–down design was promoted in the 1970s by IBM researcher Harlan Mills and by Niklaus Wirth. Mills developed structured programming concepts for practical use and tested them in a 1969 project to automate the New York Times morgue index. The engineering and management success of this project led to the spread of the top–down approach through IBM and the rest of the computer industry. Among other achievements, Niklaus Wirth, the developer of the Pascal programming language, wrote the influential paper Program Development by Stepwise Refinement. Since Niklaus Wirth went on to develop languages such as Modula and Oberon (where one could define a module before knowing about the entire program specification), one can infer that top–down programming was not strictly what he promoted. Top–down methods were favored in software engineering until the late 1980s, and object-oriented programming helped demonstrate that top–down and bottom–up programming could be used together. Modern software design approaches usually combine top–down and bottom–up approaches. Although an understanding of the complete system is usually considered necessary for good design—leading theoretically to a top–down approach—most software projects attempt to make use of existing code to some degree. Pre-existing modules give designs a bottom–up flavor. Programming Top–down is a programming style, the mainstay of traditional procedural languages, in which design begins by specifying complex pieces and then dividing them into successively smaller pieces. The technique for writing a program using top–down methods is to write a main procedure that names all the major functions it will need. Later, the programming team looks at the requirements of each of those functions and the process is repeated. These compartmentalized subroutines eventually will perform actions so simple they can be easily and concisely coded. When all the various subroutines have been coded, the program is ready for testing. By defining how the application comes together at a high level, lower-level work can be self-contained. In a bottom–up approach, as described above, the individual base elements of the system are specified in great detail first and then linked together to form progressively larger subsystems until a complete top-level system emerges. Object-oriented programming (OOP) is a paradigm that uses "objects" to design applications and computer programs. In mechanical engineering, with software programs such as Pro/ENGINEER, SolidWorks, and Autodesk Inventor, users can design products as individual pieces rather than as parts of the whole and later add those pieces together to form assemblies, much like building with Lego. Engineers call this "piece part design". 
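Returning to the programming discussion above, a complementary bottom–up sketch (again in Python, with illustrative, hypothetical names) starts from small, independently testable base elements and only afterwards composes them into a larger unit:

```python
# Bottom-up sketch: small base elements are written and exercised first,
# and only later composed into a higher-level subsystem.
# All names are hypothetical and exist only to illustrate the style.

def parse_price(text):
    """Base element: convert a price string such as '12.50' into cents."""
    return round(float(text) * 100)

def apply_discount(cents, percent):
    """Base element: apply a whole-percent discount to an amount in cents."""
    return cents - (cents * percent) // 100

# Base elements can be tested immediately, before any higher level exists.
assert parse_price("12.50") == 1250
assert apply_discount(1000, 10) == 900

def checkout_total(price_texts, percent):
    """Higher-level unit, composed afterwards from the tested pieces."""
    return sum(apply_discount(parse_price(t), percent) for t in price_texts)

print(checkout_total(["12.50", "3.00"], 10))  # 1395 cents
```

The trade-off described earlier applies: each piece can be coded and tested early, but how the pieces will link into a coherent top-level system only becomes clear later.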
Parsing Parsing is the process of analyzing an input sequence (such as that read from a file or a keyboard) in order to determine its grammatical structure. This method is used in the analysis of both natural languages and computer languages, as in a compiler. Nanotechnology Top–down and bottom–up are two approaches for the manufacture of products. These terms were first applied to the field of nanotechnology by the Foresight Institute in 1989 to distinguish between molecular manufacturing (to mass-produce large atomically precise objects) and conventional manufacturing (which can mass-produce large objects that are not atomically precise). Bottom–up approaches seek to have smaller (usually molecular) components built up into more complex assemblies, while top–down approaches seek to create nanoscale devices by using larger, externally controlled ones to direct their assembly. Certain valuable nanostructures, such as silicon nanowires, can be fabricated using either approach, with processing methods selected on the basis of targeted applications. A top–down approach often uses traditional workshop or microfabrication methods in which externally controlled tools are used to cut, mill, and shape materials into the desired shape and order. Micropatterning techniques, such as photolithography and inkjet printing, belong to this category. Vapor treatment can be regarded as a newer, secondary top–down approach to engineering nanostructures. Bottom–up approaches, in contrast, use the chemical properties of single molecules to cause single-molecule components to (a) self-organize or self-assemble into some useful conformation, or (b) rely on positional assembly. These approaches use the concepts of molecular self-assembly and/or molecular recognition (see also supramolecular chemistry). Such bottom–up approaches should, broadly speaking, be able to produce devices in parallel and much more cheaply than top–down methods, but they could potentially be overwhelmed as the size and complexity of the desired assembly increases. Neuroscience and psychology These terms are also employed in the cognitive sciences, including neuroscience, cognitive neuroscience, and cognitive psychology, to discuss the flow of information in processing. Typically, sensory input is considered bottom–up, and higher cognitive processes, which have more information from other sources, are considered top–down. A bottom–up process is characterized by an absence of higher-level direction in sensory processing, whereas a top–down process is characterized by a high level of direction of sensory processing by more cognition, such as goals or targets (Biederman, 19). According to college teaching notes written by Charles Ramskov, Irvin Rock, Neisser, and Richard Gregory claim that the top–down approach involves perception as an active and constructive process. On this view, perception is not given directly by stimulus input, but results from the interaction of the stimulus, internal hypotheses, and expectations. According to theoretical synthesis, "when a stimulus is presented short and clarity is uncertain that gives a vague stimulus, perception becomes a top-down approach." Conversely, psychology defines bottom–up processing as an approach in which there is a progression from the individual elements to the whole. According to Ramskov, one proponent of the bottom–up approach, Gibson, claims that visual perception is a process that relies on the information available in the proximal stimulus, which is produced by the distal stimulus. 
Theoretical synthesis also claims that bottom–up processing occurs "when a stimulus is presented long and clearly enough." Certain cognitive processes, such as fast reactions or quick visual identification, are considered bottom–up processes because they rely primarily on sensory information, whereas processes such as motor control and directed attention are considered top–down because they are goal-directed. Neurologically speaking, some areas of the brain, such as area V1, mostly have bottom–up connections. Other areas, such as the fusiform gyrus, have inputs from higher brain areas and are considered to have top–down influence. The study of visual attention is an example. If your attention is drawn to a flower in a field, it may be because the color or shape of the flower is visually salient. The information that caused you to attend to the flower came to you in a bottom–up fashion—your attention was not contingent on knowledge of the flower: the outside stimulus was sufficient on its own. Contrast this situation with one in which you are looking for a flower. You have a representation of what you are looking for. When you see the object you are looking for, it is salient. This is an example of the use of top–down information. In cognition, two thinking approaches are distinguished. "Top–down" (or "big chunk") is stereotypically the visionary, or the person who sees the larger picture and overview. Such people focus on the big picture and from that derive the details to support it. "Bottom–up" (or "small chunk") cognition is akin to focusing on the detail primarily, rather than the landscape. The expression "seeing the wood for the trees" references the two styles of cognition. Studies in task switching and response selection show that there are differences between the two types of processing. Top–down processing primarily focuses on the attentional side, such as task repetition, while bottom–up processing focuses on item-based learning, such as finding the same object over and over again (Schneider, 2015). These findings have implications for understanding attentional control of response selection in conflict situations (Schneider, 2015). Similar considerations apply to structuring information interfaces for procedural learning: although top–down principles were effective in guiding interface design, they were not sufficient on their own, and they can be combined with iterative bottom–up methods to produce usable interfaces (Zacks & Tversky, 2003). Schooling Undergraduate (or bachelor) students are typically taught the basics of top–down and bottom–up processing around their third year in the program, going through four main parts of the processing when viewing it from a learning perspective. The central distinction is that bottom–up processing is determined directly by environmental stimuli, rather than by the individual's knowledge and expectations (Koch, 2022). Management and organization In the fields of management and organization, the terms "top–down" and "bottom–up" are used to describe how decisions are made and/or how change is implemented. A "top–down" approach is one in which an executive decision maker or other top person decides how something should be done. These decisions are disseminated under their authority to lower levels in the hierarchy, whose members are, to a greater or lesser extent, bound by them. 
For example, when wanting to make an improvement in a hospital, a hospital administrator might decide that a major change (such as implementing a new program) is needed, and then use a planned approach to drive the changes down to the frontline staff. A bottom–up approach to changes is one that works from the grassroots, and originates in a flat structure with people working together, causing a decision to arise from their joint involvement. A decision by a number of activists, students, or victims of some incident to take action is a "bottom–up" decision. A bottom–up approach can be thought of as "an incremental change approach that represents an emergent process cultivated and upheld primarily by frontline workers". Positive aspects of top–down approaches include their efficiency and superb overview of higher levels; and external effects can be internalized. On the negative side, if reforms are perceived to be imposed "from above", it can be difficult for lower levels to accept them (e.g., Bresser-Pereira, Maravall, and Przeworski 1993). Evidence suggests this to be true regardless of the content of reforms (e.g., Dubois 2002). A bottom–up approach allows for more experimentation and a better feeling for what is needed at the bottom. Other evidence suggests that there is a third combination approach to change. Public health Both top–down and bottom–up approaches are used in public health. There are many examples of top–down programs, often run by governments or large inter-governmental organizations; many of these are disease-or issue-specific, such as HIV control or smallpox eradication. Examples of bottom–up programs include many small NGOs set up to improve local access to healthcare. But many programs seek to combine both approaches; for instance, guinea worm eradication, a single-disease international program currently run by the Carter Center has involved the training of many local volunteers, boosting bottom-up capacity, as have international programs for hygiene, sanitation, and access to primary healthcare. Architecture Often the École des Beaux-Arts school of design is said to have primarily promoted top–down design because it taught that an architectural design should begin with a parti, a basic plan drawing of the overall project. By contrast, the Bauhaus focused on bottom–up design. This method manifested itself in the study of translating small-scale organizational systems to a larger, more architectural scale (as with the wood panel carving and furniture design). Ecology In ecology top–down control refers to when a top predator controls the structure or population dynamics of the ecosystem. The interactions between these top predators and their prey are what influences lower trophic levels. Changes in the top level of trophic levels have an inverse effect on the lower trophic levels. Top–down control can have negative effects on the surrounding ecosystem if there is a drastic change in the number of predators. The classic example is of kelp forest ecosystems. In such ecosystems, sea otters are a keystone predator. They prey on urchins, which in turn eat kelp. When otters are removed, urchin populations grow and reduce the kelp forest creating urchin barrens. This reduces the diversity of the ecosystem as a whole and can have detrimental effects on all of the other organisms. In other words, such ecosystems are not controlled by productivity of the kelp, but rather, a top predator. 
One can see the inverse effect of top–down control in this example: when the population of otters decreased, the population of urchins increased. Bottom–up control in ecosystems refers to ecosystems in which the nutrient supply, productivity, and type of primary producers (plants and phytoplankton) control the ecosystem structure. If there are not enough resources or producers in the ecosystem, there is not enough energy left for the rest of the animals in the food chain because of biomagnification and ecological efficiency. An example would be how plankton populations are controlled by the availability of nutrients. Plankton populations tend to be higher and more complex in areas where upwelling brings nutrients to the surface. There are many different examples of these concepts. It is common for populations to be influenced by both types of control, and there are still debates as to which type of control affects food webs in certain ecosystems. Philosophy and ethics Top–down reasoning in ethics occurs when the reasoner starts from abstract, universalizable principles and then reasons down from them to particular situations. Bottom–up reasoning occurs when the reasoner starts from intuitive, particular situational judgements and then reasons up to principles. Reflective equilibrium occurs when there is interaction between top–down and bottom–up reasoning until both are in harmony: that is, when universalizable abstract principles are reflectively found to be in equilibrium with particular intuitive judgements. The process occurs as reasoners experience cognitive dissonance while trying to reconcile top–down with bottom–up reasoning, and adjust one or the other until they are satisfied that they have found the best combination of principles and situational judgements. See also The Cathedral and the Bazaar Pseudocode References cited https://philpapers.org/rec/COHTNO Citations and notes Further reading Corpeño, E. (2021). The Top-Down Approach to Problem Solving: How to Stop Struggling in Class and Start Learning. Goldstein, E. B. (2010). Sensation and Perception. USA: Wadsworth. Galotti, K. (2008). Cognitive Psychology: In and Out of the Laboratory. USA: Wadsworth. Dubois, Hans F. W. (2002). Harmonization of the European vaccination policy and the role TQM and reengineering could play. Quality Management in Health Care 10(2): 47–57. Estes, J. A., Tinker, M. T., Williams, T. M., & Doak, D. F. (1998). "Killer Whale Predation on Sea Otters Linking Oceanic and Nearshore Ecosystems", Science, October 16, 1998, Vol. 282, No. 5388, pp. 473–476. Bresser-Pereira, Luiz Carlos, José María Maravall, and Adam Przeworski (1993). Economic Reforms in New Democracies. Cambridge: Cambridge University Press. External links "Program Development by Stepwise Refinement", Communications of the ACM, Vol. 14, No. 4, April 1971. Integrated Parallel Bottom-up and Top-down Approach, in Proceedings of the International Emergency Management Society's Fifth Annual Conference (TIEMS 98), May 19–22, Washington DC, USA, 1998. "Changing Your Mind: On the Contributions of Top-Down and Bottom-Up Guidance in Visual Search for Feature Singletons", Journal of Experimental Psychology: Human Perception and Performance, Vol. 29, No. 2, pp. 483–502, 2003. K. Eric Drexler and Christine Peterson, Nanotechnology and Enabling Technologies, Foresight Briefing No. 2, 1989. Empowering sustained patient safety: the benefits of combining top-down and bottom-up approaches.
Empathy
Empathy is generally described as the ability to take on another's perspective, and to understand, feel, and possibly share and respond to their experience. There are many (sometimes conflicting) definitions of empathy that include, but are not limited to, social, cognitive, and emotional processes primarily concerned with understanding others. Empathy is often considered a broad term that is broken down into more specific concepts and types, including cognitive empathy, emotional (or affective) empathy, somatic empathy, and spiritual empathy. Empathy is still a topic of research. The major areas of research include the development of empathy, the genetics and neuroscience of empathy, cross-species empathy, and the impairment of empathy. Some researchers have made efforts to quantify empathy through different methods, such as questionnaires that participants fill out and are then scored on. Other research discusses the effects of empathy, including the benefits and the problems caused by a lack or an overabundance of empathy. Discussions of empathy are common in the fields of ethics, politics, business, medicine, culture, and fiction. Etymology The English word empathy is derived from the Ancient Greek empatheia (ἐμπάθεια, meaning "physical affection or passion"). That word derives from en (ἐν, "in, at") and pathos (πάθος, "passion" or "suffering"). Theodor Lipps adapted the German aesthetic term Einfühlung ("feeling into") to psychology in 1903, and Edward B. Titchener translated Einfühlung into English as "empathy" in 1909. In modern Greek, empatheia (εμπάθεια) may mean, depending on context, prejudice, malevolence, malice, or hatred. Definitions Since its introduction into the English language, empathy has had a wide range of (sometimes conflicting) definitions among both researchers and laypeople. Empathy definitions encompass a broad range of phenomena, including caring for other people and having a desire to help them, experiencing emotions that match another person's, discerning what another person is thinking or feeling, and making less distinct the differences between the self and the other. Since empathy involves understanding the emotional states of other people, the way it is characterized derives from the way emotions are characterized. For example, if emotions are characterized by bodily feelings, then understanding the bodily feelings of another will be considered central to empathy. On the other hand, if emotions are characterized by a combination of beliefs and desires, then understanding those beliefs and desires will be more essential to empathy. The ability to imagine oneself as another person is a sophisticated process. However, the basic capacity to recognize emotions in others may be innate and may be achieved unconsciously. Empirical research supports a variety of interventions to improve empathy. Empathy is not all-or-nothing; rather, a person can be more or less empathic toward another. Paradigmatically, a person exhibits empathy when they communicate an accurate recognition of the significance of another person's ongoing intentional actions, associated emotional states, and personal characteristics in a manner that seems accurate and tolerable to the recognized person. This is a nuanced perspective on empathy that assists in the understanding of complex human emotions and interactions. Acknowledging subjective experiences highlights the need for balance and understanding when engaging in empathy. 
One's ability to recognize the bodily feelings of another is related to one's imitative capacities, and seems to be grounded in an innate capacity to associate the bodily movements and facial expressions one sees in another with the proprioceptive feelings of producing those corresponding movements or expressions oneself. Because empathy is rooted in our ability to imitate another's painful experience, people with disorders that inhibit social understanding and connection may experience difficulty showing empathy for others. These people could include individuals diagnosed with Asperger's syndrome or autism. Distinctions between empathy and related concepts Compassion and sympathy are terms associated with empathy. A person feels compassion when they notice others are in need, and this feeling motivates that person to help. Like empathy, compassion has a wide range of definitions and purported facets (which overlap with some definitions of empathy). Sympathy is a feeling of care and understanding for someone in need. Some include in sympathy an empathic concern for another person, and the wish to see them better off or happier. Empathy is also related to pity and emotional contagion. One feels pity towards others who might be in trouble or in need of help. This feeling is described as "feeling sorry" for someone. Emotional contagion is when a person (especially an infant or a member of a mob) imitatively "catches" the emotions that others are showing without necessarily recognizing this is happening. Alexithymia describes a deficiency in understanding, processing, or describing one's own emotions (unlike empathy, which concerns someone else's emotions). Classification Empathy has two major components: Affective empathy, also called emotional empathy, is the ability to respond with an appropriate emotion to another's mental states. Our ability to empathize emotionally is based on emotional contagion: being affected by another's emotional or arousal state. Affective empathy can be subdivided into the following scales: Empathic concern: sympathy and compassion for others in response to their suffering. Personal distress: feelings of discomfort and anxiety in response to another's suffering. There is no consensus regarding whether personal distress is a form of empathy or instead is something distinct from empathy. There may be a developmental aspect to this subdivision. Infants respond to the distress of others by getting distressed themselves; only when they are two years old do they start to respond in other-oriented ways: trying to help, comfort, and share. Affective mentalizing: uses clues like body language, facial expressions, knowledge about the other's beliefs and situation, and context to understand more about what one is empathizing with. Cognitive empathy is the ability to understand another's perspective or mental state. The terms empathic accuracy, social cognition, perspective-taking, theory of mind, and mentalizing are often used synonymously, but due to a lack of studies comparing theory of mind with types of empathy, it is unclear whether these are equivalent. Although measures of cognitive empathy include self-report questionnaires and behavioral measures, a 2019 meta-analysis found only a negligible association between self-report and behavioral measures, suggesting that people are generally not able to accurately assess their own cognitive empathy abilities. Cognitive empathy can be subdivided into the following scales: Perspective-taking: the tendency to spontaneously adopt others' psychological perspectives. 
Fantasy: the tendency to identify with fictional characters. Tactical (or strategic) empathy: the deliberate use of perspective-taking to achieve certain desired ends. Emotion regulation: a damper on the emotional contagion process that allows you to empathize without being overwhelmed by the emotion you are empathizing with. The scientific community has not coalesced around a precise definition of these constructs, but there is consensus about this distinction. Affective and cognitive empathy are also independent from one another; someone who strongly empathizes emotionally is not necessarily good in understanding another's perspective. Additional constructs that have been proposed include behavioral empathy (which governs how one chooses to respond to feelings of empathy), social empathy (in which the empathetic person integrates their understanding of broader social dynamics into their empathetic modeling), and ecological empathy (which encompasses empathy directed towards the natural world). In addition, Fritz Breithaupt emphasizes the importance of empathy suppression mechanisms in healthy empathy. Measurement Efforts to measure empathy go back to at least the mid-twentieth century. Researchers approach the measurement of empathy from a number of perspectives. Behavioral measures normally involve raters assessing the presence or absence of certain behaviors in the subjects they are monitoring. Both verbal and non-verbal behaviors have been captured on video by experimenters. Other experimenters required subjects to comment upon their own feelings and behaviors, or those of other people involved in the experiment, as indirect ways of signaling their level of empathic functioning to the raters. Physiological responses tend to be captured by elaborate electronic equipment that has been physically connected to the subject's body. Researchers then draw inferences about that person's empathic reactions from the electronic readings produced. Bodily or "somatic" measures can be seen as behavioral measures at a micro level. They measure empathy through facial and other non-verbally expressed reactions. Such changes are presumably underpinned by physiological changes brought about by some form of "emotional contagion" or mirroring. These reactions, while they appear to reflect the internal emotional state of the empathizer, could also, if the stimulus incident lasted more than the briefest period, reflect the results of emotional reactions based on cognitions associated with role-taking ("if I were him I would feel..."). Picture or puppet-story indices for empathy have been adopted to enable even very young, pre-school subjects to respond without needing to read questions and write answers. Dependent variables (variables that are monitored for any change by the experimenter) for younger subjects have included self reporting on a seven-point smiley face scale and filmed facial reactions. In some experiments, subjects are required to watch video scenarios (either staged or authentic) and to make written responses which are then assessed for their levels of empathy; scenarios are sometimes also depicted in printed form. Self-report measures Measures of empathy also frequently require subjects to self-report upon their own ability or capacity for empathy, using Likert-style numerical responses to a printed questionnaire that may have been designed to reveal the affective, cognitive-affective, or largely cognitive substrates of empathic functioning. 
Some questionnaires claim to reveal both cognitive and affective substrates. However, a 2019 meta-analysis questions the validity of self-report measures of cognitive empathy, finding that such self-report measures have negligibly small correlations with corresponding behavioral measures. Balancing subjective self-perceptions with observable behaviors can contribute to a more reliable assessment of empathy. Such measures are also vulnerable to measuring not empathy but the difference between a person's felt empathy and their standards for how much empathy is appropriate. For example, one researcher found that students scored themselves as less empathetic after taking her empathy class. After learning more about empathy, the students became more exacting in how they judged their own feelings and behavior, expected more from themselves, and so rated themselves more severely. In the field of medicine, a measurement tool for carers is the Jefferson Scale of Physician Empathy, Health Professional Version (JSPE-HP). The Interpersonal Reactivity Index (IRI) is among the oldest published measurement tools still in frequent use (first published in 1983) that provide a multi-dimensional assessment of empathy. It comprises a self-report questionnaire of 28 items, divided into four seven-item scales covering the subdivisions of affective and cognitive empathy described above. More recent self-report tools include the Empathy Quotient (EQ), created by Baron-Cohen and Wheelwright, which comprises a self-report questionnaire consisting of 60 items. Another multi-dimensional scale is the Questionnaire of Cognitive and Affective Empathy (QCAE, first published in 2011). The Empathic Experience Scale is a 30-item questionnaire that measures empathy from a phenomenological perspective on intersubjectivity, which provides a common basis for the perceptual experience (vicarious experience dimension) and a basic cognitive awareness (intuitive understanding dimension) of others' emotional states. It is difficult to make comparisons over time using such questionnaires because of how language changes. For example, one study used a single questionnaire to measure 13,737 college students between 1979 and 2009, and found that empathy scores fell substantially over that time. A critic noted these results could be because the wording of the questionnaire had become anachronistically quaint (it used idioms no longer in common use, like "tender feelings", "ill at ease", "quite touched", or "go to pieces", that today's students might not identify with). Development Ontogenetic development By the age of two, children normally begin to exhibit fundamental behaviors of empathy by having an emotional response that corresponds with another person's emotional state. Even earlier, at one year of age, infants have some rudiments of empathy; they understand that, as with their own actions, other people's actions have goals. Toddlers sometimes comfort others or show concern for them. During their second year, they play games of falsehood or pretend in an effort to fool others. Such actions require that the child know what others believe in order that the child can manipulate those beliefs. According to researchers at the University of Chicago who used functional magnetic resonance imaging (fMRI), children between the ages of seven and twelve, when seeing others being injured, experience brain activity similar to that which would occur if the children themselves had been injured. 
Their findings are consistent with previous fMRI studies of pain empathy in adults, and with previous findings that vicarious experiencing, particularly of others' distress, is hardwired and present early in life. The research found that additional areas of the brain associated with social and moral cognition, including regions involved in moral reasoning, were activated when young people saw another person intentionally hurt by somebody. Although children are capable of showing some signs of empathy, including attempting to comfort a crying baby, from as early as 18 months to two years, most do not demonstrate a full theory of mind until around the age of four. Theory of mind involves the ability to understand that other people may have beliefs that are different from one's own, and is thought to involve the cognitive component of empathy. Children usually can pass false-belief tasks, a test for a theory of mind (e.g. the Sally–Anne test), around the age of four. It is theorised that people with autism find using a theory of mind very difficult, but there is considerable controversy on this subject. Empathic maturity is a cognitive-structural theory developed at the Yale University School of Nursing. It addresses how adults conceive or understand the personhood of patients. The theory, first applied to nurses and since applied to other professions, postulates three levels of cognitive structures. The third and highest level is a meta-ethical theory of the moral structure of care. Adults who operate with level-III understanding synthesize systems of justice and care-based ethics. Individual differences The Empathic Concern scale assesses other-oriented feelings of sympathy and concern, and the Personal Distress scale measures self-oriented feelings of personal anxiety and unease. Researchers have used behavioral and neuroimaging data to analyze extraversion and agreeableness. Both are associated with empathic accuracy and increased brain activity in two brain regions that are important for empathic processing (the medial prefrontal cortex and the temporoparietal junction). Sex differences On average, females score higher than males on measures of empathy, such as the Empathy Quotient (EQ), while males tend to score higher on the Systemizing Quotient (SQ). Both males and females with autistic spectrum disorders usually score lower on the EQ and higher on the SQ (see below for more detail on autism and empathy). Other studies show no significant sex differences, and instead suggest that gender differences are the result of motivational differences, such as upholding stereotypes. Gender stereotypes about men and women can affect how they express emotions. The sex difference is small to moderate, somewhat inconsistent, and is often influenced by the person's motivations or social environment. Bosson et al. say "physiological measures of emotion and studies that track people in their daily lives find no consistent sex differences in the experience of emotion", which "suggests that women may amplify certain emotional expressions, or men may suppress them". However, a 2014 review from Neuroscience & Biobehavioral Reviews reported that there is evidence that "sex differences in empathy have phylogenetic and ontogenetic roots in biology and are not merely cultural byproducts driven by socialization." The review found sex differences in empathy from birth, which grow larger with age and remain consistent and stable across the lifespan. 
Females, on average, had higher empathy than males, and children with higher empathy, regardless of gender, continued to be higher in empathy throughout development. Analysis of brain event-related potentials found that females who saw human suffering tended to have higher ERP waveforms than males. An investigation of N400 amplitudes found, on average, higher N400 in females in response to social situations, which positively correlated with self-reported empathy. Structural fMRI studies also found females to have larger grey matter volumes in the posterior inferior frontal and anterior inferior parietal cortex areas, which are correlated with mirror neurons in the fMRI literature. Females also tended to have a stronger link between emotional and cognitive empathy. The researchers believe that the stability of these sex differences across development is unlikely to be explained by environmental influences and is better explained by human evolution and inheritance. Throughout prehistory, women were the primary nurturers and caretakers of children, so this might have led to an evolved neurological adaptation for women to be more aware of and responsive to non-verbal expressions. According to the "Primary Caretaker Hypothesis", prehistoric men did not face the same selective pressure as primary caretakers, which might explain modern-day sex differences in emotion recognition and empathy. A review published in Neuropsychologia found that females tended to be better at recognizing facial affect, at expression processing, and at recognizing emotions in general. Males tended to be better at recognizing specific behaviors such as anger, aggression, and threatening cues. A 2014 meta-analysis, in Cognition and Emotion, found a small female advantage in non-verbal emotional recognition. Environmental influences Some research theorizes that environmental factors, such as parenting style and relationships, affect the development of empathy in children. Empathy promotes pro-social relationships and helps mediate aggression. Caroline Tisot studied how environmental factors such as parenting style, parental empathy, and prior social experiences affect the development of empathy in young children. The children studied were asked to complete an affective empathy measure, while the children's parents completed a questionnaire to assess parenting style and the Balanced Emotional Empathy scale. The study found that certain parenting practices, as opposed to parenting style as a whole, contributed to the development of empathy in children. These practices include encouraging the child to imagine the perspectives of others and teaching the child to reflect on his or her own feelings. The development of empathy varied based on the gender of the child and parent. Paternal warmth was significantly positively related to empathy in children, especially boys. Maternal warmth was negatively related to empathy in children, especially girls. Empathy may be disrupted by brain trauma such as stroke. In most cases, empathy is impaired if a lesion or stroke occurs on the right side of the brain. Damage to the frontal lobe, which is primarily responsible for emotional regulation, can profoundly impact a person's capacity to experience empathy. People with an acquired brain injury also show lower levels of empathy. More than half of people with a traumatic brain injury self-report a deficit in their empathic capacity. There is some evidence that empathy is a skill that one can improve with training. 
Evolution across species Studies in animal behavior and neuroscience indicate that empathy is not restricted to humans (however the interpretation of such research depends in part on how expansive a definition of empathy researchers adopt). Empathy-like behaviors have been observed in primates, both in captivity and in the wild, and in particular in bonobos, perhaps the most empathic primate. One study demonstrated prosocial behavior elicited by empathy in rodents. Rodents demonstrate empathy for cagemates (but not strangers) in pain. An influential study on the evolution of empathy by Stephanie Preston and Frans de Waal discusses a neural perception-action mechanism and postulates a bottom-up model of empathy that ties together all levels, from state matching to perspective-taking. University of Chicago neurobiologist Jean Decety agrees that empathy is not exclusive to humans, but that empathy has deep evolutionary, biochemical, and neurological underpinnings, and that even the most advanced forms of empathy in humans are built on more basic forms and remain connected to core mechanisms associated with affective communication, social attachment, and parental care. Neural circuits involved in empathy and caring include the brainstem, the amygdala, hypothalamus, basal ganglia, insula, and orbitofrontal cortex. Other animals and empathy between species Researchers Zanna Clay and Frans de Waal studied the socio-emotional development of the bonobo chimpanzee. They focused on the interplay of numerous skills such as empathy-related responding, and how different rearing backgrounds of the juvenile bonobo affected their response to stressful events—events related to themselves (e.g. loss of a fight) as well as stressful events of others. They found that bonobos sought out body contact with one another as a coping mechanism. Bonobos sought out more body contact after watching an event distress other bonobos than after their individually experienced stressful event. Mother-reared bonobos sought out more physical contact than orphaned bonobos after a stressful event happened to another. This finding shows the importance of mother-child attachment and bonding in successful socio-emotional development, such as empathic-like behaviors. De Waal suggests the advantages provided to mothers who understand the needs of their children are the reason empathy evolved in the first place. Empathic-like behavior has been observed in chimpanzees in different aspects of their natural behaviors. For example, chimpanzees spontaneously contribute comforting behaviors to victims of aggressive behavior in both natural and unnatural settings, a behavior recognized as consolation. Researchers led by Teresa Romero observed these empathic and sympathetic-like behaviors in chimpanzees in two separate groups. Acts of consolation were observed in both groups. This behavior is also found in humans, particularly in human infants. Another similarity found between chimpanzees and humans is that empathic-like responding was disproportionately provided to kin. Although comforting towards non-family chimpanzees was also observed, as with humans, chimpanzees showed the majority of comfort and concern to close/loved ones. Another similarity between chimpanzee and human expression of empathy is that females provided more comfort than males on average. The only exception to this discovery was that high-ranking males showed as much empathy-like behavior as their female counterparts. 
This is believed to be because of policing-like behavior and the authoritative status of high-ranking male chimpanzees. Dogs have been hypothesized to share empathic-like responding towards humans. Researchers Custance and Mayer put individual dogs in an enclosure with their owner and a stranger. When the participants were talking or humming, the dog showed no behavioral changes; however, when the participants pretended to cry, the dogs oriented their behavior toward the person in distress, whether it was the owner or the stranger. The dogs approached the crying participants in a submissive fashion, sniffing, licking, and nuzzling the distressed person; they did not approach the participants with their usual excitement, tail wagging, or panting. Since the dogs did not direct their empathic-like responses only towards their owner, it is hypothesized that dogs generally seek out humans showing distressed body behavior. Although this could suggest that dogs have the cognitive capacity for empathy, it could also mean that domesticated dogs have learned to comfort distressed humans through generations of being rewarded for that specific behavior. When witnessing chicks in distress, domesticated hens (Gallus gallus domesticus) show emotional and physiological responses. Researchers found that in conditions where the chick was susceptible to danger, the mother hen's heart rate increased, she sounded vocal alarms, her personal preening decreased, and her body temperature increased. This responding happened whether or not the chick felt as if it were in danger. Mother hens experienced stress-induced hyperthermia only when the chick's behavior correlated with the perceived threat. Humans can empathize with other species. One study of a sample of organisms showed that the strength of human empathic perceptions (and compassionate reactions) toward an organism is negatively correlated with how long ago our species and theirs shared a common ancestor. In other words, the more phylogenetically close a species is to us, the more likely we are to feel empathy and compassion towards it. Genetics Measures of empathy show evidence of being genetically influenced. For example, carriers of the deletion variant of ADRA2B show more activation of the amygdala when viewing emotionally arousing images. The gene 5-HTTLPR seems to influence sensitivity to negative emotional information and is also attenuated by the deletion variant of ADRA2B. Carriers of the double G variant of the OXTR gene have better social skills and higher self-esteem. A gene located near LRRN1 on chromosome 3 influences the human ability to read, understand, and respond to emotions in others. Neuroscientific basis of empathy Contemporary neuroscience offers insights into the neural basis of the mind's ability to understand and process emotion. Studies of mirror neurons attempt to measure the neural basis for human mind-reading and emotion-sharing abilities and thereby to explain the basis of the empathy reaction. People who score high on empathy tests have especially busy mirror neuron systems. Empathy is a spontaneous sharing of affect, provoked by witnessing and sympathizing with another's emotional state. The empathic person mirrors or mimics the emotional response they would expect to feel if they were in the other person's place. Unlike personal distress, empathy is not characterized by aversion to another's emotional response. 
This distinction is vital because empathy is associated with the moral emotion sympathy, or empathic concern, and consequently also with prosocial or altruistic action. A person empathizes by feeling what they believe to be the emotions of another, which makes empathy both affective and cognitive. For social beings, negotiating interpersonal decisions is as important to survival as being able to navigate the physical landscape. Meta-analysis of fMRI studies of empathy confirms that different brain areas are activated during affective-perceptual empathy than during cognitive-evaluative empathy. Affective empathy is correlated with increased activity in the insula, while cognitive empathy is correlated with activity in the mid-cingulate cortex and adjacent dorsomedial prefrontal cortex. A study with patients who experienced different types of brain damage confirmed the distinction between emotional and cognitive empathy. Specifically, the inferior frontal gyrus appears to be responsible for emotional empathy, and the ventromedial prefrontal gyrus seems to mediate cognitive empathy. fMRI has been employed to investigate the functional anatomy of empathy. Observing another person's emotional state activates parts of the neuronal network that are involved in processing that same state in oneself, whether it is disgust, touch, or pain. As these emotional states are observed, the brain activates a network that is involved in empathy. There are two separate systems of the brain involved in the feeling of empathy: a cognitive system and an emotional system. The cognitive system helps an individual understand another's perspective, while the emotional system enables the ability to empathize emotionally. The neuronal network that is activated controls the observer's response to these emotional states, thus prompting an empathetic response. The study of the neural underpinnings of empathy received increased interest following a paper published by S. D. Preston and Frans de Waal after the discovery of mirror neurons in monkeys that fire both when the creature watches another perform an action and when they themselves perform it. Researchers suggest that paying attention to perceiving another individual's state activates neural representations, and that this activation primes or generates the associated autonomic and somatic responses (perception-action coupling), unless inhibited. This mechanism resembles the common coding theory between perception and action. Another study provides evidence of separate neural pathways activating reciprocal suppression in different regions of the brain associated with the performance of "social" and "mechanical" tasks. These findings suggest that the cognition associated with reasoning about the "state of another person's mind" and the "causal/mechanical properties of inanimate objects" are neurally suppressed from occurring at the same time. Mirroring behavior in motor neurons during empathy may help duplicate feelings. Such sympathetic action may afford access to sympathetic feelings and, perhaps, trigger emotions of kindness and forgiveness. Impairment A difference in distribution between affective and cognitive empathy has been observed in various conditions. Psychopathy and narcissism are associated with impairments in affective but not cognitive empathy, whereas bipolar disorder is associated with deficits in cognitive but not affective empathy. 
People with borderline personality disorder (BPD) may suffer from impairments in cognitive empathy as well as fluctuating affective empathy, although this topic is controversial. Schizophrenia, too, is associated with deficits in both types of empathy. However, even in people without conditions such as these, the balance between affective and cognitive empathy varies. Atypical empathic responses are associated with some personality disorders such as psychopathy, borderline, narcissistic, and schizoid personality disorders; conduct disorder; schizophrenia; bipolar disorder; and depersonalization. Sex offenders who had been raised in an environment where they were shown a lack of empathy, and who had endured abuse of the sort they later committed, felt less affective empathy for their victims. Autism and controversy The question of whether autism affects empathy is a controversial and complex area of study. Several different factors are proposed to be at play, such as mirror neurons and alexithymia. The double empathy problem theory proposes that prior studies on autism and empathy may have been misinterpreted and that autistic people show the same levels of cognitive empathy towards one another as non-autistic people do. Autism spectrum disorder (ASD) is often correlated with problems with empathy and social communication skills. However, like ASD itself, these issues are often found to be on a spectrum. The suggestion that people with autism are likely to have issues with personal relationships and empathy is a complex one that has been addressed in many studies, and researchers have been exploring these concepts for more than twenty years. Certain studies, such as one from 2004, found connections between ASD and difficulties with empathy. Another study found that empathy problems may be associated with the comorbidity of alexithymia (difficulty identifying and describing one's own emotions) and ASD. However, a more recent study from 2022 found that there were, in fact, no significant differences in the brain regions that are associated with empathy (the medial prefrontal cortex and amygdala). Another study (2023) focusing on ASD and empathy with regard to mirror neurons also reflected on the theory that mirror neurons "may be dysfunctional in ASD." However, as the researchers state, this connection is not clear, and although mirror neurons are correlated with ASD, there is no proven causal relationship between dysfunctional mirror neurons and ASD. The study from 2023 might be considered contradictory to an earlier (2006) study on mirror neurons, which found that high-functioning autistic children showed reduced mirror neuron activity in the brain's inferior frontal gyrus while imitating and observing emotional expressions in other children who were considered "neurotypical." The correlation between ASD and empathy remains a focus for researchers, and many relevant articles can be found in the Journal of Autism and Developmental Disorders. Psychopathy Psychopathy is a personality construct partly characterized by antisocial and aggressive behaviors, as well as emotional and interpersonal deficits including shallow emotions and a lack of remorse and empathy. The Diagnostic and Statistical Manual of Mental Disorders (DSM) and International Classification of Diseases (ICD) list antisocial personality disorder (ASPD) and dissocial personality disorder, stating that these have been referred to as or include what is referred to as psychopathy. Psychopathy is associated with atypical responses to distress cues (e.g. 
facial and vocal expressions of fear and sadness), including decreased activation of the fusiform and extrastriate cortical regions, which may partly account for impaired recognition of, and reduced autonomic responsiveness to, expressions of fear, and for impairments of empathy. Studies on children with psychopathic tendencies have also shown such associations. The mechanisms underlying the processing of expressions of happiness are functionally intact in psychopaths, although less responsive than those of controls. The neuroimaging literature is unclear as to whether deficits are specific to particular emotions such as fear. Some fMRI studies report that emotion perception deficits in psychopathy are pervasive across emotions (positive and negative). One study on psychopaths found that, under certain circumstances, they could willfully empathize with others, and that their empathic reaction was initiated in the same way as it is for controls. Psychopathic criminals were brain-scanned while watching videos of a person harming another individual. The psychopaths' empathic reaction was initiated in the same way as it was for controls when they were instructed to empathize with the harmed individual, and the area of the brain relating to pain was activated when the psychopaths were asked to imagine how the harmed individual felt. The research suggests psychopaths can switch empathy on at will, which would enable them to be both callous and charming. The team who conducted the study say they do not know how to transform this willful empathy into the spontaneous empathy most people have, though they propose it might be possible to rehabilitate psychopaths by helping them to activate their "empathy switch". Others suggested that it remains unclear whether psychopaths' experience of empathy was the same as that of controls, and also questioned the possibility of devising therapeutic interventions that would make the empathic reactions more automatic. One problem with the theory that the ability to turn empathy on and off constitutes psychopathy is that such a theory would classify socially sanctioned violence and punishment as psychopathy, as these entail suspending empathy towards certain individuals and/or groups. The attempt to get around this by standardizing tests of psychopathy for cultures with different norms of punishment is criticized in this context for being based on the assumption that people can be classified in discrete cultures, while cultural influences are in reality mixed and every person encounters a mosaic of influences. Psychopathy may be an artefact of psychiatry's standardization along imaginary sharp lines between cultures, as opposed to an actual difference in the brain. Work conducted by Professor Jean Decety with large samples of incarcerated psychopaths offers additional insights. In one study, psychopaths were scanned while viewing video clips depicting people being intentionally hurt. They were also tested on their responses to seeing short videos of facial expressions of pain. The participants in the high-psychopathy group exhibited significantly less activation in the ventromedial prefrontal cortex, amygdala, and periaqueductal gray parts of the brain, but more activity in the striatum and the insula when compared to control participants. 
In a second study, individuals with psychopathy exhibited a strong response in pain-affective brain regions when taking an imagine-self perspective, but failed to recruit the neural circuits that were activated in controls during an imagine-other perspective—in particular the ventromedial prefrontal cortex and amygdala—which may contribute to their lack of empathic concern. Researchers have investigated whether people who have high levels of psychopathy have sufficient levels of cognitive empathy but lack the ability to use affective empathy. People who score highly on psychopathy measures are less likely to exhibit affective empathy: a strong negative correlation has been found, showing that psychopathy and lack of affective empathy correspond strongly. Research has also found that those who scored highly on the psychopathy scale did not lack the ability to recognise emotion in facial expressions. Such individuals therefore do not lack perspective-taking ability, but they do lack compassion. Neuroscientist Antonio R. Damasio and his colleagues showed that subjects with damage to the ventromedial prefrontal cortex lack the ability to empathically feel their way to moral answers, and that when confronted with moral dilemmas, these brain-damaged patients coldly came up with "end-justifies-the-means" answers, leading Damasio to conclude that the point was not that they reached immoral conclusions, but that when they were confronted by a difficult issue – in this case whether to shoot down a passenger plane hijacked by terrorists before it hits a major city – these patients appear to reach decisions without the anguish that afflicts those with normally functioning brains. According to Adrian Raine, a clinical neuroscientist also at the University of Southern California, one of this study's implications is that society may have to rethink how it judges immoral people: "Psychopaths often feel no empathy or remorse. Without that awareness, people relying exclusively on reasoning seem to find it harder to sort their way through moral thickets. Does that mean they should be held to different standards of accountability?" Despite studies suggesting psychopaths have deficits in emotion perception and imagining others in pain, professor Simon Baron-Cohen claims psychopathy is associated with intact cognitive empathy, which would imply an intact ability to read and respond to behaviors, social cues, and what others are feeling. Psychopathy is, however, associated with impairment in the other major component of empathy—affective (emotional) empathy—which includes the ability to feel the suffering and emotions of others (emotional contagion), and those with the condition are therefore not distressed by the suffering of their victims. Such a dissociation of affective and cognitive empathy has been demonstrated for aggressive offenders. Other conditions Atypical empathic responses are also correlated with a variety of other conditions. Borderline personality disorder is characterized by extensive behavioral and interpersonal difficulties that arise from emotional and cognitive dysfunction. Dysfunctional social and interpersonal behavior plays a role in the emotionally intense way people with borderline personality disorder react. While individuals with borderline personality disorder may show their emotions excessively, their ability to feel empathy is a topic of much dispute with contradictory findings. 
Some studies assert impairments in cognitive empathy in BPD patients yet no affective empathy impairments, while other studies have found impairments in both affective and cognitive empathy. Fluctuating empathy, varying between a normal range of empathy, a reduced sense of empathy, and a lack of empathy, has been noted in BPD patients in multiple studies; more research is needed to determine its prevalence, although it is believed to be at least not uncommon and may be very common. BPD is a very heterogeneous disorder, and its symptoms, including empathy, vary widely between patients. One diagnostic criterion of narcissistic personality disorder is a lack of empathy and an unwillingness or inability to recognize or identify with the feelings and needs of others. Characteristics of schizoid personality disorder include emotional coldness, detachment, and impaired affect corresponding with an inability to be empathic and sensitive towards others. A study conducted by Jean Decety and colleagues at the University of Chicago demonstrated that subjects with aggressive conduct disorder demonstrate atypical empathic responses when viewing others in pain. Subjects with conduct disorder were at least as responsive as controls to the pain of others but, unlike controls, subjects with conduct disorder showed strong and specific activation of the amygdala and ventral striatum (areas that enable a general arousing effect of reward), yet impaired activation of the neural regions involved in self-regulation and metacognition (including moral reasoning), in addition to diminished processing between the amygdala and the prefrontal cortex. Schizophrenia is characterized by impaired affective empathy, as well as severe cognitive and empathy impairments as measured by the Empathy Quotient (EQ). These empathy impairments are also associated with impairments in social cognitive tasks. Bipolar individuals have impaired cognitive empathy and theory of mind, but increased affective empathy. Despite cognitive flexibility being impaired, planning behavior is intact. Dysfunctions in the prefrontal cortex could result in the impaired cognitive empathy, since impaired cognitive empathy has been related to neurocognitive task performance involving cognitive flexibility. Dave Grossman, in his book On Killing, reports on how military training artificially creates depersonalization in soldiers, suppressing empathy and making it easier for them to kill other people. A deadening of empathic response to workmates, customers and the like is one of the three key components of occupational burnout, according to the conceptualization behind its primary diagnostic instrument, the Maslach Burnout Inventory. The term Empathy Deficit Disorder (EDD) has gained popularity online, but it is not a diagnosis under the DSM-5. The term was coined in an article by Douglas LaBier. In the article, he acknowledges that he "made it up, so you won't find it listed in the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders" and that his conclusions are derived from personal experience alone. His conclusions have not been validated through clinical studies, nor have studies identified EDD as a separate disorder rather than a symptom associated with previously established diagnoses that do appear in the DSM-5. Effects The capacity to empathize is a revered trait in society. 
Empathy is considered a motivating factor for unselfish, prosocial behavior, whereas a lack of empathy is related to antisocial behavior. Apart from the automatic tendency to recognize the emotions of others, one may also deliberately engage in empathic reasoning. Such empathic engagement helps an individual understand and anticipate the behavior of another. Two general methods have been identified: an individual may mentally simulate fictitious versions of the beliefs, desires, character traits, and context of another individual to see what emotional feelings this provokes; or an individual may simulate an emotional feeling and then analyze the environment to discover a suitable reason for the emotional feeling to be appropriate for that specific environment. An empathizer's emotional background may affect or distort how they perceive the emotions in others. Societies that promote individualism tend to show lower levels of empathy. The judgments that empathy provides about the emotional states of others are not certain ones. Empathy is a skill that gradually develops throughout life, and which improves the more contact we have with the person being empathized with. Empathizers report finding it easier to take the perspective of another person when they have experienced a similar situation, and that they then experience greater empathic understanding. Research regarding whether similar past experience makes the empathizer more accurate is mixed. The extent to which a person's emotions are publicly observable, or mutually recognized as such, has significant social consequences. Empathic recognition may or may not be welcomed or socially desirable. This is particularly the case when we recognize the emotions that someone has towards us during real-time interactions. Based on a metaphorical affinity with touch, philosopher Edith Wyschogrod claims that the proximity entailed by empathy increases the potential vulnerability of either party. Benefits of empathizing People who score more highly on empathy questionnaires also report having more positive relationships with other people. They report "greater life satisfaction, more positive affect, less negative affect, and less depressive symptoms than people who had lower empathy scores". Children who exhibit more empathy also have more resilience. Empathy can be an aesthetic pleasure, "by widening the scope of that which we experience... by providing us with more than one perspective of a situation, thereby multiplying our experience... and... by intensifying that experience." People can use empathy to borrow joy from the joy of children discovering things or playing make-believe, or to satisfy their curiosity about other people's lives. Empathic inaccuracy and empathic bias People can severely overestimate how much they understand others. When people empathize with another, they may oversimplify that other person in order to make them more legible. It may improve empathic accuracy for the empathizer to explicitly ask the person empathized with for confirmation of the empathic hypothesis. However, people may be reluctant to abandon their empathic hypotheses even when they are explicitly denied. Because we oversimplify people in order to make them legible enough to empathize with, we can come to misapprehend how cohesive other people are. We may come to think of ourselves as lacking a strong, integral self in comparison. Fritz Breithaupt calls this the "empathic endowment effect". 
Because the empathic person must temporarily dampen their own sense of self in order to empathize with the other, and because the other seems to have a magnified and extra-cohesive sense of self, the empathic person may suffer from this and may "project onto others the self that they are lacking" and envy "that which they must give up in order to be able to feel empathy: a strong self". Some research suggests that people are more able and willing to empathize with those most similar to themselves. In particular, empathy increases with similarities in culture and living conditions. Empathy is more likely to occur between individuals whose interaction is more frequent. In one experiment, researchers gave two groups of men wristbands according to which football team they supported. Each participant received a mild electric shock, then watched another go through the same pain. When the wristbands matched, both brains responded: with pain, and with empathic pain. If they supported opposing teams, the observer was found to have little empathy. Disadvantages of empathy Psychologist Paul Bloom, author of Against Empathy, points out that empathic bias can result in tribalism and violent responses in the name of helping people of the same "tribe" or social group, for example when empathic bias is exploited by demagogues. He proposes "rational compassion" as an alternative; one example is using effective altruism to decide on charitable donations rationally, rather than by relying on emotional responses to images in the media. Higher empathy tends to reduce the accuracy of deception detection, and emotion recognition training does not improve deception detection. Empathy can also be exploited by sympathetic beggars. Bloom points to the example of street children in India, who can get many donations because they are adorable, but this results in their enslavement by organized crime. Bloom says that though someone might feel better about themselves and find more meaning in life by giving to the person in front of them, in some cases they would do less harm and in many cases do more good in the world by giving to an effective charity through an impersonal website. Bloom believes improper use of empathy and social intelligence can lead to shortsighted actions and parochialism. Bloom points out that parents who have too much short-term empathy might create long-term problems for their children, by neglecting discipline, helicopter parenting, or deciding not to get their children vaccinated because of the short-term discomfort. People experiencing too much empathy after a disaster may continue to send donations like canned goods or used clothing even after being asked to stop or to send cash instead, and this can make the situation worse by creating the need to dispose of useless donations and taking resources away from helpful activities. Bloom also finds empathy can encourage unethical behavior when it causes people to care more about attractive people than ugly people, or people of one's own race vs. people of a different race. The attractiveness bias can also affect wildlife conservation efforts, increasing the amount of money devoted and laws passed to protect cute and photogenic animals, while taking attention away from species that are more ecologically important. Empathy and power People tend to empathize less when they have more social or political power. For example, people from lower-class backgrounds exhibit better empathic accuracy than those from upper-class backgrounds. 
In a variety of "priming" experiments, people who were asked to recall a situation in which they had power over someone else then demonstrated reduced ability to mirror others, to comprehend their viewpoints, or to learn from their perspectives. Empathy and violence Bloom says that although psychopaths have low empathy, the correlation between low empathy and violent behavior as documented in scientific studies is "zero". Other measures are much more predictive of violent behavior, such as lack of self-control. Compassion fatigue Excessive empathy can lead to "empathic distress fatigue", especially if it is associated with pathological altruism. The risks are fatigue, occupational burnout, guilt, shame, anxiety, and depression. Tania Singer says that health care workers and caregivers must be objective regarding the emotions of others. They should not over-invest their own emotions in the other, at the risk of draining away their own resourcefulness. Paul Bloom points out that high-empathy nurses tend to spend less time with their patients, to avoid feeling negative emotions associated with witnessing suffering. Empathy backfire Despite empathy being often portrayed as a positive attribute, whether or not the people who express empathy are viewed favorably depends on who they show empathy for. Such is the case in which a third party observes a subject showing empathy for someone of questionable character or generally viewed as unethical; that third party might not like or respect the subject for it. This is called "empathy backfire". Influence on helping behavior Investigators into the social response to natural disasters researched the characteristics associated with individuals who help victims. Researchers found that cognitive empathy, rather than emotional empathy, predicted helping behavior towards victims. Taking on the perspectives of others (cognitive empathy) may allow these helpers to better empathize with victims without as much discomfort, whereas sharing the emotions of the victims (emotional empathy) can cause emotional distress, helplessness, and victim-blaming, and may lead to avoidance rather than helping. Individuals who expressed concern for the vulnerable (i.e. affective empathy) were more willing to accept the COVID-19 pandemic lockdown measures that create distress. People who understand how empathic feelings evoke altruistic motivation may adopt strategies for suppressing or avoiding such feelings. Such numbing, or loss of the capacity to feel empathy for clients, is a possible factor in the experience of burnout among case workers in helping professions. People can better cognitively control their actions the more they understand how altruistic behavior emerges, whether it is from minimizing sadness or the arousal of mirror neurons. Empathy-induced altruism may not always produce pro-social effects. For example, it could lead one to exert oneself on behalf of those for whom empathy is felt at the expense of other potential pro-social goals, thus inducing a type of bias. Researchers suggest that individuals are willing to act against the greater collective good or to violate their own moral principles of fairness and justice if doing so will benefit a person for whom empathy is felt. Empathy-based socialization differs from inhibition of egoistic impulses through shaping, modeling, and internalized guilt. 
Therapeutic programs to foster altruistic impulses by encouraging perspective-taking and empathic feelings might enable individuals to develop more satisfactory interpersonal relations, especially in the long term. Empathy-induced altruism can improve attitudes toward stigmatized groups, racial attitudes, and actions toward people with AIDS, the homeless, and convicts. Such resulting altruism also increases cooperation in competitive situations. Empathy is good at prompting prosocial behaviors that are informal, unplanned, and directed at someone who is immediately present, but is not as good at prompting more abstractly-considered, long-term prosocial behavior. Empathy can not only be a precursor to one's own helpful acts, but can also be a way of inviting help from others. If you mimic the posture, facial expressions, and vocal style of someone you are with, you can thereby encourage them to help you and to form a favorable opinion of you. Empathic anger and distress Anger Empathic anger is an emotion, a form of empathic distress. Empathic anger is felt in a situation where someone else is being hurt by another person or thing. Empathic anger affects desires to help and to punish. Two sub-categories of empathic anger are state empathic anger (current empathic anger) and trait empathic anger (a tendency or predisposition to experience empathic anger). The higher a person's perspective-taking ability, the less angry they are in response to a provocation. Empathic concern does not, however, significantly predict anger response, and higher personal distress is associated with increased anger. Distress Empathic distress is feeling the perceived pain of another person. This feeling can be transformed into empathic anger, feelings of injustice, or guilt. These emotions can be perceived as pro-social; however, views differ as to whether they serve as motives for moral behavior. Stoic philosophers believed that to condition one's emotional disposition on the emotions or fortunes of someone else is foolish. Cicero said that someone who feels distress at another's misfortune is committing as much of an error as an envious person who feels distress at another's good fortune. Disciplinary approaches Philosophy Ethics In the 2007 book The Ethics of Care and Empathy, philosopher Michael Slote introduces a theory of care-based ethics that is grounded in empathy. He claims that moral motivation does, and should, stem from a basis of empathic response, and that our natural reactions to situations of moral significance are explained by empathy. He explains that the limits and obligations of empathy, and in turn morality, are natural. These natural obligations include a greater empathic and moral obligation to family and friends and to those close to us in time and space. Our moral obligation to such people seems naturally stronger to us than that to strangers at a distance. Slote explains that this is due to the natural process of empathy. He asserts that actions are wrong if and only if they reflect or exhibit a deficiency of fully developed empathic concern for others on the part of the agent. Phenomenology In phenomenology, empathy describes the experience of something from the other's viewpoint, without confusion between self and other. This is based on the concept of agency. In the most basic sense, this is the experience of the other's body as "my body over there." 
In most other respects, however, it is an experience viewed through the person's own eyes; in experiencing empathy, what is experienced is not "my" experience, even though I experience it. Empathy is also considered to be the condition of intersubjectivity and, as such, the source of the constitution of objectivity. History Some postmodernist historians such as Keith Jenkins have debated whether or not it is possible to empathize with people from the past. Jenkins argues that empathy only enjoys such a privileged position in the present because it corresponds harmoniously with the dominant liberal discourse of modern society and can be connected to John Stuart Mill's concept of reciprocal freedom. Jenkins argues the past is a foreign country and as we do not have access to the epistemological conditions of bygone ages we are unable to empathize with those who lived then. Psychotherapy Heinz Kohut introduced the principle of empathy in psychoanalysis. His principle applies to the method of unconscious material. Business and management Because empathy seems to have potential to improve customer relations, employee morale, and personnel management capability, it has been studied in a business context. In the 2009 book Wired to Care, strategy consultant Dev Patnaik argues that a major flaw in contemporary business practice is a lack of empathy inside large corporations. He states that without empathy people inside companies struggle to make intuitive decisions, and often get fooled into believing they understand their business if they have quantitative research to rely upon. He says that companies can create a sense of empathy for customers, pointing to Nike, Harley-Davidson, and IBM as examples of "Open Empathy Organizations". Such companies, he claims, see new opportunities more quickly than competitors, adapt to change more easily, and create workplaces that offer employees a greater sense of mission in their jobs. In the 2011 book The Empathy Factor, organizational consultant Marie Miyashiro similarly argues for bringing empathy to the workplace, and suggests Nonviolent Communication as an effective mechanism for achieving this. In studies by the Management Research Group, empathy was found to be the strongest predictor of ethical leadership behavior out of 22 competencies in its management model, and empathy was one of the three strongest predictors of senior executive effectiveness. The leadership consulting firm Development Dimensions International found in 2016 that 20% of U.S. employers offered empathy training to managers. A study by the Center for Creative Leadership found empathy to be positively correlated to job performance among employees as well. Patricia Moore pioneered using empathic techniques to better understand customers. For example, she used makeup and prosthetics to simulate the experience of elderly people, and used the insights from this to inspire friendlier products for that customer segment. Design engineers at Ford Motor Company wore prosthetics to simulate pregnancy and old age, to help them design cars that would work better for such customers. Fidelity Investments trains its telephone customer service employees in a virtual reality app that puts them in a (dramatized) customer's home so they can experience what it is like to be on the other side of their conversations. Evolution of cooperation Empathic perspective-taking plays important roles in sustaining cooperation in human societies, as studied by evolutionary game theory. 
In game theoretical models, indirect reciprocity refers to the mechanism of cooperation based on moral reputations that are assigned to individuals according to their perceived adherence to a set of moral rules called social norms. It has been shown that if individuals disagree on the moral standing of others (for example, because they use different moral evaluation rules or make errors of judgement), then cooperation will not be sustained. However, when individuals have the capacity for empathic perspective-taking, altruistic behavior can once again evolve. Moreover, evolutionary models also revealed that empathic perspective-taking itself can evolve, promoting prosocial behavior in human populations (a minimal simulation sketch of this mechanism appears at the end of this article). In educational contexts Another growing focus of investigation is how empathy manifests in education between teachers and learners. Although there is general agreement that empathy is essential in educational settings, research has found that it is difficult to develop empathy in trainee teachers. Learning by teaching is one method used to teach empathy. Students transmit new content to their classmates, so they have to reflect continuously on those classmates' mental processes. This develops the students' feeling for group reactions and networking. Carl R. Rogers pioneered research in effective psychotherapy and teaching which espoused that empathy, coupled with unconditional positive regard or caring for students, and authenticity or congruence, were the most important traits for a therapist or teacher to have. Other research and meta-analyses corroborated the importance of these person-centered traits. Within medical education, a hidden curriculum appears to dampen or even reduce medical student empathy. In intercultural contexts According to one theory, empathy is one of seven components involved in the effectiveness of intercultural communication. This theory also states that empathy is learnable. However, research also shows that people experience more difficulty empathizing with others who are different from them in characteristics such as status, culture, religion, language, skin colour, gender, and age. To build intercultural empathy in others, psychologists employ empathy training. Researchers William Weeks, Paul Pedersen, et al. state that people who develop intercultural empathy can interpret experiences or perspectives from more than one worldview. Intercultural empathy can also improve self-awareness and critical awareness of one's own interaction style as conditioned by one's cultural views, and promote a view of self-as-process. In fiction Lynn Hunt argued in Inventing Human Rights: A History that the concept of human rights developed the way it did, and when it did, in part as a result of the influence of mid-eighteenth-century European novelists, particularly those whose use of the epistolary novel form gave readers a more vivid sense that they were gaining access to the candid details of a real life. "The epistolary novel did not just reflect important cultural and social changes of the time. Novel reading actually helped create new kinds of feelings including a recognition of shared psychological experiences, and these feelings then translated into new cultural and social movements including human rights." The power of empathy has become a frequent ability in fiction, specifically in superhero media. "Empaths" have the ability to sense/feel the emotions and bodily sensations of others and, in some cases, influence or control them. 
Although empathy is sometimes a specific power held by specific characters, such as the Marvel Comics character Empath, it has also frequently been linked to telepathy, as in the case of Jean Grey. The rebooted television series Charmed portrays the character Maggie Vera as a witch with the power of empathy. Her powers later expand to allow her to control the emotions of others as well as occasionally concentrate emotion into pure energy. In season four she learns to replicate people's powers by empathically understanding them. The first episode of the 2013 NBC television reinterpretation Hannibal introduces Will Graham. Graham is unique in that he seems to have exceptionally high levels of both cognitive and emotional empathy, combined with an eidetic memory and imagination. These abilities help him understand the motives of some of the most depraved killers. Hannibal Lecter calls his ability "pure empathy". Graham can assume the viewpoint of virtually anyone he meets, even viewpoints that sicken him. When evaluating a crime scene, he uses his imagination and empathy to almost become the killer, feeling what they were feeling during a murder.
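The indirect-reciprocity account of empathic perspective-taking discussed above under Evolution of cooperation can be illustrated with a minimal simulation sketch. This is an illustrative addition rather than part of any study cited in the article: the population size, error rate, number of rounds, and the use of the "stern judging" norm are all assumed parameters chosen only to make the mechanism concrete, in the spirit of published evolutionary models of empathetic moral evaluation.

import random

N = 60          # population size (assumed, illustrative)
ERR = 0.05      # chance an observer misjudges an interaction (assumed)
ROUNDS = 20000  # number of donor-recipient interactions to simulate

def run(empathic: bool, seed: int = 1) -> float:
    """Return the average cooperation rate over all rounds.

    rep[i][j] is True when agent i privately regards agent j as "good".
    A donor cooperates only with recipients it regards as good. After each
    interaction every other agent updates its opinion of the donor using the
    "stern judging" norm: a donor is good if it cooperated with a good
    recipient or defected against a bad one. The judgement uses either the
    observer's own view of the recipient (egocentric) or the donor's view
    (empathic perspective-taking).
    """
    rng = random.Random(seed)
    rep = [[True] * N for _ in range(N)]
    cooperated = 0
    for _ in range(ROUNDS):
        donor, recipient = rng.sample(range(N), 2)
        action = rep[donor][recipient]      # cooperate iff recipient seems good
        cooperated += action
        for obs in range(N):
            if obs == donor:
                continue
            recipient_seems_good = rep[donor][recipient] if empathic else rep[obs][recipient]
            verdict = (action == recipient_seems_good)   # stern judging
            if rng.random() < ERR:                       # occasional assessment error
                verdict = not verdict
            rep[obs][donor] = verdict
    return cooperated / ROUNDS

if __name__ == "__main__":
    print("cooperation with egocentric judgement:", run(empathic=False))
    print("cooperation with empathic judgement:  ", run(empathic=True))

In this toy setup, egocentric judgement lets small assessment errors snowball into disagreement about who is "good" and cooperation erodes, whereas judging each donor from the donor's own perspective keeps opinions aligned and cooperation close to its maximum, mirroring the qualitative claim of the models described above.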
Comparative psychology
Comparative psychology is the scientific study of the behavior and mental processes of non-human animals, especially as these relate to the phylogenetic history, adaptive significance, and development of behavior. The phrase comparative psychology may be employed in either a narrow or a broad meaning. In its narrow meaning, it refers to the study of the similarities and differences in the psychology and behavior of different species. In a broader meaning, comparative psychology includes comparisons between different biological and socio-cultural groups, such as species, sexes, developmental stages, ages, and ethnicities. Research in this area addresses many different issues, uses many different methods and explores the behavior of many different species, from insects to primates. Comparative psychology is sometimes assumed to emphasize cross-species comparisons, including those between humans and animals. However, some researchers feel that direct comparisons should not be the sole focus of comparative psychology and that intense focus on a single organism to understand its behavior is just as desirable; if not more so. Donald Dewsbury reviewed the works of several psychologists and their definitions and concluded that the object of comparative psychology is to establish principles of generality focusing on both proximate and ultimate causation. Using a comparative approach to behavior allows one to evaluate the target behavior from four different, complementary perspectives, developed by Niko Tinbergen. First, one may ask how pervasive the behavior is across species (i.e. how common is the behavior between animal species?). Second, one may ask how the behavior contributes to the lifetime reproductive success of the individuals demonstrating the behavior (i.e. does the behavior result in animals producing more offspring than animals not displaying the behavior)? Theories addressing the ultimate causes of behavior are based on the answers to these two questions. Third, what mechanisms are involved in the behavior (i.e. what physiological, behavioral, and environmental components are necessary and sufficient for the generation of the behavior)? Fourth, a researcher may ask about the development of the behavior within an individual (i.e. what maturational, learning, social experiences must an individual undergo in order to demonstrate a behavior)? Theories addressing the proximate causes of behavior are based on answers to these two questions. For more details see Tinbergen's four questions. History The 9th century scholar al-Jahiz wrote works on the social organization and communication methods of animals like ants. The 11th century Arabic writer Ibn al-Haytham (Alhazen) wrote the Treatise on the Influence of Melodies on the Souls of Animals, an early treatise dealing with the effects of music on animals. In the treatise, he demonstrates how a camel's pace could be hastened or slowed with the use of music, and shows other examples of how music can affect animal behavior, experimenting with horses, birds and reptiles. Through to the 19th century, a majority of scholars in the Western world continued to believe that music was a distinctly human phenomenon, but experiments since then have vindicated Ibn al-Haytham's view that music does indeed have an effect on animals. Charles Darwin was central in the development of comparative psychology; it is thought that psychology should be spoken in terms of "pre-" and "post-Darwin" because his contributions were so influential. 
Darwin's theory led to several hypotheses, one being that the factors that set humans apart, such as higher mental, moral and spiritual faculties, could be accounted for by evolutionary principles. In response to the vehement opposition to Darwinism was the "anecdotal movement" led by George Romanes who set out to demonstrate that animals possessed a "rudimentary human mind". Romanes is most famous for two major flaws in his work: his focus on anecdotal observations and entrenched anthropomorphism. Near the end of the 19th century, several scientists existed whose work was also very influential. Douglas Alexander Spalding was called the "first experimental biologist", and worked mostly with birds; studying instinct, imprinting, and visual and auditory development. Jacques Loeb emphasized the importance of objectively studying behavior, Sir John Lubbock is credited with first using mazes and puzzle devices to study learning and Conwy Lloyd Morgan is thought to be "the first ethologist in the sense in which we presently use the word". Although the field initially attempted to include a variety of species, by the early 1950s it had focused primarily on the white lab rat and the pigeon, and the topic of study was restricted to learning, usually in mazes. This stunted state of affairs was pointed out by Beach (1950) and although it was generally agreed with, no real change took place. He repeated the charges a decade later, again with no results. In the meantime, in Europe, ethology was making strides in studying a multitude of species and a plethora of behaviors. There was friction between the two disciplines where there should have been cooperation, but comparative psychologists refused, for the most part, to broaden their horizons. This state of affairs ended with the triumph of ethology over comparative psychology, culminating in the Nobel Prize being given to ethologists, combined with a flood of informative books and television programs on ethological studies that came to be widely seen and read in the United States. At present, comparative psychology in the United States is moribund. Throughout the long history of comparative psychology, repeated attempts have been made to enforce a more disciplined approach, in which similar studies are carried out on animals of different species, and the results interpreted in terms of their different phylogenetic or ecological backgrounds. Behavioral ecology in the 1970s gave a more solid base of knowledge against which a true comparative psychology could develop. However, the broader use of the term "comparative psychology" is enshrined in the names of learned societies and academic journals, not to mention in the minds of psychologists of other specialisms, so the label of the field is never likely to disappear completely. A persistent question with which comparative psychologists have been faced is the relative intelligence of different species of animal. Indeed, some early attempts at a genuinely comparative psychology involved evaluating how well animals of different species could learn different tasks. These attempts floundered; in retrospect it can be seen that they were not sufficiently sophisticated, either in their analysis of the demands of different tasks, or in their choice of species to compare. 
However, the definition of "intelligence" in comparative psychology is deeply affected by anthropomorphism; experiments focused on simple tasks, complex problems, reversal learning, learning sets, and delayed alternation were plagued with practical and theoretical problems. In the literature, "intelligence" is defined as whatever is closest to human performance and neglects behaviors that humans are usually incapable of (e.g. echolocation). Specifically, comparative researchers encounter problems associated with individual differences, differences in motivation, differences in reinforcement, differences in sensory function, differences in motor capacities, and species-typical preparedness (i.e. some species have evolved to acquire some behaviors quicker than other behaviors). Species studied A wide variety of species have been studied by comparative psychologists. However, a small number have dominated the scene. Ivan Pavlov's early work used dogs; although they have been the subject of occasional studies, since then they have not figured prominently. Increasing interest in the study of abnormal animal behavior has led to a return to the study of most kinds of domestic animal. Thorndike began his studies with cats, but American comparative psychologists quickly shifted to the more economical rat, which remained the almost invariable subject for the first half of the 20th century and continues to be used. Skinner introduced the use of pigeons, and they continue to be important in some fields. There has always been interest in studying various species of primate; important contributions to social and developmental psychology were made by Harry F. Harlow's studies of maternal deprivation in rhesus monkeys. Cross-fostering studies have shown similarities between human infants and infant chimpanzees. Kellogg and Kellogg (1933) aimed to look at heredity and environmental effects of young primates. They found that a cross-fostered chimpanzee named Gua was better at recognizing human smells and clothing and that the Kelloggs' infant (Donald) recognised humans better by their faces. The study ended nine months after it had begun, after the infant began to imitate the noises of Gua. Nonhuman primates have also been used to show the development of language in comparison with human development. For example, Gardner (1967) successfully taught the female chimpanzee Washoe 350 words in American Sign Language. Washoe subsequently passed on some of this teaching to her adopted offspring, Loulis. A criticism of Washoe's acquisition of sign language focused on the extent to which she actually understood what she was signing. Her signs may have just been based on an association to get a reward, such as food or a toy. Other studies concluded that apes do not understand linguistic input, but may form an intended meaning of what is being communicated. All great apes have been reported to have the capacity of allospecific symbolic production. Interest in primate studies has increased with the rise in studies of animal cognition. Other animals thought to be intelligent have also been increasingly studied. Examples include various species of corvid, parrots—especially the grey parrot—and dolphins. Alex (Avian Learning EXperiment) is a well known case study (1976–2007) which was developed by Pepperberg, who found that the African gray parrot Alex did not only mimic vocalisations but understood the concepts of same and different between objects. The study of non-human mammals has also included the study of dogs. 
Due to their domestic nature and personalities, dogs have lived closely with humans, and parallels in communication and cognitive behaviours have therefore been recognised and further researched. Joly-Mascheroni and colleagues (2008) demonstrated that dogs may be able to catch human yawns and suggested a level of empathy in dogs, a point that is strongly debated. Pilley and Reid found that a Border Collie named Chaser was able to successfully identify and retrieve 1022 distinct objects/toys. Animal cognition Researchers who study animal cognition are interested in understanding the mental processes that control complex behavior, and much of their work parallels that of cognitive psychologists working with humans. For example, there is extensive research with animals on attention, categorization, concept formation, memory, spatial cognition, and time estimation. Much research in these and other areas is related directly or indirectly to behaviors important to survival in natural settings, such as navigation, tool use, and numerical competence. Thus, comparative psychology and animal cognition are heavily overlapping research categories. Disorders of animal behavior Veterinary surgeons recognize that the psychological state of a captive or domesticated animal must be taken into account if its behavior and health are to be understood and optimized. Common causes of disordered behavior in captive or pet animals are lack of stimulation, inappropriate stimulation, or overstimulation. These conditions can lead to disorders, unpredictable and unwanted behavior, and sometimes even physical symptoms and diseases. For example, rats who are exposed to loud music for a long period will ultimately develop unwanted behaviors that have been compared with human psychosis, like biting their owners. The way dogs behave when understimulated is widely believed to depend on the breed as well as on the individual animal's character. For example, huskies have been known to ruin gardens and houses if they are not allowed enough activity. Dogs are also prone to psychological damage if they are subjected to violence. If they are treated very badly, they may become dangerous. The systematic study of disordered animal behavior draws on research in comparative psychology, including the early work on conditioning and instrumental learning, but also on ethological studies of natural behavior. However, at least in the case of familiar domestic animals, it also draws on the accumulated experience of those who have worked closely with the animals. Human-animal relationships The relationship between humans and animals has long been of interest to anthropologists as one pathway to an understanding the evolution of human behavior. Similarities between the behavior of humans and animals have sometimes been used in an attempt to understand the evolutionary significance of particular behaviors. Differences in the treatment of animals have been said to reflect a society's understanding of human nature and the place of humans and animals in the scheme of things. Domestication has been of particular interest. For example, it has been argued that, as animals became domesticated, humans treated them as property and began to see them as inferior or fundamentally different from humans. Ingold remarks that in all societies children have to learn to differentiate and separate themselves from others. In this process, strangers may be seen as "not people", and like animals. 
Ingold quoted Sigmund Freud: "Children show no trace of arrogance which urges adult civilized men to draw a hard-and-fast line between their own nature and that of all other animals. Children have no scruples over allowing animals to rank as their full equals." With maturity, however, humans find it hard to accept that they themselves are animals, so they categorize, separating humans from animals, and animals into wild animals and tame animals, and tame animals into house pets and livestock. Such divisions can be seen as similar to categories of humans: those who are part of a human community and those who are not—that is, outsiders. The New York Times ran an article on the psychological benefits of animals, more specifically of children with their pets. The research it described suggested that having a pet can improve children's social skills. In the article, Dr. Sue Doescher, a psychologist involved in the study, stated, "It made the children more cooperative and sharing." The children were also reported to be more self-confident and more empathic with other children. Furthermore, in an edition of Social Science and Medicine it was stated, "A random survey of 339 residents from Perth, Western Australia were selected from three suburbs and interviewed by telephone. Pet ownership was found to be positively associated with some forms of social contact and interaction, and with perceptions of neighborhood friendliness. After adjustment for demographic variables, pet owners scored higher on social capital and civic engagement scales." Results like these suggest that owning a pet provides opportunities for neighborly interaction, among other chances for socialization. Topics of study Individual behavior General descriptions Orientation (interaction with environment) Locomotion Ingestive behavior Hoarding Nest building Exploration Play Tonic immobility (playing dead) Other miscellaneous behaviors (personal grooming, hibernation, etc.) Reproductive behavior General descriptions Developmental psychology Control (nervous system and endocrine system) Imprinting Evolution of sexual characteristics/behaviors Social behavior Imitation Behavior genetics Instincts Sensory-perceptual processes Neural and endocrine correlates of behavior Motivation Evolution Learning Qualitative and functional comparisons Consciousness and mind Notable comparative psychologists Noted comparative psychologists, in this broad sense, include: Aristotle Frank Beach F.J.J. Buytendijk Charles Darwin James Mark Baldwin Allen and Beatrix Gardner Harry F. Harlow Donald Hebb Richard Herrnstein L.T. Hobhouse Clark L. Hull Linus Kline Wolfgang Köhler Konrad Lorenz Emil Wolfgang Menzel Jr. Neal E. Miller C. Lloyd Morgan O. Hobart Mowrer Robert Lockhard Ivan Pavlov Irene Pepperberg George Romanes Thorleif Schjelderup-Ebbe Sara Shettleworth B.F. Skinner Willard Small Edward C. Tolman Edward L. Thorndike Margaret Floy Washburn John B. Watson Wilhelm Wundt Many of these were active in fields other than animal psychology; this is characteristic of comparative psychologists. Related fields Fields of psychology and other disciplines that draw upon, or overlap with, comparative psychology include: Animal cognition Behavioral ecology Operant conditioning Ethology Evolutionary neuroscience Experimental analysis of behavior Neuroethology Physiological psychology Psychopharmacology Trans-species psychology
Somatic experiencing
Somatic Experiencing (SE) is a form of alternative therapy aimed at treating trauma and stress-related disorders, such as PTSD. The primary goal of SE is to modify the trauma-related stress response through bottom-up processing. The client's attention is directed toward internal sensations (interoception, proprioception and kinaesthesis), rather than to cognitive or emotional experiences. The method was developed by Peter A. Levine. SE sessions are normally held in person and involve clients tracking their physical experiences. Practitioners are often mental health professionals such as social workers, psychologists, therapists, psychiatrists, rolfers, Feldenkrais practitioners, yoga and Daoyin therapists, educators, clergy, occupational therapists, etc. Theory and methods Basis Somatic Experiencing (also known as Somatic Therapy) is heavily predicated on psychoanalyst Wilhelm Reich's theories of blocked emotion and how this emotion is held in and released from the body. It differs from traditional talk therapies such as CBT, which focus on the mind rather than the body and seek to identify and change disturbing thoughts and behavior patterns; Somatic Therapy instead treats the body as the starting point for healing. It is less about desensitizing people to uncomfortable sensations, and more about relieving tension in the body. Many Western somatic psychotherapy approaches are based on either Reich or Elsa Gindler. Gindler's vision preceded Reich's and greatly influenced him. Gindler's direct link to the United States was Charlotte Selver. Selver greatly influenced Peter Levine's work and the development of fine somatic tracking. Selver taught thousands of Americans, including Peter Levine, her "sensory awareness" method at Esalen Institute. Somatic Experiencing, like many of its sister modalities, is beholden to both Gindler and Reich. Each method has its own twist that differentiates it in style, "in a manner alike to the different sects of an overarching religion", with some even becoming "cult-like" at one time. Definitions Payne et al. describe SE as "not a form of exposure therapy" in that it "avoids direct and intense evocation of traumatic memories, instead approaching the charged memories indirectly and very gradually". Leitch et al. describe the approach similarly as "working with small gradations of traumatic activation alternated with the use of somatic resources. Working with small increments of traumatic material is a key component of SE, as is the development of somatic resources". In SE people "gently and incrementally reimagine and experience" and are "slowly working in graduated "doses"". Anderson et al., however, state that SE "includes techniques known from interoceptive exposure for panic attacks, by combining arousal reduction strategies with mild exposure therapy." Systematic desensitization One of the first exposure therapies, systematic desensitization, which was developed by Joseph Wolpe in the 1940s to treat anxiety disorders and phobias, is similarly described. Wolpe's systematic desensitization "consists of exposing the patient, while in a state of emotional calmness, to a small "dose" of something he fears" using imaginal methods that allow the therapist to "control precisely the beginning and ending of each presentation". This graduated exposure is similar to the SE concept of "titration". 
Wolpe also relied on relaxation responses alternating with incremental or graduated exposure to anxiety-provoking stimuli, and this practice was standard within cognitive-behavioral protocols long before Somatic Experiencing arrived on the scene as a trademarked approach in 1989. Pendulation One element of Somatic Experiencing therapy is "pendulation", a supposed natural intrinsic rhythm of the organism between contraction and expansion. The concept and its comparison to unicellular organisms can be traced to Wilhelm Reich, the father of somatic psychotherapy. Alexander Lowen and John Pierrakos, both psychiatrists, built upon Reich's foundational theories, developing Bioenergetics, and also compared the rhythm of this life force energy to a pendulum. The SE concept of the "healing vortex" is grounded in Akhter Ahsen's "law of bipolarity", according to Eckberg; Levine credits his inspiration for the healing vortex to a dream and not to Ahsen. This principle involves the pendulatory tendency to weave back and forth between traumatic material and healing images and parasympathetic responses. Ahsen's "principle of bipolar configurations" asserts that "every significant eidetic state involves configuration . . . around two opposed nuclei which contend against each other. Every ISM of the negative type has a counter-ISM of the positive type." SIBAM (Sensation, Image, Behavior, Affect and Meaning) Peter Levine indicates that during the 1970s he "developed a model" called SIBAM, which broke down experience into five channels of Sensation, Image, Behavior, Affect and Meaning (or Cognition). SIBAM is considered both a model of experience and a model of dissociation. Multimodal Therapy, developed by Arnold Lazarus in the 1970s, is similar to the SIBAM model in that it broke down experience into Behavior, Affect, Sensation, Image, and Cognition (or Meaning). Somatic Experiencing integrates the tracking of Eugene Gendlin's "felt sense" into the model, and Peter Levine has made good use of Gendlin's focusing approach in Somatic Experiencing. "Dr. Levine emphasizes that the felt sense is the medium through which we understand all sensation, and that it reflects our total experience at a given moment." Lazarus also incorporated Eugene Gendlin's Focusing method into his model as a technique to circumvent cognitive blocks. Incorporation of this "bottom up" "felt sense" method is shared by both SE and Multimodal Therapy. Lazarus, like Levine, was heavily influenced by Akhter Ahsen's "ISM unity" or "eidetic" concept. In 1968, Ahsen explained the ISM this way: "It is a tri-dimensional unity. . . . With this image is attached a characteristic body feeling peculiar to the image, which we call the somatic pattern. With this somatic pattern is attached a third state composed of a constellation of vague and clear meanings, which we call the meaning." Sensation, for Ahsen, included affective and physiological states. Ahsen went on to apply his ISM concept to traumatic experiences, in a manner strikingly similar to Peter Levine's later model. In the SIBAM model, as in the ISM model, the separate dimensions of experience in trauma can be "dissociated from one another". Coupling dynamics In the Somatic Experiencing method there is the concept of "coupling dynamics", under which, in the "under-coupled" state, the traumatic experience exists not as a unity but as dissociated elements of the SIBAM. 
In SE "the arousal in one element can trigger the arousal in other elements (overcoupling) or it can restrict arousal in other elements (undercoupling)." An SE therapist "often has to work to uncouple responses (if responses are overcoupled) or to find ways to couple them (if the responses are undercoupled) in order for therapy to progress and to help the individual to restore balance in his or her emotional life." Ashen's description clearly matches this concept. Additionally, treatment of "post-traumatic stress through imagery", like SE, "emphasizes exploitation of the somatic aspect over the visual component of Ashen's ISM model because of the strong emotional and physiological components that present themselves frontally in these cases." Stress According to SE, post-traumatic stress symptoms originate from an "overreaction of the innate stress system due to the overwhelming character of the traumatic event. In the traumatic situation, people are unable to complete the initiated psychological and physiological defensive reaction." Standard cognitive behavioral understanding of PTSD and anxiety disorders was grounded in an understanding of fight, flight freeze mechanisms in addition to conscious and unconscious, preprogramed, automatic primal defensive action systems. SE is theorised to work through the "generation of new corrective interoceptive experiences" or the therapeutic ‘renegotiating’ of the traumatic response. Somatic Experiencing claims it is unique in this manner and therefore may be more effective than cognitive behavioral models due to this focus. The coupling dynamics model/SIBAM model in SE, however, is reminiscent to the pavlovian fear conditioning and extinction models underlying exposure based extinction paradigms of cognitive behavior therapy. Additionally, graduated exposure therapy and other fear extinction methods are similarly theorized to work due to the power of corrective experiences enhanced by "active coping" methods. Discharge In Somatic Experiencing therapy, "discharge" is facilitated in response to arousal to enable the client's body to return to a controlled condition. Discharge may be in the form of tears, a warm sensation, unconscious movement, the ability to breathe easily again, or other responses that demonstrate the autonomic nervous system returning to its baseline. The intention of this process is to reinforce the client's inherent capacity to self-regulate. The charge/discharge concept in Somatic Experiencing has its origins in Reichian therapy and Bioenergetics. Levine's predecessors in the somatic psychotherapy field clearly understood the dynamics of shock trauma and the failure of mobilization of fight or flight impulses in creating symptoms of anxiety neuroses and to maintain a chronic "state of emergency". They also understood that healing involved the completion of this "charge" associated with the truncated fight or flight impulses. Polyvagal theory Somatic Experiencing is also predicated on the Polyvagal Theory of human emotion developed by Stephen Porges. Many of the tenets of the Polyvagal theory incorporated in the Somatic Experiencing training are controversial and unproven. The SE therapy concepts such as "dorsal vagal shutdown" with bradycardia that are used to describe "freeze" and collapse states of trauma patients is controversial since it appears the ventral vagal branch, not the dorsal vagal branch, mediates this lowered heart rate and blood pressure state. 
Neurophysiological studies have shown that the dorsal motor nucleus has little to do with traumatic or psychologically related heart rate responses. Link to shamanism Levine's model, influenced by his work with shamans of "several cultures", makes wider connections "to myth and shamanism" and is "connected to these traditions". Levine "uses a story from shamanistic medicine to describe the work of body-centred trauma counselling. In shamanism, it is believed that when a person is overwhelmed by tragedy his soul will leave his body, a belief which is concordant with our present understanding of dissociation." Levine even notes that while developing his "theoretical biophysics doctoral dissertation on accumulated stress, as well as on my body-mind approach to resolving stress and healing trauma" he had a mystical experience in which he engaged in a year-long Socratic dialogue with an apparition of Albert Einstein. After reportedly having a "profound" dream, Peter Levine believed he had been "assigned" the task "to protect this ancient knowledge from the Celtic Stone Age temples, and the Tibetan tradition, and to bring it to the scientific Western way of looking at things..." Evidence A 2019 systematic literature review noted that a stronger investment in clinical trials was needed to determine the efficacy of Somatic Experiencing. A 2021 literature review noted that "SE attracts growing interest in clinical application despite the lack of empirical research. Yet, the current evidence base is weak and does not (yet) fully accomplish the high standards for clinical effectiveness research." Regulation Unlike some of its sister somatic modalities (biodynamic craniosacral therapy, polarity therapy, etc.), Somatic Experiencing is not listed as an exempt modality from massage practice acts in the United States, and is not eligible to belong to The Federation of Therapeutic Massage, Bodywork and Somatic Practice Organizations, which was formed to protect the members' right to practice as an independent profession. Members of the Federation each have a professional regulating body with an enforceable code of ethics and standards of practice, continuing education requirements, a process of certifying and ensuring competency, and a minimum of 500 hours of training. Somatic Experiencing practitioners do not meet any of these criteria unless they are already certified or licensed in another discipline. While the model has a growing evidence base as a modality "for treating people with post-traumatic stress disorder (PTSD)" that "integrates body awareness into the psychotherapeutic process", not all Somatic Experiencing practitioners practice psychotherapy; their scopes of practice therefore vary, and, for example, not all are qualified to work with people with mental disorders. SE instructs participants that they "are responsible for operating within their professional scope of practice and for abiding by state and federal laws". Further reading Peter A. Levine, Trauma and Memory: Brain and Body in a Search for the Living Past: A Practical Guide for Understanding and Working with Traumatic Memory, North Atlantic Books, October 27, 2015. Peter A. Levine, Waking the Tiger: Healing Trauma (on how trauma affects the brain and body), North Atlantic Books, July 7, 1997.
History of psychology
Psychology is defined as "the scientific study of behavior and mental processes". Philosophical interest in the human mind and behavior dates back to the ancient civilizations of Egypt, Persia, Greece, China, and India. Psychology as a field of experimental study began in 1854 in Leipzig, Germany, when Gustav Fechner created the first theory of how judgments about sensory experiences are made and how to experiment on them. Fechner's theory, recognized today as Signal Detection Theory, foreshadowed the development of statistical theories of comparative judgment and thousands of experiments based on his ideas (Link, S. W. Psychological Science, 1995). In 1879, Wilhelm Wundt founded the first psychological laboratory dedicated exclusively to psychological research in Leipzig, Germany. Wundt was also the first person to refer to himself as a psychologist. A notable precursor to Wundt was Ferdinand Ueberwasser (1752–1812), who designated himself Professor of Empirical Psychology and Logic in 1783 and gave lectures on empirical psychology at the Old University of Münster, Germany. Other important early contributors to the field include Hermann Ebbinghaus (a pioneer in the study of memory), William James (the American father of pragmatism), and Ivan Pavlov (who developed the procedures associated with classical conditioning). Soon after the development of experimental psychology, various kinds of applied psychology appeared. G. Stanley Hall brought scientific pedagogy to the United States from Germany in the early 1880s. John Dewey's educational theory of the 1890s was another example. Also in the 1890s, Hugo Münsterberg began writing about the application of psychology to industry, law, and other fields. Lightner Witmer established the first psychological clinic in the 1890s. James McKeen Cattell adapted Francis Galton's anthropometric methods to generate the first program of mental testing in the 1890s. In Vienna, meanwhile, Sigmund Freud independently developed an approach to the study of the mind called psychoanalysis, which became a highly influential theory in psychology. The 20th century saw a reaction to Edward Titchener's critique of Wundt's empiricism. This contributed to the formulation of behaviorism by John B. Watson, which was popularized by B. F. Skinner through operant conditioning. Behaviorism proposed emphasizing the study of overt behavior, due to the fact that it could be quantified and easily measured. Early behaviorists considered the study of the mind too vague for productive scientific study. However, Skinner and his colleagues did study thinking as a form of covert behavior to which they could apply the same principles as overt behavior. The final decades of the 20th century saw the rise of cognitive science, an interdisciplinary approach to studying the human mind. Cognitive science again considers the mind as a subject for investigation, using the tools of cognitive psychology, linguistics, computer science, philosophy, behaviorism, and neurobiology. This form of investigation has proposed that a wide understanding of the human mind is possible, and that such an understanding may be applied to other research domains, such as artificial intelligence. There are conceptual divisions of psychology in "forces" or "waves", based on its schools and historical trends. 
This terminology was popularized among psychologists to differentiate a growing humanism in therapeutic practice from the 1930s onwards, called the "third force", in response to the deterministic tendencies of Watson's behaviorism and Freud's psychoanalysis. Proponents of Humanistic psychology included Carl Rogers, Abraham Maslow, Gordon Allport, Erich Fromm, and Rollo May. Their humanistic concepts are also related to existential psychology, Viktor Frankl's logotherapy, positive psychology (which has Martin Seligman as one of its leading proponents), C. R. Cloninger's approach to well-being and character development, as well as to transpersonal psychology, incorporating such concepts as spirituality, self-transcendence, self-realization, self-actualization, and mindfulness. In cognitive behavioral psychotherapy, similar terms have also been incorporated: the "first wave" is considered the initial behavioral therapy; the "second wave", Albert Ellis's cognitive therapy; and the "third wave", acceptance and commitment therapy, which emphasizes one's pursuit of values, methods of self-awareness, acceptance and psychological flexibility, instead of challenging negative thought schemes. A "fourth wave" would be one that incorporates transpersonal concepts and positive flourishing, in a way criticized by some researchers for its heterogeneity and for a theoretical direction dependent on the therapist's view. A "fifth wave" has now been proposed by a group of researchers seeking to integrate earlier concepts into a unifying theory. Early psychological thought Many cultures throughout history have speculated on the nature of the mind, heart, soul, spirit, brain, etc. For instance, in Ancient Egypt, the Edwin Smith Papyrus contains an early description of the brain, and some speculations on its functions (described in a medical/surgical context); the descriptions could be related to Imhotep, regarded as the first Egyptian physician, who anatomized and described the human body. Though other medical documents of ancient times were full of incantations and applications meant to turn away disease-causing demons and other superstition, the Edwin Smith Papyrus gives remedies for almost 50 conditions, and only two contain incantations to ward off evil. Ancient Greek philosophers, from Thales (fl. 550 BC) through even to the Roman period, developed an elaborate theory of what they termed the psuchẽ (psyche) (from which the first half of "psychology" is derived), as well as other "psychological" terms – nous, thumos, logistikon, etc. In Classical Greece (fifth century BC), philosophers taught "naturalism", the belief that laws of nature shape our world, as opposed to gods and demons determining human fate. Alcmaeon, for example, believed the brain, not the heart, was the "organ of thought". He tracked the ascending sensory nerves from the body to the brain, theorizing that mental activity originated in the region where the central nervous system is located and that the cause of mental illness resided within the brain. He applied this understanding to classify mental diseases and treatments. One of the most influential Ancient Greek contributions to psychology came from the accounts of Plato (especially in the Republic), of Pythagoras, and of Aristotle (esp. Peri Psyches, also known by its Latin title, De Anima).
Plato's tripartite theory of the soul, Chariot Allegory and concepts such as eros defined the subsequent Western Philosophy views of the psyche and anticipated modern psychological proposals. For example, concepts such as id, ego, super-ego and libido were interpreted by psychoanalysts as having been anticipated by Plato, to the extent that "in 1920, Freud decided to present Plato as the precursor of his own theory, as part of a strategy directed to define the scientific and cultural collocation of psychoanalysis". Other Hellenistic philosophers, namely the Stoics and Epicurians, diverged from the Classical Greek tradition in several important ways, especially in their concern with questions of the physiological basis of the mind. The Roman physician Galen addressed these issues most elaborately and influentially of all. The Greek tradition influenced some Christian and Islamic thought on the topic. In the Judeo-Christian tradition, the Manual of Discipline (from the Dead Sea Scrolls, – 61 AD) notes the division of human nature into two temperaments or opposing spirits of either veracity or perversity. Walter M. Freeman proposes that Thomism is the philosophical system explaining cognition that is most compatible with neurodynamics, in a 2008 article in the journal Mind and Matter entitled "Nonlinear Brain Dynamics and Intention According to Aquinas". In Asia, China had a long history of administering tests of ability as part of its education system. Chinese texts from 2500 years ago mention neuropsychiatric illness, including descriptions of mania and psychosis with or without epilepsy. "Imbalance" was the mechanism of psychosis. Other conditions described include confusion, visual illusions, intoxication, stress, and even malingering. Psychological theories about stages of human development can be traced to the time of Confucius, about 2500 years ago. In the 6th century AD, Lin Xie carried out an early experiment, in which he asked people to draw a square with one hand and at the same time draw a circle with the other (ostensibly to test people's vulnerability to distraction). It has been cited that this was the first psychology experiment. India had a theory of "the self" in its Vedanta philosophical writings. Additionally, Indians thought about the individual's self as being enclosed by different levels known as koshas. Additionally, the Sankya philosophy said that the mind has five components, including manas (lower mind), ahankara (sense of I-ness), chitta (memory bank of mind), buddhi (intellect), and atman (self/soul). Patanjali was one of the founders of the yoga tradition, sometime between 200 and 400 BC (pre-dating Buddhist psychology) and a student of the Vedas. He developed the science of breath and mind and wrote his knowledge in the form of between 194 and 196 aphorisms called the Yoga Sutras of Patanjali. He developed modern Yoga for psychological resilience and balance. He is reputed to have used yoga therapeutically for anxiety, depression and mental disorders as common then as now. Buddhist philosophies have developed several psychological theories (see Buddhism and psychology), formulating interpretations of the mind and concepts such as aggregates (skandhas), emptiness (sunyata), non-self (anatta), mindfulness and Buddha-nature, which are addressed today by theorists of humanistic and transpersonal psychology. 
Several Buddhist lineages have developed notions analogous to those of modern Western psychology, such as the unconscious, personal development and character improvement, the latter being part of the Noble Eightfold Path and expressed, for example, in the Tathagatagarbha Sutra. Hinayana traditions, such as the Theravada, focus more on individual meditation, while Mahayana traditions also emphasize the attainment of a Buddha nature of wisdom (prajña) and compassion (karuṇā) in the realization of the bodhisattva ideal, affirming it more metaphysically, with charity and helping sentient beings regarded as cosmically fundamental. Buddhist monk and scholar D. T. Suzuki describes the importance of the individual's inner enlightenment and the self-realization of the mind. Researcher David Germano, in his thesis on Longchenpa, also shows the importance of self-actualization in the dzogchen teaching lineage. Medieval Muslim physicians also developed practices to treat patients with a variety of "diseases of the mind". Ahmed ibn Sahl al-Balkhi (850–934) was among the first, in this tradition, to discuss disorders related to both the body and the mind. Al-Balkhi recognized that the body and the soul can be healthy or sick, or "balanced or imbalanced". He wrote that imbalance of the body can result in fever, headaches and other bodily illnesses, while imbalance of the soul can result in anger, anxiety, sadness and other nafs-related symptoms. Avicenna, similarly, did early work in the treatment of nafs-related illnesses, and developed a system for associating changes in the mind with inner feelings. Avicenna also described phenomena we now recognize as neuropsychiatric conditions, including hallucination, mania, nightmare, melancholia, dementia, epilepsy and tremor. Ancient and medieval thinkers who discussed issues related to psychology included: Socrates of Athens (c. 470 – 399 BC), who emphasized virtue ethics and, in epistemology, understood dialectic to be central to the pursuit of truth. The Greek physician Hippocrates, who, as early as the 4th century BC, theorized that mental disorders had physical rather than supernatural causes. Plato, whose tripartite theory of the soul, Chariot Allegory and concepts such as eros defined subsequent Western philosophical views of the psyche and anticipated modern psychological proposals. Alcmaeon, who theorized that the brain is the seat of the mind. In 387 BC, Plato suggested that the brain is where mental processes take place. Boethius, whose work presented an imaginary psychological dialogue between himself and Philosophy, personified as a woman, arguing that, despite the apparent inequality of the world, consolation can be found in philosophy. In the 6th century AD, Lin Xie carried out an early psychological analysis experiment, which has been cited as the first psychology experiment. Ali ibn Sahl Rabban al-Tabari, who developed al-'ilaj al-nafs (sometimes translated as "psychotherapy"). Padmasambhava, the 8th-century medicine Buddha of Tibet, called from then-Buddhist India to tame the Tibetans, who was instrumental in developing Tibetan psychiatric medicine. Patanjali, who founded Yoga and its method of psychological balance and resilience through breathing exercises and inner peace. Abu al-Qasim al-Zahrawi (Abulcasis), who described head surgery. Ibn Tufail, who anticipated the tabula rasa argument and the nature versus nurture debate. William of Ockham, who wrote extensively on logic and is remembered for Occam's razor. Thomas Aquinas, whose works devoted attention to the emotions.
Albertus Magnus, who treated metaphysical and moral questions in his psychological and philosophical writings. Maimonides, who described rabies and belladonna intoxication. Witelo, considered a precursor of the psychology of perception; his Perspectiva contains much psychological material, outlining views that are close to modern notions on the association of ideas and on the subconscious. Further development Many of the Ancients' writings would have been lost without the efforts of Muslim, Christian, and Jewish translators in the House of Wisdom, the House of Knowledge, and other such institutions in the Islamic Golden Age, whose glosses and commentaries were later translated into Latin in the 12th century. However, it is not clear how these sources first came to be used during the Renaissance, and their influence on what would later emerge as the discipline of psychology is a topic of scholarly debate. Etymology and the early usage of the word The first print use of the term "psychology", that is, the Greek-inspired neo-Latin psychologia, is found in multiple works dated 1525. The coinage has long been attributed to the German scholastic philosopher Rudolf Göckel (1547–1628, often known under the Latin form Rodolphus Goclenius), who published the Psychologia hoc est: de hominis perfectione, animo et imprimis ortu hujus... in Marburg in 1590. Croatian humanist Marko Marulić (1450–1524) likely used the term in the title of a Latin treatise entitled Psichiologia de ratione animae humanae (c.1510–1517). Although the treatise itself has not been preserved, its title appears in a list of Marulić's works compiled by his younger contemporary, Franjo Bozicevic-Natalis, in his "Vita Marci Maruli Spalatensis" (Krstić, 1964). The term did not come into popular usage until the German Rationalist philosopher Christian Wolff (1679–1754) used it in his works Psychologia empirica (1732) and Psychologia rationalis (1734). This distinction between empirical and rational psychology was picked up in Denis Diderot's (1713–1780) and Jean le Rond d'Alembert's (1717–1783) Encyclopédie (1751–1784) and was popularized in France by Maine de Biran (1766–1824). In England, the term "psychology" overtook "mental philosophy" in the middle of the 19th century, especially in the work of William Hamilton (1788–1856). Enlightenment psychological thought Early psychology was regarded as the study of the soul (in the Christian sense of the term). The modern philosophical form of psychology was heavily influenced by the works of René Descartes (1596–1650), and the debates that he generated, of which the most relevant were the objections to his Meditations on First Philosophy (1641), published with the text. Also important to the later development of psychology were his Passions of the Soul (1649) and Treatise on Man (completed in 1632 but, along with the rest of The World, withheld from publication after Descartes heard of the Catholic Church's condemnation of Galileo; it was eventually published posthumously, in 1664). Although not educated as a physician, Descartes did extensive anatomical studies of bulls' hearts and was considered important enough that William Harvey responded to him. Descartes was one of the first to endorse Harvey's model of the circulation of the blood, but disagreed with his metaphysical framework to explain it.
Descartes dissected animals and human cadavers and as a result was familiar with the research on the flow of blood, leading to the conclusion that the body is a complex device that is capable of moving without the soul, thus contradicting the "Doctrine of the Soul". The emergence of psychology as a medical discipline was given a major boost by Thomas Willis, not only in his reference to psychology (the "Doctrine of the Soul") in terms of brain function, but through his detailed 1672 anatomical work and his treatise ("Two Discourses on the Souls of Brutes"—meaning "beasts"). However, Willis acknowledged the influence of Descartes's rival, Pierre Gassendi, as an inspiration for his work. The philosophers of the British Empiricist and Associationist schools had a profound impact on the later course of experimental psychology. John Locke's An Essay Concerning Human Understanding (1689), George Berkeley's Treatise Concerning the Principles of Human Knowledge (1710), and David Hume's A Treatise of Human Nature (1739–1740) were particularly influential, as were David Hartley's Observations on Man (1749) and John Stuart Mill's A System of Logic (1843). Also notable was the work of some Continental Rationalist philosophers, especially Baruch Spinoza's (1632–1677) On the Improvement of the Understanding (1662) and Gottfried Wilhelm Leibniz's (1646–1716) New Essays on Human Understanding (completed 1705, published 1765). Another important contribution was Friedrich August Rauch's (1806–1841) book Psychology: Or, A View of the Human Soul; Including Anthropology (1840), the first English exposition of Hegelian philosophy for an American audience. German idealism pioneered the proposition of the unconscious, which Jung considered to have been described psychologically for the first time by physician and philosopher Carl Gustav Carus. Also notable was its use by Friedrich Wilhelm Joseph von Schelling (1775–1854), and by Eduard von Hartmann in Philosophy of the Unconscious (1869); psychologist Hans Eysenck writes in Decline and Fall of the Freudian Empire (1985) that Hartmann's version of the unconscious is very similar to Freud's. The Danish philosopher Søren Kierkegaard also influenced the humanistic, existential, and modern psychological schools with his works The Concept of Anxiety (1844) and The Sickness Unto Death (1849). Transition to contemporary psychology Also influential on the emerging discipline of psychology were debates surrounding the efficacy of Mesmerism (a precursor to hypnosis) and the value of phrenology. The former was developed in the 1770s by Austrian physician Franz Mesmer (1734–1815), who claimed to use the power of gravity, and later of "animal magnetism", to cure various physical and mental ills. As Mesmer and his treatment became increasingly fashionable in both Vienna and Paris, it also began to come under the scrutiny of suspicious officials. In 1784, an investigation was commissioned in Paris by King Louis XVI which included American ambassador Benjamin Franklin, chemist Antoine Lavoisier and physician Joseph-Ignace Guillotin (later the popularizer of the guillotine). They concluded that Mesmer's method was useless. Abbé Faria, an Indo-Portuguese priest, revived public interest in animal magnetism. Unlike Mesmer, Faria claimed that the effect was 'generated from within the mind' by the power of expectancy and cooperation of the patient.
Although disputed, the "magnetic" tradition continued among Mesmer's students and others, resurfacing in England in the 19th century in the work of the physician John Elliotson (1791–1868) and the surgeons James Esdaile (1808–1859) and James Braid (1795–1860) (who reconceptualized it as a property of the subject's mind rather than a "power" of the Mesmerist's, and relabeled it "hypnotism"). Mesmerism also continued to have a strong social (if not medical) following in England through the 19th century (see Winter, 1998). Faria's approach was significantly extended by the clinical and theoretical work of Ambroise-Auguste Liébeault and Hippolyte Bernheim of the Nancy School. Faria's theoretical position, and the subsequent experiences of those in the Nancy School, made significant contributions to the later autosuggestion techniques of Émile Coué. Hypnosis was adopted for the treatment of hysteria by the director of Paris's Salpêtrière Hospital, Jean-Martin Charcot (1825–1893). Phrenology began as "organology", a theory of brain structure developed by the German physician Franz Joseph Gall (1758–1828). Gall argued that the brain is divided into a large number of functional "organs", each responsible for particular human mental abilities and dispositions – hope, love, spirituality, greed, language, the abilities to detect the size, form, and color of objects, etc. He argued that the larger each of these organs is, the greater the power of the corresponding mental trait. Further, he argued that one could detect the sizes of the organs in a given individual by feeling the surface of that person's skull. Gall's ultra-localizationist position with respect to the brain was soon attacked, most notably by French anatomist Pierre Flourens (1794–1867), who conducted ablation studies (on chickens) which purported to demonstrate little or no cerebral localization of function. Although Gall had been a serious (if misguided) researcher, his theory was taken by his assistant, Johann Gaspar Spurzheim (1776–1832), and developed into the profitable, popular enterprise of phrenology, which soon spawned, especially in Britain, a thriving industry of independent practitioners. In the hands of Scottish religious leader George Combe (1788–1858) (whose book The Constitution of Man was one of the best-sellers of the century), phrenology became strongly associated with political reform movements and egalitarian principles (see, e.g., Shapin, 1975; but also see van Wyhe, 2004). Spurzheim soon spread phrenology to America as well, where itinerant practical phrenologists assessed the mental well-being of willing customers (see Sokal, 2001; Thompson 2021). The development of modern psychology was closely linked to psychiatry in the eighteenth and nineteenth centuries (see History of psychiatry), when the treatment of the mentally ill in hospices was revolutionized after Europeans first considered their pathological conditions. In fact, there was no distinction between the two areas in psychotherapeutic practice, in an era when there was still no drug treatment for mental disorders (before the so-called psychopharmacology revolution that began around 1950), and early theorists and pioneering clinical psychologists generally had a medical background.
The first in the West to implement a humanitarian and scientific treatment of mental health, based on Enlightenment ideas, were the French alienists, who developed the empirical observation of psychopathology, describing clinical conditions and their physiological relationships and classifying them. Theirs was called the rationalist-empirical school, whose best-known exponents were Pinel, Esquirol, Falret, Morel and Magnan. In the late nineteenth century, the French current was gradually overtaken by the German field of study. At first, the German school was influenced by romantic ideals and gave rise to a line of speculators about mental processes, based more on empathy than reason. They became known as Psychiker, mentalists or psychologists, with different currents represented by Reil (creator of the word "psychiatry"), Heinroth (first to use the term "psychosomatic"), Ideler and Carus. In the middle of the century, a "somatic reaction" formed against the speculative doctrines of mentalism, and it was based on neuroanatomy and neuropathology. Within it, important contributions to psychopathological classification were made by Griesinger, Westphal, Krafft-Ebing and Kahlbaum, who, in turn, would influence Wernicke and Meynert. Kraepelin revolutionized the field as the first to define the diagnostic aspects of mental disorders in terms of syndromes, and the work of psychological classification was carried forward into the contemporary field by contributions from Schneider, Kretschmer, Leonhard, and Jaspers. In Great Britain, notable nineteenth-century figures include Alexander Bain, founder of the first journal of psychology, Mind, and author of reference books on the subject at the time, such as Mental Science: The Compendium of Psychology, and the History of Philosophy (1868), and Henry Maudsley. In Switzerland, Bleuler coined the terms "depth psychology", "schizophrenia", "schizoid" and "autism". In the United States, the Swiss psychiatrist Adolf Meyer maintained that the patient should be regarded as an integrated "psychobiological" whole, emphasizing psychosocial factors, concepts that paved the way for so-called psychosomatic medicine. Emergence of German experimental psychology Until the middle of the 19th century, psychology was widely regarded as a branch of philosophy. Whether it could become an independent scientific discipline was questioned even earlier: Immanuel Kant (1724–1804) declared in his Metaphysical Foundations of Natural Science (1786) that psychology might perhaps never become a "proper" natural science because its phenomena cannot be quantified, among other reasons. Kant proposed an alternative conception of an empirical investigation of human thought, feeling, desire, and action, and lectured on these topics for over twenty years (1772/73–1795/96). His Anthropology from a Pragmatic Point of View (1798), which resulted from these lectures, looks like an empirical psychology in many respects. Johann Friedrich Herbart (1776–1841) took issue with what he viewed as Kant's conclusion and attempted to develop a mathematical basis for a scientific psychology. Although he was unable to empirically realize the terms of his psychological theory, his efforts did lead scientists such as Ernst Heinrich Weber (1795–1878) and Gustav Theodor Fechner (1801–1887) to attempt to measure the mathematical relationships between the physical magnitudes of external stimuli and the psychological intensities of the resulting sensations. Fechner (1860) is the originator of the term psychophysics.
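To illustrate the psychophysical program that Weber and Fechner began (a standard textbook formulation, not a claim drawn from the sources cited in this article): Weber's finding was that the just-noticeable difference \(\Delta I\) in a stimulus grows in proportion to its intensity \(I\), and Fechner extended this into a logarithmic law relating sensed magnitude \(S\) to stimulus intensity,
\[ \frac{\Delta I}{I} = k, \qquad S = k \ln\frac{I}{I_0}, \]
where \(I_0\) is the absolute threshold and \(k\) is a constant that depends on the sensory modality.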
Meanwhile, individual differences in reaction time had become a critical issue in the field of astronomy, under the name of the "personal equation". Early research by Friedrich Wilhelm Bessel (1784–1846) in Königsberg and Adolf Hirsch led to the development of a highly precise chronoscope by Matthäus Hipp that, in turn, was based on a design by Charles Wheatstone for a device that measured the speed of artillery shells (Edgell & Symes, 1906). Other timing instruments were borrowed from physiology (e.g., Carl Ludwig's kymograph) and adapted for use by the Utrecht ophthalmologist Franciscus Donders (1818–1899) and his student Johan Jacob de Jaager in measuring the duration of simple mental decisions. The 19th century was also the period in which physiology, including neurophysiology, professionalized and saw some of its most significant discoveries. Among its leaders were Charles Bell (1774–1843) and François Magendie (1783–1855), who independently discovered the distinction between sensory and motor nerves in the spinal column, Johannes Müller (1801–1858), who proposed the doctrine of specific nerve energies, Emil du Bois-Reymond (1818–1896), who studied the electrical basis of muscle contraction, Pierre Paul Broca (1824–1880) and Carl Wernicke (1848–1905), who identified areas of the brain responsible for different aspects of language, as well as Gustav Fritsch (1837–1927), Eduard Hitzig (1839–1907), and David Ferrier (1843–1928), who localized sensory and motor areas of the brain. One of the principal founders of experimental physiology, Hermann Helmholtz (1821–1894), conducted studies of a wide range of topics that would later be of interest to psychologists – the speed of neural transmission, the natures of sound and color, and of our perceptions of them, etc. In the 1860s, while he held a position in Heidelberg, Helmholtz engaged as an assistant a young physician named Wilhelm Wundt. Wundt employed the equipment of the physiology laboratory – chronoscope, kymograph, and various peripheral devices – to address more complicated psychological questions than had, until then, been investigated experimentally. In particular he was interested in the nature of apperception – the point at which a perception occupies the central focus of conscious awareness. In 1874 Wundt took up a professorship in Zürich, the year in which he published his landmark textbook, Grundzüge der physiologischen Psychologie (Principles of Physiological Psychology, 1874). Moving to a more prestigious professorship in Leipzig in 1875, Wundt founded a laboratory specifically dedicated to original research in experimental psychology in 1879, the first laboratory of its kind in the world. In 1883, he launched a journal in which to publish the results of his, and his students', research, Philosophische Studien (Philosophical Studies) (for more on Wundt, see, e.g., Bringmann & Tweney, 1980; Rieber & Robinson, 2001). Wundt attracted a large number of students not only from Germany, but also from abroad. Among his most influential American students were G. Stanley Hall (who had already obtained a PhD from Harvard under the supervision of William James), James McKeen Cattell (who was Wundt's first assistant), and Frank Angell (who founded laboratories at both Cornell and Stanford). The most influential British student was Edward Bradford Titchener (who later became professor at Cornell). Experimental psychology laboratories were soon also established at Berlin by Carl Stumpf (1848–1936) and at Göttingen by Georg Elias Müller (1850–1934).
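Donders' subtractive logic, mentioned above, can be illustrated with hypothetical figures (the numbers here are invented for illustration and are not his actual data): if a simple reaction – pressing a key whenever any light appears – takes about 200 ms, and a choice reaction – pressing one key for a red light and another for a green light – takes about 285 ms, then the difference of roughly 85 ms is attributed to the added mental stages of stimulus discrimination and response selection. Subtracting the durations of tasks that differ by a single stage is, in outline, how Donders estimated the duration of mental processes.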
Another major German experimental psychologist of the era, though he did not direct his own research institute, was Hermann Ebbinghaus (1850–1909). Psychoanalysis Experimentation was not the only approach to psychology in the German-speaking world at this time. Starting in the 1890s, employing the case study technique, the Viennese physician Sigmund Freud developed and applied the methods of hypnosis, free association, and dream interpretation to reveal putatively unconscious beliefs and desires that he argued were the underlying causes of his patients' "hysteria". He dubbed this approach psychoanalysis. Freudian psychoanalysis is particularly notable for the emphasis it places on the course of an individual's sexual development in pathogenesis. Psychoanalytic concepts have had a strong and lasting influence on Western culture, particularly on the arts. Although its scientific contribution is still a matter of debate, both Freudian and Jungian psychology revealed the existence of compartmentalized thinking, in which some behavior and thoughts are hidden from consciousness – yet operative as part of the complete personality. Hidden agendas, a bad conscience, or a sense of guilt are examples of mental processes in which the individual is not conscious, through choice or lack of understanding, of some aspects of their personality and subsequent behavior. Psychoanalysis examines mental processes which affect the ego. An understanding of these theoretically allows the individual greater choice and consciousness, with a healing effect in neurosis and occasionally in psychosis, both of which Richard von Krafft-Ebing defined as "diseases of the personality". Freud founded the International Psychoanalytic Association in 1910, inspired also by Ferenczi. His main theoretical successors were Anna Freud (his daughter) and Melanie Klein, particularly in child psychoanalysis, both inaugurating competing concepts; in addition there were those who became dissidents and developed interpretations different from Freud's psychoanalytic one, thus called by some neo-Freudians, or more correctly post-Freudians: the best known are Alfred Adler (individual psychology), Carl Gustav Jung (analytical psychology), Otto Rank, Karen Horney, Erik Erikson and Erich Fromm. Jung was an associate of Freud's who later broke with him over Freud's emphasis on sexuality. Working with concepts of the unconscious first noted during the 1800s (by John Stuart Mill, Krafft-Ebing, Pierre Janet, Théodore Flournoy and others), Jung defined four mental functions which relate to and define the ego, the conscious self: sensation, which tells consciousness that something is there; feeling, which consists of value judgments and motivates our reaction to what we have sensed; intellect, an analytic function that compares the sensed event to all known others and gives it a class and category, allowing us to understand a situation within a historical process, personal or public; and intuition, a mental function with access to deep behavioral patterns, able to suggest unexpected solutions or predict unforeseen consequences, "as if seeing around corners" as Jung put it. Jung insisted on an empirical psychology in which theories must be based on facts and not on the psychologist's projections or expectations. Early American Around 1875 the Harvard physiology instructor (as he then was), William James, opened a small experimental psychology demonstration laboratory for use with his courses.
The laboratory was never used, at that time, for original research, and so controversy remains as to whether it is to be regarded as the "first" experimental psychology laboratory or not. In 1878, James gave a series of lectures at Johns Hopkins University entitled "The Senses and the Brain and their Relation to Thought" in which he argued, contra Thomas Henry Huxley, that consciousness is not epiphenomenal, but must have an evolutionary function, or it would not have been naturally selected in humans. The same year James was contracted by Henry Holt to write a textbook on the "new" experimental psychology. If he had written it quickly, it would have been the first English-language textbook on the topic. It was twelve years, however, before his two-volume The Principles of Psychology would be published. In the meantime textbooks were published by George Trumbull Ladd of Yale (1887) and James Mark Baldwin, then of Lake Forest College (1889). William James was one of the founders of the American Society for Psychical Research in 1885, which studied psychic phenomena (parapsychology), before the creation of the American Psychological Association in 1892. James was also president of the British society that inspired its United States counterpart, the Society for Psychical Research, founded in 1882, which investigated psychology and the paranormal through topics such as mediumship, dissociation, telepathy and hypnosis. The society also innovated research in psychology: according to science historian Andreas Sommer, it "devised methodological innovations such as randomized study designs" and conducted "the first experiments investigating the psychology of eyewitness testimony (Hodgson and Davey, 1887), [and] empirical and conceptual studies illuminating mechanisms of dissociation and hypnotism". Its members also initiated and organized the International Congresses of Physiological/Experimental Psychology. In 1879 Charles Sanders Peirce was hired as a philosophy instructor at Johns Hopkins University. Although better known for his astronomical and philosophical work, Peirce also conducted what are perhaps the first American psychology experiments, on the subject of color vision, published in 1877 in the American Journal of Science (see Cadwallader, 1974). Peirce and his student Joseph Jastrow published "On Small Differences in Sensation" in the Memoirs of the National Academy of Sciences in 1884. In 1882, Peirce was joined at Johns Hopkins by G. Stanley Hall, who opened the first American research laboratory devoted to experimental psychology in 1883. Peirce was forced out of his position by scandal, and Hall was awarded the only professorship in philosophy at Johns Hopkins. In 1887 Hall founded the American Journal of Psychology, which published work primarily emanating from his own laboratory. In 1888 Hall left his Johns Hopkins professorship for the presidency of the newly founded Clark University, where he remained for the rest of his career. Soon, experimental psychology laboratories were opened at the University of Pennsylvania (in 1887, by James McKeen Cattell), Indiana University (1888, William Lowe Bryan), the University of Wisconsin (1888, Joseph Jastrow), Clark University (1889, Edmund Sanford), the McLean Asylum (1889, William Noyes), and the University of Nebraska (1889, Harry Kirke Wolfe).
However, it was Princeton University's Eno Hall, built in 1924, that became the first university building in the United States to be devoted entirely to experimental psychology when it became the home of the university's Department of Psychology. In 1890, William James' The Principles of Psychology finally appeared, and rapidly became the most influential textbook in the history of American psychology. It laid many of the foundations for the sorts of questions that American psychologists would focus on for years to come. The book's chapters on consciousness, emotion, and habit were particularly agenda-setting. One of those who felt the impact of James' Principles was John Dewey, then professor of philosophy at the University of Michigan. With his junior colleagues, James Hayden Tufts (who founded the psychology laboratory at Michigan) and George Herbert Mead, and his student James Rowland Angell, this group began to reformulate psychology, focusing more strongly on the social environment and on the activity of mind and behavior than the psychophysics-inspired physiological psychology of Wundt and his followers had heretofore. Tufts left Michigan for another junior position at the newly founded University of Chicago in 1892. A year later, the senior philosopher at Chicago, Charles Strong, resigned, and Tufts recommended to Chicago president William Rainey Harper that Dewey be offered the position. After initial reluctance, Dewey was hired in 1894. Dewey soon filled out the department with his Michigan companions Mead and Angell. These four formed the core of the Chicago School of psychology. In 1892, G. Stanley Hall invited 30-some psychologists and philosophers to a meeting at Clark with the purpose of founding a new American Psychological Association (APA). (On the history of the APA, see Evans, Staudt Sexton, & Cadwallader, 1992.) The first annual meeting of the APA was held later that year, hosted by George Stuart Fullerton at the University of Pennsylvania. Almost immediately tension arose between the experimentally and philosophically inclined members of the APA. Edward Bradford Titchener and Lightner Witmer launched an attempt to either establish a separate "Section" for philosophical presentations, or to eject the philosophers altogether. After nearly a decade of debate, a Western Philosophical Association was founded and held its first meeting in 1901 at the University of Nebraska. The following year (1902), an American Philosophical Association held its first meeting at Columbia University. These ultimately became the Central and Eastern Divisions of the modern American Philosophical Association. In 1894, a number of psychologists, unhappy with the parochial editorial policies of the American Journal of Psychology approached Hall about appointing an editorial board and opening the journal out to more psychologists not within Hall's immediate circle. Hall refused, so James McKeen Cattell (then of Columbia) and James Mark Baldwin (then of Princeton) co-founded a new journal, Psychological Review, which rapidly grew to become a major outlet for American psychological researchers. Beginning in 1895, James Mark Baldwin (Princeton, Hopkins) and Edward Bradford Titchener (Cornell) entered into an increasingly acrimonious dispute over the correct interpretation of some anomalous reaction time findings that had come from the Wundt laboratory (originally reported by Ludwig Lange and James McKeen Cattell). In 1896, James Rowland Angell and Addison W. 
Moore (Chicago) published a series of experiments in Psychological Review appearing to show that Baldwin was the more correct of the two. However, they interpreted their findings in light of John Dewey's new approach to psychology, which rejected the traditional stimulus-response understanding of the reflex arc in favor of a "circular" account in which what serves as "stimulus" and what as "response" depends on how one views the situation. The full position was laid out in Dewey's landmark article "The Reflex Arc Concept in Psychology" which also appeared in Psychological Review in 1896. Titchener responded in Philosophical Review (1898, 1899) by distinguishing his austere "structural" approach to psychology from what he termed the Chicago group's more applied "functional" approach, and thus began the first major theoretical rift in American psychology between Structuralism and Functionalism. The group at Columbia, led by James McKeen Cattell, Edward L. Thorndike, and Robert S. Woodworth, was often regarded as a second (after Chicago) "school" of American Functionalism (see, e.g., Heidbreder, 1933), although they never used that term themselves, because their research focused on the applied areas of mental testing, learning, and education. Dewey was elected president of the APA in 1899, while Titchener dropped his membership in the association. (In 1904, Titchener formed his own group, eventually known as the Society of Experimental Psychologists.) Jastrow promoted the functionalist approach in his APA presidential address of 1900, and Angell adopted Titchener's label explicitly in his influential textbook of 1904 and his APA presidential address of 1906. In reality, Structuralism was, more or less, confined to Titchener and his students. (It was Titchener's former student E. G. Boring, writing A History of Experimental Psychology [1929/1950, the most influential textbook of the 20th century about the discipline], who launched the common idea that the structuralism/functionalism debate was the primary fault line in American psychology at the turn of the 20th century.) Functionalism, broadly speaking, with its more practical emphasis on action and application, better suited the American cultural "style" and, perhaps more important, was more appealing to pragmatic university trustees and private funding agencies. Early French Jules Baillarger founded, in 1847, one of the first associations of its kind in the field, which also published its own journal. France already had a pioneering tradition in psychological study, notably the publication in 1831 of Adolphe Garnier's "Summary of a Psychology Course"; Garnier also published the "Treatise of the Faculties of the Soul, comprising the history of major psychological theories" in 1852. Garnier's work was called "the best monument of psychological science of our time" by the Revue des Deux Mondes in 1864. In no small measure because of the conservatism of the reign of Louis Napoléon (president, 1848–1852; emperor as "Napoléon III", 1852–1870), academic philosophy in France through the middle part of the 19th century was controlled by members of the eclectic and spiritualist schools, led by figures such as Victor Cousin (1792–1867), Théodore Jouffroy (1796–1842), and Paul Janet (1823–1899). These were traditional metaphysical schools, opposed to regarding psychology as a natural science. With the ouster of Napoléon III after the débacle of the Franco-Prussian War, new paths, both political and intellectual, became possible.
From 1870 forward, a steadily increasing interest in positivist, materialist, evolutionary, and deterministic approaches to psychology developed, influenced by, among others, the work of Hippolyte Taine (1828–1893) (e.g., De L'Intelligence, 1870) and Théodule Ribot (1839–1916) (e.g., La Psychologie Anglaise Contemporaine, 1870). In 1876, Ribot founded Revue Philosophique (the same year as Mind was founded in Britain), which for the next generation would be virtually the only French outlet for the "new" psychology (Plas, 1997). Although Ribot was not a working experimentalist himself, his many books were to have a profound influence on the next generation of psychologists. These included especially a study of 1873 and La Psychologie Allemande Contemporaine (1879). In the 1880s, Ribot's interests turned to psychopathology, writing books on disorders of memory (1881), will (1883), and personality (1885), in which he attempted to bring to these topics the insights of general psychology. Although in 1881 he lost a Sorbonne professorship in the History of Psychological Doctrines to the traditionalist Jules Soury (1842–1915), from 1885 to 1889 he taught experimental psychology at the Sorbonne. In 1889 he was awarded a chair at the Collège de France in Experimental and Comparative Psychology, which he held until 1896 (Nicolas, 2002). France's primary psychological strength lay in the field of psychopathology. The chief neurologist at the Salpêtrière Hospital in Paris, Jean-Martin Charcot (1825–1893), had been using the recently revived and renamed (see above) practice of hypnosis to "experimentally" produce hysterical symptoms in some of his patients. Two of his students, Alfred Binet (1857–1911) and Pierre Janet (1859–1947), adopted and expanded this practice in their own work. In 1889, Binet and his colleague Henri Beaunis (1830–1921) co-founded, at the Sorbonne, the first experimental psychology laboratory in France. Just five years later, in 1894, Beaunis, Binet, and a third colleague, Victor Henri (1872–1940), co-founded the first French journal dedicated to experimental psychology, L'Année Psychologique. In the first years of the 20th century, Binet was requested by the French government to develop a method for the newly founded universal public education system to identify students who would require extra assistance to master the standardized curriculum. In response, with his collaborator Théodore Simon (1873–1961), he developed the Binet–Simon Intelligence Test, first published in 1905 (revised in 1908 and 1911). Although the test was used to some effect in France, it would find its greatest success (and controversy) in the United States, where it was translated into English by Henry H. Goddard (1866–1957), the director of the Training School for the Feebleminded in Vineland, New Jersey, and his assistant, Elizabeth Kite (a translation of the 1905 edition appeared in the Vineland Bulletin in 1908, but much better known was Kite's 1916 translation of the 1908 edition, which appeared in book form). The translated test was used by Goddard to advance his eugenics agenda with respect to those he deemed congenitally feeble-minded, especially immigrants from non-Western European countries. Binet's test was revised by Stanford professor Lewis M. Terman (1877–1956) into the Stanford–Binet IQ test in 1916. With Binet's death in 1911, the Sorbonne laboratory and L'Année Psychologique fell to Henri Piéron (1881–1964). Piéron's orientation was more physiological than Binet's had been.
Pierre Janet became the leading psychiatrist in France, being appointed to the Salpêtrière (1890–1894), the Sorbonne (1895–1920), and the Collège de France (1902–1936). In 1904, he co-founded a journal of psychology with fellow Sorbonne professor Georges Dumas (1866–1946), a student and faithful follower of Ribot. Whereas Janet's teacher, Charcot, had focused on the neurological bases of hysteria, Janet was concerned to develop a scientific approach to psychopathology as a mental disorder. His theory that mental pathology results from conflict between unconscious and conscious parts of the mind, and that unconscious mental contents may emerge as symptoms with symbolic meanings, led to a public priority dispute with Sigmund Freud. Early British Although the British had the first scholarly journal dedicated to the topic of psychology – Mind, founded in 1876 by Alexander Bain and edited by George Croom Robertson – it was quite a long while before experimental psychology developed there to challenge the strong tradition of "mental philosophy". The experimental reports that appeared in Mind in the first two decades of its existence were almost entirely authored by Americans, especially G. Stanley Hall and his students (notably Henry Herbert Donaldson) and James McKeen Cattell. Francis Galton's (1822–1911) anthropometric laboratory opened in 1884. There people were tested on a wide variety of physical (e.g., strength of blow) and perceptual (e.g., visual acuity) attributes. In 1886 Galton was visited by James McKeen Cattell, who would later adapt Galton's techniques in developing his own mental testing research program in the United States. Galton was not primarily a psychologist, however. The data he accumulated in the anthropometric laboratory primarily went toward supporting his case for eugenics. To help interpret the mounds of data he accumulated, Galton developed a number of important statistical techniques, including the precursors to the scatterplot and the product-moment correlation coefficient (later perfected by Karl Pearson, 1857–1936). Soon after, Charles Spearman (1863–1945) developed the correlation-based statistical procedure of factor analysis in the process of building a case for his two-factor theory of intelligence, published in 1901. Spearman believed that people have an inborn level of general intelligence, or g, which can be crystallized into a specific skill in any of a number of narrow content areas (s, or specific intelligence). Laboratory psychology of the kind practiced in Germany and the United States was slow in coming to Britain. Although the philosopher James Ward (1843–1925) urged Cambridge University to establish a psychophysics laboratory from the mid-1870s forward, it was not until 1891 that they put so much as £50 toward some basic apparatus (Bartlett, 1937). A laboratory was established through the assistance of the physiology department in 1897, and a lectureship in psychology was created which first went to W. H. R. Rivers (1864–1922). Soon Rivers was joined by C. S. Myers (1873–1946) and William McDougall (1871–1938). This group showed as much interest in anthropology as psychology, going with Alfred Cort Haddon (1855–1940) on the famed Torres Straits expedition of 1898. In 1901 the Psychological Society was established (which renamed itself the British Psychological Society in 1906), and in 1904 Ward and Rivers co-founded the British Journal of Psychology.
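As a brief gloss on the statistical tools mentioned above (standard modern definitions, not drawn from this article's sources): the product-moment correlation that Pearson formalized for paired observations \((x_i, y_i)\) is
\[ r = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_i (x_i - \bar{x})^2}\,\sqrt{\sum_i (y_i - \bar{y})^2}}, \]
and Spearman's two-factor account can be written, roughly, as modeling a person's score on test \(j\) as \(x_j = \lambda_j g + s_j\), a weighted contribution of the general factor \(g\) plus a test-specific factor \(s_j\) – the kind of structure early factor analysis was designed to recover from a matrix of test intercorrelations.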
Early Russian Insofar as psychology was regarded as the science of the soul and institutionally part of philosophy courses in theology schools, psychology was present in Russia from the second half of the 18th century. By contrast, if by psychology we mean a separate discipline, with university chairs and people employed as psychologists, then it appeared only after the October Revolution. All the same, by the end of the 19th century, many different kinds of activities called psychology had spread in philosophy, natural science, literature, medicine, education, legal practice, and even military science. Psychology was as much a cultural resource as it was a defined area of scholarship. The question, "Who Is to Develop Psychology and How?", was of such importance that Ivan Sechenov, a physiologist and doctor by training and a teacher in institutions of higher education, chose it as the title for an essay in 1873. His question was rhetorical, for he was already convinced that physiology was the scientific basis on which to build psychology. The response to Sechenov's popular essay included one, in 1872–1873, from a liberal professor of law, Konstantin Kavelin. He supported a psychology drawing on ethnographic materials about national character, a program that had existed since 1847, when the ethnographic division of the recently founded Russian Geographical Society circulated a request for information on the people's way of life, including "intellectual and moral abilities". This was part of a larger debate about national character, national resources, and national development, in the context of which a prominent linguist, Alexander Potebnja, began, in 1862, to publish studies of the relation between mentality and language. Although it was the history and philology departments that traditionally taught courses in psychology, it was the medical schools that first introduced psychological laboratories and courses on experimental psychology. As early as the 1860s and 1870s, I. M. Balinskii (1827–1902) at the Military-Surgical Academy (which changed its name in the 1880s to the Military Medical Academy) in St. Petersburg and Sergey Korsakov, a psychiatrist at Moscow university, began to purchase psychometric apparatus. Vladimir Bekhterev created the first laboratory—a special space for psychological experiments—in Kazan' in 1885. At a meeting of the Moscow Psychological Society in 1887, the psychiatrists Grigory Rossolimo and Ardalion Tokarskii (1859–1901) demonstrated both Wundt's experiments and hypnosis. In 1895, Tokarskii set up a psychological laboratory in the psychiatric clinic of Moscow university with the support of its head, Korsakov, to teach future psychiatrists about what he promoted as new and necessary techniques. In January 1884, the philosophers Matvei Troitskii and Iakov Grot founded the Moscow Psychological Society. They wished to discuss philosophical issues, but because anything called "philosophical" could attract official disapproval, they used "psychological" as a euphemism. In 1907, Georgy Chelpanov announced a 3-year course in psychology based on laboratory work and a well-structured teaching seminar. In the following years, Chelpanov traveled in Europe and the United States to see existing institutes; the result was a luxurious four-story building for the Psychological Institute of Moscow with well-equipped laboratories, opening formally on March 23, 1914. 
Second generation German Würzburg School In 1896, one of Wilhelm Wundt's former Leipzig laboratory assistants, Oswald Külpe (1862–1915), founded a new laboratory in Würzburg. Külpe soon surrounded himself with a number of younger psychologists, the so-called Würzburg School, most notably Narziß Ach (1871–1946), Karl Bühler (1879–1963), Ernst Dürr (1878–1913), Karl Marbe (1869–1953), and Henry Jackson Watt (1879–1925). Collectively, they developed a new approach to psychological experimentation that flew in the face of many of Wundt's restrictions. Wundt had drawn a distinction between the old philosophical style of self-observation (Selbstbeobachtung) in which one introspected for extended durations on higher thought processes, and inner perception (innere Wahrnehmung) in which one could be immediately aware of a momentary sensation, feeling, or image (Vorstellung). The former was declared to be impossible by Wundt, who argued that higher thought could not be studied experimentally through extended introspection, but only humanistically through Völkerpsychologie (folk psychology). Only the latter was a proper subject for experimentation. The Würzburgers, by contrast, designed experiments in which the experimental subject was presented with a complex stimulus (for example a Nietzschean aphorism or a logical problem) and after processing it for a time (for example interpreting the aphorism or solving the problem), retrospectively reported to the experimenter all that had passed through his consciousness during the interval. In the process, the Würzburgers claimed to have discovered a number of new elements of consciousness (over and above Wundt's sensations, feelings, and images) including Bewußtseinslagen (conscious sets), Bewußtheiten (awarenesses), and Gedanken (thoughts). In the English-language literature, these are often collectively termed "imageless thoughts", and the debate between Wundt and the Würzburgers, the "imageless thought controversy". Wundt referred to the Würzburgers' studies as "sham" experiments and criticized them vigorously. Wundt's most significant English student, Edward Bradford Titchener, then working at Cornell, intervened in the dispute, claiming to have conducted extended introspective studies in which he was able to resolve the Würzburgers' imageless thoughts into sensations, feelings, and images. He thus, paradoxically, used a method of which Wundt did not approve in order to affirm Wundt's view of the situation. The imageless thought debate is often said to have been instrumental in undermining the legitimacy of all introspective methods in experimental psychology and, ultimately, in bringing about the behaviorist revolution in American psychology. It was not without its own delayed legacy, however. Herbert A. Simon (1981) cites the work of one Würzburg psychologist in particular, Otto Selz (1881–1943), for having inspired him to develop his famous problem-solving computer algorithms (such as Logic Theorist and General Problem Solver) and his "thinking out loud" method for protocol analysis. In addition, Karl Popper studied psychology under Bühler and Selz in the 1920s, and appears to have brought some of their influence, unattributed, to his philosophy of science. Gestalt psychology Whereas the Würzburgers debated with Wundt mainly on matters of method, another German movement, centered in Berlin, took issue with the widespread assumption that the aim of psychology should be to break consciousness down into putative basic elements. 
Instead, they argued that the psychological "whole" has priority and that the "parts" are defined by the structure of the whole, rather than vice versa. Thus, the school was named Gestalt, a German term meaning approximately "form" or "configuration". It was led by Max Wertheimer (1880–1943), Wolfgang Köhler (1887–1967), and Kurt Koffka (1886–1941). Wertheimer had been a student of the Austrian philosopher Christian von Ehrenfels (1859–1932), who claimed that in addition to the sensory elements of a perceived object, there is an extra element which, though in some sense derived from the organization of the standard sensory elements, is also to be regarded as an element in its own right. He called this extra element Gestalt-qualität or "form-quality". For instance, when one hears a melody, one hears the notes plus something in addition to them which binds them together into a tune – the Gestalt-qualität. It is the presence of this Gestalt-qualität which, according to Ehrenfels, allows a tune to be transposed to a new key, using completely different notes, but still retain its identity. Wertheimer took the more radical line that "what is given me by the melody does not arise ... as a secondary process from the sum of the pieces as such. Instead, what takes place in each single part already depends upon what the whole is" (1925/1938). In other words, one hears the melody first and only then may perceptually divide it up into notes. Similarly in vision, one sees the form of the circle first – it is given "im-mediately" (i.e. its apprehension is not mediated by a process of part-summation). Only after this primary apprehension might one notice that it is made up of lines or dots or stars. Gestalt-Theorie (Gestalt psychology) was officially initiated in 1912 in an article by Wertheimer on the phi-phenomenon, a perceptual illusion in which two stationary but alternately flashing lights appear to be a single light moving from one location to another. Contrary to popular opinion, his primary target was not behaviorism, as it was not yet a force in psychology. The aim of his criticism was, rather, the atomistic psychologies of Hermann von Helmholtz (1821–1894), Wilhelm Wundt (1832–1920), and other European psychologists of the time. The two men who served as Wertheimer's subjects in the phi experiment were Köhler and Koffka. Köhler was an expert in physical acoustics, having studied under physicist Max Planck (1858–1947), but had taken his degree in psychology under Carl Stumpf (1848–1936). Koffka was also a student of Stumpf's, having studied movement phenomena and psychological aspects of rhythm. In 1917 Köhler (1917/1925) published the results of four years of research on learning in chimpanzees. Köhler showed, contrary to the claims of most other learning theorists, that animals can learn by "sudden insight" into the "structure" of a problem, over and above the associative and incremental manner of learning that Ivan Pavlov (1849–1936) and Edward Lee Thorndike (1874–1949) had demonstrated with dogs and cats, respectively. The terms "structure" and "organization" were focal for the Gestalt psychologists. Stimuli were said to have a certain structure, to be organized in a certain way, and it is to this structural organization, rather than to individual sensory elements, that the organism responds. When an animal is conditioned, it does not simply respond to the absolute properties of a stimulus, but to its properties relative to its surroundings.
To use a favorite example of Köhler's, if conditioned to respond in a certain way to the lighter of two gray cards, the animal generalizes the relation between the two stimuli rather than the absolute properties of the conditioned stimulus: it will respond to the lighter of two cards in subsequent trials even if the darker card in the test trial is of the same intensity as the lighter one in the original training trials. In 1921 Koffka published a Gestalt-oriented text on developmental psychology, Growth of the Mind. With the help of American psychologist Robert Ogden, Koffka introduced the Gestalt point of view to an American audience in 1922 by way of a paper in Psychological Bulletin. It contains criticisms of then-current explanations of a number of problems of perception, and the alternatives offered by the Gestalt school. Koffka moved to the United States in 1924, eventually settling at Smith College in 1927. In 1935 Koffka published his Principles of Gestalt Psychology. This textbook laid out the Gestalt vision of the scientific enterprise as a whole. Science, he said, is not the simple accumulation of facts. What makes research scientific is the incorporation of facts into a theoretical structure. The goal of the Gestaltists was to integrate the facts of inanimate nature, life, and mind into a single scientific structure. This meant that science would have to swallow not only what Koffka called the quantitative facts of physical science but the facts of two other "scientific categories": questions of order and questions of Sinn, a German word which has been variously translated as significance, value, and meaning. Without incorporating the meaning of experience and behavior, Koffka believed that science would doom itself to trivialities in its investigation of human beings. Having survived the onslaught of the Nazis up to the mid-1930s, all the core members of the Gestalt movement were forced out of Germany to the United States by 1935. Köhler published another book, Dynamics in Psychology, in 1940 but thereafter the Gestalt movement suffered a series of setbacks. Koffka died in 1941 and Wertheimer in 1943. Wertheimer's long-awaited book on mathematical problem-solving, Productive Thinking, was published posthumously in 1945 but Köhler was now left to guide the movement without his two long-time colleagues. Emergence of behaviorism in America As a result of the conjunction of a number of events in the early 20th century, behaviorism gradually emerged as the dominant school in American psychology. First among these was the increasing skepticism with which many viewed the concept of consciousness: although still considered to be the essential element separating psychology from physiology, its subjective nature and the unreliable introspective method it seemed to require troubled many. William James' 1904 Journal of Philosophy... article "Does Consciousness Exist?" laid out the worries explicitly. Second was the gradual rise of a rigorous animal psychology. In addition to Edward Lee Thorndike's work with cats in puzzle boxes in 1898, research in which rats learn to navigate mazes was begun by Willard Small (1900, 1901 in American Journal of Psychology). Robert M. Yerkes's 1905 Journal of Philosophy... article "Animal Psychology and the Criteria of the Psychic" raised the general question of when one is entitled to attribute consciousness to an organism.
The following few years saw the emergence of John Broadus Watson (1878–1959) as a major player, publishing his dissertation on the relation between neurological development and learning in the white rat (1907, Psychological Review Monograph Supplement; Carr & Watson, 1908, J. Comparative Neurology & Psychology). Another important rat study was published by Henry H. Donaldson (1908, J. Comparative Neurology & Psychology). The year 1909 saw the first English-language account of Ivan Pavlov's studies of conditioning in dogs (Yerkes & Morgulis, 1909, Psychological Bulletin). A third factor was the rise of Watson to a position of significant power within the psychological community. In 1908, Watson was offered a junior position at Johns Hopkins by James Mark Baldwin. In addition to heading the Johns Hopkins department, Baldwin was the editor of the influential journals Psychological Review and Psychological Bulletin. Only months after Watson's arrival, Baldwin was forced to resign his professorship due to scandal. Watson was suddenly made head of the department and editor of Baldwin's journals. He resolved to use these powerful tools to revolutionize psychology in the image of his own research. In 1913 he published in Psychological Review the article that is often called the "manifesto" of the behaviorist movement, "Psychology as the Behaviorist Views It". There he argued that psychology "is a purely objective experimental branch of natural science", "introspection forms no essential part of its methods..." and "The behaviorist... recognizes no dividing line between man and brute". The following year, 1914, his first textbook, Behavior, went to press. Although behaviorism took some time to be accepted as a comprehensive approach (see Samelson, 1981), in no small part because of the intervention of World War I, by the 1920s Watson's revolution was well underway. The central tenet of early behaviorism was that psychology should be a science of behavior, not of the mind, and it rejected internal mental states such as beliefs, desires, or goals. Watson himself, however, was forced out of Johns Hopkins by scandal in 1920. Although he continued to publish during the 1920s, he eventually moved on to a career in advertising (see Coon, 1994). Among the behaviorists who continued on, there were a number of disagreements about the best way to proceed. Neo-behaviorists such as Edward C. Tolman, Edwin Guthrie, Clark L. Hull, and B. F. Skinner debated issues such as (1) whether to reformulate the traditional psychological vocabulary in behavioral terms or discard it in favor of a wholly new scheme, (2) whether learning takes place all at once or gradually, (3) whether biological drives should be included in the new science in order to provide a "motivation" for behavior, and (4) to what degree any theoretical framework is required over and above the measured effects of reinforcement and punishment on learning. By the late 1950s, Skinner's formulation had become dominant, and it remains a part of the modern discipline under the rubric of Behavior Analysis. Its application (Applied Behavior Analysis) has become one of the most widely applied fields of psychology. Behaviorism was the ascendant experimental model for research in psychology for much of the 20th century, largely due to the creation and successful application (not least in advertising) of conditioning theories as scientific models of human behaviour.
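The neo-behaviorists' second question, whether learning occurs all at once or gradually, is often illustrated with simple incremental-strengthening models. The sketch below is a minimal, hypothetical simulation in that spirit; the update rule and parameters are illustrative assumptions, not a reconstruction of any particular theorist's model.

```python
import random

def simulate_gradual_conditioning(trials=100, step_up=0.05, step_down=0.05, seed=1):
    """Toy, hypothetical illustration of the 'gradual learning' position.

    One response starts out unlikely; each time it is emitted and reinforced
    its probability rises slightly, and each time it is emitted and punished
    the probability falls slightly. This is a didactic sketch, not a
    reconstruction of Hull's, Skinner's, or any other historical theory.
    """
    random.seed(seed)
    p_response = 0.1          # initial probability of emitting the response
    history = [p_response]
    for _ in range(trials):
        if random.random() < p_response:   # the response occurs on this trial
            if random.random() < 0.8:      # assume it is usually reinforced
                p_response = min(1.0, p_response + step_up)
            else:                          # occasionally punished instead
                p_response = max(0.0, p_response - step_down)
        history.append(p_response)
    return history

if __name__ == "__main__":
    curve = simulate_gradual_conditioning()
    print(f"response probability after 100 trials: {curve[-1]:.2f}")
```

Under these assumptions the response strengthens over many trials rather than being acquired in a single step, which is the contrast the "gradual versus all-at-once" debate turned on.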
Second generation francophone Genevan School In 1918, Jean Piaget (1896–1980) turned away from his early training in natural history and began post-doctoral work in psychoanalysis in Zurich. Later Piaget rejected psychoanalysis, as he thought it was insufficiently empirical. In 1919, he moved to Paris to work at the Binet-Simon Lab. However, Binet had died in 1911 and Simon lived and worked in Rouen. His supervision therefore came (indirectly) from Pierre Janet, Binet's old rival and a professor at the Collège de France. The job in Paris was relatively simple: to use the statistical techniques he had learned as a natural historian, studying molluscs, to standardize Cyril Burt's intelligence test for use with French children. Yet without direct supervision, he soon found a remedy for this boring work: exploring why children made the mistakes they did. Applying his early training in psychoanalytic interviewing, Piaget began to intervene directly with the children: "Why did you do that?" (etc.) It was from this that the ideas formalized in his later stage theory first emerged. In 1921, Piaget moved to Geneva to work with Édouard Claparède at the Rousseau Institute. They formed what is now known as the Genevan School. In 1936, Piaget received his first honorary doctorate from Harvard. In 1955, the International Center for Genetic Epistemology was founded: an interdisciplinary collaboration of theoreticians and scientists, devoted to the study of topics related to Piaget's theory. In 1969, Piaget received the "distinguished scientific contributions" award from the American Psychological Association. Soviet Marxist psychology In the early twentieth century, Ivan Pavlov's behavioral and conditioning experiments became the most internationally recognized Russian achievements. With the creation of the Soviet Union in 1922, Marxism was introduced as an overall philosophical and methodological framework in scientific research. In the 1920s, state ideology promoted a tendency toward Bekhterev's reflexological reductionism in its Marxist interpretation and toward historical materialism, while idealistic philosophers and psychologists were harshly criticized. Another variation of Marxist psychology, popular mostly in Moscow and centered in the local Institute of Psychology, was the reactology of Konstantin Kornilov, the Institute's director, which became the dominant view there. A notable exception was the small Vygotsky-Luria Circle, which, besides its namesakes Lev Vygotsky and Alexander Luria, included Bluma Zeigarnik, Alexei Leontiev, and others, and which in the 1920s embraced a deterministic "instrumental psychology" version of cultural-historical psychology. Many works by Vygotsky were not published in a chronological order, not primarily because of Soviet censorship but because of Vygotsky's failure to build a consistent psychological theory of consciousness. A few attempts were made in the 1920s at formulating the core of a theoretical framework for a "genuinely Marxist" psychology, but all of these failed and were characterized in the early 1930s as either right- or left-wing deviations, as reductionist "mechanicism" or as "menshevising idealism". It was Sergei Rubinstein who, in the mid-1930s, formulated the key principles on which the entire Soviet variation of Marxist psychology would be based, thus becoming the genuine pioneer and founder of this psychological discipline in its Marxist guise in the Soviet Union.
In the late 1940s and early 1950s, Lysenkoism somewhat affected Russian psychology, yet gave it a considerable impulse toward a reaction and unification that resulted in the institutional and disciplinary integration of the psychological community in the postwar Soviet Union. Cognitivism Noam Chomsky's (1959) review of Skinner's book Verbal Behavior (which aimed to explain language acquisition in a behaviorist framework) is considered one of the major theoretical challenges to the type of radical (as in 'root') behaviorism that Skinner taught. Chomsky claimed that language could not be learned solely from the sort of operant conditioning that Skinner postulated. Chomsky argued that people could produce an infinite variety of sentences unique in structure and meaning and that these could not possibly be generated solely through the experience of natural language. As an alternative, he concluded that there must be internal mental structures – states of mind of the sort that behaviorism rejected as illusory. The issue is not whether mental activities exist; it is whether they can be shown to be the causes of behavior. Similarly, the work of Albert Bandura showed that children could learn by social observation without any change in overt behaviour, and so, he argued, such learning must be accounted for by internal representations. The rise of computer technology also promoted the metaphor of mental function as information processing. This, combined with a scientific approach to studying the mind, as well as a belief in internal mental states, led to the rise of cognitivism as the dominant model of the mind. Links between brain and nervous system function were also becoming common, partly due to the experimental work of people like Charles Sherrington and Donald Hebb, and partly due to studies of people with brain injury (see cognitive neuropsychology). With the development of technologies for accurately measuring brain function, neuropsychology and cognitive neuroscience have become some of the most active areas in contemporary psychology. With the increasing involvement of other disciplines (such as philosophy, computer science, and neuroscience) in the quest to understand the mind, the umbrella discipline of cognitive science has been created as a means of focusing such efforts constructively. Scholarly journals There are three "primary journals" where specialist histories of psychology are published: History of Psychology (journal), the Journal of the History of the Behavioral Sciences, and History of the Human Sciences. In addition, there are a large number of "friendly journals" where historical material can often be found; these are discussed in History of Psychology (discipline).
Mental disorder
A mental disorder, also referred to as a mental illness, a mental health condition, or a psychiatric disability, is a behavioral or mental pattern that causes significant distress or impairment of personal functioning. A mental disorder is also characterized by a clinically significant disturbance in an individual's cognition, emotional regulation, or behavior, often in a social context. Such disturbances may occur as single episodes, may be persistent, or may be relapsing–remitting. There are many different types of mental disorders, with signs and symptoms that vary widely between specific disorders. A mental disorder is one aspect of mental health. The causes of mental disorders are often unclear. Theories incorporate findings from a range of fields. Disorders may be associated with particular regions or functions of the brain. Disorders are usually diagnosed or assessed by a mental health professional, such as a clinical psychologist, psychiatrist, psychiatric nurse, or clinical social worker, using various methods such as psychometric tests, but often relying on observation and questioning. Cultural and religious beliefs, as well as social norms, should be taken into account when making a diagnosis. Services for mental disorders are usually based in psychiatric hospitals, outpatient clinics, or in the community. Treatments are provided by mental health professionals. Common treatment options are psychotherapy or psychiatric medication, while lifestyle changes, social interventions, peer support, and self-help are also options. In a minority of cases, there may be involuntary detention or treatment. Prevention programs have been shown to reduce depression. As of 2019, common mental disorders around the globe include: depression, which affects about 264 million people; dementia, which affects about 50 million; bipolar disorder, which affects about 45 million; and schizophrenia and other psychoses, which affect about 20 million people. Neurodevelopmental disorders include attention deficit hyperactivity disorder (ADHD), autism spectrum disorder (ASD), and intellectual disability, whose onset occurs early in the developmental period. Stigma and discrimination can add to the suffering and disability associated with mental disorders, leading to various social movements attempting to increase understanding and challenge social exclusion. Definition The definition and classification of mental disorders are key issues for researchers as well as service providers and those who may be diagnosed. For a mental state to be classified as a disorder, it generally needs to cause dysfunction. Most international clinical documents use the term mental "disorder", while "illness" is also common. It has been noted that using the term "mental" (i.e., of the mind) is not necessarily meant to imply separateness from the brain or body. According to the fourth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV), published in 1994, a mental disorder is a psychological syndrome or pattern that is associated with distress (e.g., via a painful symptom), disability (impairment in one or more important areas of functioning), increased risk of death, or causes a significant loss of autonomy; however, it excludes normal responses such as the grief from loss of a loved one and also excludes deviant behavior for political, religious, or societal reasons not arising from a dysfunction in the individual.
DSM-IV predicates the definition with caveats, stating that, as is the case with many medical terms, mental disorder "lacks a consistent operational definition that covers all situations", noting that different levels of abstraction can be used for medical definitions, including pathology, symptomology, deviance from a normal range, or etiology, and that the same is true for mental disorders, so that sometimes one type of definition is appropriate and sometimes another, depending on the situation. In 2013, the American Psychiatric Association (APA) redefined mental disorders in the DSM-5 as "a syndrome characterized by clinically significant disturbance in an individual's cognition, emotion regulation, or behavior that reflects a dysfunction in the psychological, biological, or developmental processes underlying mental functioning." The final draft of ICD-11 contains a very similar definition. The terms "mental breakdown" or "nervous breakdown" may be used by the general population to mean a mental disorder. The terms "nervous breakdown" and "mental breakdown" have not been formally defined through a medical diagnostic system such as the DSM-5 or ICD-10 and are nearly absent from scientific literature regarding mental illness. Although "nervous breakdown" is not rigorously defined, surveys of laypersons suggest that the term refers to a specific acute time-limited reactive disorder involving symptoms such as anxiety or depression, usually precipitated by external stressors. Many health experts today refer to a nervous breakdown as a mental health crisis. Nervous illness In addition to the concept of mental disorder, some people have argued for a return to the old-fashioned concept of nervous illness. In How Everyone Became Depressed: The Rise and Fall of the Nervous Breakdown (2013), Edward Shorter, a professor of psychiatry and the history of medicine, makes this argument. Classifications There are currently two widely established systems that classify mental disorders: ICD-11 Chapter 06: Mental, behavioural or neurodevelopmental disorders, part of the International Classification of Diseases produced by the WHO (in effect since 1 January 2022). Diagnostic and Statistical Manual of Mental Disorders (DSM-5) produced by the APA since 1952. Both of these list categories of disorder and provide standardized criteria for diagnosis. They have deliberately converged their codes in recent revisions so that the manuals are often broadly comparable, although significant differences remain. Other classification schemes may be used in non-western cultures, for example, the Chinese Classification of Mental Disorders, and other manuals may be used by those of alternative theoretical persuasions, such as the Psychodynamic Diagnostic Manual. In general, mental disorders are classified separately from neurological disorders, learning disabilities or intellectual disability. Unlike the DSM and ICD, some approaches are not based on identifying distinct categories of disorder using dichotomous symptom profiles intended to separate the abnormal from the normal. There is significant scientific debate about the relative merits of categorical versus such non-categorical (or hybrid) schemes, also known as continuum or dimensional models. A spectrum approach may incorporate elements of both.
In the scientific and academic literature on the definition or classification of mental disorder, one extreme argues that it is entirely a matter of value judgements (including of what is normal) while another proposes that it is or could be entirely objective and scientific (including by reference to statistical norms). Common hybrid views argue that the concept of mental disorder is objective even if only a "fuzzy prototype" that can never be precisely defined, or conversely that the concept always involves a mixture of scientific facts and subjective value judgments. Although the diagnostic categories are referred to as 'disorders', they are presented as medical diseases, but are not validated in the same way as most medical diagnoses. Some neurologists argue that classification will only be reliable and valid when based on neurobiological features rather than clinical interview, while others suggest that the differing ideological and practical perspectives need to be better integrated. The DSM and ICD approach remains under attack both because of the implied causality model and because some researchers believe it better to aim at underlying brain differences which can precede symptoms by many years. Dimensional models The high degree of comorbidity between disorders in categorical models such as the DSM and ICD has led some to propose dimensional models. Studies of comorbidity between disorders have demonstrated two latent (unobserved) factors or dimensions in the structure of mental disorders that are thought to possibly reflect etiological processes. These two dimensions reflect a distinction between internalizing disorders, such as mood or anxiety symptoms, and externalizing disorders such as behavioral or substance use symptoms. A single general factor of psychopathology, similar to the g factor for intelligence, has been empirically supported. The p factor model supports the internalizing-externalizing distinction, but also supports the formation of a third dimension of thought disorders such as schizophrenia. Biological evidence also supports the validity of the internalizing-externalizing structure of mental disorders, with twin and adoption studies supporting heritable factors for externalizing and internalizing disorders. A leading dimensional model is the Hierarchical Taxonomy of Psychopathology. Disorders There are many different categories of mental disorder, and many different facets of human behavior and personality that can become disordered. Anxiety disorder Anxiety or fear that interferes with normal functioning may be classified as an anxiety disorder. Commonly recognized categories include specific phobias, generalized anxiety disorder, social anxiety disorder, panic disorder, agoraphobia, obsessive–compulsive disorder and post-traumatic stress disorder. Mood disorder Other affective (emotion/mood) processes can also become disordered. Mood disorder involving unusually intense and sustained sadness, melancholia, or despair is known as major depression (also known as unipolar or clinical depression). Milder but still prolonged depression can be diagnosed as dysthymia. Bipolar disorder (also known as manic depression) involves abnormally "high" or pressured mood states, known as mania or hypomania, alternating with normal or depressed moods. The extent to which unipolar and bipolar mood phenomena represent distinct categories of disorder, or mix and merge along a dimension or spectrum of mood, is subject to some scientific debate.
Psychotic disorder Patterns of belief, language use and perception of reality can become dysregulated (e.g., delusions, thought disorder, hallucinations). Psychotic disorders in this domain include schizophrenia and delusional disorder. Schizoaffective disorder is a category used for individuals showing aspects of both schizophrenia and affective disorders. Schizotypy is a category used for individuals showing some of the characteristics associated with schizophrenia, but without meeting cutoff criteria. Personality disorder Personality—the fundamental characteristics of a person that influence thoughts and behaviors across situations and time—may be considered disordered if judged to be abnormally rigid and maladaptive. Although treated separately by some, the commonly used categorical schemes include them as mental disorders, albeit on a separate axis II in the case of the DSM-IV. A number of different personality disorders are listed, including those sometimes classed as eccentric, such as paranoid, schizoid and schizotypal personality disorders; types that have been described as dramatic or emotional, such as antisocial, borderline, histrionic or narcissistic personality disorders; and those sometimes classed as fear-related, such as anxious-avoidant, dependent, or obsessive–compulsive personality disorders. Personality disorders, in general, are defined as emerging in childhood, or at least by adolescence or early adulthood. The ICD also has a category for enduring personality change after a catastrophic experience or psychiatric illness. If an inability to sufficiently adjust to life circumstances begins within three months of a particular event or situation, and ends within six months after the stressor stops or is eliminated, it may instead be classed as an adjustment disorder. There is an emerging consensus that personality disorders, similar to personality traits in general, incorporate a mixture of acute dysfunctional behaviors that may resolve in short periods, and maladaptive temperamental traits that are more enduring. Furthermore, there are also non-categorical schemes that rate all individuals via a profile of different dimensions of personality without a symptom-based cutoff from normal personality variation, for example through schemes based on dimensional models. Eating disorder Eating disorders are serious mental health conditions that involve an unhealthy relationship with food and body image. They can cause severe physical and psychological problems. Eating disorders involve disproportionate concern in matters of food and weight. Categories of disorder in this area include anorexia nervosa, bulimia nervosa, exercise bulimia or binge eating disorder. Sleep disorder Sleep disorders are associated with disruption to normal sleep patterns. A common sleep disorder is insomnia, which is described as difficulty falling and/or staying asleep. Other sleep disorders include narcolepsy, sleep apnea, REM sleep behavior disorder, chronic sleep deprivation, and restless leg syndrome. Narcolepsy is a condition involving an extreme tendency to fall asleep at any time and in any setting. People with narcolepsy feel refreshed after their random sleep, but eventually get sleepy again. Narcolepsy diagnosis requires an overnight stay at a sleep center for analysis, during which doctors ask for a detailed sleep history and sleep records. Doctors also use actigraphs and polysomnography. Doctors may also perform a multiple sleep latency test, which measures how long it takes a person to fall asleep.
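As an aside on the arithmetic of that last test, the sketch below summarizes a multiple sleep latency test by averaging the time to fall asleep across scheduled naps; the nap values and the roughly eight-minute threshold often cited for excessive daytime sleepiness are illustrative assumptions, not a clinical protocol.

```python
# Minimal, hypothetical illustration of summarizing a multiple sleep latency
# test (MSLT). Nap latencies and the 8-minute cutoff are illustrative
# assumptions for the sake of the example, not clinical guidance.

def mean_sleep_latency(latencies_minutes):
    """Return the average time (in minutes) taken to fall asleep across naps."""
    if not latencies_minutes:
        raise ValueError("at least one nap latency is required")
    return sum(latencies_minutes) / len(latencies_minutes)

if __name__ == "__main__":
    naps = [4.5, 6.0, 3.5, 7.0, 5.0]  # hypothetical latencies for five naps
    avg = mean_sleep_latency(naps)
    print(f"mean sleep latency: {avg:.1f} minutes")
    if avg <= 8.0:
        print("consistent with excessive daytime sleepiness (illustrative cutoff)")
```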
Sleep apnea, in which breathing repeatedly stops and starts during sleep, can be a serious sleep disorder. The three types of sleep apnea are obstructive sleep apnea, central sleep apnea, and complex sleep apnea. Sleep apnea can be diagnosed at home or with polysomnography at a sleep center. An ear, nose, and throat doctor may provide further help with sleep problems. Sexuality related Sexual disorders include dyspareunia and various kinds of paraphilia (sexual arousal to objects, situations, or individuals that are considered abnormal or harmful to the person or others). Other Impulse control disorder: People who are abnormally unable to resist certain urges or impulses that could be harmful to themselves or others may be classified as having an impulse control disorder; examples include kleptomania (stealing) and pyromania (fire-setting). Various behavioral addictions, such as gambling addiction, may be classed as a disorder. Obsessive–compulsive disorder can sometimes involve an inability to resist certain acts but is classed separately as being primarily an anxiety disorder. Substance use disorder: This disorder refers to the use of drugs (legal or illegal, including alcohol) that persists despite significant problems or harm related to its use. Substance dependence and substance abuse fall under this umbrella category in the DSM. Substance use disorder may be due to a pattern of compulsive and repetitive use of a drug that results in tolerance to its effects and withdrawal symptoms when use is reduced or stopped. Dissociative disorder: People with severe disturbances of their self-identity, memory, and general awareness of themselves and their surroundings may be classified as having these types of disorders, including depersonalization-derealization disorder or dissociative identity disorder (which was previously referred to as multiple personality disorder or "split personality"). Cognitive disorder: These affect cognitive abilities, including learning and memory. This category includes delirium and mild and major neurocognitive disorder (previously termed dementia). Developmental disorder: These disorders initially occur in childhood. Some examples include autism spectrum disorder, oppositional defiant disorder and conduct disorder, and attention deficit hyperactivity disorder (ADHD), which may continue into adulthood. Conduct disorder, if continuing into adulthood, may be diagnosed as antisocial personality disorder (dissocial personality disorder in the ICD). Popular labels such as psychopath (or sociopath) do not appear in the DSM or ICD but are linked by some to these diagnoses. Somatoform disorders may be diagnosed when there are problems that appear to originate in the body that are thought to be manifestations of a mental disorder. This includes somatization disorder and conversion disorder. There are also disorders of how a person perceives their body, such as body dysmorphic disorder. Neurasthenia is an old diagnosis involving somatic complaints as well as fatigue and low spirits/depression, which is officially recognized by the ICD-10 but no longer by the DSM-IV. Factitious disorders are diagnosed where symptoms are thought to be reported for personal gain. Symptoms are often deliberately produced or feigned, and may relate to either symptoms in the individual or in someone close to them, particularly people they care for.
There are attempts to introduce a category of relational disorder, where the diagnosis is of a relationship rather than of any one individual in that relationship. The relationship may be between children and their parents, between couples, or others. There already exists, under the category of psychosis, a diagnosis of shared psychotic disorder where two or more individuals share a particular delusion because of their close relationship with each other. There are a number of uncommon psychiatric syndromes, which are often named after the person who first described them, such as Capgras syndrome, De Clerambault syndrome, Othello syndrome, Ganser syndrome, Cotard delusion, and Ekbom syndrome, and additional disorders such as the Couvade syndrome and Geschwind syndrome. Signs and symptoms Course The onset of psychiatric disorders usually occurs from childhood to early adulthood. Impulse-control disorders and a few anxiety disorders tend to appear in childhood. Some other anxiety disorders, substance disorders, and mood disorders emerge later in the mid-teens. Symptoms of schizophrenia typically manifest from late adolescence to the early twenties. The likely course and outcome of mental disorders vary and are dependent on numerous factors related to the disorder itself, the individual as a whole, and the social environment. Some disorders may last a brief period of time, while others may be long-term in nature. All disorders can have a varied course. Long-term international studies of schizophrenia have found that over half of individuals recover in terms of symptoms, and around a fifth to a third in terms of symptoms and functioning, with many requiring no medication. While some have serious difficulties and support needs for many years, "late" recovery is still plausible. The World Health Organization (WHO) concluded that the long-term studies' findings converged with others in "relieving patients, carers and clinicians of the chronicity paradigm which dominated thinking throughout much of the 20th century." A follow-up study by Tohen and coworkers revealed that around half of people initially diagnosed with bipolar disorder achieve symptomatic recovery (no longer meeting criteria for the diagnosis) within six weeks, and nearly all achieve it within two years, with nearly half regaining their prior occupational and residential status in that period. Less than half go on to experience a new episode of mania or major depression within the next two years. Disability Some disorders may be very limited in their functional effects, while others may involve substantial disability and support needs. In this context, the terms psychiatric disability and psychological disability are sometimes used instead of mental disorder. The degree of ability or disability may vary over time and across different life domains. Furthermore, psychiatric disability has been linked to institutionalization, discrimination and social exclusion as well as to the inherent effects of disorders. Alternatively, functioning may be affected by the stress of having to hide a condition in work or school, etc., by adverse effects of medications or other substances, or by mismatches between illness-related variations and demands for regularity. It is also the case that, while often being characterized in purely negative terms, some mental traits or states labeled as psychiatric disabilities can also involve above-average creativity, non-conformity, goal-striving, meticulousness, or empathy.
In addition, the public perception of the level of disability associated with mental disorders can change. Nevertheless, internationally, people report equal or greater disability from commonly occurring mental conditions than from commonly occurring physical conditions, particularly in their social roles and personal relationships. The proportion with access to professional help for mental disorders is far lower, however, even among those assessed as having a severe psychiatric disability. Disability in this context may or may not involve such things as: Basic activities of daily living. Including looking after the self (health care, grooming, dressing, shopping, cooking, etc.) or looking after accommodation (chores, DIY tasks, etc.) Interpersonal relationships. Including communication skills, ability to form relationships and sustain them, ability to leave the home or mix in crowds or particular settings Occupational functioning. Ability to acquire and hold employment, cognitive and social skills required for the job, dealing with workplace culture, or studying as a student. In terms of total disability-adjusted life years (DALYs), which is an estimate of how many years of life are lost due to premature death or to being in a state of poor health and disability, psychiatric disabilities rank amongst the most disabling conditions. Unipolar (also known as major) depressive disorder is the third leading cause of disability worldwide, of any condition mental or physical, accounting for 65.5 million years lost. The first systematic description of global disability arising in youth, in 2011, found that among 10- to 24-year-olds nearly half of all disability (current and as estimated to continue) was due to psychiatric disabilities, including substance use disorders and conditions involving self-harm. Second to this were accidental injuries (mainly traffic collisions) accounting for 12 percent of disability, followed by communicable diseases at 10 percent. The psychiatric conditions accounting for the most disability in high-income countries were unipolar major depression (20%) and alcohol use disorder (11%). In the eastern Mediterranean region, it was unipolar major depression (12%) and schizophrenia (7%), and in Africa it was unipolar major depression (7%) and bipolar disorder (5%). Suicide, which is often attributed to some underlying mental disorder, is a leading cause of death among teenagers and adults under 35. There are an estimated 10 to 20 million non-fatal attempted suicides every year worldwide. Risk factors The predominant view is that genetic, psychological, and environmental factors all contribute to the development or progression of mental disorders. Different risk factors may be present at different ages, with risk occurring as early as during the prenatal period. Genetics A number of psychiatric disorders are linked to a family history (including depression, narcissistic personality disorder and anxiety). Twin studies have also revealed a very high heritability for many mental disorders (especially autism and schizophrenia). Although researchers have been looking for decades for clear linkages between genetics and mental disorders, that work has not yielded specific genetic biomarkers yet that might lead to better diagnosis and better treatments. Statistical research looking at eleven disorders found widespread assortative mating between people with mental illness.
That means that individuals with one of these disorders were two to three times more likely than the general population to have a partner with a mental disorder. Sometimes people seemed to have preferred partners with the same mental illness. Thus, people with schizophrenia or ADHD are seven times more likely to have affected partners with the same disorder. This is even more pronounced for people with autism spectrum disorders, who are 10 times more likely to have a spouse with the same disorder. Environment During the prenatal stage, factors like unwanted pregnancy, lack of adaptation to pregnancy or substance use during pregnancy increase the risk of developing a mental disorder. Maternal stress and birth complications including prematurity and infections have also been implicated in increasing susceptibility for mental illness. Infants neglected or not provided optimal nutrition have a higher risk of developing cognitive impairment. Social influences have also been found to be important, including abuse, neglect, bullying, social stress, traumatic events, and other negative or overwhelming life experiences. Aspects of the wider community have also been implicated, including employment problems, socioeconomic inequality, lack of social cohesion, problems linked to migration, and features of particular societies and cultures. The specific risks and pathways to particular disorders are less clear, however. Nutrition also plays a role in mental disorders. In schizophrenia and psychosis, risk factors include migration and discrimination, childhood trauma, bereavement or separation in families, recreational use of drugs, and urbanicity. In anxiety, risk factors may include parenting factors such as parental rejection, lack of parental warmth, high hostility, harsh discipline, high maternal negative affect, anxious childrearing, modelling of dysfunctional and drug-abusing behavior, and child abuse (emotional, physical and sexual). Adults with a poor work–life balance are at higher risk of developing anxiety. For bipolar disorder, stress (such as childhood adversity) is not a specific cause, but does place genetically and biologically vulnerable individuals at risk for a more severe course of illness. Drug use Mental disorders are associated with drug use, including cannabis, alcohol, and caffeine, the use of which appears to promote anxiety. For psychosis and schizophrenia, usage of a number of drugs has been associated with development of the disorder, including cannabis, cocaine, and amphetamines. There has been debate regarding the relationship between usage of cannabis and bipolar disorder. Cannabis has also been associated with depression. Adolescents are at increased risk for tobacco, alcohol and drug use; peer pressure is the main reason why adolescents start using substances. At this age, the use of substances could be detrimental to the development of the brain and place them at higher risk of developing a mental disorder. Chronic disease People living with chronic conditions like HIV and diabetes are at higher risk of developing a mental disorder. People living with diabetes experience significant stress from the biological impact of the disease, which places them at risk for developing anxiety and depression. Diabetic patients also have to deal with emotional stress trying to manage the disease. Conditions like heart disease, stroke, respiratory conditions, cancer, and arthritis increase the risk of developing a mental disorder when compared to the general population.
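Figures such as "seven times more likely" in the assortative-mating findings above are relative risks. With purely hypothetical counts (not drawn from the cited study), the calculation would run as follows:

```latex
\mathrm{RR} \;=\; \frac{P(\text{partner affected} \mid \text{diagnosis})}{P(\text{partner affected} \mid \text{no diagnosis})}
\;=\; \frac{140/1000}{20/1000} \;=\; 7
```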
Personality traits Risk factors for mental illness include a propensity for high neuroticism or "emotional instability". In anxiety, risk factors may include temperament and attitudes (e.g. pessimism). Causal models Mental disorders can arise from multiple sources, and in many cases there is no single accepted or consistent cause currently established. An eclectic or pluralistic mix of models may be used to explain particular disorders. The primary paradigm of contemporary mainstream Western psychiatry is said to be the biopsychosocial model which incorporates biological, psychological and social factors, although this may not always be applied in practice. Biological psychiatry follows a biomedical model where many mental disorders are conceptualized as disorders of brain circuits likely caused by developmental processes shaped by a complex interplay of genetics and experience. A common assumption is that disorders may have resulted from genetic and developmental vulnerabilities, exposed by stress in life (for example in a diathesis–stress model), although there are various views on what causes differences between individuals. Some types of mental disorders may be viewed as primarily neurodevelopmental disorders. Evolutionary psychology may be used as an overall explanatory theory, while attachment theory is another kind of evolutionary-psychological approach sometimes applied in the context of mental disorders. Psychoanalytic theories have continued to evolve alongside cognitive-behavioral and systemic-family approaches. A distinction is sometimes made between a "medical model" and a "social model" of psychiatric disability. Diagnosis Psychiatrists seek to provide a medical diagnosis of individuals by an assessment of symptoms, signs and impairment associated with particular types of mental disorder. Other mental health professionals, such as clinical psychologists, may or may not apply the same diagnostic categories to their clinical formulation of a client's difficulties and circumstances. The majority of mental health problems are, at least initially, assessed and treated by family physicians (in the UK general practitioners) during consultations, who may refer a patient on for more specialist diagnosis in acute or chronic cases. Routine diagnostic practice in mental health services typically involves an interview known as a mental status examination, where evaluations are made of appearance and behavior, self-reported symptoms, mental health history, and current life circumstances. The views of other professionals, relatives, or other third parties may be taken into account. A physical examination to check for ill health or the effects of medications or other drugs may be conducted. Psychological testing is sometimes used via paper-and-pen or computerized questionnaires, which may include algorithms based on ticking off standardized diagnostic criteria, and in rare specialist cases neuroimaging tests may be requested, but such methods are more commonly found in research studies than routine clinical practice. Time and budgetary constraints often limit practicing psychiatrists from conducting more thorough diagnostic evaluations. It has been found that most clinicians evaluate patients using an unstructured, open-ended approach, with limited training in evidence-based assessment methods, and that inaccurate diagnosis may be common in routine practice.
In addition, comorbidity is very common in psychiatric diagnosis, where the same person meets the criteria for more than one disorder. On the other hand, a person may have several different difficulties only some of which meet the criteria for being diagnosed. There may be specific problems with accurate diagnosis in developing countries. More structured approaches are being increasingly used to measure levels of mental illness. HoNOS is the most widely used measure in English mental health services, being used by at least 61 trusts. In HoNOS a score of 0–4 is given for each of 12 factors, based on functional living capacity. Research has been supportive of HoNOS, although some questions have been asked about whether it provides adequate coverage of the range and complexity of mental illness problems, and whether the fact that often only 3 of the 12 scales vary over time gives enough subtlety to accurately measure outcomes of treatment. Criticism Since the 1980s, Paula Caplan has been concerned about the subjectivity of psychiatric diagnosis, and people being arbitrarily "slapped with a psychiatric label." Caplan says that because psychiatric diagnosis is unregulated, doctors are not required to spend much time interviewing patients or to seek a second opinion. The Diagnostic and Statistical Manual of Mental Disorders can lead a psychiatrist to focus on narrow checklists of symptoms, with little consideration of what is actually causing the person's problems. So, according to Caplan, getting a psychiatric diagnosis and label often stands in the way of recovery. In 2013, psychiatrist Allen Frances wrote a paper entitled "The New Crisis of Confidence in Psychiatric Diagnosis", which said that "psychiatric diagnosis... still relies exclusively on fallible subjective judgments rather than objective biological tests." Frances was also concerned about "unpredictable overdiagnosis." For many years, marginalized psychiatrists (such as Peter Breggin, Thomas Szasz) and outside critics (such as Stuart A. Kirk) have "been accusing psychiatry of engaging in the systematic medicalization of normality." More recently these concerns have come from insiders who have worked for and promoted the American Psychiatric Association (e.g., Robert Spitzer, Allen Frances). A 2002 editorial in the British Medical Journal warned of inappropriate medicalization leading to disease mongering, where the boundaries of the definition of illnesses are expanded to include personal problems as medical problems or risks of diseases are emphasized to broaden the market for medications. Gary Greenberg, a psychoanalyst, in his book The Book of Woe, argues that mental illness is really about suffering and how the DSM creates diagnostic labels to categorize people's suffering. Indeed, the psychiatrist Thomas Szasz, in his book The Medicalization of Everyday Life, also argues that what is called psychiatric illness is not always biological in nature (it may reflect social problems, poverty, etc.) and may even be a part of the human condition. Potential routine use of MRI/fMRI in diagnosis In 2018, the American Psychological Association commissioned a review to reach a consensus on whether modern clinical MRI/fMRI will be able to be used in the diagnosis of mental health disorders.
The criteria presented by the APA stated that biomarkers used in diagnosis should "have a sensitivity of at least 80% for detecting a particular psychiatric disorder", should "have a specificity of at least 80% for distinguishing this disorder from other psychiatric or medical disorders", and "should be reliable, reproducible, and ideally be noninvasive, simple to perform, and inexpensive"; in addition, proposed biomarkers should be verified by two independent studies, each by a different investigator using different population samples, and published in a peer-reviewed journal. The review concluded that although neuroimaging diagnosis may technically be feasible, very large studies are needed to evaluate specific biomarkers, and these were not yet available. Prevention The 2004 WHO report "Prevention of Mental Disorders" stated that "Prevention of these disorders is obviously one of the most effective ways to reduce the [disease] burden." The 2011 European Psychiatric Association (EPA) guidance on prevention of mental disorders states "There is considerable evidence that various psychiatric conditions can be prevented through the implementation of effective evidence-based interventions." A 2011 UK Department of Health report on the economic case for mental health promotion and mental illness prevention found that "many interventions are outstandingly good value for money, low in cost and often become self-financing over time, saving public expenditure". In 2016, the National Institute of Mental Health re-affirmed prevention as a research priority area. Parenting may affect the child's mental health, and evidence suggests that helping parents to be more effective with their children can address mental health needs. Universal prevention (aimed at a population that has no increased risk for developing a mental disorder, such as school programs or mass media campaigns) needs very high numbers of people to show an effect (sometimes known as the "power" problem). Approaches to overcome this include (1) focusing on high-incidence groups (e.g. by targeting groups with high risk factors), (2) using multiple interventions to achieve greater, and thus more statistically valid, effects, (3) using cumulative meta-analyses of many trials, and (4) running very large trials. Management Treatment and support for mental disorders are provided in psychiatric hospitals, clinics or a range of community mental health services. In some countries services are increasingly based on a recovery approach, intended to support an individual's personal journey to gain the kind of life they want. There is a range of different types of treatment, and what is most suitable depends on the disorder and the individual. Many things have been found to help at least some people, and a placebo effect may play a role in any intervention or medication. In a minority of cases, individuals may be treated against their will, which can cause particular difficulties depending on how it is carried out and perceived. Compulsory treatment in the community, compared with non-compulsory treatment, does not appear to make much of a difference except perhaps in decreasing victimization. Lifestyle Lifestyle strategies, including dietary changes, exercise and quitting smoking, may be of benefit. Therapy There is also a wide range of psychotherapists (including family therapists), counselors, and public health professionals. In addition, there are peer support roles where personal experience of similar issues is the primary source of expertise. A major option for many mental disorders is psychotherapy.
There are several main types. Cognitive behavioral therapy (CBT) is widely used and is based on modifying the patterns of thought and behavior associated with a particular disorder. Other psychotherapies include dialectical behavior therapy (DBT) and interpersonal psychotherapy (IPT). Psychoanalysis, addressing underlying psychic conflicts and defenses, has been a dominant school of psychotherapy and is still in use. Systemic therapy or family therapy is sometimes used, addressing a network of significant others as well as an individual. Some psychotherapies are based on a humanistic approach. There are many specific therapies used for particular disorders, which may be offshoots or hybrids of the above types. Mental health professionals often employ an eclectic or integrative approach. Much may depend on the therapeutic relationship, and there may be problems with trust, confidentiality and engagement. Medication A major option for many mental disorders is psychiatric medication, and there are several main groups. Antidepressants are used for the treatment of clinical depression, as well as often for anxiety and a range of other disorders. Anxiolytics (including sedatives) are used for anxiety disorders and related problems such as insomnia. Mood stabilizers are used primarily in bipolar disorder. Antipsychotics are used for psychotic disorders, notably for positive symptoms in schizophrenia, and also increasingly for a range of other disorders. Stimulants are commonly used, notably for ADHD. Despite the different conventional names of the drug groups, there may be considerable overlap in the disorders for which they are actually indicated, and there may also be off-label use of medications. There can be problems with adverse effects of medications and adherence to them, and there is also criticism of pharmaceutical marketing and professional conflicts of interest. However, these medications in combination with non-pharmacological methods, such as cognitive behavioral therapy (CBT), are seen as most effective in treating mental disorders. Other Electroconvulsive therapy (ECT) is sometimes used in severe cases when other interventions for intractable depression have failed. ECT is usually indicated for treatment-resistant depression, severe vegetative symptoms, psychotic depression, intense suicidal ideation, depression during pregnancy, and catatonia. Psychosurgery is considered experimental but is advocated by some neurologists in certain rare cases. Counseling (professional) and co-counseling (between peers) may be used. Psychoeducation programs may provide people with the information to understand and manage their problems. Creative therapies are sometimes used, including music therapy, art therapy or drama therapy. Lifestyle adjustments and supportive measures are often used, including peer support, self-help groups for mental health and supported housing or supported employment (including social firms). Some advocate dietary supplements. Reasonable accommodations (adjustments and supports) might be put in place to help an individual cope and succeed in environments despite potential disability related to mental health problems. This could include an emotional support animal or a specifically trained psychiatric service dog. Cannabis is specifically not recommended as a treatment. Epidemiology Mental disorders are common. Worldwide, more than one in three people in most countries report meeting sufficient criteria for at least one at some point in their life.
In the United States, 46% qualify for a mental illness at some point. An ongoing survey indicates that anxiety disorders are the most common in all but one country, followed by mood disorders in all but two countries, while substance disorders and impulse-control disorders were consistently less prevalent. Rates varied by region. A review of anxiety disorder surveys in different countries found average lifetime prevalence estimates of 16.6%, with women having higher rates on average. A review of mood disorder surveys in different countries found lifetime rates of 6.7% for major depressive disorder (higher in some studies, and in women) and 0.8% for Bipolar I disorder. In the United States, the frequency of disorders is: anxiety disorders (28.8%), mood disorders (20.8%), impulse-control disorders (24.8%), and substance use disorders (14.6%). A 2004 cross-Europe study found that approximately one in four people reported meeting criteria at some point in their life for at least one of the DSM-IV disorders assessed, which included mood disorders (13.9%), anxiety disorders (13.6%), or alcohol disorder (5.2%). Approximately one in ten met the criteria within a 12-month period. Women and younger people of either gender were affected more often. A 2005 review of surveys in 16 European countries found that 27% of adult Europeans are affected by at least one mental disorder in a 12-month period. An international review of studies on the prevalence of schizophrenia found an average (median) figure of 0.4% for lifetime prevalence; it was consistently lower in poorer countries. Studies of the prevalence of personality disorders (PDs) have been fewer and smaller-scale, but one broad Norwegian survey found a five-year prevalence of almost 1 in 7 (13.4%). Rates for specific disorders ranged from 0.8% to 2.8%, differing across countries, and by gender, educational level and other factors. A US survey that incidentally screened for personality disorder found a rate of 14.79%. Approximately 7% of a preschool pediatric sample were given a psychiatric diagnosis in one clinical study, and approximately 10% of 1- and 2-year-olds receiving developmental screening have been assessed as having significant emotional/behavioral problems based on parent and pediatrician reports. While rates of psychological disorders are often the same for men and women, women tend to have a higher rate of depression. Each year 73 million women are affected by major depression, and suicide is the 7th leading cause of death for women between the ages of 20 and 59. Depressive disorders account for close to 41.9% of the psychiatric disabilities among women compared to 29.3% among men. History Ancient civilizations Ancient civilizations described and treated a number of mental disorders. Mental illnesses were well known in ancient Mesopotamia, where diseases and mental disorders were believed to be caused by specific deities. Because hands symbolized control over a person, mental illnesses were known as "hands" of certain deities. One psychological illness was known as Qāt Ištar, meaning "Hand of Ishtar". Others were known as "Hand of Shamash", "Hand of the Ghost", and "Hand of the God". Descriptions of these illnesses, however, are so vague that it is usually impossible to determine which illnesses they correspond to in modern terminology. Mesopotamian doctors kept detailed records of their patients' hallucinations and assigned spiritual meanings to them. The royal family of Elam was notorious for its members often being insane.
The Greeks coined terms for melancholy, hysteria and phobia and developed the theory of humorism. Mental disorders were described, and treatments developed, in Persia, Arabia, and the medieval Islamic world. Europe Middle Ages Conceptions of madness in the Middle Ages in Christian Europe were a mixture of the divine, the diabolical, the magical, the humoral, and the transcendental. In the early modern period, some people with mental disorders may have been victims of the witch-hunts. While not everyone accused of witchcraft or sorcery was mentally ill, mentally ill people were sometimes thought to be witches or sorcerers. Many terms for mental disorders that found their way into everyday use first became popular in the 16th and 17th centuries. Eighteenth century By the end of the 17th century and into the Enlightenment, madness was increasingly seen as an organic physical phenomenon with no connection to the soul or moral responsibility. Asylum care was often harsh and treated people like wild animals, but towards the end of the 18th century a moral treatment movement gradually developed. Clear descriptions of some syndromes may be rare before the 19th century. Nineteenth century Industrialization and population growth led to a massive expansion of the number and size of insane asylums in every Western country in the 19th century. Numerous different classification schemes and diagnostic terms were developed by different authorities, and the term psychiatry was coined in 1808, though medical superintendents were still known as alienists. Twentieth century The turn of the 20th century saw the development of psychoanalysis, which would later come to the fore, along with Kraepelin's classification scheme. Asylum "inmates" were increasingly referred to as "patients", and asylums were renamed as hospitals. Europe and the United States Early in the 20th century in the United States, a mental hygiene movement developed, aiming to prevent mental disorders. Clinical psychology and social work developed as professions. World War I saw a massive increase in conditions that came to be termed "shell shock". World War II saw the development in the U.S. of a new psychiatric manual for categorizing mental disorders, which along with existing systems for collecting census and hospital statistics led to the first Diagnostic and Statistical Manual of Mental Disorders. The International Classification of Diseases (ICD) also developed a section on mental disorders. The term stress, having emerged from endocrinology work in the 1930s, was increasingly applied to mental disorders. Electroconvulsive therapy, insulin shock therapy, lobotomies and the neuroleptic chlorpromazine came to be used by mid-century. In the 1960s there were many challenges to the concept of mental illness itself. These challenges came from psychiatrists like Thomas Szasz who argued that mental illness was a myth used to disguise moral conflicts; from sociologists such as Erving Goffman who said that mental illness was merely another example of how society labels and controls non-conformists; from behavioral psychologists who challenged psychiatry's fundamental reliance on unobservable phenomena; and from gay rights activists who criticized the APA's listing of homosexuality as a mental disorder. A study published in Science by Rosenhan received much publicity and was viewed as an attack on the efficacy of psychiatric diagnosis.
Deinstitutionalization gradually occurred in the West, with isolated psychiatric hospitals being closed down in favor of community mental health services. A consumer/survivor movement gained momentum. Other kinds of psychiatric medication gradually came into use, such as "psychic energizers" (later antidepressants) and lithium. Benzodiazepines gained widespread use in the 1970s for anxiety and depression, until dependency problems curtailed their popularity. Advances in neuroscience, genetics, and psychology led to new research agendas. Cognitive behavioral therapy and other psychotherapies developed. The DSM and then ICD adopted new criteria-based classifications, and the number of "official" diagnoses saw a large expansion. Through the 1990s, new SSRI-type antidepressants became some of the most widely prescribed drugs in the world, as later did antipsychotics. Also during the 1990s, a recovery approach developed. Africa and Nigeria Most Africans view mental disturbances as an external spiritual attack on the person. Those who have a mental illness are thought to be under a spell or bewitched. More often than not, people view a mentally ill person as being possessed by an evil spirit, and the condition is seen more from a sociological perspective than as a psychological disorder. The WHO estimated that fewer than 10% of mentally ill Nigerians have access to a psychiatrist or health worker, because there is a low ratio of mental-health specialists available in a country of 200 million people. WHO estimates that the number of mentally ill Nigerians ranges from 40 million to 60 million. Disorders such as depression, anxiety, schizophrenia, personality disorder, old age-related disorder, and substance-abuse disorder are common in Nigeria, as in other countries in Africa. Nigeria is still nowhere near being equipped to solve prevailing mental health challenges. With little scientific research carried out and insufficient mental-health hospitals in the country, traditional healers provide specialized psychotherapy and pharmacotherapy care to those who require their services. Society and culture Different societies or cultures, even different individuals in a subculture, can disagree as to what constitutes optimal versus pathological biological and psychological functioning. Research has demonstrated that cultures vary in the relative importance placed on, for example, happiness, autonomy, or social relationships for pleasure. Likewise, the fact that a behavior pattern is valued, accepted, encouraged, or even statistically normative in a culture does not necessarily mean that it is conducive to optimal psychological functioning. People in all cultures find some behaviors bizarre or even incomprehensible. But just what they feel is bizarre or incomprehensible is ambiguous and subjective. These differences in determination can become highly contentious. The process by which conditions and difficulties come to be defined and treated as medical conditions and problems, and thus come under the authority of doctors and other health professionals, is known as medicalization or pathologization. Mental illness in the Latin American community There is a perception in Latin American communities, especially among older people, that discussing problems with mental health can create embarrassment and shame for the family. This results in fewer people seeking treatment.
Latin Americans from the US are slightly more likely to have a mental health disorder than first-generation Latin American immigrants, although differences between ethnic groups were found to disappear after adjustment for place of birth. From 2015 to 2018, rates of serious mental illness in young adult Latin Americans increased by 60%, from 4% to 6.4%. The prevalence of major depressive episodes in young and adult Latin Americans increased from 8.4% to 11.3%. More than a third of Latin Americans reported more than one bad mental health day in the last three months. The rate of suicide among Latin Americans was about half the rate of non-Latin American white Americans in 2018, and suicide was the second-leading cause of death among Latin Americans aged 15 to 34. However, Latin American suicide rates rose steadily after 2020 in relation to the COVID-19 pandemic, even as the national rate declined. Family relations are an integral part of the Latin American community. Some research has shown that Latin Americans are more likely to rely on family bonds, or familismo, as a source of therapy while struggling with mental health issues. Because Latin Americans have a high rate of religiosity, and because there is less stigma associated with religion than with psychiatric services, religion may play a more important therapeutic role for the mentally ill in Latin American communities. However, research has also suggested that religion may play a role in stigmatizing mental illness in Latin American communities, which can discourage community members from seeking professional help. Religion Religious, spiritual, or transpersonal experiences and beliefs can meet many criteria of delusional or psychotic disorders. A belief or experience can sometimes be shown to produce distress or disability—the ordinary standard for judging mental disorders. There is a link between religion and schizophrenia, a complex mental disorder characterized by a difficulty in recognizing reality, regulating emotional responses, and thinking in a clear and logical manner. Those with schizophrenia commonly report some type of religious delusion, and religion itself may be a trigger for schizophrenia. Movements Controversy has often surrounded psychiatry, and the term anti-psychiatry was coined by the psychiatrist David Cooper in 1967. The anti-psychiatry message is that psychiatric treatments are ultimately more damaging than helpful to patients, and psychiatry's history involves what may now be seen as dangerous treatments. One of these was electroconvulsive therapy, which was used widely between the 1930s and 1960s. Lobotomy was another practice that was ultimately seen as too invasive and brutal. Diazepam and other sedatives were sometimes over-prescribed, which led to an epidemic of dependence. There was also concern about the large increase in prescribing psychiatric drugs for children. Some charismatic psychiatrists came to personify the movement against psychiatry. The most influential of these was R.D. Laing, who wrote a series of best-selling books, including The Divided Self. Thomas Szasz wrote The Myth of Mental Illness. Some ex-patient groups have become militantly anti-psychiatric, often referring to themselves as survivors. Giorgio Antonucci has questioned the basis of psychiatry through his work on the dismantling of two psychiatric hospitals (in the city of Imola), carried out from 1973 to 1996.
The consumer/survivor movement (also known as user/survivor movement) is made up of individuals (and organizations representing them) who are clients of mental health services or who consider themselves survivors of psychiatric interventions. Activists campaign for improved mental health services and for more involvement and empowerment within mental health services, policies and wider society. Patient advocacy organizations have expanded with increasing deinstitutionalization in developed countries, working to challenge the stereotypes, stigma and exclusion associated with psychiatric conditions. There is also a carers rights movement of people who help and support people with mental health conditions, who may be relatives, and who often work in difficult and time-consuming circumstances with little acknowledgement and without pay. An anti-psychiatry movement fundamentally challenges mainstream psychiatric theory and practice, including in some cases asserting that psychiatric concepts and diagnoses of 'mental illness' are neither real nor useful. Alternatively, a movement for global mental health has emerged, defined as 'the area of study, research and practice that places a priority on improving mental health and achieving equity in mental health for all people worldwide'. Cultural bias Diagnostic guidelines of the 2000s, namely the DSM and to some extent the ICD, have been criticized as having a fundamentally Euro-American outlook. Opponents argue that even when diagnostic criteria are used across different cultures, it does not mean that the underlying constructs have validity within those cultures, as even reliable application can prove only consistency, not legitimacy. Advocating a more culturally sensitive approach, critics such as Carl Bell and Marcello Maviglia contend that the cultural and ethnic diversity of individuals is often discounted by researchers and service providers. Cross-cultural psychiatrist Arthur Kleinman contends that the Western bias is ironically illustrated in the introduction of cultural factors to the DSM-IV. Disorders or concepts from non-Western or non-mainstream cultures are described as "culture-bound", whereas standard psychiatric diagnoses are given no cultural qualification whatsoever, revealing to Kleinman an underlying assumption that Western cultural phenomena are universal. Kleinman's negative view towards the culture-bound syndrome is largely shared by other cross-cultural critics. Common responses included both disappointment over the large number of documented non-Western mental disorders still left out and frustration that even those included are often misinterpreted or misrepresented. Many mainstream psychiatrists are dissatisfied with the new culture-bound diagnoses, although for partly different reasons. Robert Spitzer, a lead architect of the DSM-III, has argued that adding cultural formulations was an attempt to appease cultural critics, and has stated that they lack any scientific rationale or support. Spitzer also posits that the new culture-bound diagnoses are rarely used, maintaining that the standard diagnoses apply regardless of the culture involved. In general, mainstream psychiatric opinion remains that if a diagnostic category is valid, cross-cultural factors are either irrelevant or are significant only to specific symptom presentations. 
Clinical conceptions of mental illness also overlap with personal and cultural values in the domain of morality, so much so that it is sometimes argued that separating the two is impossible without fundamentally redefining the essence of being a particular person in a society. In clinical psychiatry, persistent distress and disability indicate an internal disorder requiring treatment; but in another context, that same distress and disability can be seen as an indicator of emotional struggle and the need to address social and structural problems. This dichotomy has led some academics and clinicians to advocate a postmodernist conceptualization of mental distress and well-being. Such approaches, along with cross-cultural and "heretical" psychologies centered on alternative cultural and ethnic and race-based identities and experiences, stand in contrast to the mainstream psychiatric community's alleged avoidance of any explicit involvement with either morality or culture. In many countries there are attempts to challenge perceived prejudice against minority groups, including alleged institutional racism within psychiatric services. There are also ongoing attempts to improve professional cross cultural sensitivity. Laws and policies Three-quarters of countries around the world have mental health legislation. Compulsory admission to mental health facilities (also known as involuntary commitment) is a controversial topic. It can impinge on personal liberty and the right to choose, and carry the risk of abuse for political, social, and other reasons; yet it can potentially prevent harm to self and others, and assist some people in attaining their right to healthcare when they may be unable to decide in their own interests. Because of this it is a concern of medical ethics. All human rights oriented mental health laws require proof of the presence of a mental disorder as defined by internationally accepted standards, but the type and severity of disorder that counts can vary in different jurisdictions. The two most often used grounds for involuntary admission are said to be serious likelihood of immediate or imminent danger to self or others, and the need for treatment. Applications for someone to be involuntarily admitted usually come from a mental health practitioner, a family member, a close relative, or a guardian. Human-rights-oriented laws usually stipulate that independent medical practitioners or other accredited mental health practitioners must examine the patient separately and that there should be regular, time-bound review by an independent review body. The individual should also have personal access to independent advocacy. For involuntary treatment to be administered (by force if necessary), it should be shown that an individual lacks the mental capacity for informed consent (i.e. to understand treatment information and its implications, and therefore be able to make an informed choice to either accept or refuse). Legal challenges in some areas have resulted in supreme court decisions that a person does not have to agree with a psychiatrist's characterization of the issues as constituting an "illness", nor agree with a psychiatrist's conviction in medication, but only recognize the issues and the information about treatment options. Proxy consent (also known as surrogate or substituted decision-making) may be transferred to a personal representative, a family member, or a legally appointed guardian. 
Moreover, patients may be able to make, when they are considered well, an advance directive stipulating how they wish to be treated should they be deemed to lack mental capacity in the future. The right to supported decision-making, where a person is helped to understand and choose treatment options before they can be declared to lack capacity, may also be included in the legislation. There should at the very least be shared decision-making as far as possible. Involuntary treatment laws are increasingly extended to those living in the community; for example, outpatient commitment laws (known by different names) are used in New Zealand, Australia, the United Kingdom, and most of the United States. The World Health Organization reports that in many instances national mental health legislation takes away the rights of persons with mental disorders rather than protecting rights, and is often outdated. In 1991, the United Nations adopted the Principles for the Protection of Persons with Mental Illness and the Improvement of Mental Health Care, which established minimum human rights standards of practice in the mental health field. In 2006, the UN formally agreed the Convention on the Rights of Persons with Disabilities to protect and enhance the rights and opportunities of disabled people, including those with psychiatric disabilities. The term insanity, sometimes used colloquially as a synonym for mental illness, is often used technically as a legal term. The insanity defense may be used in a legal trial (known as the mental disorder defence in some countries). Perception and discrimination Stigma The social stigma associated with mental disorders is a widespread problem. The US Surgeon General stated in 1999 that: "Powerful and pervasive, stigma prevents people from acknowledging their own mental health problems, much less disclosing them to others." Researcher Wulf Rössler also addressed this in his 2016 article "The Stigma of Mental Disorders". In the United States, racial and ethnic minorities are more likely to experience mental health disorders, often due to low socioeconomic status and discrimination. In Taiwan, people with mental disorders often face misconceptions from the general public. These misconceptions include the belief that mental health issues stem from excessive worry, having too much free time, a lack of progress or ambition, not taking life seriously, neglecting real-life responsibilities, mental weakness, unwillingness to be resilient, perfectionism, or a lack of courage. Employment discrimination is reported to play a significant part in the high rate of unemployment among those with a diagnosis of mental illness. An Australian study found that having a psychiatric disability is a bigger barrier to employment than a physical disability. The mentally ill are stigmatized in Chinese society and cannot legally marry. Efforts are being undertaken worldwide to eliminate the stigma of mental illness, although the methods and outcomes used have sometimes been criticized. Media and general public Media coverage of mental illness comprises predominantly negative and pejorative depictions, for example, of incompetence, violence or criminality, with far less coverage of positive issues such as accomplishments or human rights issues.
Such negative depictions, including in children's cartoons, are thought to contribute to stigma and negative attitudes in the public and in those with mental health problems themselves, although more sensitive or serious cinematic portrayals have increased in prevalence. In the United States, the Carter Center has created fellowships for journalists in South Africa, the U.S., and Romania, to enable reporters to research and write stories on mental health topics. Former US First Lady Rosalynn Carter began the fellowships not only to train reporters in how to sensitively and accurately discuss mental health and mental illness, but also to increase the number of stories on these topics in the news media. There is also a World Mental Health Day, which in the United States and Canada falls within a Mental Illness Awareness Week. The general public have been found to hold a strong stereotype of dangerousness and desire for social distance from individuals described as mentally ill. A US national survey found that a higher percentage of people rated individuals described as displaying the characteristics of a mental disorder as "likely to do something violent to others", compared to the percentage who rated individuals described as merely troubled. In the article "Discrimination Against People with a Mental Health Diagnosis: Qualitative Analysis of Reported Experiences", an individual with a mental disorder revealed: "If people don't know me and don't know about the problems, they'll talk to me quite happily. Once they've seen the problems or someone's told them about me, they tend to be a bit more wary." In addition, in the article "Stigma and its Impact on Help-Seeking for Mental Disorders: What Do We Know?" by George Schomerus and Matthias Angermeyer, it is affirmed that "Family doctors and psychiatrists have more pessimistic views about the outcomes for mental illnesses than the general public (Jorm et al., 1999), and mental health professionals hold more negative stereotypes about mentally ill patients, but, reassuringly, they are less accepting of restrictions towards them." Recent depictions in media have included leading characters successfully living with and managing a mental illness, including bipolar disorder in Homeland (2011) and post-traumatic stress disorder in Iron Man 3 (2013). Violence Despite public or media opinion, national studies have indicated that severe mental illness does not independently predict future violent behavior, on average, and is not a leading cause of violence in society. There is a statistical association with various factors that do relate to violence (in anyone), such as substance use and various personal, social, and economic factors. A 2015 review found that in the United States, about 4% of violence is attributable to people diagnosed with mental illness, and a 2014 study found that 7.5% of crimes committed by mentally ill people were directly related to the symptoms of their mental illness. The majority of people with serious mental illness are never violent. In fact, findings consistently indicate that it is many times more likely that people diagnosed with a serious mental illness living in the community will be the victims rather than the perpetrators of violence.
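The phrase "attributable to" in the review cited above corresponds to the standard epidemiological notion of a population attributable fraction, which combines the prevalence of a risk factor with its relative risk. The Python sketch below is a minimal illustration of that calculation; the input numbers are hypothetical placeholders, not figures taken from the studies mentioned.

def population_attributable_fraction(exposure_prevalence, relative_risk):
    """Levin's formula: PAF = p(RR - 1) / (1 + p(RR - 1))."""
    excess = exposure_prevalence * (relative_risk - 1.0)
    return excess / (1.0 + excess)

# Hypothetical example: if 4% of the population carried a diagnosis and its
# relative risk for violence were 2.0, only a small share of all violence
# would be statistically attributable to that group.
paf = population_attributable_fraction(exposure_prevalence=0.04, relative_risk=2.0)
print(f"{paf:.1%}")  # about 3.8%

Because the exposure prevalence is small, even a doubled relative risk yields a small attributable share, which is consistent with the broader point that severe mental illness is not a leading cause of violence in society.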
In a study of individuals diagnosed with "severe mental illness" living in a US inner-city area, a quarter were found to have been victims of at least one violent crime over the course of a year, a proportion eleven times higher than the inner-city average, and higher in every category of crime including violent assaults and theft. People with a diagnosis may find it more difficult to secure prosecutions, however, due in part to prejudice and being seen as less credible. However, there are some specific diagnoses, such as childhood conduct disorder or adult antisocial personality disorder or psychopathy, which are defined by, or are inherently associated with, conduct problems and violence. There are conflicting findings about the extent to which certain specific symptoms, notably some kinds of psychosis (hallucinations or delusions) that can occur in disorders such as schizophrenia, delusional disorder or mood disorder, are linked to an increased risk of serious violence on average. The mediating factors of violent acts, however, are most consistently found to be mainly socio-demographic and socio-economic factors such as being young, male, of lower socioeconomic status and, in particular, substance use (including alcohol use) to which some people may be particularly vulnerable. High-profile cases have led to fears that serious crimes, such as homicide, have increased due to deinstitutionalization, but the evidence does not support this conclusion. Violence that does occur in relation to mental disorder (against the mentally ill or by the mentally ill) typically occurs in the context of complex social interactions, often in a family setting rather than between strangers. It is also an issue in health care settings and the wider community. Mental health The recognition and understanding of mental health conditions have changed over time and across cultures and there are still variations in definition, assessment, and classification, although standard guideline criteria are widely used. In many cases, there appears to be a continuum between mental health and mental illness, making diagnosis complex. According to the World Health Organization, over a third of people in most countries report problems at some time in their life which meet the criteria for diagnosis of one or more of the common types of mental disorder. Corey M Keyes has created a two continua model of mental illness and health which holds that both are related, but distinct dimensions: one continuum indicates the presence or absence of mental health, the other the presence or absence of mental illness. For example, people with optimal mental health can also have a mental illness, and people who have no mental illness can also have poor mental health. Other animals Psychopathology in non-human primates has been studied since the mid-20th century. Over 20 behavioral patterns in captive chimpanzees have been documented as (statistically) abnormal for frequency, severity or oddness—some of which have also been observed in the wild. Captive great apes show gross behavioral abnormalities such as stereotypy of movements, self-mutilation, disturbed emotional reactions (mainly fear or aggression) towards companions, lack of species-typical communications, and generalized learned helplessness. In some cases such behaviors are hypothesized to be equivalent to symptoms associated with psychiatric disorders in humans such as depression, anxiety disorders, eating disorders and post-traumatic stress disorder. 
Concepts of antisocial, borderline and schizoid personality disorders have also been applied to non-human great apes. The risk of anthropomorphism is often raised concerning such comparisons, and assessment of non-human animals cannot incorporate evidence from linguistic communication. However, available evidence may range from nonverbal behaviors—including physiological responses and homologous facial displays and acoustic utterances—to neurochemical studies. It is pointed out that human psychiatric classification is often based on statistical description and judgment of behaviors (especially when speech or language is impaired) and that the use of verbal self-report is itself problematic and unreliable. Psychopathology has generally been traced, at least in captivity, to adverse rearing conditions such as early separation of infants from mothers; early sensory deprivation; and extended periods of social isolation. Studies have also indicated individual variation in temperament, such as sociability or impulsiveness. Particular causes of problems in captivity have included integration of strangers into existing groups and a lack of individual space, in which context some pathological behaviors have also been seen as coping mechanisms. Remedial interventions have included careful individually tailored re-socialization programs, behavior therapy, environment enrichment, and on rare occasions psychiatric drugs. Socialization has been found to work 90% of the time in disturbed chimpanzees, although restoration of functional sexuality and caregiving is often not achieved. Laboratory researchers sometimes try to develop animal models of human mental disorders, including by inducing or treating symptoms in animals through genetic, neurological, chemical or behavioral manipulation, but this has been criticized on empirical grounds and opposed on animal rights grounds. See also 50 Signs of Mental Illness List of mental disorders Mental illness portrayed in media Mental disorders in film Mental illness in fiction Mental illness in American prisons Parity of esteem Psychological evaluation References Further reading Stanford Encyclopedia of Philosophy External links Overcoming Mental Health Stigma in the Latino Community – Consult QD clevelandclinic.org National Institute of Mental Health International Committee of Women Leaders on Mental Health Disability by type Abnormal psychology Psychopathology Psychiatric assessment Suffering
0.776725
0.999327
0.776203
Cognitive therapy
Cognitive therapy (CT) is a type of psychotherapy developed by American psychiatrist Aaron T. Beck. CT is one therapeutic approach within the larger group of cognitive behavioral therapies (CBT) and was first expounded by Beck in the 1960s. Cognitive therapy is based on the cognitive model, which states that thoughts, feelings and behavior are all connected, and that individuals can move toward overcoming difficulties and meeting their goals by identifying and changing unhelpful or inaccurate thinking, problematic behavior, and distressing emotional responses. This involves the individual working with the therapist to develop skills for testing and changing beliefs, identifying distorted thinking, relating to others in different ways, and changing behaviors. A cognitive case conceptualization is developed by the cognitive therapist as a guide to understand the individual's internal reality, select appropriate interventions and identify areas of distress. History Precursors of certain aspects of cognitive therapy have been identified in various ancient philosophical traditions, particularly Stoicism. For example, Beck's original treatment manual for depression states, "The philosophical origins of cognitive therapy can be traced back to the Stoic philosophers". Albert Ellis worked on cognitive treatment methods from the 1950s (Ellis, 1956). He called his approach Rational Therapy (RT) at first, then Rational Emotive Therapy (RET) and later Rational Emotive Behavior Therapy (REBT). Becoming disillusioned with long-term psychodynamic approaches based on gaining insight into unconscious emotions, in the late 1950s Aaron T. Beck came to the conclusion that the way in which his patients perceived and attributed meaning in their daily lives—a process known as cognition—was a key to therapy. Beck outlined his approach in Depression: Causes and Treatment in 1967. He later expanded his focus to include anxiety disorders, in Cognitive Therapy and the Emotional Disorders in 1976, and other disorders later on. He also introduced a focus on the underlying "schema"—the underlying ways in which people process information about the self, the world or the future. This new cognitive approach came into conflict with the behaviorism common at the time, which claimed that talk of mental causes was not scientific or meaningful, and that assessing stimuli and behavioral responses was the best way to practice psychology. However, the 1970s saw a general "cognitive revolution" in psychology. Behavioral modification techniques and cognitive therapy techniques became joined, giving rise to a common concept of cognitive behavioral therapy. Although cognitive therapy has often included some behavioral components, advocates of Beck's particular approach sought to maintain and establish its integrity as a distinct, standardized form of cognitive behavioral therapy in which the cognitive shift is the key mechanism of change. Aaron and his daughter Judith S. Beck founded the Beck Institute for Cognitive Therapy and Research in 1994. This was later renamed the "Beck Institute for Cognitive Behavior Therapy." In 1995, Judith released Cognitive Therapy: Basics and Beyond, a treatment manual endorsed by her father Aaron. As cognitive therapy continued to grow in popularity, the non-profit "Academy of Cognitive Therapy" was created in 1998 to accredit cognitive therapists, create a forum for members to share research and interventions, and to educate the public about cognitive therapy and related mental health issues. 
The academy later changed its name to the "Academy of Cognitive & Behavioral Therapies". The 2011 second edition of "Basics and Beyond" (also endorsed by Aaron T. Beck) was titled Cognitive Behavioral Therapy: Basics and Beyond, Second Edition, and adopted the name "CBT" for Aaron's therapy from the outset. This further blurred the boundaries between the concepts of "CT" and "CBT". Basis Therapy may consist of testing the assumptions which one makes and looking for new information that could help shift the assumptions in a way that leads to different emotional or behavioral reactions. Change may begin by targeting thoughts (to change emotion and behavior), behavior (to change feelings and thoughts), or the individual's goals (by identifying thoughts, feelings or behavior that conflict with the goals). Beck initially focused on depression and developed a list of "errors" (cognitive distortions) in thinking that he proposed could maintain depression, including arbitrary inference, selective abstraction, overgeneralization, and magnification (of negatives) and minimization (of positives). As an example of how CT might work: Having made a mistake at work, a man may believe: "I'm useless and can't do anything right at work." He may then focus on the mistake (which he takes as evidence that his belief is true), and his thoughts about being "useless" are likely to lead to negative emotion (frustration, sadness, hopelessness). Given these thoughts and feelings, he may then begin to avoid challenges at work, which is behavior that could provide even more evidence for him that his belief is true. As a result, any adaptive response and further constructive consequences become unlikely, and he may focus even more on any mistakes he may make, which serve to reinforce the original belief of being "useless." In therapy, this example could be identified as a self-fulfilling prophecy or "problem cycle," and the efforts of the therapist and patient would be directed at working together to explore and change this cycle. People who are working with a cognitive therapist often practice more flexible ways to think and respond, learning to ask themselves whether their thoughts are completely true, and whether those thoughts are helping them to meet their goals. Thoughts that do not meet this description may then be shifted to something more accurate or helpful, leading to more positive emotion, more desirable behavior, and movement toward the person's goals. Cognitive therapy takes a skill-building approach, where the therapist helps the person to learn and practice these skills independently, eventually "becoming their own therapist." "Consistent with the cognitive theory of psychopathology, CT is designed to be structured, directive, active, and time-limited, with the express purpose of identifying, reality-testing, and correcting distorted cognition and underlying dysfunctional beliefs". Cognitive model The cognitive model was originally constructed following research studies conducted by Aaron Beck to explain the psychological processes in depression. It divides the mind's beliefs into three levels: automatic thoughts, intermediate beliefs, and core (or basic) beliefs. In 2014, an update of the cognitive model was proposed, called the Generic Cognitive Model (GCM). The GCM is an update of Beck's model that proposes that mental disorders can be differentiated by the nature of their dysfunctional beliefs.
The GCM includes a conceptual framework and a clinical approach for understanding common cognitive processes of mental disorders while specifying the unique features of the specific disorders. Cognitive restructuring (methods) Cognitive restructuring involves four steps: Identification of problematic cognitions known as "automatic thoughts" (ATs) which are dysfunctional or negative views of the self, world, or future based upon already existing beliefs about oneself, the world, or the future Identification of the cognitive distortions in the ATs Rational disputation of ATs with the Socratic method Development of a rational rebuttal to the ATs There are six types of automatic thoughts: Self-evaluated thoughts Thoughts about the evaluations of others Evaluative thoughts about the other person with whom they are interacting Thoughts about coping strategies and behavioral plans Thoughts of avoidance Any other thoughts that were not categorized Other major techniques include: Activity monitoring and activity scheduling Behavioral experiments Catching, checking, and changing thoughts Collaborative empiricism: therapist and patient become investigators by examining the evidence to support or reject the patient's cognitions. Empirical evidence is used to determine whether particular cognitions serve any useful purpose. Downward arrow technique Exposure and response prevention Cost-benefit analysis Acting "as if" Guided discovery: therapist elucidates behavioral problems and faulty thinking by designing new experiences that lead to acquisition of new skills and perspectives. Through both cognitive and behavioral methods, the patient discovers more adaptive ways of thinking and coping with environmental stressors by correcting cognitive processing. Mastery and pleasure technique Problem solving Socratic questioning: involves the creation of a series of questions to a) clarify and define problems, b) assist in the identification of thoughts, images and assumptions, c) examine the meanings of events for the patient, and d) assess the consequences of maintaining maladaptive thoughts and behaviors. Socratic questioning Socratic questions are the archetypal cognitive restructuring technique. These kinds of questions are designed to challenge assumptions by: Conceiving reasonable alternatives: "What might be another explanation or viewpoint of the situation? Why else did it happen?" Evaluating those consequences: "What's the effect of thinking or believing this? What could be the effect of thinking differently and no longer holding onto this belief?" Distancing: "Imagine a specific friend/family member in the same situation or if they viewed the situation this way, what would I tell them?" Examples of Socratic questions are: "Describe the way you formed your viewpoint originally." "What initially convinced you that your current view is the best one available?" "Think of three pieces of evidence that contradict this view, or that support the opposite view. Think about the opposite of this viewpoint and reflect on it for a moment. What's the strongest argument in favor of this opposite view?" "Write down any specific benefits you get from holding this belief, such as social or psychological benefits. For example, getting to be part of a community of like-minded people, feeling good about yourself or the world, feeling that your viewpoint is superior to others, etc. Are there any reasons that you might hold this view other than because it's true?"
"For instance, does holding this viewpoint provide some peace of mind that holding a different viewpoint would not?" "In order to refine your viewpoint so that it's as accurate as possible, it's important to challenge it directly on occasion and consider whether there are reasons that it might not be true. What do you think the best or strongest argument against this perspective is?" "What would you have to experience or find out in order for you to change your mind about this viewpoint?" "Given your thoughts so far, do you think that there may be a truer, more accurate, or more nuanced version of your original view that you could state right now?" False assumptions False assumptions are based on "cognitive distortions", such as: Always Being Right: "We are continually on trial to prove that our opinions and actions are correct. Being wrong is unthinkable and we will go to any length to demonstrate our rightness. For example, 'I don't care how badly arguing with me makes you feel, I'm going to win this argument no matter what because I'm right.' Being right often is more important than the feelings of others around a person who engages in this cognitive distortion, even loved ones." Heaven's Reward Fallacy: "We expect our sacrifice and self-denial to pay off, as if someone is keeping score. We feel bitter when the reward doesn't come." Awfulizing and Must-ing Rational emotive behavior therapy (REBT) includes awfulizing, when a person causes themselves disturbance by labeling an upcoming situation as "awful", rather than envisaging how the situation may actually unfold, and Must-ing, when a person places a false demand on themselves that something "must" happen (e.g. "I must get an A in this exam.") Application Depression According to Beck's theory of the etiology of depression, depressed people acquire a negative schema of the world in childhood and adolescence; children and adolescents who experience depression acquire this negative schema earlier. Depressed people acquire such schemas through the loss of a parent, rejection by peers, bullying, criticism from teachers or parents, the depressive attitude of a parent or other negative events. When a person with such schemas encounters a situation that resembles the original conditions of the learned schema, the negative schemas are activated. Beck's negative triad holds that depressed people have negative thoughts about themselves, their experiences in the world, and the future. For instance, a depressed person might think, "I didn't get the job because I'm terrible at interviews. Interviewers never like me, and no one will ever want to hire me." In the same situation, a person who is not depressed might think, "The interviewer wasn't paying much attention to me. Maybe she already had someone else in mind for the job. Next time I'll have better luck, and I'll get a job soon." Beck also identified a number of other cognitive distortions, which can contribute to depression, including the following: arbitrary inference, selective abstraction, overgeneralization, magnification and minimization. In 2008, Beck proposed an integrative developmental model of depression that aims to incorporate research in genetics and the neuroscience of depression. This model was updated in 2016 to incorporate multiple levels of analyses, new research, and key concepts (e.g., resilience) within the framework of an evolutionary perspective. 
Other applications Cognitive therapy has been applied to a very wide range of behavioral health issues including: Academic achievement Addiction Anxiety disorders Bipolar disorder Low self-esteem Phobia Schizophrenia Substance abuse Suicidal ideation Weight loss Criticisms A criticism has been that clinical studies of CT efficacy (or any psychotherapy) are not double-blind (i.e., neither subjects nor therapists in psychotherapy studies are blind to the type of treatment). They may be single-blinded (the rater may not know the treatment the patient received), but neither the patients nor the therapists are blinded to the type of therapy given (two out of three of the persons involved in the trial, i.e., all of the persons involved in the treatment, are unblinded). The patient is an active participant in correcting negative distorted thoughts, and is thus quite aware of the treatment group they are in. See also Cognitive analytic therapy Cognitive bias mitigation Cognitive-shifting David D. Burns Debiasing History of psychotherapy Journal of Cognitive Psychotherapy Recognition-primed decision Schema therapy References External links An Introduction to Cognitive Therapy & Cognitive Behavioural Approaches What is Cognitive Therapy Academy of Cognitive Therapy International Association of Cognitive Psychotherapy
0.781781
0.99285
0.776191
Phenomenology (psychology)
Phenomenology or phenomenological psychology, a sub-discipline of psychology, is the scientific study of subjective experiences. It is an approach to psychological subject matter that attempts to explain experiences from the point of view of the subject via the analysis of their written or spoken words. The approach has its roots in the phenomenological philosophical work of Edmund Husserl. History Early phenomenologists such as Husserl, Jean-Paul Sartre, and Maurice Merleau-Ponty conducted philosophical investigations of consciousness in the early 20th century. Their critiques of psychologism and positivism later influenced at least two main fields of contemporary psychology: the phenomenological psychological approach of the Duquesne School (the descriptive phenomenological method in psychology), including Amedeo Giorgi and Frederick Wertz; and the experimental approaches associated with Francisco Varela, Shaun Gallagher, Evan Thompson, and others (embodied mind thesis). Other names associated with the movement include Jonathan Smith (interpretative phenomenological analysis), Steinar Kvale, and Wolfgang Köhler. But "an even stronger influence on psychopathology came from Heidegger (1963), particularly through Kunz (1931), Blankenburg (1971), Tellenbach (1983), Binswanger (1994), and others." Phenomenological psychologists have also figured prominently in the history of the humanistic psychology movement. Methodology Phenomenology is concerned with the rich qualitative description of first-person experiences. This stands in contrast to quantitative approaches which seek to operationalize, abstract and predict behavior. Following Husserl's battle-cry "back to the things themselves", a phenomenological approach seeks to avoid speculation about underlying causes, and instead emphasizes direct descriptions of phenomena, whether by means of introspection or by attentive observation of another person. Experience The experiencing subject can be considered to be the person or self, for purposes of convenience. In phenomenological philosophy (and in particular in the work of Husserl, Heidegger, and Merleau-Ponty), "experience" is a considerably more complex concept than it is usually taken to be in everyday use. Instead, experience (or being, or existence itself) is an "in-relation-to" phenomenon, and it is defined by qualities of directedness, embodiment, and worldliness, which are evoked by the term "Being-in-the-World". The quality or nature of a given experience is often referred to by the term qualia, whose archetypical exemplar is "redness". For example, we might ask, "Is my experience of redness the same as yours?" While it is difficult to answer such a question in any concrete way, the concept of intersubjectivity is often used as a mechanism for understanding how it is that humans are able to empathize with one another's experiences, and indeed to engage in meaningful communication about them. The phenomenological formulation of "Being-in-the-World", where person and world are mutually constitutive, is central here. The observer, or in some cases the interviewer, achieves this sense of understanding and feeling of relatedness to the subject's experience, through subjective analysis of the experience, and the implied thoughts and emotions that they relay in their words. Challenges in studying subjectivity The philosophical psychology prevalent before the end of the 19th century relied heavily on introspection. 
The speculations concerning the mind based on those observations were criticized by the pioneering advocates of a more scientific and objective approach to psychology, such as William James and the behaviorists Edward Thorndike, Clark Hull, John B. Watson, and B. F. Skinner. However, not everyone agrees that introspection is intrinsically problematic; Francisco Varela, for example, trained experimental participants in the structured "introspection" of phenomenological reduction. In the early 1970s, Amedeo Giorgi applied phenomenological theory to his development of the Descriptive Phenomenological Method in Psychology. He sought to overcome certain problems he had perceived, in his work in psychophysics, with approaching subjective phenomena through the traditional hypothetical-deductive framework of the natural sciences. Giorgi hoped to use what he had learned from his natural science background to develop a rigorous qualitative research method. His goal was to ensure that phenomenological research was both reliable and valid, and he did this by seeking to make its processes increasingly rigorous. Philosophers have long confronted the problem of "qualia". Few philosophers believe that it is possible to be sure that one person's experience of the "redness" of an object is the same as another person's, even if both persons had effectively identical genetic and experiential histories. In principle, the same difficulty arises in feelings (the subjective experience of emotion), in the experience of effort, and especially in the "meaning" of concepts. As a result, many qualitative psychologists have claimed phenomenological inquiry to be essentially a matter of "meaning-making" and thus a question to be addressed by interpretive approaches. Applications Psychotherapy Carl Rogers's person-centered psychotherapy theory is based directly on the "phenomenal field" personality theory of Combs and Snygg. That theory in turn was grounded in phenomenological thinking. Rogers attempts to put a therapist in closer contact with a person by listening to the person's report of their recent subjective experiences, especially emotions of which the person is not fully aware. For example, in relationships the problem at hand is often not based around what actually happened but, instead, based on the perceptions and feelings of each individual in the relationship. "At the core of phenomenology lies the attempt to describe and understand phenomena such as caring, healing, and wholeness as experienced by individuals who have lived through them". Recent applications The study and practice of phenomenology continues to grow and develop today. In 2021 a study on the experiences of individuals who attended a coexistence center (CECO) was conducted using phenomenological interviews to understand the lives of the participants. After the interviews, the researchers constructed a comprehensive narrative, putting their understanding of the participants' experiences into their own words. This process led the researchers to understand that "the CECO is a propitious space for the development of individual and collective potentialities and the valuation of constructive social relationships that facilitate and preserve the inherent tendency of people towards growth, autonomy and psychological maturation."
Another recent example is a 2022 article arguing that phenomenology can grow into a larger field of study if its ability to clarify the experiences of other people is recognized, bridging the gap between subjective and objective reality. It puts forth "a methodological concept of phenomenological elucidation to promote the development of phenomenology as psychology." Critiques In 2022 Gerhard Thonhauser published an article criticizing phenomenology in psychology for its adoption of Le Bon's crowd psychology, as well as what Thonhauser calls the "disease model of emotion transfer". Thonhauser claims there is little to no evidence for Le Bon's crowd psychology framework, on which this strand of phenomenology relies. In a 2015 article written for the Partially Examined Life blog, Michael Burgess argues that "...the foundational problem here is that consciousness is not a container for objects; this assertion mostly derives from another: that the world itself seems to be one way but is another, thus in its initial state of "seeming to be" it cannot be itself real (that illusion is metaphysical)." See also Alterity Association of ideas Associationism Binding problem Ideology Neurophenomenology Prejudice Stream of consciousness (psychology) Vertiginous question References External links Phenomenology Philosophy of psychology Psychological schools
Psychodrama
Psychodrama is an action method, often used as a psychotherapy, in which clients use spontaneous dramatization, role playing, and dramatic self-presentation to investigate and gain insight into their lives. Developed by Jacob L. Moreno and his wife Zerka Toeman Moreno, psychodrama includes elements of theater, often conducted on a stage, or a space that serves as a stage area, where props can be used. A psychodrama therapy group, under the direction of a licensed psychodramatist, reenacts real-life, past situations (or inner mental processes), acting them out in present time. Participants then have the opportunity to evaluate their behavior, reflect on how the past incident is getting played out in the present and more deeply understand particular situations in their lives. Psychodrama offers a creative way for an individual or group to explore and solve personal problems. It may be used in a variety of clinical and community-based settings in which other group members (audience) are invited to become therapeutic agents (stand-ins) to populate the scene of one client. Besides benefits to the designated client, "side-benefits" may accrue to other group members, as they draw relevant connections and insights for their own lives from the psychodrama of another. A psychodrama is best conducted and produced by a person trained in the method, called a psychodrama director. In a session of psychodrama, one client of the group becomes the protagonist, and focuses on a particular, personal, emotionally problematic situation to enact on stage. A variety of scenes may be enacted, depicting, for example, memories of specific happenings in the client's past, unfinished situations, inner dramas, fantasies, dreams, preparations for future risk-taking situations, or unrehearsed expressions of mental states in the here and now. These scenes either approximate real-life situations or are externalizations of inner mental processes. Other members of the group may become auxiliaries and support the protagonist by playing other significant roles in the scene, or they may step in as a "double" who plays the role of the protagonist. A core tenet of psychodrama is Moreno's theory of "spontaneity-creativity". Moreno believed that the best way for an individual to respond creatively to a situation is through spontaneity, that is, through a readiness to improvise and respond in the moment. When an individual is encouraged to address a problem in a creative way, reacting spontaneously and on impulse, they may begin to discover new solutions to problems in their lives and learn new roles they can inhabit within them. Moreno's focus on spontaneous action within the psychodrama was developed in his Theatre of Spontaneity, which he directed in Vienna in the early 1920s. Disenchanted with the stagnancy he observed in conventional, scripted theatre, he found himself interested in the spontaneity required in improvisational work. He founded an improvisational troupe in the 1920s. This work in the theatre impacted the development of his psychodramatic theory. Methods In psychodrama, participants explore internal conflicts by acting out their emotions and interpersonal interactions on stage. A psychodrama session (typically 90 minutes to 2 hours) focuses principally on a single participant, known as the protagonist. Protagonists examine their relationships by interacting with the other actors and the leader, known as the director.
This is done using specific techniques, including mirroring, doubling, soliloquy, and role reversal. The session is often broken up into three phases: the warm-up, the action, and the post-discussion. During a typical psychodrama session, a number of clients gather together. One of these clients is chosen by the group as the protagonist, and the director calls on the other clients to assist the protagonist's "performance," either by portraying other characters, or by utilizing mirroring, doubling, or role reversal. The clients act out a number of scenes in order to allow the protagonist to work through certain scenarios. This is beneficial not only for the protagonist but also for the other group members, allowing them to assume the role of another person and apply that experience to their own life. The focus during the session is on the acting out of different scenarios, rather than simply talking through them. All of the different elements of the session (stage, props, lighting, etc.) are used to heighten the reality of the scene. The three sections of a typical session are the warm-up, the action, and the sharing, or post-discussion. During the warm-up, the actors are encouraged to enter into a state of mind where they can be present in and aware of the current moment and are free to be creative. This is done through the use of different ice-breaker games and activities. Next, the action section of the psychodrama session is the time in which the actual scenes themselves take place. Finally, in the post-discussion, the different actors are able to comment on the action, coming from their personal point of view, not as a critique, sharing their empathy and experiences with the protagonist of the scene. The following are core psychodramatic techniques: Mirroring: The protagonist is first asked to act out an experience. After this, the client steps out of the scene and watches as another actor steps into their role and portrays them in the scene. Doubling: The job of the double is to make conscious any thoughts or feelings that another person is unable to express, whether because of shyness, guilt, inhibition, politeness, fear, anger, etc. In many cases, the person is unaware of these thoughts or at least is unable to form the words to express how they are feeling. Therefore, the double attempts to make conscious and give form to the unconscious and/or under-expressed material. The person being doubled has the full right to disown any of the double's statements and to correct them as necessary. In this way, doubling itself can never be wrong. Role playing: The client portrays a person or object that is problematic to him or her. Soliloquy: The client speaks his or her thoughts aloud in order to build self-knowledge. Role reversal: The client is asked to portray another person while a second actor portrays the client in the particular scene. This not only prompts the client to think as the other person but also has some of the benefits of mirroring, as the client sees themselves as portrayed by the second actor. Psychological applications Psychodrama can be used in both non-clinical and clinical arenas. In the non-clinical field, psychodrama is used in business, education, and professional training. In the clinical field, psychodrama may be used to alleviate the effects of emotional trauma and PTSD. One specific application in clinical situations is for people suffering from dysfunctional attachments.
For this reason, it is often utilized in the treatment of children who have suffered emotional trauma and abuse. Using role-play and storytelling, children may be able to express themselves emotionally, reveal truths about their experience that they are not able to discuss openly with their therapist, and rehearse new ways of behavior. Moreno's theory of child development offers further insight into psychodrama and children. Moreno suggested that child development is divided into four stages: finding personal identity (the double), recognizing oneself (the mirror stage), the auxiliary ego (finding the need to fit in), and recognizing the other person (the role-reversal stage). Mirroring, role-playing and other psychodramatic techniques are based on these stages. Moreno believed that psychodrama could be used to help individuals continue their emotional development through the use of these techniques. Related concepts Sociometry Moreno's term sociometry is often used in relation to psychodrama. By definition, sociometry is the study of social relations between individuals—interpersonal relationships. It is, more broadly, a set of ideas and practices that are focused on promoting spontaneity in human relations. Classically, sociometry involves techniques for identifying, organizing, and giving feedback on specific interpersonal preferences an individual has. For example, in a psychodrama session, allowing the group to decide who the protagonist shall be employs sociometry. Moreno is also credited with founding sociodrama. Though sociodrama, like psychodrama, utilizes the theatrical form as a means of therapy, the terms are not synonymous. While psychodrama focuses on one patient within the group unit, sociodrama addresses the group as a whole. The goal is to explore social events, collective ideologies, and community patterns within a group in order to bring about positive change or transformation within the group dynamic. Moreno also believed that sociodrama could be used as a form of micro-sociology—that by examining the dynamic of a small group of individuals, such as in Alcoholics Anonymous, patterns could be discovered that manifest themselves within the society as a whole. Sociodrama can be divided into three main categories: crisis sociodrama, which deals with group responses after a catastrophic event; political sociodrama, which attempts to address stratification and inequality issues within a society; and diversity sociodrama, which considers conflicts based on prejudice, racism or stigmatization. Drama therapy Drama therapy, another creative arts therapy modality established and developed in the second half of the twentieth century, resembles psychodrama in its use of theatre methods to achieve therapeutic goals. The two, however, are distinct modalities. Drama therapy lets the patient explore fictional stories, such as fairytales, myths or improvised scenes, whereas psychodrama is focused on the patient's real-life experience to practice "new and more effective roles and behaviors" (ASGPP). History Jacob L. Moreno (1889–1974) was the founder of psychodrama and sociometry, and one of the forerunners of the group psychotherapy movement. Around 1910, he developed the Theater of Spontaneity, which is based on the acting out of improvisational impulses. The focus of this exercise was not originally on the therapeutic effects of psychodrama; these were seen by Moreno to simply be positive side-effects.
A poem by Moreno reveals ideas central to the practice of psychodrama and describes the purpose of mirroring. In 1912, Moreno attended one of Sigmund Freud's lectures. In his autobiography, he recalled the experience: "As the students filed out, he singled me out from the crowd and asked me what I was doing. I responded, 'Well, Dr. Freud, I start where you leave off. You meet people in the artificial setting of your office. I meet them on the street and in their homes, in their natural surroundings. You analyze their dreams. I give them the courage to dream again. You analyze and tear them apart. I let them act out their conflicting roles and help them to put the parts back together again.'" While a student at the University of Vienna in 1913–14, Moreno gathered a group of prostitutes as a way of discussing the social stigma and other problems they faced, starting what might be called the first "support group". From experiences like that, and as inspired by psychoanalysts such as Wilhelm Reich and Freud, Moreno began to develop psychodrama. After moving to the United States in 1925, Moreno introduced his work with psychodrama to American psychologists. He began this work with children, and then eventually moved on to large group psychodrama sessions that he held at the Impromptu Group Theatre at Carnegie Hall. These sessions established Moreno's name, not only in psychological circles, but also among non-psychologists. Moreno continued to teach his method of psychodrama, leading sessions until his death in 1974. From 1980, Hans-Werner Gessmann developed the Humanistic Psychodrama (HPD) at the Bergerhausen Psychotherapeutic Institute in Duisburg, Germany. It is based on the image of the human being held by humanistic psychology (Gessmann, Hans-Werner: Die Humanistische Psychologie und das Humanistische Psychodrama. In: Humanistisches Psychodrama, Band IV, Verlag des Psychotherapeutischen Instituts Bergerhausen, Duisburg 1996). All rules and methods follow the axioms of humanistic psychology. The HPD sees itself as development-oriented psychotherapy and has completely moved away from the psychoanalytic theory of catharsis. Self-awareness and self-actualization are essential aspects in the therapeutic process. Subjective experiences, feelings and thoughts are the starting point for a change or reorientation in experience and behavior towards more self-acceptance and satisfaction. The examination of the biography of the individual is closely related to the sociometry of the group. Another important practitioner in the field of psychodrama is Carl Hollander. Hollander was the 37th director certified by Moreno in psychodrama. He is known primarily for his creation of the Hollander Psychodrama Curve, which may be utilized as a way to understand how a psychodrama session is structured. Hollander uses the image of a curve to explain the three parts of a psychodrama session: the warm-up, the activity, and the integration. The warm-up exists to put patients into a place of spontaneity and creativity in order to be open in the act of psychodrama. The "activity" is the actual enactment of the psychodrama process. Finally, the "curve" moves to integration. It serves as closure and discussion of the session, and considers how the session can be brought into real life – a sort of debriefing.
Although psychodrama is not widely practiced, the work done by its practitioners has opened doors to research possibilities for other psychological approaches, such as group therapy, and to expansions of the work of Sigmund Freud. The growing field of drama therapy utilizes psychodrama as one of its main elements. The methods of psychodrama are used by group therapy organizations and also find a place in other types of therapy, such as post-divorce counseling for children. See also Educational Psychodrama Diamond of opposites Gestalt therapy Play therapy Playback Theatre Sociodrama Sociometry Theraplay Citations General references Baim, C.; J. Burmeister, and M. Maciel, Psychodrama: Advances in Theory and Practice. Taylor and Francis: USA. Yablonsky, Lewis. Psychodrama: Resolving Emotional Problems Through Role-playing. New York: Gardner, 1981. Further reading Carnabucci, Karen: Show and Tell Psychodrama, Nusanto Publishing, United States, 2014. Gessmann, Hans-Werner: Humanistic Psychodrama. Vol. I–IV. PIB Publisher, Duisburg, Germany, 1994. Gessmann, Hans-Werner: Empirical Research about Effectiveness of Psychodramatic Therapygroupwork of Patients with Neurosis (ICD-10: F3, F4). Zeitschrift für Psychodrama und Soziometrie, Sonderheft Empirische Forschung. VS Verlag für Sozialwissenschaften, 2011. External links American Society of Group Psychotherapy and Psychodrama Australian and Aotearoa New Zealand Psychodrama Association British Psychodrama Association Federation of European Psychodrama Training Organisations International Association for Group Psychotherapy and Group Processes Psychodrama Aotearoa New Zealand Creative arts therapies Role-playing
Media psychology
Media psychology is the branch and specialty field of psychology that focuses on the interaction of human behavior with media and technology. Media psychology is not limited to mass media or media content; it includes all forms of mediated communication and media technology-related behaviors, such as use, design, impact, and sharing behaviors. This branch is a relatively new field of study, owing to rapid advances in technology. It uses various methods of critical analysis and investigation to develop a working model of a user's perception of media experience. These methods are employed for society as a whole and on an individual basis. Media psychologists are able to perform activities that include consulting, design, and production in various media like television, video games, films, and news broadcasting. Media psychologists are not those who are featured in media (such as counselor-psychotherapists, clinicians, etc.), but rather those who research, work in, or contribute to the field. Mediacology is a newer term formed as a blend of "media" and "psychology". History There are overlaps with numerous fields, such as media studies, communication science, anthropology, education, and sociology, not to mention those within the discipline of psychology itself. Much of the research that would be considered as 'media psychology' has come from other fields, both academic and applied. In the 1920s, marketing, advertising and public relations professionals began conducting research on consumer behavior and motivation for commercial applications. The use of mass media during World War II created a surge of academic interest in mass media messaging and resulted in the creation of a new field, communication science (Lazarsfeld & Merton, 2000). The field of media psychology gained prominence in the 1950s when television was becoming popular in American households. Psychologists responded to widespread social concerns about children and their television viewing. For example, researchers began to study the impact of television viewing on children's reading skills. Later, they began to study the impact of violent television viewing on children's behavior, for example, whether they were likely to exhibit anti-social behavior or to copy the violent behaviors that they were seeing. These events led up to the creation of a new division of the American Psychological Association in 1987. Division 46, the Media Psychology Division (now the APA Society for Media Psychology and Technology), is one of the fastest-growing divisions in the American Psychological Association. Today's media psychologists study both legacy media and new media forms that have arisen in recent years, such as cellular phone technology, the internet, and new genres of television. Media psychologists are also involved in how people are affected by, and can benefit from, the design of technologies such as augmented reality (AR), virtual reality (VR) and mobile technologies, such as using VR to help trauma victims. Theories Media psychology's theories address the user's perception, cognition, and humanistic components with regard to their experience of their surroundings. Media psychologists also draw upon developmental and narrative psychologies and emerging findings from neuroscience. The theories and research in psychology are used as the backbone of media psychology and guide the discipline itself. Theories in psychology applied to media span multiple dimensions, i.e., text, pictures, symbols, video and sound.
Sensory psychology, semiotics and semantics for visual and language communication, social cognition and neuroscience are among the areas addressed in this branch of media psychology. A few of the theories employed in media psychology include: Affective disposition theory (ADT) The concept of affective disposition theory is used to differentiate users' perspectives on different forms of media content and the differences within attentional focus. The theory consists of four components that revolve around emotion: (1) media is based on an individual's emotions and opinions towards characters, (2) media content is driven by individuals' enjoyment and appreciation, (3) individuals form feelings about characters that are either positive or negative and (4) media relies on conflicts between characters and how individuals react to the conflict. Simulation theory (ST) Simulation theory argues that mental simulations do not fully exclude the external information that surrounds the user. Rather, the mediated stimuli are reshaped into the imagery and memories of the user in order to run the simulation. It explains why the user is able to form these experiences without the use of technology, because it points to the relevance of construction and internal processing. Psychological theory of play The psychological theory of play applies a more general framework to the concept of media entertainment. This idea potentially offers a more conceptual connection that points to presence. The activity of playing produces results consistent with the use of entertainment objects. This theory states that play is a type of action that is characterized by three major aspects: It is intrinsically motivated and highly attractive. It implies a change in perceived reality, as players construct an additional reality while they are playing. It is frequently repeated. The psychological theory of play is based upon the explanations given by eminent figures such as Stephenson, Freud, Piaget, and Vygotsky. The theory is based on how an individual uses media for their satisfaction and how media changes within a person's life according to its contents. Play is used for pleasure and is self-contained. People are influenced by media both negatively and positively because we are able to relate to what we see within the environment. By looking more in-depth at the different forms of play, it becomes apparent that the early versions of make-believe play demonstrate the child's need for control and the desire to influence their current environment. The theory explains the allure play has for humans in its many forms. Video games replicate this feeling: players hold some responsibility for the actions that they take within the world of the game. This can allow players to feel successful and powerful. It replicates the feeling of self-efficacy and proficiency within the video game. The experience of defeat is also replicated. In addition, in the case of defeat, players are not able to blame their mistakes on anyone but themselves. These all explain some aspects of the pleasure that comes from play. Major contributors Major contributors to media psychology include Marshall McLuhan, Dolf Zillmann, Katz, Blumler and Gurevitch, David Giles, and Bernard Luskin. Marshall McLuhan was a Canadian communication philosopher who was active from the 1930s to the 1970s in the realm of media analysis and technology.
He was appointed by the President of the University of Toronto in 1963 to create a new Centre for Culture and Technology to study the psychological and social consequences of technologies and media. McLuhan's famous statement pertaining to media psychology was "The medium is the message", a statement suggestive of the notion that media is inherently dangerous. McLuhan's theory of media, called "technological determinism", paved the way for others to study media. Dolf Zillmann advanced the two-factor model of emotion. The two-factor model of emotion proposes that emotion involves both physiological and cognitive components. Zillmann advanced the theory of "excitation transfer" as an explanation for the effects of violent media. Zillmann's theory proposed that viewers are physiologically aroused when they watch aggressive scenes. After watching an aggressive scene, an individual will become aggressive due to the arousal from the scene. In 1974 Katz, Blumler, and Gurevitch used the uses and gratifications theory to explain media psychology. Katz, Blumler, and Gurevitch identified five components of the theory: (1) the media compete with other sources of satisfaction, (2) goals of mass media can be discovered through data and research, (3) the initiative in linking need gratification to media choice lies with the audience, (4) the audience is conceived as active, and (5) judgment of mass media should not be expressed until the audience has had time to process the media and its content on their own. Katz, Blumler, and Gurevitch found that audience gratification from the media is rooted in three things: the content of the media, the exposure to it, and the social context of that media exposure. However, most of all it comes from the desire to kill time in a way that is worthwhile. They also found that different forms of media satisfy in different ways, fulfilling different needs. For example, certain forms of media are used as an escape, like movies at the cinema, but a news channel may not be. David Giles has been publishing in the area of media psychology since 2000. He wrote a book about media psychology in 2003. His book Media Psychology gives an overview of media psychology as a field, its subcategories, theories, and developmental issues within media psychology. Giles started his career as a music journalist, before attending the University of Manchester to study psychology. He then continued his studies at the University of Bristol, where he obtained his PhD. Since then, Giles has published numerous books, chapters, and articles and delivered presentations on psychology and the media, with a focus on the influence of celebrities and media figures. He has also worked as a professor of psychology at many universities in England, including the universities of Bolton, Sheffield Hallam, Coventry and Lancaster. Since 2009, Giles has held the position of reader at the University of Winchester. Bernard Luskin is a licensed psychotherapist, with degrees in business and a UCLA doctorate in education, psychology and technology. He is also the founder and CEO of Luskin International. Luskin has been the founding president and CEO of many colleges and universities, including Orange Coast College, Jones International University, Touro University Worldwide, Moorpark College, and Oxnard College.
He has also had success as a writer, publishing titles such as Introduction to Economics: A Performance-Based Learning Guide in 1977 and Casting the Net over Global Learning: New Developments in Workforce and Online Psychologies in 2022. Media psychology and technology Media psychology covers the research and applications that deal with all forms of media technology. It encompasses the prevailing customary and mass media, including radio, television, newsprint, magazines, music, film, and video, and it combines these with new and emerging technologies and applications that include social media, mobile media, and interface design. Media psychology helps create a new and better trajectory for how people think about, use, and design media technology across media platforms. It provides tools for identifying how technology facilitates human goals, and it also analyzes where media become inadequate and the inadvertent outcomes of performance shifts, which determine better or worse applications. Media psychology also shifts the general focus of inquiry from a media-centric to a human-centric perspective, enhancing communication across the field. Marketing and public relations have contributed substantially to media psychology analysis, although consumer research and media psychology pursue goals that do not always go hand in hand with those of the marketing and public relations sectors. The use of technology has improved global connection, limiting some traditional activities and advancing the media sector, and this advancement has made it possible to evaluate, produce, and distribute analysis on the required platforms. See also Cyberpsychology Media effects Mediatization References External links Media Psychology Research Center Media Psychology: Division 46 of the American Psychological Association Applied psychology Media studies
Mental state
A mental state, or a mental property, is a state of mind of a person. Mental states comprise a diverse class, including perception, pain/pleasure experience, belief, desire, intention, emotion, and memory. There is controversy concerning the exact definition of the term. According to epistemic approaches, the essential mark of mental states is that their subject has privileged epistemic access while others can only infer their existence from outward signs. Consciousness-based approaches hold that all mental states are either conscious themselves or stand in the right relation to conscious states. Intentionality-based approaches, on the other hand, see the power of minds to refer to objects and represent the world as the mark of the mental. According to functionalist approaches, mental states are defined in terms of their role in the causal network independent of their intrinsic properties. Some philosophers deny all the aforementioned approaches by holding that the term "mental" refers to a cluster of loosely related ideas without an underlying unifying feature shared by all. Various overlapping classifications of mental states have been proposed. Important distinctions group mental phenomena together according to whether they are sensory, propositional, intentional, conscious or occurrent. Sensory states involve sense impressions like visual perceptions or bodily pains. Propositional attitudes, like beliefs and desires, are relations a subject has to a proposition. The characteristic of intentional states is that they refer to or are about objects or states of affairs. Conscious states are part of the phenomenal experience while occurrent states are causally efficacious within the owner's mind, with or without consciousness. An influential classification of mental states is due to Franz Brentano, who argues that there are only three basic kinds: presentations, judgments, and phenomena of love and hate. Mental states are usually contrasted with physical or material aspects. For (non-eliminative) physicalists, they are a kind of high-level property that can be understood in terms of fine-grained neural activity. Property dualists, on the other hand, claim that no such reductive explanation is possible. Eliminativists may reject the existence of mental properties, or at least of those corresponding to folk psychological categories such as thought and memory. Mental states play an important role in various fields, including philosophy of mind, epistemology and cognitive science. In psychology, the term is used not just to refer to the individual mental states listed above but also to a more global assessment of a person's mental health. Definition Various competing theories have been proposed about what the essential features of all mental states are, sometimes referred to as the search for the "mark of the mental". These theories can roughly be divided into epistemic approaches, consciousness-based approaches, intentionality-based approaches and functionalism. These approaches disagree not just on how mentality is to be defined but also on which states count as mental. Mental states encompass a diverse group of aspects of an entity, like this entity's beliefs, desires, intentions, or pain experiences. The different approaches often result in a satisfactory characterization of only some of them. This has prompted some philosophers to doubt that there is a unifying mark of the mental and instead see the term "mental" as referring to a cluster of loosely related ideas. 
Mental states are usually contrasted with physical or material aspects. This contrast is commonly based on the idea that certain features of mental phenomena are not present in the material universe as described by the natural sciences and may even be incompatible with it. Epistemic and consciousness-based approaches Epistemic approaches emphasize that the subject has privileged access to all or at least some of their mental states. It is sometimes claimed that this access is direct, private and infallible. Direct access refers to non-inferential knowledge. When someone is in pain, for example, they know directly that they are in pain, they do not need to infer it from other indicators like a body part being swollen or their tendency to scream when it is touched. But we arguably also have non-inferential knowledge of external objects, like trees or cats, through perception, which is why this criterion by itself is not sufficient. Another epistemic privilege often mentioned is that mental states are private in contrast to public external facts. For example, the fallen tree lying on a person's leg is directly open to perception by the bystanders while the victim's pain is private: only they know it directly while the bystanders have to infer it from their screams. It was traditionally often claimed that we have infallible knowledge of our own mental states, i.e. that we cannot be wrong about them when we have them. So when someone has an itching sensation, for example, they cannot be wrong about having this sensation. They can only be wrong about the non-mental causes, e.g. whether it is the consequence of bug bites or of a fungal infection. But various counterexamples have been presented to claims of infallibility, which is why this criterion is usually not accepted in contemporary philosophy. One problem for all epistemic approaches to the mark of the mental is that they focus mainly on conscious states but exclude unconscious states. A repressed desire, for example, is a mental state to which the subject lacks the forms of privileged epistemic access mentioned. One way to respond to this worry is to ascribe a privileged status to conscious mental states. On such a consciousness-based approach, conscious mental states are non-derivative constituents of the mind while unconscious states somehow depend on their conscious counterparts for their existence. An influential example of this position is due to John Searle, who holds that unconscious mental states have to be accessible to consciousness to count as "mental" at all. They can be understood as dispositions to bring about conscious states. This position denies that the so-called "deep unconscious", i.e. mental contents inaccessible to consciousness, exists. Another problem for consciousness-based approaches, besides the issue of accounting for the unconscious mind, is to elucidate the nature of consciousness itself. Consciousness-based approaches are usually interested in phenomenal consciousness, i.e. in qualitative experience, rather than access consciousness, which refers to information being available for reasoning and guiding behavior. Conscious mental states are normally characterized as qualitative and subjective, i.e. that there is something it is like for a subject to be in these states. Opponents of consciousness-based approaches often point out that despite these attempts, it is still very unclear what the term "phenomenal consciousness" is supposed to mean. 
This is important because not much would be gained theoretically by defining one ill-understood term in terms of another. Another objection to this type of approach is to deny that the conscious mind has a privileged status in relation to the unconscious mind, for example, by insisting that the deep unconscious exists. Intentionality-based approaches Intentionality-based approaches see intentionality as the mark of the mental. The originator of this approach is Franz Brentano, who defined intentionality as the characteristic of mental states to refer to or be about objects. One central idea for this approach is that minds represent the world around them, which is not the case for regular physical objects. So a person who believes that there is ice cream in the fridge represents the world as being a certain way. The ice cream can be represented but it does not itself represent the world. This is why a mind is ascribed to the person but not to the ice cream, according to the intentional approach. One advantage of it in comparison to the epistemic approach is that it has no problems to account for unconscious mental states: they can be intentional just like conscious mental states and thereby qualify as constituents of the mind. But a problem for this approach is that there are also some non-mental entities that have intentionality, like maps or linguistic expressions. One response to this problem is to hold that the intentionality of non-mental entities is somehow derivative in relation to the intentionality of mental entities. For example, a map of Addis Ababa may be said to represent Addis Ababa not intrinsically but only extrinsically because people interpret it as a representation. Another difficulty is that not all mental states seem to be intentional. So while beliefs and desires are forms of representation, this seems not to be the case for pains and itches, which may indicate a problem without representing it. But some theorists have argued that even these apparent counterexamples should be considered intentional when properly understood. Behaviorism and functionalism Behaviorist definitions characterize mental states as dispositions to engage in certain publicly observable behavior as a reaction to particular external stimuli. On this view, to ascribe a belief to someone is to describe the tendency of this person to behave in certain ways. Such an ascription does not involve any claims about the internal states of this person, it only talks about behavioral tendencies. A strong motivation for such a position comes from empiricist considerations stressing the importance of observation and the lack thereof in the case of private internal mental states. This is sometimes combined with the thesis that we could not even learn how to use mental terms without reference to the behavior associated with them. One problem for behaviorism is that the same entity often behaves differently despite being in the same situation as before. This suggests that explanation needs to make reference to the internal states of the entity that mediate the link between stimulus and response. This problem is avoided by functionalist approaches, which define mental states through their causal roles but allow both external and internal events in their causal network. On this view, the definition of pain-state may include aspects such as being in a state that "tends to be caused by bodily injury, to produce the belief that something is wrong with the body and ... to cause wincing or moaning". 
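To make the causal-role idea concrete, here is a minimal, purely illustrative sketch in Python (it is not drawn from the philosophical literature, and the class and method names are invented for this example). It treats "pain" as whatever state fills a given profile of typical causes and typical effects, so that very different kinds of system can fill the same role:

from dataclasses import dataclass, field
from typing import List


@dataclass
class PainRole:
    # A state individuated purely by its causal profile: what typically
    # causes it and what it typically causes. Nothing here depends on
    # what the system realizing the role is made of.
    beliefs: List[str] = field(default_factory=list)
    behavior: List[str] = field(default_factory=list)
    active: bool = False

    def receive(self, stimulus: str) -> None:
        if stimulus == "bodily injury":  # typical cause
            self.active = True
            # typical cognitive and behavioral effects
            self.beliefs.append("something is wrong with the body")
            self.behavior.append("wince or moan")


class HumanRealizer(PainRole):
    """Stands in for a biological system filling the role."""


class RobotRealizer(PainRole):
    """Stands in for a silicon-based system filling the very same role."""


for agent in (HumanRealizer(), RobotRealizer()):
    agent.receive("bodily injury")
    print(type(agent).__name__, agent.active, agent.beliefs, agent.behavior)

From the point of view of the functional description, the two realizers are interchangeable, which anticipates the point about multiple realizability discussed next; the sketch also deliberately says nothing about what the state feels like, which is exactly the gap that consciousness-based objections press on.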
One important aspect of both behaviorist and functionalist approaches is that, according to them, the mind is multiply realizable. This means that it does not depend on the exact constitution of an entity for whether it has a mind or not. Instead, only its behavioral dispositions or its role in the causal network matter. The entity in question may be a human, an animal, a silicon-based alien or a robot. Functionalists sometimes draw an analogy to the software-hardware distinction where the mind is likened to a certain type of software that can be installed on different forms of hardware. Closely linked to this analogy is the thesis of computationalism, which defines the mind as an information processing system that is physically implemented by the neural activity of the brain. One problem for all of these views is that they seem to be unable to account for the phenomenal consciousness of the mind emphasized by consciousness-based approaches. It may be true that pains are caused by bodily injuries and themselves produce certain beliefs and moaning behavior. But the causal profile of pain remains silent on the intrinsic unpleasantness of the painful experience itself. Some states that are not painful to the subject at all may even fit these characterizations. Externalism Theories under the umbrella of externalism emphasize the mind's dependency on the environment. According to this view, mental states and their contents are at least partially determined by external circumstances. For example, some forms of content externalism hold that it can depend on external circumstances whether a belief refers to one object or another. The extended mind thesis states that external circumstances not only affect the mind but are part of it. The closely related view of enactivism holds that mental processes involve an interaction between organism and environment. Classifications of mental states There is a great variety of types of mental states, which can be classified according to various distinctions. These types include perception, belief, desire, intention, emotion and memory. Many of the proposed distinctions for these types have significant overlaps and some may even be identical. Sensory states involve sense impressions, which are absent in non-sensory states. Propositional attitudes are mental states that have propositional contents, in contrast to non-propositional states. Intentional states refer to or are about objects or states of affairs, a feature which non-intentional states lack. A mental state is conscious if it belongs to a phenomenal experience. Unconscious mental states are also part of the mind but they lack this phenomenal dimension. Occurrent mental states are active or causally efficacious within the owner's mind while non-occurrent or standing states exist somewhere in the back of one's mind but do not currently play an active role in any mental processes. Certain mental states are rationally evaluable: they are either rational or irrational depending on whether they obey the norms of rationality. But other states are arational: they are outside the domain of rationality. A well-known classification is due to Franz Brentano, who distinguishes three basic categories of mental states: presentations, judgments, and phenomena of love and hate. Types of mental states There is a great variety of types of mental states including perception, bodily awareness, thought, belief, desire, motivation, intention, deliberation, decision, pleasure, emotion, mood, imagination and memory. 
Some of these types are precisely contrasted with each other while other types may overlap. Perception involves the use of senses, like sight, touch, hearing, smell and taste, to acquire information about material objects and events in the external world. It contrasts with bodily awareness in this sense, which is about the internal ongoings in our body and which does not present its contents as independent objects. The objects given in perception, on the other hand, are directly (i.e. non-inferentially) presented as existing out there independently of the perceiver. Perception is usually considered to be reliable but our perceptual experiences may present false information at times and can thereby mislead us. The information received in perception is often further considered in thought, in which information is mentally represented and processed. Both perceptions and thoughts often result in the formation of new or the change of existing beliefs. Beliefs may amount to knowledge if they are justified and true. They are non-sensory cognitive propositional attitudes that have a mind-to-world direction of fit: they represent the world as being a certain way and aim at truth. They contrast with desires, which are conative propositional attitudes that have a world-to-mind direction of fit and aim to change the world by representing how it should be. Desires are closely related to agency: they motivate the agent and are thus involved in the formation of intentions. Intentions are plans to which the agent is committed and which may guide actions. Intention-formation is sometimes preceded by deliberation and decision, in which the advantages and disadvantages of different courses of action are considered before committing oneself to one course. It is commonly held that pleasure plays a central role in these considerations. "Pleasure" refers to experience that feels good, that involves the enjoyment of something. The topic of emotions is closely intertwined with that of agency and pleasure. Emotions are evaluative responses to external or internal stimuli that are associated with a feeling of pleasure or displeasure and motivate various behavioral reactions. Emotions are quite similar to moods, some differences being that moods tend to arise for longer durations at a time and that moods are usually not clearly triggered by or directed at a specific event or object. Imagination is even further removed from the actual world in that it represents things without aiming to show how they actually are. All the aforementioned states can leave traces in memory that make it possible to relive them at a later time in the form of episodic memory. Sensation, propositional attitudes and intentionality An important distinction among mental states is between sensory and non-sensory states. Sensory states involve some form of sense impressions like visual perceptions, auditory impressions or bodily pains. Non-sensory states, like thought, rational intuition or the feeling of familiarity, lack sensory contents. Sensory states are sometimes equated with qualitative states and contrasted with propositional attitude states. Qualitative states involve qualia, which constitute the subjective feeling of having the state in question or what it is like to be in it. Propositional attitudes, on the other hand, are relations a subject has to a proposition. They are usually expressed by verbs like believe, desire, fear or hope together with a that-clause. 
So believing that it will rain today, for example, is a propositional attitude. It has been argued that the contrast between qualitative states and propositional attitudes is misleading since there is some form of subjective feel to certain propositional states like understanding a sentence or suddenly thinking of something. This would suggest that there are also non-sensory qualitative states and some propositional attitudes may be among them. Another problem with this contrast is that some states are both sensory and propositional. This is the case for perception, for example, which involves sensory impressions that represent what the world is like. This representational aspect is usually understood as involving a propositional attitude. Closely related to these distinctions is the concept of intentionality. Intentionality is usually defined as the characteristic of mental states to refer to or be about objects or states of affairs. The belief that the moon has a circumference of 10921 km, for example, is a mental state that is intentional in virtue of being about the moon and its circumference. It is sometimes held that all mental states are intentional, i.e. that intentionality is the "mark of the mental". This thesis is known as intentionalism. But this view has various opponents, who distinguish between intentional and non-intentional states. Putative examples of non-intentional states include various bodily experiences like pains and itches. Because of this association, it is sometimes held that all sensory states lack intentionality. But such a view ignores that certain sensory states, like perceptions, can be intentional at the same time. It is usually accepted that all propositional attitudes are intentional. But while the paradigmatic cases of intentionality are all propositional as well, there may be some intentional attitudes that are non-propositional. This could be the case when an intentional attitude is directed only at an object. In this view, Elsie's fear of snakes is a non-propositional intentional attitude while Joseph's fear that he will be bitten by snakes is a propositional intentional attitude. Conscious and unconscious A mental state is conscious if it belongs to phenomenal experience. The subject is aware of the conscious mental states it is in: there is some subjective feeling to having them. Unconscious mental states are also part of the mind but they lack this phenomenal dimension. So it is possible for a subject to be in an unconscious mental state, like a repressed desire, without knowing about it. It is usually held that some types of mental states, like sensations or pains, can only occur as conscious mental states. But there are also other types, like beliefs and desires, that can be both conscious and unconscious. For example, many people share the belief that the moon is closer to the earth than to the sun. When considered, this belief becomes conscious, but it is unconscious most of the time otherwise. The relation between conscious and unconscious states is a controversial topic. It is often held that conscious states are in some sense more basic with unconscious mental states depending on them. One such approach states that unconscious states have to be accessible to consciousness, that they are dispositions of the subject to enter their corresponding conscious counterparts. On this position there can be no "deep unconscious", i.e. unconscious mental states that can not become conscious. 
The term "consciousness" is sometimes used not in the sense of phenomenal consciousness, as above, but in the sense of access consciousness. A mental state is conscious in this sense if the information it carries is available for reasoning and guiding behavior, even if it is not associated with any subjective feel characterizing the concurrent phenomenal experience. Being an access-conscious state is similar but not identical to being an occurrent mental state, the topic of the next section. Occurrent and standing A mental state is occurrent if it is active or causally efficacious within the owner's mind. Non-occurrent states are called standing or dispositional states. They exist somewhere in the back of one's mind but currently play no active role in any mental processes. This distinction is sometimes identified with the distinction between phenomenally conscious and unconscious mental states. It seems to be the case that the two distinctions overlap but do not fully match despite the fact that all conscious states are occurrent. This is the case because unconscious states may become causally active while remaining unconscious. A repressed desire may affect the agent's behavior while remaining unconscious, which would be an example of an unconscious occurring mental state. The distinction between occurrent and standing is especially relevant for beliefs and desires. At any moment, there seems to be a great number of things we believe or things we want that are not relevant to our current situation. These states remain inactive in the back of one's head even though one has them. For example, while Ann is engaged in her favorite computer game, she still believes that dogs have four legs and desires to get a pet dog on her next birthday. But these two states play no active role in her current state of mind. Another example comes from dreamless sleep when most or all of our mental states are standing states. Rational, irrational and arational Certain mental states, like beliefs and intentions, are rationally evaluable: they are either rational or irrational depending on whether they obey the norms of rationality. But other states, like urges, experiences of dizziness or hunger, are arational: they are outside the domain of rationality and can be neither rational nor irrational. An important distinction within rationality concerns the difference between theoretical and practical rationality. Theoretical rationality covers beliefs and their degrees while practical rationality focuses on desires, intentions and actions. Some theorists aim to provide a comprehensive account of all forms of rationality but it is more common to find separate treatments of specific forms of rationality that leave the relation to other forms of rationality open. There are various competing definitions of what constitutes rationality but no universally accepted answer. Some accounts focus on the relation between mental states for determining whether a given state is rational. In one view, a state is rational if it is well-grounded in another state that acts as its source of justification. For example, Scarlet's belief that it is raining in Manchester is rational because it is grounded in her perceptual experience of the rain while the same belief would be irrational for Frank since he lacks such a perceptual ground. A different version of such an approach holds that rationality is given in virtue of the coherence among the different mental states of a subject. 
This involves a holistic outlook that is less concerned with the rationality of individual mental states and more with the rationality of the person as a whole. Other accounts focus not on the relation between two or several mental states but on responding correctly to external reasons. Reasons are usually understood as facts that count in favor of or against something. On this account, Scarlet's aforementioned belief is rational because it responds correctly to the external fact that it is raining, which constitutes a reason for holding this belief. Classification according to Brentano An influential classification of mental states is due to Franz Brentano. He argues that there are three basic kinds: presentations, judgments, and phenomena of love and hate. All mental states either belong to one of these kinds or are constituted by combinations of them. These different types differ not in content, or what is presented, but in mode, or how it is presented. The most basic kind is presentation, which is involved in every mental state. Pure presentations, as in imagination, just show their object without any additional information about the veridical or evaluative aspects of their object. A judgment, on the other hand, is an attitude directed at a presentation that asserts that its presentation is either true or false, as is the case in regular perception. Phenomena of love and hate involve an evaluative attitude towards their presentation: they show how things ought to be, and the presented object is seen as either good or bad. This happens, for example, in desires. More complex types can be built up through combinations of these basic types. To be disappointed about an event, for example, can be construed as a judgment that this event happened together with a negative evaluation of it. Brentano's distinction between judgments, phenomena of love and hate, and presentations is closely related to the more recent idea of direction of fit between mental state and world: a mind-to-world direction of fit for judgments, a world-to-mind direction of fit for phenomena of love and hate, and a null direction of fit for mere presentations. Brentano's tripartite system of classification has been modified in various ways by his students. Alexius Meinong, for example, divides the category of phenomena of love and hate into two distinct categories: feelings and desires. Uriah Kriegel is a contemporary defender of Brentano's approach to the classification of mental phenomena. Academia Discussions about mental states can be found in many areas of study. In cognitive psychology and the philosophy of mind, a mental state is a kind of hypothetical state that corresponds to thinking and feeling, and consists of a conglomeration of mental representations and propositional attitudes. Several theories in philosophy and psychology try to determine the relationship between the agent's mental state and a proposition. Instead of investigating what a mental state is in itself, clinical psychology and psychiatry assess a person's mental health through a mental status examination. Epistemology Mental states also include attitudes towards propositions, of which there are at least two kinds, factive and non-factive, both of which entail the mental state of acquaintance. To be acquainted with a proposition is to understand its meaning and be able to entertain it. The proposition can be true or false, and acquaintance requires no specific attitude towards that truth or falsity.
Factive attitudes include those mental states that are attached to the truth of the proposition; that is, being in such a state entails the truth of the proposition. Some factive mental states include "perceiving that", "remembering that", "regretting that", and (more controversially) "knowing that". Non-factive attitudes do not entail the truth of the propositions to which they are attached. That is, one can be in one of these mental states and the proposition can be false. An example of a non-factive attitude is believing: people can believe a false proposition just as they can believe a true one. Since there is the possibility of both, such mental states do not entail the truth of their propositions and are therefore not factive. However, belief does entail an attitude of assent toward the presumed truth of the proposition (whether or not it is so), making it and other non-factive attitudes different from mere acquaintance. See also Altered state of consciousness, a mental state that is different from the normal state of consciousness Flow (psychology), the mental state of operation in which a person performing an activity is fully immersed in a feeling of energized focus Mental factors (Buddhism), aspects of the mind that apprehend the quality of an object, and that have the ability to color the mind Mental representation, a hypothetical internal cognitive symbol Mood (psychology), an emotional state Propositional attitude, a relational mental state connecting a person to a proposition Benj Hellie's Vertiginous question
Human sexuality
Human sexuality is the way people experience and express themselves sexually. This involves biological, psychological, physical, erotic, emotional, social, or spiritual feelings and behaviors. Because it is a broad term, which has varied with historical contexts over time, it lacks a precise definition. The biological and physical aspects of sexuality largely concern the human reproductive functions, including the human sexual response cycle. Someone's sexual orientation is their pattern of sexual interest in the opposite and/or same sex. Physical and emotional aspects of sexuality include bonds between individuals that are expressed through profound feelings or physical manifestations of love, trust, and care. Social aspects deal with the effects of human society on one's sexuality, while spirituality concerns an individual's spiritual connection with others. Sexuality also affects and is affected by cultural, political, legal, philosophical, moral, ethical, and religious aspects of life. Interest in sexual activity normally increases when an individual reaches puberty. Although no single theory on the cause of sexual orientation has yet gained widespread support, there is considerably more evidence supporting nonsocial causes of sexual orientation than social ones, especially for males. Hypothesized social causes are supported by only weak evidence, distorted by numerous confounding factors. This is further supported by cross-cultural evidence, because cultures that are tolerant of homosexuality do not have significantly higher rates of it. Evolutionary perspectives on human coupling, reproduction and reproduction strategies, and social learning theory provide further views of sexuality. Sociocultural aspects of sexuality include historical developments and religious beliefs. Some cultures have been described as sexually repressive. The study of sexuality also includes human identity within social groups, sexually transmitted infections (STIs), and birth control methods. Development Sexual orientation There is considerably more evidence supporting innate causes of sexual orientation than learned ones, especially for males. This evidence includes the cross-cultural correlation of homosexuality and childhood gender nonconformity, moderate genetic influences found in twin studies, evidence for prenatal hormonal effects on brain organization, the fraternal birth order effect, and the finding that in rare cases where infant males were raised as girls due to physical differences or deformity, they nevertheless turned out attracted to females. Hypothesized social causes are supported by only weak evidence, distorted by numerous confounding factors. Cross-cultural evidence also leans more toward non-social causes. Cultures that are very tolerant of homosexuality do not have significantly higher rates of it. Homosexual behavior is relatively common among boys in British single-sex boarding schools, but adult Britons who attended such schools are no more likely to engage in homosexual behavior than those who did not. In an extreme case, the Sambia people ritually require their boys to engage in homosexual behavior during adolescence before they have any access to females, yet most of these boys become heterosexual. It is not fully understood why genes causing homosexuality persist in the gene pool. One hypothesis involves kin selection, suggesting that homosexuals invest heavily enough in their relatives to offset the cost of not reproducing as much directly. 
This has not been supported by studies in Western cultures, but several studies in Samoa have found some support for this hypothesis. Another hypothesis involves sexually antagonistic genes, which cause homosexuality when expressed in males but increase reproduction when expressed in females. Studies in both Western and non-Western cultures have found support for this hypothesis. Gender differences Psychological theories exist regarding the development and expression of gender differences in human sexuality. A number of them (including neo-analytic theories, sociobiological theories, social learning theory, social role theory, and script theory) agree in predicting that men should be more approving of casual sex (sex happening outside a stable, committed relationship such as marriage) and should also be more promiscuous (have a higher number of sexual partners) than women. These theories are mostly consistent with observed differences in males' and females' attitudes toward casual sex before marriage in the United States. Other aspects of human sexuality, such as sexual satisfaction, incidence of oral sex, and attitudes toward homosexuality and masturbation, show little to no observed difference between males and females. Observed gender differences regarding the number of sexual partners are modest, with males tending to have slightly more than females. Biological and physiological aspects Like other mammals, humans are primarily grouped into either the male or female sex. The biological aspects of humans' sexuality deal with the reproductive system, the sexual response cycle, and the factors that affect these aspects. They also deal with the influence of biological factors on other aspects of sexuality, such as organic and neurological responses, heredity, hormonal issues, gender issues, and sexual dysfunction. Physical anatomy and reproduction Males and females are anatomically similar; this extends to some degree to the development of the reproductive system. As adults, they have different reproductive mechanisms that enable them to perform sexual acts and to reproduce. Men and women react to sexual stimuli in a similar fashion with minor differences. Women have a monthly reproductive cycle, whereas the male sperm production cycle is more continuous. Brain The hypothalamus is the most important part of the brain for sexual functioning. This is a small area at the base of the brain consisting of several groups of nerve cell bodies that receives input from the limbic system. Studies have shown that within lab animals, the destruction of certain areas of the hypothalamus causes the elimination of sexual behavior. The hypothalamus is important because of its relationship to the pituitary gland, which lies beneath it. The pituitary gland secretes hormones that are produced in the hypothalamus and itself. The four important sexual hormones are oxytocin, prolactin, follicle-stimulating hormone, and luteinizing hormone. Oxytocin, sometimes referred to as the "love hormone", is released in both sexes during sexual intercourse when an orgasm is achieved. Oxytocin has been suggested as critical to the thoughts and behaviors required to maintain close relationships. The hormone is also released in women when they give birth or are breastfeeding. Prolactin and oxytocin are responsible for inducing milk production in women. Follicle-stimulating hormone (FSH) is responsible for ovulation in women, and acts by triggering egg maturity; in men it stimulates sperm production. 
Luteinizing hormone (LH) triggers ovulation, which is the release of a mature egg. Male anatomy and reproductive system Males have both internal and external genitalia that are responsible for procreation and sexual intercourse. Production of spermatozoa (sperm) is also cyclic, but unlike the female ovulation cycle, the sperm production cycle constantly produces millions of sperm daily. External male anatomy The external male genitalia are the penis and the scrotum. The penis provides a passageway for sperm and urine. The penis consists of nerves, blood vessels, fibrous tissue, and three parallel cylinders of spongy tissue. Other components of the penis include the shaft, glans, root, cavernous bodies, and spongy body. The three cylindrical bodies of spongy tissue, which are filled with blood vessels, run along the length of the shaft. The two bodies that lie side by side in the upper portion of the penis are the corpora cavernosa (cavernous bodies). The third, called the corpus spongiosum (spongy body), is a tube that lies centrally beneath the others and expands at the end to form the tip of the penis (glans). During arousal, these bodies erect the penis by filling with blood. The raised rim at the border of the shaft and glans is called the corona. The urethra connects the urinary bladder to the penis, where urine exits through the urethral meatus. The urethra eliminates urine and acts as a channel for semen and sperm to exit the body during sexual intercourse. The root consists of the expanded ends of the cavernous bodies, which fan out to form the crura and attach to the pubic bone, and the expanded end of the spongy body. The bulb of the penis is surrounded by the bulbospongiosus muscle, while the corpora cavernosa are surrounded by the ischiocavernosus muscles. These aid urination and ejaculation. The penis has a foreskin that typically covers the glans; this is sometimes removed by circumcision for medical, religious or cultural reasons. In the scrotum, the testicles are held away from the body; one possible reason for this is that sperm production requires a temperature slightly lower than normal body temperature. The penis has very little muscular tissue, and what there is lies in its root. The shaft and glans have no muscle fibers. Unlike most other primates, male humans lack a penile bone. Internal male anatomy Male internal reproductive structures are the testicles, the duct system, the prostate and seminal vesicles, and the Cowper's glands. The testicles (male gonads) are where sperm and male hormones are produced. Millions of sperm are produced daily in several hundred seminiferous tubules. Cells called Leydig cells lie between the tubules; these produce hormones called androgens, chiefly testosterone. The testicles are held by the spermatic cord, a tubelike structure containing blood vessels, nerves, the vas deferens, and a muscle that helps to raise and lower the testicles in response to temperature changes and sexual arousal, in which the testicles are drawn closer to the body. Sperm is transported through a four-part duct system. The first part of this system is the epididymis, a coiled tube at the top and back of each testicle into which the seminiferous tubules converge. The second part of the duct system is the vas deferens, a muscular tube that begins at the lower end of the epididymis. The vas deferens passes upward along the side of the testicles to become part of the spermatic cord.
The expanded end is the ampulla, which stores sperm before ejaculation. The third part of the duct system is the ejaculatory ducts, paired tubes that pass through the prostate gland, where semen is produced. The prostate gland is a solid, chestnut-shaped organ that surrounds the first part of the urethra, which carries urine and semen. Similar to the female G-spot, the prostate provides sexual stimulation and can lead to orgasm through anal sex. The prostate gland and the seminal vesicles produce seminal fluid that is mixed with sperm to create semen. The prostate gland lies under the bladder and in front of the rectum. It consists of two main zones: the inner zone, which produces secretions to keep the lining of the male urethra moist, and the outer zone, which produces seminal fluids to facilitate the passage of semen. The seminal vesicles secrete fructose for sperm activation and mobilization, prostaglandins to cause uterine contractions that aid movement through the uterus, and bases that help neutralize the acidity of the vagina. The Cowper's glands, or bulbourethral glands, are two pea-sized structures beneath the prostate. Female anatomy and reproductive system External female anatomy The external female genitalia are collectively known as the vulva. The mons pubis is a soft layer of fatty tissue overlaying the pubic bone. Following puberty, this area grows in size. It has many nerve endings and is sensitive to stimulation. The labia minora and labia majora are collectively known as the labia or "lips". The labia majora are two elongated folds of skin extending from the mons to the perineum. Their outer surfaces become covered with hair after puberty. In between the labia majora are the labia minora, two hairless folds of skin that meet above the clitoris to form the clitoral hood, which is highly sensitive to touch. The labia minora become engorged with blood during sexual stimulation, causing them to swell and turn red. The labia minora are composed of connective tissue that is richly supplied with blood vessels, which give them a pinkish appearance. Near the anus, the labia minora merge with the labia majora. In a sexually unstimulated state, the labia minora protect the vaginal and urethral openings by covering them. At the base of the labia minora are the Bartholin's glands, which add a few drops of an alkaline fluid to the vagina via ducts; this fluid helps to counteract the acidity of the outer vagina, since sperm cannot live in an acidic environment. The Skene's glands are possibly responsible for secreting fluid during female ejaculation. The clitoris develops from the same embryonic tissue as the penis; the clitoris as a whole, or even its glans alone, contains as many nerve endings as (in some cases more than) the human penis or glans penis, making it extremely sensitive to touch. The clitoral glans, a small, elongated erectile structure, has only one known function: sexual sensation. It is the female's most sensitive erogenous zone and the main source of orgasm in women. Thick secretions called smegma collect around the clitoris. The vaginal opening and the urethral opening are only visible when the labia minora are parted. These openings have many nerve endings that make them sensitive to touch. They are surrounded by a ring of sphincter muscles called the bulbocavernosus muscle. Underneath this muscle and on opposite sides of the vaginal opening are the vestibular bulbs, which help the vagina grip the penis by swelling with blood during arousal.
Within the vaginal opening is the hymen, a thin membrane that partially covers the opening in many virgins. Rupture of the hymen has historically been considered the loss of one's virginity, though, by modern standards, loss of virginity is considered to be the first sexual intercourse. The hymen can be ruptured by activities other than sexual intercourse. The urethral opening connects to the bladder via the urethra; it expels urine from the bladder. It is located below the clitoris and above the vaginal opening. The breasts are the subcutaneous tissues on the front of the thorax of the female body. Though they are not technically part of a woman's sexual anatomy, they do have roles in both sexual pleasure and reproduction. Breasts are modified sweat glands made up of fibrous tissues and fat that provide support and contain nerves, blood vessels, and lymphatic vessels. Their main purpose is to provide milk to a developing infant. Breasts develop during puberty in response to an increase in estrogen. Each adult breast consists of 15 to 20 milk-producing mammary glands, irregularly shaped lobes that include alveolar glands and a lactiferous duct leading to the nipple. The lobes are separated by dense connective tissues that support the glands and attach them to the tissues on the underlying pectoral muscles. Other connective tissue, which forms dense strands called suspensory ligaments, extends inward from the skin of the breast to the pectoral tissue to support the weight of the breast. Heredity and the quantity of fatty tissue determine the size of the breasts. Men typically find female breasts attractive, and this holds true for a variety of cultures. In women, stimulation of the nipple seems to result in activation of the brain's genital sensory cortex (the same region of the brain activated by stimulation of the clitoris, vagina, and cervix). This may be why many women find nipple stimulation arousing and why some women are able to orgasm by nipple stimulation alone. Internal female anatomy The female internal reproductive organs are the vagina, uterus, fallopian tubes, and ovaries. The vagina is a sheath-like canal that extends from the vulva to the cervix. It receives the penis during intercourse and serves as a depository for sperm. The vagina is also the birth canal; it can expand considerably during labor and delivery. The vagina is located between the bladder and the rectum. The vagina is normally collapsed, but during sexual arousal it opens, lengthens, and produces lubrication to allow the insertion of the penis. The vagina has three layered walls; it is a self-cleaning organ with natural bacteria that suppress the production of yeast. The G-spot, named after Ernst Gräfenberg, who first reported it in 1950, may be located in the front wall of the vagina and may cause orgasms. This area may vary in size and location between women; in some it may be absent. Various researchers dispute its structure or existence, or regard it as an extension of the clitoris. The uterus or womb is a hollow, muscular organ where a fertilized egg (ovum) will implant itself and grow into a fetus. The uterus lies in the pelvic cavity between the bladder and the bowel, and above the vagina. It is usually positioned at a 90-degree angle, tilting forward, although in about 20% of women it tilts backwards. The uterus has three layers; the innermost layer is the endometrium, where the egg is implanted. During ovulation, this thickens for implantation. If implantation does not occur, it is sloughed off during menstruation.
The cervix is the narrow end of the uterus. The broad part of the uterus is the fundus. During ovulation, the ovum travels down the fallopian tubes to the uterus. These tubes extend from both sides of the uterus. Finger-like projections at the ends of the tubes brush the ovaries and receive the ovum once it is released. The ovum then travels for three to four days to the uterus. After sexual intercourse, sperm swim up this funnel from the uterus. The lining of the tube and its secretions sustain the egg and the sperm, encouraging fertilization and nourishing the ovum until it reaches the uterus. If the ovum divides after fertilization, identical twins are produced. If separate eggs are fertilized by different sperm, the mother gives birth to non-identical or fraternal twins. The ovaries (female gonads) develop from the same embryonic tissue as the testicles. The ovaries are suspended by ligaments and are where ova are stored and develop before ovulation. The ovaries also produce the female hormones progesterone and estrogen. Within the ovaries, each ovum is surrounded by other cells and contained within a capsule called a primary follicle. At puberty, one or more of these follicles are stimulated to mature on a monthly basis. Once matured, these are called Graafian follicles. The female reproductive system does not produce new ova; about 60,000 ova are present at birth, only 400 of which will mature during the woman's lifetime. Ovulation is based on a monthly cycle; the 14th day is the most fertile. On days one to four, menstruation takes place, production of estrogen and progesterone decreases, and the endometrium starts thinning. The endometrium is sloughed off for the next three to six days. Once menstruation ends, the cycle begins again with an FSH surge from the pituitary gland. Days five to thirteen are known as the pre-ovulatory stage. During this stage, the pituitary gland secretes follicle-stimulating hormone (FSH). A negative feedback loop is enacted when estrogen is secreted to inhibit the release of FSH. Estrogen thickens the endometrium of the uterus. A surge of luteinizing hormone (LH) triggers ovulation. On day 14, the LH surge causes a Graafian follicle to rise to the surface of the ovary. The follicle ruptures and the ripe ovum is expelled into the abdominal cavity. The fallopian tubes pick up the ovum with the fimbria. The cervical mucus changes to aid the movement of sperm. On days 15 to 28, the post-ovulatory stage, the ruptured Graafian follicle, now called the corpus luteum, secretes estrogen. Production of progesterone increases, inhibiting LH release. The endometrium thickens to prepare for implantation, and the ovum travels down the fallopian tubes to the uterus. If the ovum is not fertilized and does not implant, menstruation begins. Sexual response cycle The sexual response cycle is a model that describes the physiological responses that occur during sexual activity. This model was created by William Masters and Virginia Johnson. According to Masters and Johnson, the human sexual response cycle consists of four phases: excitement, plateau, orgasm, and resolution, also called the EPOR model. During the excitement phase of the EPOR model, one attains the intrinsic motivation to have sex. The plateau phase is the precursor to orgasm, which may be mostly biological for men and mostly psychological for women. Orgasm is the release of tension, and the resolution period is the unaroused state before the cycle begins again.
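The idealized 28-day cycle outlined above can be restated as a small illustrative sketch in Python. The function below only summarizes the textbook day ranges given in this section (days 1 to 4, 5 to 13, 14, and 15 to 28); it is not a clinical model, and actual cycle lengths and phase boundaries vary considerably between individuals and between cycles:

```python
def cycle_phase(day: int) -> str:
    """Map a day (1-28) of an idealized menstrual cycle to its phase.

    A teaching sketch of the textbook 28-day outline above, not a clinical
    model: real cycle lengths and phase boundaries vary between individuals.
    """
    if not 1 <= day <= 28:
        raise ValueError("day must be between 1 and 28 for this idealized cycle")
    if day <= 4:
        return "menstruation (estrogen and progesterone production falls)"
    if day <= 13:
        return "pre-ovulatory stage (FSH secreted, endometrium thickens)"
    if day == 14:
        return "ovulation (LH surge releases the mature ovum)"
    return "post-ovulatory stage (corpus luteum forms, progesterone rises)"

print(cycle_phase(14))  # idealized day of ovulation
```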
The male sexual response cycle starts in the excitement phase; two centers in the spine are responsible for erections. Vasocongestion in the penis begins, the heart rate increases, the scrotum thickens, the spermatic cord shortens, and the testicles become engorged with blood. In the plateau phase, the penis increases in diameter, the testicles become more engorged, and the Cowper's glands secrete pre-seminal fluid. The orgasm phase, during which rhythmic contractions occur every 0.8 seconds, consists of two phases: the emission phase, in which contractions of the vas deferens, prostate, and seminal vesicles prepare for ejaculation, and the expulsion phase, in which the semen is ejaculated; expulsion cannot be reached without orgasm. In the resolution phase, the male is in an unaroused state consisting of a refractory (rest) period before the cycle can begin again. This rest period may increase with age. The female sexual response begins with the excitement phase, which can last from several minutes to several hours. Characteristics of this phase include increased heart and respiratory rate and an elevation of blood pressure. Flushed skin or blotches of redness may occur on the chest and back; breasts increase slightly in size and nipples may become hardened and erect. The onset of vasocongestion results in swelling of the clitoris, labia minora, and vagina. The muscle that surrounds the vaginal opening tightens, and the uterus elevates and grows in size. The vaginal walls begin to produce a lubricating liquid. The second phase, called the plateau phase, is characterized primarily by the intensification of the changes begun during the excitement phase. The plateau phase extends to the brink of orgasm, which initiates the resolution stage, the reversal of the changes begun during the excitement phase. During the orgasm stage the heart rate, blood pressure, muscle tension, and breathing rates peak. The pelvic muscle near the vagina, the anal sphincter, and the uterus contract. Muscle contractions in the vaginal area create a high level of pleasure, though all orgasms are centered in the clitoris. Sexual dysfunction and sexual problems Sexual disorders, according to the DSM-IV-TR, are disturbances in sexual desire and in the psycho-physiological changes that characterize the sexual response cycle and that cause marked distress and interpersonal difficulty. Sexual dysfunctions are a result of physical or psychological disorders. Physical causes include hormonal imbalance, diabetes, and heart disease, among others; psychological causes include, but are not limited to, stress, anxiety, and depression. Sexual dysfunction affects both men and women. There are four major categories of sexual problems in women: desire disorders, arousal disorders, orgasmic disorders, and sexual pain disorders. Sexual desire disorder occurs when an individual lacks sexual desire because of factors such as hormonal changes, depression, or pregnancy. Arousal disorder is a female sexual dysfunction leading to a lack of vaginal lubrication; in addition, blood flow problems may contribute to it. Lack of orgasm, also known as anorgasmia, is another sexual dysfunction in women. The last category is painful intercourse, which can be caused by factors including a pelvic mass, scar tissue, and sexually transmitted infections. Three common sexual disorders in men are sexual desire disorder, ejaculation disorder, and erectile dysfunction.
Lack of sexual desire in men may be caused by physical issues like low testosterone or psychological factors such as anxiety and depression. Ejaculation disorders include retrograde ejaculation, retarded ejaculation, and premature ejaculation. Erectile dysfunction is an inability to initiate and maintain an erection during intercourse. Psychological aspects As one form of behavior, the psychological aspects of sexual expression have been studied in the context of emotional involvement, gender identity, intersubjective intimacy, and Darwinian reproductive efficacy. Sexuality in humans generates profound emotional and psychological responses. Some theorists identify sexuality as the central source of human personality. Psychological studies of sexuality focus on psychological influences that affect sexual behavior and experiences. Early psychological analyses were carried out by Sigmund Freud, who believed in a psychoanalytic approach. He also proposed the concepts of psychosexual development and the Oedipus complex, among other theories. Gender identity is a person's sense of their own gender, whether male, female, or non-binary. Gender identity can correlate with assigned sex at birth or can differ from it. All societies have a set of gender categories that can serve as the basis of the formation of a person's social identity in relation to other members of society. Sexual behavior and intimate relationships are strongly influenced by a person's sexual orientation. Sexual orientation is an enduring pattern of romantic or sexual attraction (or a combination of these) to persons of the opposite sex, same sex, or both sexes. Heterosexual people are romantically/sexually attracted to the members of the opposite sex, gay and lesbian people are romantically/sexually attracted to people of the same sex, and those who are bisexual are romantically/sexually attracted to both sexes. The idea that homosexuality results from reversed gender roles is reinforced by the media's portrayal of gay men as feminine and lesbians as masculine. However, a person's conformity or non-conformity to gender stereotypes does not always predict sexual orientation. Society believes that if a man is masculine, he is heterosexual, and if a man is feminine, he is homosexual. There is no strong evidence that a homosexual or bisexual orientation must be associated with atypical gender roles. By the early 21st century, homosexuality was no longer considered to be a pathology. Theories have linked many factors, including genetic, anatomical, birth order, and hormones in the prenatal environment, to homosexuality. Other than the need to procreate, there are many other reasons people have sex. According to one study conducted on college students (Meston & Buss, 2007), the four main reasons for sexual activities are physical attraction, as a means to an end, to increase emotional connection, and to alleviate insecurity. Sexuality and age Child sexuality Until Sigmund Freud published his Three Essays on the Theory of Sexuality in 1905, children were often regarded as asexual, having no sexuality until later development. Sigmund Freud was one of the first researchers to take child sexuality seriously. His ideas, such as psychosexual development and the Oedipus conflict, have been much debated but acknowledging the existence of child sexuality was an important development. 
Freud gave sexual drives an importance and centrality in human life, actions, and behavior; he said sexual drives exist and can be discerned in children from birth. He explains this in his theory of infantile sexuality, and says sexual energy (libido) is the most important motivating force in adult life. Freud wrote about the importance of interpersonal relationships to one's sexual and emotional development. From birth, the mother's connection to the infant affects the infant's later capacity for pleasure and attachment. Freud described two currents of emotional life; an affectionate current, including our bonds with the important people in our lives; and a sensual current, including our wish to gratify sexual impulses. During adolescence, a young person tries to integrate these two emotional currents. Alfred Kinsey also examined child sexuality in his Kinsey Reports. Children are naturally curious about their bodies and sexual functions. For example, they wonder where babies come from, they notice the differences between males and females, and many engage in genital play, which is often mistaken for masturbation. Child sex play, also known as playing doctor, includes exhibiting or inspecting the genitals. Many children take part in some sex play, typically with siblings or friends. Sex play with others usually decreases as children grow, but they may later possess romantic interest in their peers. Curiosity levels remain high during these years, but the main surge in sexual interest occurs in adolescence. Sexuality in late adulthood Adult sexuality originates in childhood. However, like many other human capacities, sexuality is not fixed, but matures and develops. A common stereotype associated with old people is that they tend to lose interest and the ability to engage in sexual acts once they reach late adulthood. This misconception is reinforced by Western popular culture, which often ridicules older adults who try to engage in sexual activities. Age does not necessarily change the need or desire to be sexually expressive or active. A couple in a long-term relationship may find that the frequency of their sexual activity decreases over time and the type of sexual expression may change, but feelings of intimacy may continue to grow and develop over time. Sociocultural aspects Human sexuality can be understood as part of the social life of humans, which is governed by implied rules of behavior and the status quo. This narrows the view to groups within a society. The socio-cultural context of society, including the effects of politics and the mass media, influences and forms social norms. Throughout history, social norms have been changing and continue to change as a result of movements such as the sexual revolution and the rise of feminism. Sex education The age and manner in which children are informed of issues of sexuality is a matter of sex education. The school systems in almost all developed countries have some form of sex education, but the nature of the issues covered varies widely. In some countries, such as Australia and much of Europe, age-appropriate sex education often begins in pre-school, whereas other countries leave sex education to the pre-teenage and teenage years. Sex education covers a range of topics, including the physical, mental, and social aspects of sexual behavior. Communities have differing opinions on the appropriate age for children to learn about sexuality. 
According to Time magazine and CNN, 74% of teenagers in the United States reported that their major sources of sexual information were their peers and the media, compared to 10% who named their parents or a sex education course. In the United States, some sex education programs encourage abstinence-only, the choice to restrain oneself from sexual activity. In contrast, comprehensive sex education aims to encourage students to take charge of their own sexuality and know how to have safe, healthy, and pleasurable sex if and when they choose to do so. Proponents for an abstinence-only education believe that teaching a comprehensive curriculum would encourage teenagers to have sex, while proponents for comprehensive sex education argue that many teenagers will have sex regardless and should be equipped with knowledge of how to have sex responsibly. According to data from the National Longitudinal Survey of Youth, many teens who intend to be abstinent fail to do so, and when these teenagers do have sex, many do not use safe sex practices such as contraceptives. Sexuality in history Sexuality has been an important, vital part of human existence throughout history. All civilizations have managed sexuality through sexual standards, representations, and behavior. Before the rise of agriculture, groups of hunter-gatherers and nomadic groups inhabited the world. These groups had less restrictive sexual standards that emphasized sexual pleasure and enjoyment, but with definite rules and constraints. Some underlying continuities or key regulatory standards contended with the tension between recognition of pleasure, interest, and the need to procreate for the sake of social order and economic survival. Hunter-gatherers also placed high value on certain types of sexual symbolism. A common tension in hunter-gatherer societies is expressed in their art, which emphasized male sexuality and prowess, but also blurred gender lines in sexual matters. One example of these male-dominated portrayals is the Egyptian creation myth, in which the sun god Atum masturbates in the water, creating the Nile River. In Sumerian myth, the gods' semen filled the Tigris. Once agricultural societies emerged, the sexual framework shifted in ways that persisted for many millennia in much of Asia, Africa, Europe, and parts of the Americas. One common characteristic new to these societies was the collective supervision of sexual behavior due to urbanization and the growth of population and population density. Children would commonly witness parents having sex because many families shared the same sleeping quarters. Due to land ownership, determination of children's paternity became important, and society and family life became patriarchal. These changes in sexual ideology were used to control female sexuality and to differentiate standards by gender. With these ideologies, sexual possessiveness and increases in jealousy emerged. While retaining the precedents of earlier civilizations, each classical civilization established a somewhat distinctive approach to gender, artistic expression of sexual beauty, and to behaviors such as homosexuality. Some of these distinctions are portrayed in sex manuals, which were also common among civilizations in China, Greece, Rome, Persia, and India; each has its own sexual history. Before the High Middle Ages, homosexual acts appear to have been ignored or tolerated by the Christian church. 
During the 12th century, hostility toward homosexuality began to spread throughout religious and secular institutions. By the end of the 19th century, it was viewed as a pathology. During the beginning of the Industrial Revolution of the 18th and 19th centuries, many changes in sexual standards occurred. New artificial birth control devices such as the condom and diaphragm were introduced. Doctors started claiming a new role in sexual matters, urging that their advice was crucial to sexual morality and health. New pornographic industries grew, and Japan adopted its first laws against homosexuality. In Western societies, the definition of homosexuality was constantly changing, and Western influence on other cultures became more prevalent. New contacts created serious issues around sexuality and sexual traditions. There were also major shifts in sexual behavior. During this period, puberty began occurring at younger ages, so a new focus on adolescence as a time of sexual confusion and danger emerged. There was a new focus on the purpose of marriage; it was increasingly regarded as being for love rather than only for economics and reproduction. Havelock Ellis and Sigmund Freud adopted more accepting stances toward homosexuality; Ellis argued that homosexuality was inborn and therefore not immoral, that it was not a disease, and that many homosexuals made significant contributions to society. Freud wrote that all human beings are capable of becoming either heterosexual or homosexual; neither orientation was assumed to be innate. According to Freud, a person's orientation depended on the resolution of the Oedipus complex. He said male homosexuality resulted when a young boy had an authoritarian, rejecting mother and turned to his father for love and affection, and later to men in general. He said female homosexuality developed when a girl loved her mother and identified with her father and became fixated at that stage. Alfred Kinsey initiated the modern era of sex research. He collected data from questionnaires given to his students at Indiana University, but then switched to personal interviews about sexual behaviors. Kinsey and his colleagues sampled 5,300 men and 5,940 women. He found that most people masturbated, that many engaged in oral sex, that women are capable of having multiple orgasms, and that many men had had some type of homosexual experience in their lifetimes. Before the work of William Masters, a physician, and Virginia Johnson, a behavioral scientist, the anatomical and physiological study of sex was still largely limited to experiments with laboratory animals. Masters and Johnson began to directly observe and record the physical responses of humans engaged in sexual activity in laboratory settings. They observed 10,000 episodes of sexual acts between 312 men and 382 women. This led to methods of treating clinical problems and abnormalities. Masters and Johnson opened the first sex therapy clinic in 1965. In 1970, they described their therapeutic techniques in their book, Human Sexual Inadequacy. The first edition of the Diagnostic and Statistical Manual of Mental Disorders, published by the American Psychiatric Association, classified homosexuality as a mental illness, and more specifically, a "sociopathic personality disturbance". This definition remained the professional understanding of homosexuality until 1973, when the American Psychiatric Association removed homosexuality from its list of diagnoses for mental disorders.
Through her research on heterosexual and homosexual men, Evelyn Hooker revealed that there was no correlation between homosexuality and psychological maladjustment, and her findings played a pivotal role in shifting the scientific community away from the perspective that homosexuality was something that needed to be treated or cured. Sexuality, colonialism, and race European conquerors and colonists found that many non-European cultures had expressions of sexuality and gender which differed from European notions of heterosexual cisnormativity. These included transgender practices. In 1516, Vasco Núñez de Balboa, a Spanish explorer, encountered indigenous people in Central America, among whom were several men who dressed like women and had sex with each other; he had forty of these men fed to his dogs for their non-gender-conforming behaviors and sexuality. In North America and the United States, Europeans have used claims of sexual immorality to justify discrimination against racial and ethnic minorities. Scholars also study the ways in which colonialism has affected sexuality today and argue that, due to racism and slavery, it has been dramatically changed from the way it had previously been understood. In her book Carnal Knowledge and Imperial Power: Gender, Race, and Morality in Colonial Asia, Ann Laura Stoler investigates how the Dutch colonists used sexual control and gender-specific sexual sanctions to distinguish the rulers from the ruled and enforce colonial domination over the people of Indonesia. In America, 155 Native tribes are recorded as having embraced two-spirit people within their communities, but the total number of tribes could be greater than what is documented. Two-spirit people were, and still are, members of communities who do not fall under Western gender categories of male and female, but rather under a "third gender" category. This system of gender contradicts both the gender binary and the assertion that sex and gender are the same. Instead of conforming to traditional roles of men and women, two-spirit people fill a special niche in their communities; for example, they are commonly revered for possessing special wisdom and spiritual powers. Two-spirit people can also take part in marriages, whether monogamous or polygamous. Historically, European colonizers perceived relationships involving two-spirit people as homosexuality, and therefore believed in the moral inferiority of native people. In reaction, colonizers began to impose their own religious and social norms on indigenous communities, diminishing the role of two-spirit people in native cultures. Within reservations, the Religious Crime Code of the 1880s explicitly aimed to "aggressively attack Native sexual and marriage practices". The goal of colonizers was for native peoples to assimilate into Euro-American ideals of family, sexuality, gender expression, and more. The link between constructed sexual meanings and racial ideologies has been studied. According to Joane Nagel, sexual meanings are constructed to maintain racial-ethnic-national boundaries by the denigration of "others" and regulation of sexual behavior within the group. She writes, "both adherence to and deviation from such approved behaviors, define and reinforce racial, ethnic, and nationalist regimes".
In the United States, people of color face the effects of colonialism in different ways, with stereotypes such as the Mammy and the Jezebel for Black women, the lotus blossom and the dragon lady for Asian women, and the spicy Latina. These stereotypes contrast with standards of sexual conservatism, creating a dichotomy that dehumanizes and demonizes the stereotyped groups. An example of a stereotype that lies at the intersection of racism, classism, and misogyny is the archetype of the welfare queen. Cathy Cohen describes how the welfare queen stereotype demonizes poor black single mothers for deviating from conventions surrounding family structure. Reproductive and sexual rights Reproductive and sexual rights encompass the concept of applying human rights to issues related to reproduction and sexuality. This concept is a modern one, and remains controversial since it deals, directly and indirectly, with issues such as contraception, LGBT rights, abortion, sex education, freedom to choose a partner, freedom to decide whether to be sexually active or not, the right to bodily integrity, and freedom to decide whether or not, and when, to have children. These are all global issues that exist in all cultures to some extent, but they manifest differently depending on the specific context. According to the Swedish government, "sexual rights include the right of all people to decide over their own bodies and sexuality" and "reproductive rights comprise the right of individuals to decide on the number of children they have and the intervals at which they are born." Such rights are not accepted in all cultures; practices such as the criminalization of consensual sexual activities (for example, homosexual acts and sexual acts outside marriage), acceptance of forced marriage and child marriage, failure to criminalize all non-consensual sexual encounters (such as marital rape), female genital mutilation, and restricted availability of contraception remain common around the world. Stigma of contraceptives in the U.S. In 1915, Emma Goldman and Margaret Sanger, leaders of the birth control movement, began to spread information regarding contraception in opposition to laws, such as the Comstock Law, that demonized it. One of their main purposes was to assert that the birth control movement was about empowering women with personal reproductive and economic freedom, particularly those who could not afford to parent a child or simply did not want one. Goldman and Sanger saw it as necessary to educate people, since contraceptives were quickly being stigmatized as a population-control tactic simply because they limited births, a framing that disregarded ecological, political, and larger economic conditions. This stigma targeted lower-class women, who had the most need of access to contraception. Birth control finally began to lose stigma in 1936, when the ruling in U.S. v. One Package declared that prescribing contraception to save a person's life or well-being was no longer illegal under the Comstock Law. Although opinions varied on when birth control should be available to women, by 1938 there were 347 birth control clinics in the United States, but advertising their services remained illegal. The stigma continued to lose credibility as First Lady Eleanor Roosevelt publicly showed her support for birth control through the four terms her husband served (1933–1945).
However, it was not until 1966 that the federal government began to fund family planning and subsidized birth control services for lower-class women and families, at the order of President Lyndon B. Johnson. This funding continued after 1970 under the Family Planning Services and Population Research Act. Today, all Health Insurance Marketplace plans are required to cover all forms of contraception, including sterilization procedures, as a result of the Affordable Care Act signed by President Barack Obama in 2010. Stigma and activism during the AIDS epidemic In 1981, doctors diagnosed the first reported cases of AIDS in America. The disease disproportionately affected, and continues to affect, gay and bisexual men, especially black and Latino men. The Reagan administration has been criticized for its apathy towards the AIDS epidemic, and audio recordings reveal that Ronald Reagan's press secretary Larry Speakes viewed the epidemic as a joke, mocking AIDS by calling it the "gay plague". The epidemic also carried stigma coming from religious influences. For example, Cardinal Krol said that AIDS was "an act of vengeance against the sin of homosexuality", which clarifies the specific meaning behind the pope's mention of "the moral source of AIDS." Activism during the AIDS crisis focused on promoting safe sex practices to raise awareness that the disease could be prevented. The "Safe Sex is Hot Sex" campaign, for example, aimed to promote the use of condoms. Campaigns by the U.S. government, however, diverged from advocacy of safe sex. In 1987, Congress even denied federal funding to awareness campaigns that "[promoted] or [encouraged], directly or indirectly, homosexual activities". Instead, campaigns by the government primarily relied on scare tactics in order to instill fear in men who had sex with other men. In addition to prevention campaigns, activists also sought to counteract narratives that led to the "social death" of people living with AIDS. Gay men from San Francisco and New York City created the Denver Principles, a foundational document that demanded the rights, agency, and dignity of people living with AIDS. In his article "Emergence of Gay Identity and Gay Social Movements in Developing Countries", Matthew Roberts discusses how international AIDS prevention campaigns created opportunities for gay men to interact with other openly gay men from other countries. These interactions allowed Western gay "culture" to be introduced to gay men in countries where homosexuality was not an important identifier. Thus, group organizers self-identified as gay more and more, creating the basis for further development of gay consciousness in different countries.
In a long-term study of 3,500 people between the ages of 30 and 101, clinical neuropsychologist David Weeks, MD, head of old-age psychology at the Royal Edinburgh Hospital in Scotland, reported finding that "sex helps you look between four and seven years younger", according to impartial ratings of the subjects' photographs. Exclusive causation, however, is unclear, and the benefits may be indirectly related to sex and directly related to the significant reductions in stress, greater contentment, and better sleep that sex promotes. Sexual intercourse can also be a disease vector. There are 19 million new cases of sexually transmitted infections (STIs) every year in the U.S., and worldwide there are over 340 million sexually transmitted infections each year. More than half of these occur in adolescents and young adults aged 15–24 years. At least one in four US teenage girls has a sexually transmitted infection. In the U.S., about 30% of 15- to 17-year-olds have had sexual intercourse, but only about 80% of 15- to 19-year-olds report using condoms for their first sexual intercourse. In one study, more than 75% of young women aged 18–25 years felt they were at low risk of acquiring an STI. Creating a relationship People both consciously and subconsciously seek to attract others with whom they can form deep relationships. This may be for companionship, procreation, or an intimate relationship. This involves interactive processes whereby people find and attract potential partners and maintain a relationship. These processes, which involve attracting one or more partners and maintaining sexual interest, can include:
Flirting, the use of indirect behavior to convey romantic or sexual interest. It can involve verbal or non-verbal cues, such as sexual comments, body language, gazing, or physical closeness to another, but non-verbal flirting is more common. Flirting is a socially accepted way of attracting someone. There are different types of flirting, and most people usually have one way of flirting that makes them most comfortable. When flirting, people can be polite, playful, physical, and so on. Sometimes it is difficult to know whether or not the person is interested. Non-verbal flirting allows people to test another's interest without fear of direct rejection. Flirting styles vary according to culture; different cultures have different social etiquette regarding, for example, the length of eye contact or how closely one stands to someone.
Seduction, the process whereby one person deliberately entices another to engage in sexual behavior. This behavior is one that the person being seduced would not usually engage in unless sexually aroused. Seduction can be seen as both positive and negative. Since the word seduction derives from a Latin root meaning "to lead astray", it can be viewed negatively.
It can also be influenced by individual genetic, psychological, or cultural factors, or by other, more amorphous qualities of the person. Sexual attraction is also a response to another person that depends both on the person possessing the traits and on the criteria of the person who is attracted. Though attempts have been made to devise objective criteria of sexual attractiveness and measure it as one of several bodily forms of capital asset (see erotic capital), a person's sexual attractiveness is to a large extent a subjective measure dependent on another person's interest, perception, and sexual orientation. For example, a gay or lesbian person would typically find a person of the same sex to be more attractive than one of the other sex. A bisexual person would find either sex to be attractive. In addition, there are asexual people, who usually do not experience sexual attraction for either sex, though they may have romantic attraction (homoromantic, biromantic or heteroromantic). Interpersonal attraction includes factors such as physical or psychological similarity, familiarity or a preponderance of common or familiar features, complementarity, reciprocal liking, and reinforcement. The ability of a person's physical and other qualities to create a sexual interest in others is the basis of their use in advertising, music videos, pornography, film, and other visual media, as well as in modeling, sex work and other occupations. Legal issues Globally, laws regulate human sexuality in several ways, including criminalizing particular sexual behaviors, granting individuals the privacy or autonomy to make their own sexual decisions, protecting individuals with regard to equality and non-discrimination, recognizing and protecting other individual rights, as well as legislating matters regarding marriage and the family, and creating laws protecting individuals from violence, harassment, and persecution. In the United States, there are two fundamentally different approaches, applied in different states, regarding the way the law is used to attempt to govern a person's sexuality. The "black letter" approach to law focuses on the study of pre-existing legal precedent and attempts to offer a clear framework of rules within which lawyers and others can work. In contrast, the socio-legal approach focuses more broadly on the relationship between the law and society, and offers a more contextualized view of the relationship between legal and social change. Issues regarding human sexuality and human sexual orientation came to the forefront in Western law in the latter half of the twentieth century, as part of the gay liberation movement's encouragement of LGBT individuals to "come out of the closet" and engage with the legal system, primarily through the courts. Therefore, many issues regarding human sexuality and the law are found in the opinions of the courts. Sexual privacy While the issue of privacy has been useful to sexual rights claims, some scholars have criticized its usefulness, saying that this perspective is too narrow and restrictive. The law is often slow to intervene in certain forms of coercive behavior that can limit individuals' control over their own sexuality (such as female genital mutilation, forced marriages or lack of access to reproductive health care). 
Many of these injustices are perpetuated wholly or in part by private individuals rather than state agents, and as a result there is an ongoing debate about the extent of state responsibility to prevent harmful practices and to investigate such practices when they do occur. State intervention with regard to sexuality also occurs, and is considered acceptable by some, in certain instances (e.g. same-sex sexual activity or prostitution). The legal systems surrounding prostitution are a topic of debate. Proponents of criminalization argue that sex work is an immoral practice that should not be tolerated, while proponents of decriminalization point out that criminalization does more harm than good. Within the feminist movement, there is also a debate over whether sex work is inherently objectifying and exploitative or whether sex workers have the agency to sell sex as a service. When sex work is criminalized, sex workers do not have support from law enforcement when they fall victim to violence. In a 2003 survey of street-based sex workers in NYC, 80% said they had been threatened with or experienced violence, and many said the police were of no help; 27% said they had experienced violence from police officers themselves. Different identities such as being black, transgender, or poor can result in a person being more likely to be criminally profiled by the police. For example, in New York there is a law against "loitering for the purpose of engaging in prostitution", which has been nicknamed the "walking while trans" law because of how often transgender women are assumed to be sex workers and arrested simply for walking in public. Religious sexual morality In some religions, sexual behavior is regarded as primarily spiritual. In others it is treated as primarily physical. Some hold that sexual behavior is only spiritual within certain kinds of relationships, when used for specific purposes, or when incorporated into religious ritual. In some religions there are no distinctions between the physical and the spiritual, whereas some religions view human sexuality as a way of bridging the gap between the spiritual and the physical. Many religious conservatives, especially those of Abrahamic religions and Christianity in particular, tend to view sexuality in terms of behavior (i.e. homosexuality or heterosexuality is what someone does). These conservatives tend to promote celibacy for gay people, and may also tend to believe that sexuality can be changed through conversion therapy or prayer to become an ex-gay. They may also see homosexuality as a form of mental illness, something that ought to be criminalized, an immoral abomination, caused by ineffective parenting, and view same-sex marriage as a threat to society. On the other hand, most religious liberals define sexuality-related labels in terms of sexual attraction and self-identification. They may also view same-sex activity as morally neutral and as legally acceptable as opposite-sex activity, unrelated to mental illness, genetically or environmentally caused (but not as the result of bad parenting), and fixed. They also tend to be more in favor of same-sex marriage. Judaism According to Judaism, sex between a man and woman within marriage is sacred and should be regularly enjoyed; celibacy is considered sinful. Christianity Early Christianity Desire, including sexual desire and lust, was considered immoral and sinful, according to some authors. 
Elaine Pagels says, "By the beginning of the fifth century, Augustine had actually declared that spontaneous sexual desire is the proof of—and penalty for—universal original sin", though she notes that this view goes against that of "most of his Christian predecessors". According to Jennifer Wright Knust, Paul framed desire as a force that Christians gained control over, whereas non-Christians were "enslaved" by it; he also said that the bodies of Christians were members of Christ's body and thus sexual desire must be eschewed. Roman Catholic Church The Roman Catholic Church teaches that sexuality is "noble and worthy" and has a unitive and procreative end. For this reason, sexual activity should ideally occur in the context of a marriage between a man and a woman that is open to the possibility of life. In Amoris laetitia, Pope Francis teaches against "an attitude that would solve everything by applying general rules or deriving undue conclusions from particular theological considerations". He also warns that "not all discussions of doctrinal, moral or pastoral issues need to be settled by interventions of the magisterium" and that "We have been called to form consciences, not to replace them." The church has authoritative teachings on sexuality, found in the catechism, and it places a primacy on conscience especially with regard to the regulation of births. Anglicanism The Anglican Church teaches that human sexuality is a gift from a loving God designed to be between a man and a woman in a monogamous lifetime union of marriage. It also views singleness and dedicated celibacy as Christ-like. It states that people with same-sex attraction are loved by God and are welcomed as full members of the Body of Christ, while the Church leadership has a variety of views in regard to homosexual expression and ordination. Some expressions of sexuality are considered sinful, including "promiscuity, prostitution, incest, pornography, pedophilia, predatory sexual behavior, and sadomasochism (all of which may be heterosexual and homosexual), adultery, violence against wives, and female circumcision". The Church is concerned with pressures on young people to engage sexually and encourages abstinence. Evangelicalism In matters of sexuality, several Evangelical churches promote the virginity pledge among young Evangelical Christians, who are invited to commit themselves during a public ceremony to sexual abstinence until Christian marriage. This pledge is often symbolized by a purity ring. In evangelical churches, young adults and unmarried couples are encouraged to marry early in order to live out their sexuality according to the will of God. Although some churches are discreet on the subject, other evangelical churches in the United States and Switzerland speak of a satisfying sexuality as a gift from God and a component of a harmonious Christian marriage, in messages during worship services or conferences. Many evangelical books and websites specialize in the subject. The perceptions of homosexuality in the Evangelical Churches are varied, ranging from liberal through moderate to conservative. Christian marriage is presented by some churches as a protection against sexual misconduct and as a compulsory step for obtaining a position of responsibility in the church. This view, however, has been challenged by numerous sex scandals involving married evangelical leaders. 
Finally, some evangelical theologians have pointed out that celibacy should be more highly valued in the Church today, since the gift of celibacy was taught and lived by Jesus Christ and Paul of Tarsus. Islam In Islam, desire for sex is considered to be a natural urge that should not be suppressed, although the concept of free sex is not accepted; these urges should be fulfilled responsibly. Marriage is considered to be a good deed; it does not hinder spiritual wayfaring. The term used for marriage within the Quran is nikah. Although Islamic sexuality is restrained via Islamic sexual jurisprudence, it emphasizes sexual pleasure within marriage. It is acceptable for a man to have more than one wife, but he must take care of those wives physically, mentally, emotionally, financially, and spiritually. Muslims believe that sexual intercourse is an act of worship that fulfils emotional and physical needs, and that producing children is one way in which humans can contribute to God's creation; Islam discourages celibacy once an individual is married. However, homosexuality is strictly forbidden in Islam, and some Muslim jurists have suggested that gay people should be put to death. Some have argued that Islam has an open and playful approach to sex so long as it is within marriage, free of lewdness, fornication and adultery. Hinduism Hinduism emphasizes that sex is only appropriate between husband and wife, in which satisfying sexual urges through sexual pleasure is an important duty of marriage. Sex before marriage is considered to interfere with intellectual development, especially during the period from birth to the age of 25, which is said to be the stage of brahmacharya, and is therefore to be avoided. Kama (sensual pleasures) is one of the four purusharthas or aims of life (dharma, artha, kama, and moksha). The Hindu Kama Sutra deals partially with sexual intercourse; it is not exclusively a sexual or religious work. Sikhism Sikhism views chastity as important, as Sikhs believe that the divine spark of Waheguru is present inside every individual's body, so it is important to keep oneself clean and pure. Sexual activity is limited to married couples, and extramarital sex is forbidden. Marriage is seen as a commitment to Waheguru and should be viewed as part of spiritual companionship rather than just sexual intercourse, and monogamy is deeply emphasized in Sikhism. Any other way of living is discouraged, including celibacy and homosexuality. However, in comparison to other religions, the issue of sexuality in Sikhism is not considered one of paramount importance. 
Stress (biology)
Stress, whether physiological, biological or psychological, is an organism's response to a stressor such as an environmental condition. When stressed by stimuli that alter an organism's environment, multiple systems respond across the body. In humans and most mammals, the autonomic nervous system and hypothalamic-pituitary-adrenal (HPA) axis are the two major systems that respond to stress. Two well-known hormones that humans produce during stressful situations are adrenaline and cortisol. The sympathoadrenal medullary (SAM) axis may activate the fight-or-flight response through the sympathetic nervous system, which directs energy to the bodily systems most relevant to acute adaptation to stress, while the parasympathetic nervous system returns the body to homeostasis. The second major physiological stress-response center, the HPA axis, regulates the release of cortisol, which influences many bodily functions such as metabolic, psychological and immunological functions. The SAM and HPA axes are regulated by several brain regions, including the limbic system, prefrontal cortex, amygdala, hypothalamus, and stria terminalis. Through these mechanisms, stress can alter memory functions, reward, immune function, metabolism and susceptibility to diseases. Disease risk is particularly pertinent to mental illnesses, whereby chronic or severe stress remains a common risk factor for several mental illnesses. Psychology Acute situations in which the stress experienced is severe can cause psychological changes detrimental to the well-being of the individual, such as symptomatic derealization and depersonalization, anxiety, and hyperarousal. The International Classification of Diseases includes a group of mental and behavioral disorders which have their aetiology in reaction to severe stress and the consequent adaptive response. Chronic stress, combined with a lack of coping resources available to or used by an individual, can often lead to the development of psychological issues such as delusions, depression and anxiety (see below for further information). Chronic stress also causes brain atrophy, which is the loss of neurons and the connections between them. It affects the parts of the brain that are important for learning, responding to stressors, and cognitive flexibility. Chronic stressors may not be as intense as acute stressors such as a natural disaster or a major accident, but they persist over longer periods of time and tend to have a more negative effect on health because they are sustained and thus require the body's physiological response to occur daily. This depletes the body's energy more quickly and usually occurs over long periods of time, especially when these microstressors cannot be avoided (e.g. the stress of living in a dangerous neighborhood). See allostatic load for further discussion of the biological process by which chronic stress may affect the body. For example, studies have found that caregivers, particularly those of dementia patients, have higher levels of depression and slightly worse physical health than non-caregivers. When humans are under chronic stress, permanent changes in their physiological, emotional, and behavioral responses may occur. Chronic stress can include events such as caring for a spouse with dementia, or may result from brief focal events that have long-term effects, such as experiencing a sexual assault. 
Studies have also shown that psychological stress may directly contribute to the disproportionately high rates of coronary heart disease morbidity and mortality and its etiologic risk factors. Specifically, acute and chronic stress have been shown to raise serum lipids and are associated with clinical coronary events. However, it is possible for individuals to exhibit hardiness, a term referring to the ability to be both chronically stressed and healthy. Even though psychological stress is often connected with illness or disease, most healthy individuals can still remain disease-free after being confronted with chronic stressful events. This suggests that there are individual differences in vulnerability to the potential pathogenic effects of stress; individual differences in vulnerability arise due to both genetic and psychological factors. In addition, the age at which the stress is experienced can dictate its effect on health. Research suggests chronic stress at a young age can have lifelong effects on the biological, psychological, and behavioral responses to stress later in life. Etymology and historical usage The term "stress" had none of its contemporary connotations before the 1920s. It is a form of the Middle English destresse, derived via Old French from the Latin stringere, "to draw tight". The word had long been in use in physics to refer to the internal distribution of a force exerted on a material body, resulting in strain. In the 1920s and '30s, biological and psychological circles occasionally used "stress" to refer to a physiological or environmental perturbation that could cause physiological and mental "strain". The amount of strain produced in reaction to stress depends on the individual's resilience; excessive strain would appear as illness. Walter Cannon used the term in 1926 to refer to external factors that disrupted what he called homeostasis. But "...stress as an explanation of lived experience is absent from both lay and expert life narratives before the 1930s". Physiological stress represents a wide range of physical responses that occur as a direct effect of a stressor causing an upset in the homeostasis of the body. Upon immediate disruption of either psychological or physical equilibrium, the body responds by stimulating the nervous, endocrine, and immune systems. The reaction of these systems causes a number of physical changes that have both short- and long-term effects on the body. The Holmes and Rahe stress scale was developed as a method of assessing the risk of disease from life changes. The scale lists both positive and negative changes that elicit stress, including positive events such as a major holiday or marriage as well as negative events such as the death of a spouse or the loss of a job. 
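To make the scoring idea concrete, the sketch below (Python) adds up life-change units for events reported over the past year and maps the total to the commonly cited risk bands. The item weights shown are an illustrative subset of approximate published values, and the 150/300 cut-offs are the commonly quoted thresholds; this is a sketch of the scoring logic, not the full instrument.

# Illustrative Holmes-Rahe style scoring (approximate weights, not the full scale).
LIFE_CHANGE_UNITS = {
    "death of spouse": 100,
    "divorce": 73,
    "marital separation": 65,
    "personal injury or illness": 53,
    "marriage": 50,
    "dismissal from work": 47,
    "retirement": 45,
    "change in residence": 20,
    "vacation": 13,
}

def stress_score(events):
    """Sum the life-change units for the events reported over the past year."""
    return sum(LIFE_CHANGE_UNITS.get(event, 0) for event in events)

def risk_band(score):
    """Map a total score to the commonly cited illness-risk bands."""
    if score >= 300:
        return "high risk of stress-related illness"
    if score >= 150:
        return "moderate risk"
    return "low risk"

if __name__ == "__main__":
    reported = ["divorce", "dismissal from work", "change in residence"]
    total = stress_score(reported)
    print(total, "->", risk_band(total))  # 140 -> low risk

Note that both positive events (marriage, a vacation) and negative events carry weight, which is exactly the point the scale makes about life change in general eliciting stress.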
Biological need for equilibrium Homeostasis is a concept central to the idea of stress. In biology, most biochemical processes strive to maintain equilibrium (homeostasis), a steady state that exists more as an ideal and less as an achievable condition. Environmental factors, internal or external stimuli, continually disrupt homeostasis; an organism's present condition is a state of constant flux moving about a homeostatic point that is that organism's optimal condition for living. Factors causing an organism's condition to diverge too far from homeostasis can be experienced as stress. A life-threatening situation such as a major physical trauma or prolonged starvation can greatly disrupt homeostasis. On the other hand, an organism's attempt at restoring conditions back to or near homeostasis, often consuming energy and natural resources, can also be interpreted as stress. Under chronic stress the brain cannot sustain this equilibrium, and an ever-deepening physiological deficit accumulates. The ambiguity in defining this phenomenon was first recognized by Hans Selye (1907–1982) in 1926. In 1951 a commentator loosely summarized Selye's view of stress as something that "...in addition to being itself, was also the cause of itself, and the result of itself". First to use the term in a biological context, Selye continued to define stress as "the non-specific response of the body to any demand placed upon it". Neuroscientists such as Bruce McEwen and Jaap Koolhaas believe, based on years of empirical research, that stress "should be restricted to conditions where an environmental demand exceeds the natural regulatory capacity of an organism". The brain does not cope well with a harsh family environment; it needs a degree of stability in its relationships with other people. People who report being raised in harsh environments marked by verbal and physical aggression show greater immune dysfunction and greater metabolic dysfunction. Indeed, as early as 1995 Toates defined stress as a "chronic state that arises only when defense mechanisms are either being chronically stretched or are actually failing," while according to Ursin (1988) stress results from an inconsistency between expected events ("set value") and perceived events ("actual value") that cannot be resolved satisfactorily, which also puts stress into the broader context of cognitive-consistency theory. Biological background Stress can have many profound effects on the human biological systems. Biology primarily attempts to explain major concepts of stress using a stimulus-response paradigm, broadly comparable to how a psychobiological sensory system operates. The central nervous system (brain and spinal cord) plays a crucial role in the body's stress-related mechanisms. Whether one should interpret these mechanisms as the body's response to a stressor or as embodying the act of stress itself is part of the ambiguity in defining what exactly stress is. The central nervous system works closely with the body's endocrine system to regulate these mechanisms. The sympathetic nervous system becomes primarily active during a stress response, regulating many of the body's physiological functions in ways that ought to make an organism more adaptive to its environment. Below follows a brief biological background of neuroanatomy and neurochemistry and how they relate to stress. Stress, whether severe acute stress or chronic low-grade stress, may induce abnormalities in three principal regulatory systems in the body: serotonin systems, catecholamine systems, and the hypothalamic-pituitary-adrenocortical axis. Aggressive behavior has also been associated with abnormalities in these systems. Biology of stress Brain-endocrine interactions are relevant in the translation of stress into physiological and psychological changes. The autonomic nervous system (ANS), as mentioned above, plays an important role in translating stress into a response. The ANS responds reflexively both to physical stressors (for example baroreception) and to higher-level inputs from the brain. The ANS is composed of the parasympathetic nervous system and the sympathetic nervous system, two branches that are both tonically active with opposing activities. 
The ANS directly innervates tissue through postganglionic nerves, which are controlled by preganglionic neurons originating in the intermediolateral cell column. The ANS receives inputs from the medulla, hypothalamus, limbic system, prefrontal cortex, midbrain and monoamine nuclei. The activity of the sympathetic nervous system drives what is called the "fight or flight" response. The fight-or-flight response to emergency or stress involves mydriasis, increased heart rate and force of contraction, vasoconstriction, bronchodilation, glycogenolysis, gluconeogenesis, lipolysis, sweating, decreased motility of the digestive system, secretion of epinephrine from the adrenal medulla and cortisol from the adrenal cortex, and relaxation of the bladder wall. The parasympathetic response, "rest and digest", involves a return to homeostasis and includes miosis, bronchoconstriction, increased activity of the digestive system, and contraction of the bladder wall. Complex relationships between protective and vulnerability factors have been observed in the effect of childhood home stress on psychological illness, cardiovascular illness and adaptation. ANS-related mechanisms are thought to contribute to an increased risk of cardiovascular disease after major stressful events. The HPA axis is a neuroendocrine system that mediates a stress response. Neurons in the hypothalamus, particularly the paraventricular nucleus (PVN), release vasopressin and corticotropin-releasing hormone (CRH), which travel through the hypophysial portal vessels to the anterior pituitary gland, where they bind to the corticotropin-releasing hormone receptor. Multiple CRH peptides have been identified, and receptors have been identified in multiple areas of the brain, including the amygdala. CRH is the main molecule regulating the release of ACTH. The secretion of ACTH into the systemic circulation allows it to bind to and activate melanocortin receptors in the adrenal cortex, stimulating the release of steroid hormones. Steroid hormones bind to glucocorticoid receptors in the brain, providing negative feedback by reducing ACTH release. Some evidence supports a second, long-term feedback loop that is not sensitive to cortisol secretion. The PVN of the hypothalamus receives inputs from the nucleus of the solitary tract and the lamina terminalis, through which it can respond to changes in the blood. PVN innervation from the brainstem nuclei, particularly the noradrenergic nuclei, stimulates CRH release. Other regions of the hypothalamus both directly and indirectly inhibit HPA axis activity. Hypothalamic neurons involved in regulating energy balance also influence HPA axis activity through the release of neurotransmitters such as neuropeptide Y, which stimulates HPA axis activity. Generally, the amygdala stimulates, and the prefrontal cortex and hippocampus attenuate, HPA axis activity; however, complex relationships do exist between the regions. 
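As a rough illustration of the feedback structure just described, with CRH driving ACTH, ACTH driving cortisol, and cortisol feeding back to suppress CRH and ACTH release, the following toy simulation (Python) steps the loop forward in time. Every rate constant, the stress input, and the form of the inhibition terms are invented for illustration; this sketches the architecture of the loop, not a physiological model.

# Toy simulation of the HPA negative-feedback loop (all parameters illustrative).
def simulate_hpa(steps=2000, dt=0.01, stress_drive=1.0):
    crh, acth, cortisol = 0.0, 0.0, 0.0
    history = []
    for i in range(steps):
        stress = stress_drive if i < steps // 2 else 0.0  # stressor on, then removed
        # Cortisol inhibits both CRH and ACTH release (negative feedback).
        d_crh = stress - 0.5 * crh - 0.8 * cortisol * crh
        d_acth = 1.0 * crh - 0.5 * acth - 0.8 * cortisol * acth
        d_cortisol = 1.0 * acth - 0.3 * cortisol
        crh += dt * d_crh
        acth += dt * d_acth
        cortisol += dt * d_cortisol
        history.append((crh, acth, cortisol))
    return history

if __name__ == "__main__":
    trace = simulate_hpa()
    peak = max(c for _, _, c in trace)
    final = trace[-1][2]
    print(f"peak cortisol ~ {peak:.2f}; cortisol after the stressor is removed ~ {final:.2f}")

Running the sketch shows cortisol rising while the stressor is present, plateauing as the feedback takes hold, and decaying back toward baseline once the stressor is removed.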
The immune system may be heavily influenced by stress. The sympathetic nervous system innervates various immunological structures, such as bone marrow and the spleen, allowing it to regulate immune function. The adrenergic substances released by the sympathetic nervous system can also bind to and influence various immunological cells, further providing a connection between the systems. The HPA axis ultimately results in the release of cortisol, which generally has immunosuppressive effects. However, the effect of stress on the immune system is disputed, and various models have been proposed in an attempt to account for both the supposedly "immunodeficiency"-linked diseases and the diseases involving hyperactivation of the immune system. One proposed model suggests a push towards an imbalance between cellular immunity (Th1) and humoral immunity (Th2). The proposed imbalance involves hyperactivity of the Th2 system, leading to some forms of immune hypersensitivity while also increasing the risk of illnesses associated with decreased immune function, such as infection and cancer. Effects of chronic stress The term chronic stress is used to differentiate it from acute stress. Definitions differ, and may be along the lines of continual activation of the stress response, stress that causes an allostatic shift in bodily functions, or simply "prolonged stress". For example, results of one study demonstrated that individuals who reported relationship conflict lasting one month or longer have a greater risk of developing illness and show slower wound healing. Chronic stress can also reduce the benefits of receiving common vaccines. Similarly, the effects that acute stressors have on the immune system may be increased when there is perceived stress and/or anxiety due to other events. For example, students who are taking exams show weaker immune responses if they also report stress due to daily hassles. While responses to acute stressors typically do not impose a health burden on young, healthy individuals, chronic stress in older or unhealthy individuals may have long-term effects that are detrimental to health. Immunological Acute time-limited stressors, or stressors that last less than two hours, result in an upregulation of natural immunity and a downregulation of specific immunity. This type of stress is associated with an increase in granulocytes, natural killer cells, IgA and interleukin 6, and an increase in cell cytotoxicity. Brief naturalistic stressors elicit a shift from Th1 (cellular) to Th2 (humoral) immunity, along with decreased T-cell proliferation and natural killer cell cytotoxicity. Stressful event sequences do not elicit a consistent immune response; however, some observations have been reported, such as decreased T-cell proliferation and cytotoxicity, increases or decreases in natural killer cell cytotoxicity, and an increased response to the mitogen PHA. Chronic stress elicits a shift toward Th2 immunity, as well as decreased interleukin 2, T-cell proliferation, and antibody response to the influenza vaccine. Distant stressors do not consistently elicit a change in immune function. Long-lasting, high-impact chronic stress is also associated with greater immune dysfunction and greater metabolic dysfunction. Studies indicate that people who are continuously in stressful situations are more likely to become ill, and some researchers claim that under stress the body metabolizes food in a way that effectively adds calories to a meal, regardless of its nutritional value. Infectious Some studies have observed an increased risk of upper respiratory tract infection during chronic life stress. In patients with HIV, increased life stress and cortisol were associated with worse progression of HIV. Studies have also provided evidence that increased stress can reactivate latent herpes viruses. Chronic disease A link has been suggested between chronic stress and cardiovascular disease. 
Stress appears to play a role in hypertension, and may further predispose people to other conditions associated with hypertension. Stress may precipitate abuse of drugs and/or alcohol. Stress may also contribute to aging and chronic diseases in aging, such as depression and metabolic disorders. The immune system also plays a role in stress and in the early stages of wound healing. It is responsible for preparing the tissue for repair and promoting recruitment of certain cells to the wound area. Consistent with the fact that stress alters the production of cytokines, Graham et al. found that chronic stress associated with caregiving for a person with Alzheimer's disease leads to delayed wound healing. Results indicated that biopsy wounds healed 25% more slowly in the chronically stressed group, that is, those caring for a person with Alzheimer's disease. Development Chronic stress has also been shown to impair developmental growth in children by lowering the pituitary gland's production of growth hormone, as in children raised in a home environment involving serious marital discord, alcoholism, or child abuse. Chronic stress is also associated with many illnesses and health problems beyond mental ones; severe chronic stress over long periods of time can increase the chance of developing illnesses such as diabetes, cancer, depression, heart disease and Alzheimer's disease. More generally, prenatal life, infancy, childhood, and adolescence are critical periods in which the vulnerability to stressors is particularly high. This can lead to psychiatric and physical diseases which have long-term impacts on an individual. Psychopathology Chronic stress affects the parts of the brain where memories are processed and stored. When people feel stressed, stress hormones are over-secreted, which affects the brain. These hormones include glucocorticoids such as cortisol, steroid hormones released by the adrenal gland; although this secretion can increase the storage of flashbulb memories, it decreases long-term potentiation (LTP). The hippocampus is important for storing certain kinds of memories, and damage to the hippocampus can cause trouble in storing new memories, though older memories, those stored before the damage, are not lost. High cortisol levels can also be tied to the deterioration of the hippocampus and the decline in memory that many older adults begin to experience with age. These mechanisms and processes may therefore contribute to age-related disease, or originate risk for earlier-onset disorders. For instance, extreme stress (e.g. trauma) is a requisite factor to produce stress-related disorders such as post-traumatic stress disorder. Chronic stress also shifts learning, forming a preference for habit-based learning, and decreases task flexibility and spatial working memory, probably through alterations of the dopaminergic systems. Stress may also increase the reward associated with food, leading to weight gain and further changes in eating habits. Stress may contribute to various disorders, such as fibromyalgia, chronic fatigue syndrome and depression, as well as other mental illnesses and functional somatic syndromes. Psychological concepts Eustress In 1975, Selye published a model dividing stress into eustress and distress. Where stress enhances function (physical or mental, such as through strength training or challenging work), it may be considered eustress. 
Persistent stress that is not resolved through coping or adaptation, deemed distress, may lead to anxiety or withdrawal (depression) behavior. The difference between experiences that result in eustress and those that result in distress is determined by the disparity between an experience (real or imagined) and personal expectations, and by the resources available to cope with the stress. Alarming experiences, either real or imagined, can trigger a stress response. Coping Responses to stress include adaptation, psychological coping such as stress management, anxiety, and depression. Over the long term, distress can lead to diminished health and/or increased propensity to illness; to avoid this, stress must be managed. Stress management encompasses techniques intended to equip a person with effective coping mechanisms for dealing with psychological stress, with stress defined as a person's physiological response to an internal or external stimulus that triggers the fight-or-flight response. Stress management is effective when a person uses strategies to cope with or alter stressful situations. There are several ways of coping with stress, such as controlling the source of stress or learning to set limits and to say "no" to some of the demands that bosses or family members may make. A person's capacity to tolerate the source of stress may be increased by thinking about another topic such as a hobby, listening to music, or spending time in the wilderness. One way to control stress is first to deal with what is causing it, if it is something the individual has control over. Other methods of controlling and reducing stress include not procrastinating or leaving tasks until the last minute, doing enjoyable activities, exercising, practicing breathing routines, going out with friends, and taking breaks. Support from a loved one also helps greatly in reducing stress. One study showed that support from a loved one, or social support in general, lowered stress in individual subjects. Painful shocks were applied to married women's ankles. In some trials the women were able to hold their husband's hand, in other trials they held a stranger's hand, and in others they held no one's hand. When the women were holding their husband's hand, the response was reduced in many brain areas. When holding the stranger's hand the response was reduced slightly, but not as much as when they were holding their husband's hand. Social support helps reduce stress, all the more so when the support comes from a loved one. Cognitive appraisal Lazarus argued that, in order for a psychosocial situation to be stressful, it must be appraised as such. He argued that cognitive processes of appraisal are central in determining whether a situation is potentially threatening, constitutes a harm/loss or a challenge, or is benign. Both personal and environmental factors influence this primary appraisal, which then triggers the selection of coping processes. Problem-focused coping is directed at managing the problem, whereas emotion-focused coping processes are directed at managing the negative emotions. Secondary appraisal refers to the evaluation of the resources available to cope with the problem, and may alter the primary appraisal. In other words, primary appraisal is the perception of how stressful the problem is, while secondary appraisal is the estimate of whether one has more than adequate or less than adequate resources to deal with the problem, which affects the overall appraisal of stressfulness. 
Further, coping is flexible in that, in general, the individual examines the effectiveness of the coping on the situation; if it is not having the desired effect, they will, in general, try different strategies. Assessment Health risk factors Both negative and positive stressors can lead to stress. The intensity and duration of stress changes depending on the circumstances and emotional condition of the person with it (Arnold. E and Boggs. K. 2007). Some common categories and examples of stressors include: Sensory input such as pain, bright light, noise, temperatures, or environmental issues such as a lack of control over environmental circumstances, such as food, air and/or water quality, housing, health, freedom, or mobility. Social issues can also cause stress, such as struggles with conspecific or difficult individuals and social defeat, or relationship conflict, deception, or break ups, and major events such as birth and deaths, marriage, and divorce. Life experiences such as poverty, unemployment, clinical depression, obsessive compulsive disorder, heavy drinking, or insufficient sleep can also cause stress. Students and workers may face performance pressure stress from exams and project deadlines. Adverse experiences during development (e.g. prenatal exposure to maternal stress, poor attachment histories, sexual abuse) are thought to contribute to deficits in the maturity of an individual's stress response systems. One evaluation of the different stresses in people's lives is the Holmes and Rahe stress scale. General adaptation syndrome Physiologists define stress as how the body reacts to a stressor - a stimulus, real or imagined. Acute stressors affect an organism in the short term; chronic stressors over the longer term. The general adaptation syndrome (GAS), developed by Hans Selye, is a profile of how organisms respond to stress; GAS is characterized by three phases: a nonspecific alarm mobilization phase, which promotes sympathetic nervous system activity; a resistance phase, during which the organism makes efforts to cope with the threat; and an exhaustion phase, which occurs if the organism fails to overcome the threat and depletes its physiological resources. Stage 1 Alarm is the first stage, which is divided into two phases: the shock phase and the antishock phase. Shock phase: During this phase, the body can endure changes such as hypovolemia, hypoosmolarity, hyponatremia, hypochloremia, hypoglycemia—the stressor effect. This phase resembles Addison's disease. The organism's resistance to the stressor drops temporarily below the normal range and some level of shock (e.g. circulatory shock) may be experienced. Antishock phase: When the threat or stressor is identified or realized, the body starts to respond and is in a state of alarm. During this stage, the locus coeruleus and sympathetic nervous system activate the production of catecholamines including adrenaline, engaging the popularly-known fight-or-flight response. Adrenaline temporarily provides increased muscular tonus, increased blood pressure due to peripheral vasoconstriction and tachycardia, and increased glucose in blood. There is also some activation of the HPA axis, producing glucocorticoids (cortisol, aka the S-hormone or stress-hormone). Stage 2 Resistance is the second stage. During this stage, increased secretion of glucocorticoids intensifies the body's systemic response. Glucocorticoids can increase the concentration of glucose, fat, and amino acid in blood. 
In high doses, one glucocorticoid, cortisol, begins to act similarly to a mineralocorticoid (aldosterone) and brings the body to a state similar to hyperaldosteronism. If the stressor persists, it becomes necessary to attempt some means of coping with the stress. The body attempts to respond to stressful stimuli, but after prolonged activation, the body's chemical resources will be gradually depleted, leading to the final stage. Stage 3 The third stage could be either exhaustion or recovery: Recovery stage follows when the system's compensation mechanisms have successfully overcome the stressor effect (or have completely eliminated the factor which caused the stress). The high glucose, fat and amino acid levels in blood prove useful for anabolic reactions, restoration of homeostasis and regeneration of cells. Exhaustion is the alternative third stage in the GAS model. At this point, all of the body's resources are eventually depleted and the body is unable to maintain normal function. The initial autonomic nervous system symptoms may reappear (panic attacks, muscle aches, sore eyes, difficulty breathing, fatigue, heartburn, high blood pressure, and difficulty sleeping, etc.). If stage three is extended, long-term damage may result (prolonged vasoconstriction results in ischemia which in turn leads to cell necrosis), as the body's immune system becomes exhausted, and bodily functions become impaired, resulting in decompensation. The result can manifest itself in obvious illnesses, such as general trouble with the digestive system (e.g. occult bleeding, melena, constipation/obstipation), diabetes, or even cardiovascular problems (angina pectoris), along with clinical depression and other mental illnesses. History in research The current usage of the word stress arose out of Hans Selye's 1930s experiments. He started to use the term to refer not just to the agent but to the state of the organism as it responded and adapted to the environment. His theories of a universal non-specific stress response attracted great interest and contention in academic physiology and he undertook extensive research programs and publication efforts. While the work attracted continued support from advocates of psychosomatic medicine, many in experimental physiology concluded that his concepts were too vague and unmeasurable. During the 1950s, Selye turned away from the laboratory to promote his concept through popular books and lecture tours. He wrote for both non-academic physicians and, in an international bestseller entitled Stress of Life, for the general public. A broad biopsychosocial concept of stress and adaptation offered the promise of helping everyone achieve health and happiness by successfully responding to changing global challenges and the problems of modern civilization. Selye coined the term "eustress" for positive stress, by contrast to distress. He argued that all people have a natural urge and need to work for their own benefit, a message that found favor with industrialists and governments. He also coined the term stressor to refer to the causative event or stimulus, as opposed to the resulting state of stress. Selye was in contact with the tobacco industry from 1958 and they were undeclared allies in litigation and the promotion of the concept of stress, clouding the link between smoking and cancer, and portraying smoking as a "diversion", or in Selye's concept a "deviation", from environmental stress. 
From the late 1960s, academic psychologists started to adopt Selye's concept; they sought to quantify "life stress" by scoring "significant life events", and a large amount of research was undertaken to examine links between stress and disease of all kinds. By the late 1970s, stress had become the medical area of greatest concern to the general population, and more basic research was called for to better address the issue. There was also renewed laboratory research into the neuroendocrine, molecular, and immunological bases of stress, conceived as a useful heuristic not necessarily tied to Selye's original hypotheses. The US military became a key center of stress research, attempting to understand and reduce combat neurosis and psychiatric casualties. The psychiatric diagnosis post-traumatic stress disorder (PTSD) was coined in the mid-1970s, in part through the efforts of anti-Vietnam War activists and the Vietnam Veterans Against the War, and Chaim F. Shatan. The condition was added to the Diagnostic and Statistical Manual of Mental Disorders as posttraumatic stress disorder in 1980. PTSD was considered a severe and ongoing emotional reaction to an extreme psychological trauma, and as such often associated with soldiers, police officers, and other emergency personnel. The stressor may involve threat to life (or viewing the actual death of someone else), serious physical injury, or threat to physical or psychological integrity. In some cases, it can also be from profound psychological and emotional trauma, apart from any actual physical harm or threat. Often, however, the two are combined. By the 1990s, "stress" had become an integral part of modern scientific understanding in all areas of physiology and human functioning, and one of the great metaphors of Western life. Focus grew on stress in certain settings, such as workplace stress, and stress management techniques were developed. The term also became a euphemism, a way of referring to problems and eliciting sympathy without being explicitly confessional, just "stressed out". It came to cover a huge range of phenomena from mild irritation to the kind of severe problems that might result in a real breakdown of health. In popular usage, almost any event or situation between these extremes could be described as stressful. The American Psychological Association's 2015 Stress In America Study found that nationwide stress is on the rise and that the three leading sources of stress were "money", "family responsibility", and "work". See also Autonomic nervous system Defense physiology HPA axis Inflammation Plant stress measurement Trier social stress test Xenohormesis Stress in early childhood Weathering hypothesis Endorphins
Pharmacology
Pharmacology is the science of drugs and medications, including a substance's origin, composition, pharmacokinetics, pharmacodynamics, therapeutic use, and toxicology. More specifically, it is the study of the interactions that occur between a living organism and chemicals that affect normal or abnormal biochemical function. If substances have medicinal properties, they are considered pharmaceuticals. The field encompasses drug composition and properties, functions, sources, synthesis and drug design, molecular and cellular mechanisms, organ/systems mechanisms, signal transduction/cellular communication, molecular diagnostics, interactions, chemical biology, therapy, and medical applications and antipathogenic capabilities. The two main areas of pharmacology are pharmacodynamics and pharmacokinetics. Pharmacodynamics studies the effects of a drug on biological systems, and pharmacokinetics studies the effects of biological systems on a drug. In broad terms, pharmacodynamics concerns the interactions of chemicals with biological receptors, and pharmacokinetics concerns the absorption, distribution, metabolism, and excretion (ADME) of chemicals by biological systems. Pharmacology is not synonymous with pharmacy, and the two terms are frequently confused. Pharmacology, a biomedical science, deals with the research, discovery, and characterization of chemicals which show biological effects and the elucidation of cellular and organismal function in relation to these chemicals. In contrast, pharmacy, a health services profession, is concerned with the application of the principles learned from pharmacology in clinical settings, whether in a dispensing or a clinical care role. The primary contrast between the two fields is thus the distinction between direct patient care (pharmacy practice) and the science-oriented research field driven by pharmacology. Etymology The word pharmacology is derived from the Greek word pharmakon, meaning "drug" or "poison", together with the Greek word logia, meaning "study of" or "knowledge of" (cf. the etymology of pharmacy). Pharmakon is related to pharmakos, the ritualistic sacrifice or exile of a human scapegoat or victim in Ancient Greek religion. The modern term pharmacon is used more broadly than the term drug because it includes endogenous substances, and biologically active substances which are not used as drugs. Typically it includes pharmacological agonists and antagonists, but also enzyme inhibitors (such as monoamine oxidase inhibitors). History The origins of clinical pharmacology date back to the Middle Ages, with pharmacognosy and Avicenna's The Canon of Medicine, Peter of Spain's Commentary on Isaac, and John of St Amand's Commentary on the Antedotary of Nicholas. Early pharmacology focused on herbalism and natural substances, mainly plant extracts. Medicines were compiled in books called pharmacopoeias. Crude drugs have been used since prehistory as a preparation of substances from natural sources. However, the active ingredients of crude drugs are not purified, and the substance is adulterated with other substances. Traditional medicine varies between cultures and may be specific to a particular culture, such as in traditional Chinese, Mongolian, Tibetan and Korean medicine. However, much of this has since been regarded as pseudoscience. Pharmacological substances known as entheogens may have spiritual and religious use and historical context. 
In the 17th century, the English physician Nicholas Culpeper translated and used pharmacological texts. Culpeper detailed plants and the conditions they could treat. In the 18th century, much of clinical pharmacology was established by the work of William Withering. Pharmacology as a scientific discipline did not further advance until the mid-19th century amid the great biomedical resurgence of that period. Before the second half of the nineteenth century, the remarkable potency and specificity of the actions of drugs such as morphine, quinine and digitalis were explained vaguely and with reference to extraordinary chemical powers and affinities to certain organs or tissues. The first pharmacology department was set up by Rudolf Buchheim in 1847, at University of Tartu, in recognition of the need to understand how therapeutic drugs and poisons produced their effects. Subsequently, the first pharmacology department in England was set up in 1905 at University College London. Pharmacology developed in the 19th century as a biomedical science that applied the principles of scientific experimentation to therapeutic contexts. The advancement of research techniques propelled pharmacological research and understanding. The development of the organ bath preparation, where tissue samples are connected to recording devices, such as a myograph, and physiological responses are recorded after drug application, allowed analysis of drugs' effects on tissues. The development of the ligand binding assay in 1945 allowed quantification of the binding affinity of drugs at chemical targets. Modern pharmacologists use techniques from genetics, molecular biology, biochemistry, and other advanced tools to transform information about molecular mechanisms and targets into therapies directed against disease, defects or pathogens, and create methods for preventive care, diagnostics, and ultimately personalized medicine. Divisions The discipline of pharmacology can be divided into many sub disciplines each with a specific focus. Systems of the body Pharmacology can also focus on specific systems comprising the body. Divisions related to bodily systems study the effects of drugs in different systems of the body. These include neuropharmacology, in the central and peripheral nervous systems; immunopharmacology in the immune system. Other divisions include cardiovascular, renal and endocrine pharmacology. Psychopharmacology is the study of the use of drugs that affect the psyche, mind and behavior (e.g. antidepressants) in treating mental disorders (e.g. depression). It incorporates approaches and techniques from neuropharmacology, animal behavior and behavioral neuroscience, and is interested in the behavioral and neurobiological mechanisms of action of psychoactive drugs. The related field of neuropsychopharmacology focuses on the effects of drugs at the overlap between the nervous system and the psyche. Pharmacometabolomics, also known as pharmacometabonomics, is a field which stems from metabolomics, the quantification and analysis of metabolites produced by the body. It refers to the direct measurement of metabolites in an individual's bodily fluids, in order to predict or evaluate the metabolism of pharmaceutical compounds, and to better understand the pharmacokinetic profile of a drug. Pharmacometabolomics can be applied to measure metabolite levels following the administration of a drug, in order to monitor the effects of the drug on metabolic pathways. 
Pharmacomicrobiomics studies the effect of microbiome variations on drug disposition, action, and toxicity; it is concerned with the interaction between drugs and the gut microbiome. Pharmacogenomics is the application of genomic technologies to drug discovery and the further characterization of drugs in relation to an organism's entire genome. For pharmacology regarding individual genes, pharmacogenetics studies how genetic variation gives rise to differing responses to drugs. Pharmacoepigenetics studies the underlying epigenetic marking patterns that lead to variation in an individual's response to medical treatment. Clinical practice and drug discovery Pharmacology can be applied within clinical sciences. Clinical pharmacology is the application of pharmacological methods and principles in the study of drugs in humans. An example of this is posology, which is the study of the dosage of medicines. Pharmacology is closely related to toxicology. Both pharmacology and toxicology are scientific disciplines that focus on understanding the properties and actions of chemicals. However, pharmacology emphasizes the therapeutic effects of chemicals, usually drugs or compounds that could become drugs, whereas toxicology is the study of chemicals' adverse effects and risk assessment. Pharmacological knowledge is used to advise pharmacotherapy in medicine and pharmacy. Drug discovery Drug discovery is the field of study concerned with creating new drugs. It encompasses the subfields of drug design and development. Drug discovery starts with drug design, which is the inventive process of finding new drugs. In the most basic sense, this involves the design of molecules that are complementary in shape and charge to a given biomolecular target. After a lead compound has been identified through drug discovery, drug development involves bringing the drug to the market. Drug discovery is related to pharmacoeconomics, which is the sub-discipline of health economics that considers the value of drugs. Pharmacoeconomics evaluates the costs and benefits of drugs in order to guide optimal healthcare resource allocation. The techniques used for the discovery, formulation, manufacturing and quality control of drugs are studied by pharmaceutical engineering, a branch of engineering. Safety pharmacology specialises in detecting and investigating potential undesirable effects of drugs. Development of medication is a vital concern to medicine, but also has strong economic and political implications. To protect the consumer and prevent abuse, many governments regulate the manufacture, sale, and administration of medication. In the United States, the main body that regulates pharmaceuticals is the Food and Drug Administration, which enforces standards set by the United States Pharmacopoeia. In the European Union, the main body that regulates pharmaceuticals is the EMA, which enforces standards set by the European Pharmacopoeia. The metabolic stability and the reactivity of a library of candidate drug compounds have to be assessed for drug metabolism and toxicological studies. Many methods have been proposed for quantitative predictions in drug metabolism; one example of a recent computational method is SPORCalc. A slight alteration to the chemical structure of a medicinal compound could alter its medicinal properties, depending on how the alteration relates to the structure of the substrate or receptor site on which it acts: this is called the structure-activity relationship (SAR). 
When a useful activity has been identified, chemists will make many similar compounds called analogues, to try to maximize the desired medicinal effect(s). This can take anywhere from a few years to a decade or more, and is very expensive. One must also determine how safe the medicine is to consume, its stability in the human body and the best form for delivery to the desired organ system, such as a tablet or aerosol. After extensive testing, which can take up to six years, the new medicine is ready for marketing and selling. Because of these long timescales, and because out of every 5000 potential new medicines typically only one will ever reach the open market, this is an expensive way of doing things, often costing over 1 billion dollars. To recoup this outlay pharmaceutical companies may do a number of things:
Carefully research the demand for their potential new product before spending an outlay of company funds.
Obtain a patent on the new medicine, preventing other companies from producing that medicine for a certain period of time.
The inverse benefit law describes the relationship between a drug's therapeutic benefits and its marketing. When designing drugs, the placebo effect must be considered to assess the drug's true therapeutic value. Drug development uses techniques from medicinal chemistry to chemically design drugs. This overlaps with the biological approach of finding targets and physiological effects.

Wider contexts
Pharmacology can be studied in relation to wider contexts than the physiology of individuals. For example, pharmacoepidemiology concerns the variations of the effects of drugs in or between populations; it is the bridge between clinical pharmacology and epidemiology. Pharmacoenvironmentology, or environmental pharmacology, is the study of the effects of used pharmaceuticals and personal care products (PPCPs) on the environment after their elimination from the body. Because human health and ecology are intimately related, environmental pharmacology examines the environmental impact of drugs and of pharmaceuticals and personal care products. Drugs may also have ethnocultural importance, so ethnopharmacology studies the ethnic and cultural aspects of pharmacology.

Emerging fields
Photopharmacology is an emerging approach in medicine in which drugs are activated and deactivated with light. The energy of light is used to change the shape and chemical properties of the drug, resulting in different biological activity. This is done to achieve reversible control over when and where drugs are active, in order to limit side effects and the release of drugs into the environment.

Theory of pharmacology
The study of chemicals requires intimate knowledge of the biological system affected. As knowledge of cell biology and biochemistry has increased, the field of pharmacology has also changed substantially. It has become possible, through molecular analysis of receptors, to design chemicals that act on specific cellular signaling or metabolic pathways by affecting sites directly on cell-surface receptors (which modulate and mediate the cellular signaling pathways controlling cellular function). Chemicals can have pharmacologically relevant properties and effects. Pharmacokinetics describes the effect of the body on the chemical (e.g. half-life and volume of distribution), and pharmacodynamics describes the chemical's effect on the body (desired or toxic).
Systems, receptors and ligands
Pharmacology is typically studied with respect to particular systems, for example endogenous neurotransmitter systems. The major systems studied in pharmacology can be categorised by their ligands and include the acetylcholine, adrenaline, glutamate, GABA, dopamine, histamine, serotonin, cannabinoid and opioid systems. Molecular targets in pharmacology include receptors, enzymes and membrane transport proteins. Enzymes can be targeted with enzyme inhibitors. Receptors are typically categorised based on structure and function. Major receptor types studied in pharmacology include G protein coupled receptors, ligand gated ion channels and receptor tyrosine kinases. Network pharmacology is a subfield of pharmacology that combines principles from pharmacology, systems biology, and network analysis to study the complex interactions between drugs and targets (e.g., receptors or enzymes) in biological systems. The topology of a biochemical reaction network determines the shape of the dose-response curve as well as the type of drug-drug interactions, and can therefore help in designing efficient and safe therapeutic strategies. Network pharmacology utilizes computational tools and network analysis algorithms to identify drug targets, predict drug-drug interactions, elucidate signaling pathways, and explore the polypharmacology of drugs.

Pharmacodynamics
Pharmacodynamics describes the effects of a drug on the body. Pharmacodynamic theory often investigates the binding affinity of ligands for their receptors. Ligands can be agonists, partial agonists or antagonists at specific receptors in the body. Agonists bind to receptors and produce a biological response; a partial agonist produces a response lower than that of a full agonist; antagonists have affinity for a receptor but do not produce a biological response of their own. The ability of a ligand to produce a biological response once bound is termed efficacy; in a dose-response profile the response is plotted as a percentage on the y-axis, where 100% is the maximal response. Binding affinity is the tendency of a ligand to form a ligand-receptor complex, either through weak attractive forces (reversible binding) or through a covalent bond (irreversible binding); a drug's observed effect depends on both its affinity for the receptor and its efficacy once bound. Potency is a measure of the concentration of a drug needed to produce a given effect. The EC50 is the concentration that produces 50% of the maximal response; the lower this concentration, the more potent the drug, so EC50 values can be used to compare the potencies of drugs. A medication is said to have a narrow or wide therapeutic index, certain safety factor or therapeutic window. This describes the ratio of desired effect to toxic effect. A compound with a narrow therapeutic index (close to one) exerts its desired effect at a dose close to its toxic dose. A compound with a wide therapeutic index (greater than five) exerts its desired effect at a dose substantially below its toxic dose. Those with a narrow margin are more difficult to dose and administer, and may require therapeutic drug monitoring (examples are warfarin, some antiepileptics, and aminoglycoside antibiotics). Most anti-cancer drugs have a narrow therapeutic margin: toxic side-effects are almost always encountered at doses used to kill tumors. The combined effect of drugs can be described with Loewe additivity, which is one of several common reference models. Other models include the Hill equation, the Cheng-Prusoff equation and Schild regression.
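As a rough illustration of these pharmacodynamic quantities, the hedged Python sketch below generates a Hill-equation dose-response curve, compares the potency of two hypothetical drugs through their EC50 values, and computes a simple therapeutic index as the ratio of a median toxic dose to a median effective dose. All drug names and numbers are invented for the example.

```python
# Minimal sketch (assumed values, not from the text): Hill-equation
# dose-response, EC50-based potency comparison, and a therapeutic index.

def response_fraction(conc: float, ec50: float, hill: float = 1.0) -> float:
    """Fraction of the maximal response produced at a given concentration."""
    return conc ** hill / (conc ** hill + ec50 ** hill)

def therapeutic_index(td50: float, ed50: float) -> float:
    """Ratio of the median toxic dose to the median effective dose."""
    return td50 / ed50

if __name__ == "__main__":
    # A lower EC50 means higher potency: "drug A" is more potent than "drug B".
    for name, ec50 in (("drug A", 5.0), ("drug B", 50.0)):
        curve = [round(response_fraction(c, ec50), 2) for c in (1, 10, 100, 1000)]
        print(f"{name} (EC50 = {ec50} nM): {curve}")

    # TD50 = 200 and ED50 = 20 give a wide margin (index of 10) in this toy case.
    print("therapeutic index =", therapeutic_index(td50=200.0, ed50=20.0))
```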
Pharmacokinetics
Pharmacokinetics is the study of the bodily absorption, distribution, metabolism, and excretion of drugs. When describing the pharmacokinetic properties of the chemical that is the active ingredient or active pharmaceutical ingredient (API), pharmacologists are often interested in L-ADME:
Liberation – How is the API disintegrated (for solid oral forms, broken down into smaller particles), dispersed, or dissolved from the medication?
Absorption – How is the API absorbed (through the skin, the intestine, the oral mucosa)?
Distribution – How does the API spread through the organism?
Metabolism – Is the API converted chemically inside the body, and into which substances? Are these active as well? Could they be toxic?
Excretion – How is the API excreted (through the bile, urine, breath, skin)?
Drug metabolism is assessed in pharmacokinetics and is important in drug research and prescribing. Pharmacokinetics concerns the movement of the drug in the body and is usually described as 'what the body does to the drug'. The physico-chemical properties of a drug affect the rate and extent of its absorption, the extent of its distribution, and its metabolism and elimination. A drug needs an appropriate molecular weight, polarity and other properties in order to be absorbed. The fraction of a dose that reaches the systemic circulation is termed bioavailability; it is usually estimated by comparing drug exposure (the area under the plasma concentration-time curve) after oral administration with the exposure after an intravenous dose, for which the first-pass effect is avoided and the entire dose reaches the circulation. A drug must generally be sufficiently lipophilic (lipid soluble) to pass through biological membranes, because biological membranes are composed largely of a lipid bilayer (phospholipids and related molecules). Once the drug reaches the circulation it is distributed throughout the body, tending to be more concentrated in highly perfused organs.

Administration, drug policy and safety

Drug policy
In the United States, the Food and Drug Administration (FDA) is responsible for creating guidelines for the approval and use of drugs. The FDA requires that all approved drugs fulfill two requirements:
The drug must be found to be effective against the disease for which it is seeking approval (where 'effective' means only that the drug performed better than placebo or competitors in at least two trials).
The drug must meet safety criteria by being subject to animal and controlled human testing.
Gaining FDA approval usually takes several years. Testing done on animals must be extensive and must include several species to help in the evaluation of both the effectiveness and toxicity of the drug. The dosage of any drug approved for use is intended to fall within a range in which the drug produces a therapeutic effect or desired outcome. The safety and effectiveness of prescription drugs in the U.S. are regulated by the federal Prescription Drug Marketing Act of 1987. The Medicines and Healthcare products Regulatory Agency (MHRA) has a similar role in the UK. Medicare Part D is a prescription drug plan in the U.S. The Prescription Drug Marketing Act (PDMA) is an act related to drug policy. Prescription drugs are drugs regulated by legislation.

Societies and education

Societies and administration
The International Union of Basic and Clinical Pharmacology, the Federation of European Pharmacological Societies and the European Association for Clinical Pharmacology and Therapeutics are organisations representing the standardisation and regulation of clinical and scientific pharmacology.
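The sketch below is a minimal, assumed illustration of two of the pharmacokinetic quantities just described: oral bioavailability estimated from dose-normalised exposure (area under the plasma concentration-time curve, AUC) after oral versus intravenous dosing, and first-order elimination in a simple one-compartment model that ignores the absorption phase. The doses, concentrations, volume of distribution and half-life are invented.

```python
# Hedged sketch, not real data: basic pharmacokinetic calculations in a
# one-compartment model with first-order elimination.

import math

def auc_trapezoid(times, concs):
    """Area under the concentration-time curve by the trapezoidal rule."""
    pairs = list(zip(times, concs))
    return sum((t2 - t1) * (c1 + c2) / 2
               for (t1, c1), (t2, c2) in zip(pairs, pairs[1:]))

def bioavailability(auc_oral, dose_oral, auc_iv, dose_iv):
    """F = (AUC_oral / dose_oral) / (AUC_iv / dose_iv)."""
    return (auc_oral / dose_oral) / (auc_iv / dose_iv)

def concentration(t_h, dose_mg, f, vd_l, half_life_h):
    """Plasma concentration at time t for an absorbed dose, ignoring absorption kinetics."""
    ke = math.log(2) / half_life_h          # elimination rate constant (1/h)
    return (f * dose_mg / vd_l) * math.exp(-ke * t_h)

if __name__ == "__main__":
    times = [0, 1, 2, 4, 8]                      # hours after an oral dose
    concs_oral = [0.0, 4.0, 5.0, 3.0, 1.0]       # mg/L, invented values
    auc_oral = auc_trapezoid(times, concs_oral)  # 22.5 mg*h/L here
    f = bioavailability(auc_oral, dose_oral=100, auc_iv=45.0, dose_iv=100)
    print(f"F = {f:.2f}")                        # 0.50 with these numbers
    print(f"C(6 h) = {concentration(6, 100, f, 42, 4):.2f} mg/L")
```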
Systems for the medical classification of drugs with pharmaceutical codes have been developed. These include the National Drug Code (NDC), administered by the Food and Drug Administration; the Drug Identification Number (DIN), administered by Health Canada under the Food and Drugs Act; the Hong Kong Drug Registration, administered by the Pharmaceutical Service of the Department of Health (Hong Kong); and the National Pharmaceutical Product Index in South Africa. Hierarchical systems have also been developed, including the Anatomical Therapeutic Chemical Classification System (ATC, or ATC/DDD), administered by the World Health Organization; the Generic Product Identifier (GPI), a hierarchical classification number published by MediSpan; and the C axis of SNOMED. Ingredients of drugs have been categorised by the Unique Ingredient Identifier.

Education
The study of pharmacology overlaps with the biomedical sciences and concerns the effects of drugs on living organisms. Pharmacological research can lead to new drug discoveries and promote a better understanding of human physiology. Students of pharmacology must have a detailed working knowledge of aspects of physiology, pathology, and chemistry. They may also require knowledge of plants as sources of pharmacologically active compounds. Modern pharmacology is interdisciplinary and involves biophysical and computational sciences as well as analytical chemistry. A pharmacist needs to be well equipped with knowledge of pharmacology for application in pharmaceutical research or in pharmacy practice in hospitals or commercial organisations selling to customers. Pharmacologists, however, usually work in a laboratory undertaking research or development of new products. Pharmacological research is important in academic research (medical and non-medical), private industrial positions, science writing, scientific patents and law, consultation, biotech and pharmaceutical employment, the alcohol industry, the food industry, forensics/law enforcement, public health, and environmental/ecological sciences. Pharmacology is often taught to pharmacy and medicine students as part of a medical school curriculum.

External links
American Society for Pharmacology and Experimental Therapeutics
British Pharmacological Society
International Conference on Harmonisation
US Pharmacopeia
International Union of Basic and Clinical Pharmacology
IUPHAR Committee on Receptor Nomenclature and Drug Classification
IUPHAR/BPS Guide to Pharmacology
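As an illustration of how a hierarchical drug coding system such as ATC, mentioned above, is structured, the hedged Python sketch below splits a seven-character ATC code into its five levels. N02BE01 (paracetamol) is used as the worked example; the level descriptions in the comments are informal paraphrases rather than official WHO wording.

```python
# Hedged sketch: decomposing a hierarchical ATC code into its five levels.
# Assumes the standard 7-character format (letter, 2 digits, letter, letter, 2 digits).

def split_atc(code: str) -> dict:
    """Split a 7-character ATC code into its five hierarchical levels."""
    return {
        "anatomical main group":    code[:1],  # e.g. N   (nervous system)
        "therapeutic subgroup":     code[:3],  # e.g. N02
        "pharmacological subgroup": code[:4],  # e.g. N02B
        "chemical subgroup":        code[:5],  # e.g. N02BE
        "chemical substance":       code[:7],  # e.g. N02BE01
    }

if __name__ == "__main__":
    for level, value in split_atc("N02BE01").items():
        print(f"{level:>24}: {value}")
```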
0.777176
0.997744
0.775423
Cognitive skill
Cognitive skills are skills of the mind, as opposed to other types of skills such as motor skills or social skills. Some examples of cognitive skills are literacy, self-reflection, logical reasoning, abstract thinking, critical thinking, introspection and mental arithmetic. Cognitive skills vary in processing complexity, and can range from more fundamental processes such as perception and various memory functions, to more sophisticated processes such as decision making, problem solving and metacognition. Specialisation of functions Cognitive science has provided theories of how the brain works, and these have been of great interest to researchers who work in the empirical fields of brain science. A fundamental question is whether cognitive functions, for example visual processing and language, are autonomous modules, or to what extent the functions depend on each other. Research evidence points towards a middle position, and it is now generally accepted that there is a degree of modularity in aspects of brain organisation. In other words, cognitive skills or functions are specialised, but they also overlap or interact with each other. Deductive reasoning, on the other hand, has been shown to be related to either visual or linguistic processing, depending on the task; although there are also aspects that differ from them. All in all, research evidence does not provide strong support for classical models of cognitive psychology. Cognitive functioning Cognitive functioning refers to a person's ability to process thoughts. It is defined as "the ability of an individual to perform the various mental activities most closely associated with learning and problem-solving. Examples include the verbal, spatial, psychomotor, and processing-speed ability." Cognition mainly refers to things like memory, speech, and the ability to learn new information. The brain is usually capable of learning new skills in the aforementioned areas, typically in early childhood, and of developing personal thoughts and beliefs about the world. Old age and disease may affect cognitive functioning, causing memory loss and trouble thinking of the right words while speaking or writing ("drawing a blank"). Multiple sclerosis (MS), for example, can eventually cause memory loss, an inability to grasp new concepts or information, and depleted verbal fluency. Humans generally have a high capacity for cognitive functioning once born, so almost every person is capable of learning or remembering. Intelligence is tested with IQ tests and others, although these have issues with accuracy and completeness. In such tests, patients may be asked a series of questions, or to perform tasks, with each measuring a cognitive skill, such as level of consciousness, memory, awareness, problem-solving, motor skills, analytical abilities, or other similar concepts. Early childhood is when the brain is most malleable to orientate to tasks that are relevant in the person's environment. See also Adaptive behavior Adaptive functioning Intelligence Quotient (IQ) Cognition Cognitive Abilities Test Jungian cognitive functions Notes References NCME - Glossary of Important Assessment and Measurement Terms [cognitive ability] Cognition Skills
0.778758
0.995677
0.775392
Classification of mental disorders
The classification of mental disorders, also known as psychiatric nosology or psychiatric taxonomy, is central to the practice of psychiatry and other mental health professions. The two most widely used psychiatric classification systems are chapter V of the International Classification of Diseases, 10th edition (ICD-10), produced by the World Health Organization (WHO); and the Diagnostic and Statistical Manual of Mental Disorders, 5th edition (DSM-5), produced by the American Psychiatric Association (APA). Both systems list disorders thought to be distinct types, and in recent revisions the two systems have deliberately converged their codes so that their manuals are often broadly comparable, though differences remain. Both classifications employ operational definitions. Other classification schemes, used more locally, include the Chinese Classification of Mental Disorders. Manuals of limited use, by practitioners with alternative theoretical persuasions, include the Psychodynamic Diagnostic Manual. Definitions In the scientific and academic literature on the definition or categorization of mental disorders, one extreme argues that it is entirely a matter of value judgments (including of what is normal) while another proposes that it is or could be entirely objective and scientific (including by reference to statistical norms); other views argue that the concept refers to a "fuzzy prototype" that can never be precisely defined, or that the definition will always involve a mixture of scientific facts (e.g. that a natural or evolved function is not working properly) and value judgments (e.g. that it is harmful or undesired). Lay concepts of mental disorder vary considerably across different cultures and countries, and may refer to different sorts of individual and social problems. The WHO and national surveys report that there is no single consensus on the definition of mental disorder, and that the phrasing used depends on the social, cultural, economic and legal context in different contexts and in different societies. The WHO reports that there is intense debate about which conditions should be included under the concept of mental disorder; a broad definition can cover mental illness, intellectual disability, personality disorder and substance dependence, but inclusion varies by country and is reported to be a complex and debated issue. There may be a criterion that a condition should not be expected to occur as part of a person's usual culture or religion. However, despite the term "mental", there is not necessarily a clear distinction drawn between mental (dys)functioning and brain (dys)functioning, or indeed between the brain and the rest of the body. Most international clinical documents avoid the term "mental illness", preferring the term "mental disorder". However, some use "mental illness" as the main overarching term to encompass mental disorders. Some consumer/survivor movement organizations oppose use of the term "mental illness" on the grounds that it supports the dominance of a medical model. The term "serious mental impairment" (SMI) is sometimes used to refer to more severe and long-lasting disorders while "mental health problems" may be used as a broader term, or to refer only to milder or more transient issues. Confusion often surrounds the ways and contexts in which these terms are used. Mental disorders are generally classified separately to neurological disorders, learning disabilities or intellectual disabilities. 
ICD-10
The International Classification of Diseases (ICD) is an international standard diagnostic classification for a wide variety of health conditions. The ICD-10 states that mental disorder is "not an exact term", although it is generally used "...to imply the existence of a clinically recognisable set of symptoms or behaviours associated in most cases with distress and with interference with personal functions." Chapter V focuses on "mental and behavioural disorders" and consists of 10 main groups:
F00–F09: Organic, including symptomatic, mental disorders
F10–F19: Mental and behavioural disorders due to use of psychoactive substances
F20–F29: Schizophrenia, schizotypal and delusional disorders
F30–F39: Mood [affective] disorders
F40–F48: Neurotic, stress-related and somatoform disorders
F50–F59: Behavioural syndromes associated with physiological disturbances and physical factors
F60–F69: Disorders of personality and behaviour in adult persons
F70–F79: Mental retardation
F80–F89: Disorders of psychological development
F90–F98: Behavioural and emotional disorders with onset usually occurring in childhood and adolescence
In addition, there is a residual group, F99, for "unspecified mental disorders". Within each group there are more specific subcategories. The WHO has revised ICD-10 to produce the latest version of the ICD, ICD-11, which was adopted by the 72nd World Health Assembly in 2019 and came into effect on 1 January 2022.

DSM-IV
The DSM-IV was originally published in 1994 and listed more than 250 mental disorders. It was produced by the American Psychiatric Association and it characterizes mental disorder as "a clinically significant behavioral or psychological syndrome or pattern that occurs in an individual,...is associated with present distress...or disability...or with a significantly increased risk of suffering" but notes that "...no definition adequately specifies precise boundaries for the concept of 'mental disorder'...different situations call for different definitions" (APA, 1994 and 2000). The DSM also states that "there is no assumption that each category of mental disorder is a completely discrete entity with absolute boundaries dividing it from other mental disorders or no mental disorders." The DSM-IV-TR (Text Revision, 2000) consisted of five axes (domains) on which disorder could be assessed. The five axes were:
Axis I: Clinical Disorders (all mental disorders except Personality Disorders and Mental Retardation)
Axis II: Personality Disorders and Mental Retardation
Axis III: General Medical Conditions (must be connected to a Mental Disorder)
Axis IV: Psychosocial and Environmental Problems (for example a limited social support network)
Axis V: Global Assessment of Functioning (psychological, social and job-related functions are evaluated on a continuum between mental health and extreme mental disorder)
The axis classification system was removed in the DSM-5 and is now mostly of historical significance. The DSM groups disorders into a number of main diagnostic categories.

Other schemes
The Chinese Society of Psychiatry's Chinese Classification of Mental Disorders (currently CCMD-3)
The Latin American Guide for Psychiatric Diagnosis (GLDP)
The Research Domain Criteria (RDoC), a framework being developed by the National Institute of Mental Health
The Hierarchical Taxonomy of Psychopathology (HiTOP), developed by the HiTOP consortium, a group of psychologists and psychiatrists with a record of scientific contributions to the classification of psychopathology.
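The ICD-10 Chapter V blocks listed above can also be expressed as a simple lookup table. The minimal Python sketch below is an illustration only (not part of any official tooling): it maps a code such as F32 to the block it falls in, using just the block labels from the list above.

```python
# Minimal sketch: the ICD-10 Chapter V blocks as a lookup table, with a
# helper that maps a code like "F32" or "F20.0" to its block label.

ICD10_CHAPTER_V = {
    range(0, 10):   "Organic, including symptomatic, mental disorders",
    range(10, 20):  "Mental and behavioural disorders due to use of psychoactive substances",
    range(20, 30):  "Schizophrenia, schizotypal and delusional disorders",
    range(30, 40):  "Mood [affective] disorders",
    range(40, 49):  "Neurotic, stress-related and somatoform disorders",
    range(50, 60):  "Behavioural syndromes associated with physiological disturbances and physical factors",
    range(60, 70):  "Disorders of personality and behaviour in adult persons",
    range(70, 80):  "Mental retardation",
    range(80, 90):  "Disorders of psychological development",
    range(90, 99):  "Behavioural and emotional disorders with onset usually occurring in childhood and adolescence",
    range(99, 100): "Unspecified mental disorders",
}

def block_for(code: str) -> str:
    """Return the Chapter V block label for a code such as 'F32' or 'F20.0'."""
    number = int(code.lstrip("Ff").split(".")[0])
    for numbers, label in ICD10_CHAPTER_V.items():
        if number in numbers:
            return label
    raise ValueError(f"{code} is not an ICD-10 Chapter V code")

if __name__ == "__main__":
    print(block_for("F32"))    # Mood [affective] disorders
    print(block_for("F20.0"))  # Schizophrenia, schizotypal and delusional disorders
```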
Childhood diagnosis Child and adolescent psychiatry sometimes uses specific manuals in addition to the DSM and ICD. The Diagnostic Classification of Mental Health and Developmental Disorders of Infancy and Early Childhood (DC:0-3) was first published in 1994 by Zero to Three to classify mental health and developmental disorders in the first four years of life. It has been published in 9 languages. The Research Diagnostic criteria-Preschool Age (RDC-PA) was developed between 2000 and 2002 by a task force of independent investigators with the goal of developing clearly specified diagnostic criteria to facilitate research on psychopathology in this age group. The French Classification of Child and Adolescent Mental Disorders (CFTMEA), operational since 1983, is the classification of reference for French child psychiatrists. Usage The ICD and DSM classification schemes have achieved widespread acceptance in psychiatry. A survey of 205 psychiatrists, from 66 countries across all continents, found that ICD-10 was more frequently used and more valued in clinical practice and training, while the DSM-IV was more frequently used in clinical practice in the United States and Canada, and was more valued for research, with accessibility to either being limited, and usage by other mental health professionals, policy makers, patients and families less clear. . A primary care (e.g. general or family physician) version of the mental disorder section of ICD-10 has been developed (ICD-10-PHC) which has also been used quite extensively internationally. A survey of journal articles indexed in various biomedical databases between 1980 and 2005 indicated that 15,743 referred to the DSM and 3,106 to the ICD. In Japan, most university hospitals use either the ICD or DSM. ICD appears to be the somewhat more used for research or academic purposes, while both were used equally for clinical purposes. Other traditional psychiatric schemes may also be used. Types of classification schemes Categorical schemes The classification schemes in common usage are based on separate (but may be overlapping) categories of disorder schemes sometimes termed "neo-Kraepelinian" (after the psychiatrist Kraepelin) which is intended to be atheoretical with regard to etiology (causation). These classification schemes have achieved some widespread acceptance in psychiatry and other fields, and have generally been found to have improved inter-rater reliability, although routine clinical usage is less clear. Questions of validity and utility have been raised, both scientifically and in terms of social, economic and political factors—notably over the inclusion of certain controversial categories, the influence of the pharmaceutical industry, or the stigmatizing effect of being categorized or labelled. Non-categorical schemes Some approaches to classification do not use categories with single cut-offs separating the ill from the healthy or the abnormal from the normal (a practice sometimes termed "threshold psychiatry" or "dichotomous classification"). Classification may instead be based on broader underlying "spectra", where each spectrum links together a range of related categorical diagnoses and nonthreshold symptom patterns. Some approaches go further and propose continuously varying dimensions that are not grouped into spectra or categories; each individual simply has a profile of scores across different dimensions. 
DSM-5 planning committees are currently seeking to establish a research basis for a hybrid dimensional classification of personality disorders. However, the problem with entirely dimensional classifications is they are said to be of limited practical value in clinical practice where yes/no decisions often need to be made, for example whether a person requires treatment, and moreover the rest of medicine is firmly committed to categories, which are assumed to reflect discrete disease entities. While the Psychodynamic Diagnostic Manual has an emphasis on dimensionality and the context of mental problems, it has been structured largely as an adjunct to the categories of the DSM. Moreover, dimensionality approach was criticized for its reliance on independent dimensions whereas all systems of behavioral regulations show strong inter-dependence, feedback and contingent relationships Descriptive vs Somatic Descriptive classifications are based almost exclusively on either descriptions of behavior as reported by various observers, such as parents, teachers, and medical personnel; or symptoms as reported by individuals themselves. As such, they are quite subjective, not amenable to verification by third parties, and not readily transferable across chronologic and/or cultural barriers. Somatic nosology, on the other hand, is based almost exclusively on the objective histologic and chemical abnormalities which are characteristic of various diseases and can be identified by appropriately trained pathologists. While not all pathologists will agree in all cases, the degree of uniformity allowed is orders of magnitude greater than that enabled by the constantly changing classification embraced by the DSM system. Some models, like Functional Ensemble of Temperament suggest to unify nosology of somatic, biologically based individual differences in healthy people (temperament) and their deviations in a form of mental disorders in one taxonomy. Cultural differences Classification schemes may not apply to all cultures. The DSM is based on predominantly American research studies and has been said to have a decidedly American outlook, meaning that differing disorders or concepts of illness from other cultures (including personalistic rather than naturalistic explanations) may be neglected or misrepresented, while Western cultural phenomena may be taken as universal. Culture-bound syndromes are those hypothesized to be specific to certain cultures (typically taken to mean non-Western or non-mainstream cultures); while some are listed in an appendix of the DSM-IV they are not detailed and there remain open questions about the relationship between Western and non-Western diagnostic categories and sociocultural factors, which are addressed from different directions by, for example, cross-cultural psychiatry or anthropology. Historical development Antiquity In Ancient Greece, Hippocrates and his followers are generally credited with the first classification system for mental illnesses, including mania, melancholia, paranoia, phobias and Scythian disease (transvestism). They held that they were due to different kinds of imbalance in four humors. Middle ages to Renaissance The Persian physicians 'Ali ibn al-'Abbas al-Majusi and Najib ad-Din Samarqandi elaborated upon Hippocrates' system of classification. Avicenna (980−1037 CE) in the Canon of Medicine listed a number of mental disorders, including "passive male homosexuality". Laws generally distinguished between "idiots" and "lunatics". 
Thomas Sydenham (1624–1689), the "English Hippocrates", emphasized careful clinical observation and diagnosis and developed the concept of a syndrome, a group of associated symptoms having a common course, which would later influence psychiatric classification. 18th century Evolution in the scientific concepts of psychopathology (literally referring to diseases of the mind) took hold in the late 18th and 19th centuries following the Renaissance and Enlightenment. Individual behaviors that had long been recognized came to be grouped into syndromes. Boissier de Sauvages developed an extremely extensive psychiatric classification in the mid-18th century, influenced by the medical nosology of Thomas Sydenham and the biological taxonomy of Carl Linnaeus. It was only part of his classification of 2400 medical diseases. These were divided into 10 "classes", one of which comprised the bulk of the mental diseases, divided into four "orders" and 23 "genera". One genus, melancholia, was subdivided into 14 "species". William Cullen advanced an influential medical nosology which included four classes of neuroses: coma, adynamias, spasms, and vesanias. The vesanias included amentia, melancholia, mania, and oneirodynia. Towards the end of the 18th century and into the 19th, Pinel, influenced by Cullen's scheme, developed his own, again employing the terminology of genera and species. His simplified revision of this reduced all mental illnesses to four basic types. He argued that mental disorders are not separate entities but stem from a single disease that he called "mental alienation". Attempts were made to merge the ancient concept of delirium with that of insanity, the latter sometimes described as delirium without fever. On the other hand, Pinel had started a trend for diagnosing forms of insanity 'without delirium' (meaning hallucinations or delusions) – a concept of partial insanity. Attempts were made to distinguish this from total insanity by criteria such as intensity, content or generalization of delusions. 19th century Pinel's successor, Esquirol, extended Pinel's categories to five. Both made a clear distinction between insanity (including mania and dementia) as opposed to mental retardation (including idiocy and imbecility). Esquirol developed a concept of monomania—a periodic delusional fixation or undesirable disposition on one theme—that became a broad and common diagnosis and a part of popular culture for much of the 19th century. The diagnosis of "moral insanity" coined by James Prichard also became popular; those with the condition did not seem delusional or intellectually impaired but seemed to have disordered emotions or behavior. The botanical taxonomic approach was abandoned in the 19th century, in favor of an anatomical-clinical approach that became increasingly descriptive. There was a focus on identifying the particular psychological faculty involved in particular forms of insanity, including through phrenology, although some argued for a more central "unitary" cause. French and German psychiatric nosology was in the ascendency. The term "psychiatry" ("Psychiatrie") was coined by German physician Johann Christian Reil in 1808, from the Greek "ψυχή" (psychē: "soul or mind") and "ιατρός" (iatros: "healer or doctor"). The term "alienation" took on a psychiatric meaning in France, later adopted into medical English. The terms psychosis and neurosis came into use, the former viewed psychologically and the latter neurologically. 
In the second half of the century, Karl Kahlbaum and Ewald Hecker developed a descriptive categorizion of syndromes, employing terms such as dysthymia, cyclothymia, catatonia, paranoia and hebephrenia. Wilhelm Griesinger (1817–1869) advanced a unitary scheme based on a concept of brain pathology. French psychiatrists Jules Baillarger described "folie à double forme" and Jean-Pierre Falret described "la folie circulaire"—alternating mania and depression. The concept of adolescent insanity or developmental insanity was advanced by Scottish Asylum Superintendent and Lecturer in Mental Diseases Thomas Clouston in 1873, describing a psychotic condition which generally impacts those aged 18–24 years, particularly males, and in 30% of cases proceeded to "a secondary dementia". The concept of hysteria (wandering womb) had long been used, perhaps since ancient Egyptian times, and was later adopted by Freud. Descriptions of a specific syndrome now known as somatization disorder were first developed by the French physician, Paul Briquet in 1859. An American physician, Beard, described "neurasthenia" in 1869. German neurologist Westphal, coined the term "obsessional neurosis" now termed obsessive-compulsive disorder, and agoraphobia. Alienists created a whole new series of diagnoses that highlighted single, impulsive behavior, such as kleptomania, dipsomania, pyromania, and nymphomania. The diagnosis of drapetomania was also developed in the Southern United States to explain the perceived irrationality of black slaves trying to escape what was thought to be a suitable role. The scientific study of homosexuality began in the 19th century, informally viewed either as natural or as a disorder. Kraepelin included it as a disorder in his Compendium der Psychiatrie that he published in successive editions from 1883. In the late 19th century, Koch referred to "psychopathic inferiority" as a new term for moral insanity. In the 20th century the term became known as "psychopathy" or "sociopathy", related specifically to antisocial behavior. Related studies led to the DSM-III category of antisocial personality disorder. 20th century Influenced by the approach of Kahlbaum and others, and developing his concepts in publications spanning the turn of the century, German psychiatrist Emil Kraepelin advanced a new system. He grouped together a number of existing diagnoses that appeared to all have a deteriorating course over time—such as catatonia, hebephrenia and dementia paranoides—under another existing term "dementia praecox" (meaning "early senility", later renamed schizophrenia). Another set of diagnoses that appeared to have a periodic course and better outcome were grouped together under the category of manic-depressive insanity (mood disorder). He also proposed a third category of psychosis, called paranoia, involving delusions but not the more general deficits and poor course attributed to dementia praecox. In all he proposed 15 categories, also including psychogenic neurosis, psychopathic personality, and syndromes of defective mental development (mental retardation). He eventually included homosexuality in the category of "mental conditions of constitutional origin". The neuroses were later split into anxiety disorders and other disorders. Freud wrote extensively on hysteria and also coined the term, "anxiety neurosis", which appeared in DSM-I and DSM-II. Checklist criteria for this led to studies that were to define panic disorder for DSM-III. 
Early 20th century schemes in Europe and the United States reflected a brain disease (or degeneration) model that had emerged during the 19th century, as well as some ideas from Darwin's theory of evolution and/or Freud's psychoanalytic theories. Psychoanalytic theory did not rest on classification of distinct disorders, but pursued analyses of unconscious conflicts and their manifestations within an individual's life. It dealt with neurosis, psychosis, and perversion. The concept of borderline personality disorder and other personality disorder diagnoses were later formalized from such psychoanalytic theories, though such ego psychology-based lines of development diverged substantially from the paths taken elsewhere within psychoanalysis. The philosopher and psychiatrist Karl Jaspers made influential use of a "biographical method" and suggested ways to diagnose based on the form rather than content of beliefs or perceptions. In regard to classification in general he prophetically remarked that: "When we design a diagnostic schema, we can only do so if we forego something at the outset … and in the face of facts we have to draw the line where none exists... A classification therefore has only provisional value. It is a fiction which will discharge its function if it proves to be the most apt for the time". Adolph Meyer advanced a mixed biosocial scheme that emphasized the reactions and adaptations of the whole organism to life experiences. In 1945, William C. Menninger advanced a classification scheme for the US army, called Medical 203, synthesizing ideas of the time into five major groups. This system was adopted by the Veterans Administration in the United States and strongly influenced the DSM. The term stress, having emerged from endocrinology work in the 1930s, was popularized with an increasingly broad biopsychosocial meaning, and was increasingly linked to mental disorders. The diagnosis of post-traumatic stress disorder was later created. Mental disorders were first included in the sixth revision of the International Classification of Diseases (ICD-6) in 1949. Three years later, in 1952, the American Psychiatric Association created its own classification system, DSM-I. The Feighner Criteria group described fourteen major psychiatric disorders for which careful research studies were available, including homosexuality. These developed as the Research Diagnostic Criteria, adopted and further developed by the DSM-III. The DSM and ICD developed, partly in sync, in the context of mainstream psychiatric research and theory. Debates continued and developed about the definition of mental illness, the medical model, categorical vs dimensional approaches, and whether and how to include suffering and impairment criteria. There is some attempt to construct novel schemes, for example from an attachment perspective where patterns of symptoms are construed as evidence of specific patterns of disrupted attachment, coupled with specific types of subsequent trauma. 21st century The ICD-11 and DSM-5 are being developed at the start of the 21st century. Any radical new developments in classification are said to be more likely to be introduced by the APA than by the WHO, mainly because the former only has to persuade its own board of trustees whereas the latter has to persuade the representatives of over 200 countries at a formal revision conference. 
In addition, while the DSM is a bestselling publication that makes huge profits for APA, the WHO incurs major expense in determining international consensus for revisions to the ICD. Although there is an ongoing attempt to reduce trivial or accidental differences between the DSM and ICD, it is thought that the APA and the WHO are likely to continue to produce new versions of their manuals and, in some respects, to compete with one another. Criticism There is some ongoing scientific doubt concerning the construct validity and reliability of psychiatric diagnostic categories and criteria even though they have been increasingly standardized to improve inter-rater agreement in controlled research. In the United States, there have been calls and endorsements for a congressional hearing to explore the nature and extent of harm potentially caused by this "minimally investigated enterprise". Other specific criticisms of the current schemes include: attempts to demonstrate natural boundaries between related syndromes, or between a common syndrome and normality, have failed; inappropriateness of statistical (factor-analytic) arguments and lack of functionality considerations in the analysis of a structure of behavioral pathology; the disorders of current classification are probably surface phenomena that can have many different interacting causes, yet "the mere fact that a diagnostic concept is listed in an official nomenclature and provided with a precise operational definition tends to encourage us to assume that it is a "quasi-disease entity" that can be invoked to explain the patient's symptoms"; and that the diagnostic manuals have led to an unintended decline in careful evaluation of each individual person's experiences and social context. Psychodynamic schemes have traditionally given the latter phenomenological aspect more consideration, but in psychoanalytic terms that have been long criticized on numerous grounds. Some have argued that reliance on operational definition demands that intuitive concepts, such as depression, need to be operationally defined before they become amenable to scientific investigation. However, John Stuart Mill pointed out the dangers of believing that anything that could be given a name must refer to a thing and Stephen Jay Gould and others have criticized psychologists for doing just that. One critic states that "Instead of replacing 'metaphysical' terms such as 'desire' and 'purpose', they used it to legitimize them by giving them operational definitions. Thus in psychology, as in economics, the initial, quite radical operationalist ideas eventually came to serve as little more than a 'reassurance fetish' (Koch 1992, 275) for mainstream methodological practice." According to Tadafumi Kato, since the era of Kraepelin, psychiatrists have been trying to differentiate mental disorders by using clinical interviews. Kato argues there has been little progress over the last century and that only modest improvements are possible in this way; he suggests that only neurobiological studies using modern technology could form the basis for a new classification. According to Heinz Katsching, expert committees have combined phenomenological criteria in variable ways into categories of mental disorders, repeatedly defined and redefined over the last half century. The diagnostic categories are termed "disorders" and yet, despite not being validated by biological criteria as most medical diseases are, are framed as medical diseases identified by medical diagnoses. 
He describes them as top-down classification systems similar to the botanic classifications of plants in the 17th and 18th centuries, when experts decided a priori which visible aspects of plants were relevant. Katsching notes that while psychopathological phenomena are certainly observed and experienced, the conceptual basis of psychiatric diagnostic categories is questioned from various ideological perspectives. Psychiatrist Joel Paris argues that psychiatry is sometimes susceptible to diagnostic fads. Some have been based on theory (overdiagnosis of schizophrenia), some based on etiological (causation) concepts (overdiagnosis of post-traumatic stress disorder), and some based on the development of treatments. Paris points out that psychiatrists like to diagnose conditions they can treat, and gives examples of what he sees as prescribing patterns paralleling diagnostic trends, for example an increase in bipolar diagnosis once lithium came into use, and similar scenarios with the use of electroconvulsive therapy, neuroleptics, tricyclic antidepressants, and SSRIs. He notes that there was a time when every patient seemed to have "latent schizophrenia" and another time when everything in psychiatry seemed to be "masked depression", and he fears that the boundaries of the bipolar spectrum concept, including in application to children, are similarly expanding. Allen Frances has suggested fad diagnostic trends regarding autism and Attention deficit hyperactivity disorder. Since the 1980s, psychologist Paula Caplan has had concerns about psychiatric diagnosis, and people being arbitrarily "slapped with a psychiatric label". Caplan says psychiatric diagnosis is unregulated, so doctors are not required to spend much time understanding patients situations or to seek another doctor's opinion. The criteria for allocating psychiatric labels are contained in the Diagnostic and Statistical Manual of Mental Disorders, which can "lead a therapist to focus on narrow checklists of symptoms, with little consideration for what is causing the patient's suffering". So, according to Caplan, getting a psychiatric diagnosis and label often hinders recovery. The DSM and ICD approach remains under attack both because of the implied causality model and because some researchers believe it better to aim at underlying brain differences which can precede symptoms by many years. See also Abnormal psychology Diagnosis Diagnostic classification and rating scales used in psychiatry Medical classification DSM-IV codes Structured Clinical Interview for DSM-IV (SCID) Nosology Operationalism Psychopathology Relational disorder (proposed DSM-5 new diagnosis) References External links Dalal PK, Sivakumar T. (2009) Moving towards ICD-11 and DSM-V: Concept and evolution of psychiatric classification. Indian Journal of Psychiatry, Volume 51, Issue 4, Page 310–319. Classification of mental disorders Mental disorders
0.78015
0.993848
0.77535
Sublimation (psychology)
In psychology, sublimation is a mature type of defense mechanism, in which socially unacceptable impulses or idealizations are transformed into socially acceptable actions or behavior, possibly resulting in a long-term conversion of the initial impulse. Sigmund Freud believed that sublimation was a sign of maturity and civilization, allowing people to function normally in culturally acceptable ways. He defined sublimation as the process of deflecting sexual instincts into acts of higher social valuation, being "an especially conspicuous feature of cultural development; it is what makes it possible for higher psychical activities, scientific, artistic or ideological, to play such an 'important' part in civilized life." Wade and Travis present a similar view, stating that sublimation occurs when displacement "serves a higher cultural or socially useful purpose, as in the creation of art or inventions." Nietzsche In the opening section of Human, All Too Human entitled "Of first and last things", Nietzsche wrote: There is, strictly speaking, neither unselfish conduct, nor a wholly disinterested point of view. Both are simply sublimations in which the basic element seems almost evaporated and betrays its presence only to the keenest observation. All that we need and that could possibly be given us in the present state of development of the sciences, is a chemistry of the moral, religious, aesthetic conceptions and feeling, as well as of those emotions which we experience in the affairs, great and small, of society and civilization, and which we are sensible of even in solitude. But what if this chemistry established the fact that, even in its domain, the most magnificent results were attained with the basest and most despised ingredients? Would many feel disposed to continue such investigations? Mankind loves to put by the questions of its origin and beginning: must one not be almost inhuman in order to follow the opposite course? Freud and psychoanalytic theory In Freud's psychoanalytical theory, erotic energy is allowed a limited amount of expression, owing to the constraints of human society and civilization itself. It therefore requires other outlets, especially if an individual is to remain psychologically balanced. The ego must act as a mediator between the moral norms of the super-ego, the realistic expectations of reality, and the drives and impulses of the id. One method by which the ego lessens the stress that unacceptably strong urges or emotions can cause is through sublimation. Sublimation is the process of transforming libido into "socially useful" achievements, including artistic, cultural, and intellectual pursuits. Freud considered this psychical operation to be fairly salutary compared to the others that he identified, such as repression, displacement, denial, reaction formation, intellectualisation, and projection. In The Ego and the Mechanisms of Defence (1936), his daughter, Anna, classed sublimation as one of the major 'defence mechanisms' of the psyche. Freud got the idea of sublimation while reading The Harz Journey by Heinrich Heine. The story is about Johann Friedrich Dieffenbach who cut off the tails of dogs he encountered in childhood and later became a surgeon. Freud concluded that sublimation could be a conflict between the need for satisfaction and the need for security without perturbation of awareness. 
In an action performed many times throughout one's life, which firstly appears sadistic, thought is ultimately refined into an activity which is of benefit to mankind. Sexual sublimation Sexual sublimation was according to Freud a deflection of sexual instincts into non-sexual activity, based upon a principle akin to the conservation of energy in physics. There is a finite amount of activity, and it is converted, in a mechanistic fashion like a mechanical engine, from sexual activity to non-sexual. One such example is the case of Wolf Man, a case in which a young boy's sexual attraction to his father was redirected toward Christianity and eventually led the boy to obsessional neurosis in the form of uncontrollable sacrilegious reverence. Freud travelled to Clark University to speak about instances of sexual sublimation, but he was not wholly convinced of his own theories. 20th century psychological thought by the likes of Melanie Klein has largely relegated the idea and replaced it with subtler ideas. One such idea is that the sexual desires are not made totally non-sexual, but rather transformed into a more appropriate desire. Although superficially valid, with anecdotal examples from non-psychologists of civilizations at large and specific great achievers repressing sexual urges (e.g. Renoir "painting with his cock", Wayland Young stating that "love's loss is empire's gain", Lawrence Stone's view that Western civilization has achieved so much because of sublimation, and the claims by biographers of many people from Higgins on Rider Haggard to Sinclair on George Grey), it is ill-defined and comes with the caveats that it rarely happens in practice, that many things attributed to it are actually the results of something else, and that it is most definitely not some quasi-physical transfer of some sort of "sexual energy" in the modern psychoanalytical view but rather an internal thought process. Jung C. G. Jung argued that Freud's opinion: ...can only be based on the totally erroneous supposition that the unconscious is a monster. It is a view that springs from fear of nature and the realities of life. Freud invented the idea of sublimation to save us from the imaginary claws of the unconscious. But what is real, what actually exists, cannot be alchemically sublimated, and if anything is apparently sublimated it never was what a false interpretation took it to be. In the same article, Jung went on to suggest that unconscious processes became dangerous only to the extent that people repress them. The more people come to assimilate and recognize the unconscious, the less of a danger it becomes. In this view sublimation requires not repression of drives through will, but acknowledgement of the creativity of unconscious processes and a learning of how to work with them. This differs fundamentally from Freud's view of the concept. For Freud, sublimation helped explain the plasticity of the sexual instincts (and their convertibility to non-sexual ends) - see libido. The concept also underpinned Freud's psychoanalytical theories, which showed the human psyche at the mercy of conflicting impulses (such as the super-ego and the id). In his private letters, Jung criticized Freud for obscuring the alchemical origins of sublimation and for attempting instead to make the concept appear scientifically credible: Sublimation is part of the royal art where the true gold is made. Of this Freud knows nothing; worse still, he barricades all the paths that could lead to true sublimation. 
This is just about the opposite of what Freud understands by sublimation. It is not a voluntary and forcible channeling of instinct into a spurious field of application, but an alchymical transformation for which fire and prima materia are needed. Sublimation is a great mystery. Freud has appropriated this concept and usurped it for the sphere of the will and the bourgeois, rationalistic ethos. Lacan Das Ding The French psychoanalyst Jacques Lacan's exposition of sublimation is framed within a discussion about the relationship of psychoanalysis and ethics within the seventh book of his seminars. Lacanian sublimation is defined with reference to the concept Das Ding (later in his career Lacan termed this objet petit a); Das Ding is German for "the thing" though Lacan conceives it as an abstract notion and one of the defining characteristics of the human condition. Broadly speaking it is the vacuum one experiences as a human being and which one endeavours to fill with differing human relationships, objects and experiences, all of which are used to plug a gap in one's psychical needs. Unfortunately, all attempts to overcome the vacuity of Das Ding are insufficient in wholly satisfying the individual. For this reason, Lacan also considers Das Ding to be a non-Thing or vacuole. Lacan considers Das Ding a lost object ever in the process of being recuperated by Man. Temporarily the individual will be duped by his or her own psyche into believing that this object, this person or this circumstance can be relied upon to satisfy his needs in a stable and enduring manner when in fact it is in its nature that the object as such is lost—and will never be found again. Something is there while one waits for something better, or worse, but which one wants, and again Das Ding "is to be found at most as something missed. One doesn't find it, but only its pleasurable associations." Human life unravels as a series of detours in the quest for the lost object or the absolute Other of the individual: "The pleasure principle governs the search for the object and imposes detours which maintain the distance to Das Ding in relation to its end." Lacanian sublimation Lacanian sublimation centres to a large part on the notion of Das Ding. His general formula for sublimation is that "it raises an object ... to the dignity of The Thing." Lacan considers these objects (whether human, aesthetic, credal, or philosophical) to be signifiers which are representative of Das Ding and that "the function of the pleasure principle is, in effect, to lead the subject from signifier to signifier, by generating as many signifiers as are required to maintain at as low a level as possible the tension that regulates the whole functioning of the psychic apparatus." Furthermore, man is the "artisan of his support system", in other words, he creates or finds the signifiers which delude him into believing he has overcome the emptiness of Das Ding. Lacan also considers sublimation to be a process of creation ex nihilo (creating out of nothing), whereby an object, human or manufactured, comes to be defined in relation to the emptiness of Das Ding. Lacan's prime example of this is the courtly love of the troubadours and Minnesänger who dedicated their poetic verse to a love-object which was not only unreachable (and therefore experienced as something missing) but whose existence and desirability also centered around a hole (the vagina). For Lacan such courtly love was "a paradigm of sublimation." 
He affirms that the word 'troubadour' is etymologically linked to the Provençal verb trobar (like the French trouver), "to find". If we consider again the definition of Das Ding, it is dependent precisely on the expectation of the subject to re-find the lost object in the mistaken belief that it will continue to satisfy him (or her). Lacan maintains that creation ex nihilo operates in other noteworthy fields as well. In pottery, for example, vases are created around an empty space. They are primitive and even primordial artifacts which have benefited mankind not only in the capacity of utensils but also as metaphors of (cosmic) creation ex nihilo. Lacan cites Heidegger, who situates the vase between the earthly (raising clay from the ground) and the ethereal (pointing upwards to receive). In architecture, Lacan asserts, buildings are designed around an empty space, and in art paintings proceed from an empty canvas and often depict empty spaces through perspective. In myth, Pan pursues the nymph Syrinx, who is transformed into hollow reeds in order to avoid the clutches of the god, who subsequently cuts the reeds down in anger and transforms them into what we today call panpipes (both reeds and panpipes rely on their hollowness for the production of sound). Lacan briefly remarks that religion and science are also based around emptiness. In regard to religion, Lacan refers the reader to Freud, stating that much obsessional religious behavior can be attributed to the avoidance of the primordial emptiness of Das Ding or to the respecting of it. As for the discourse of science, this is based on the notion of Verwerfung (the German word for "dismissal"), which results in the dismissing, foreclosing or exclusion of the notion of Das Ding, presumably because it defies empirical categorisation.
Empirical research
Kim, Zeppenfeld, and Cohen studied sublimation by empirical methods. These investigators view their research, published in 2013 in the Journal of Personality and Social Psychology, as providing "possibly the first experimental evidence for sublimation and [suggesting] a cultural psychological approach to defense mechanisms."
Religious and spiritual views
As espoused in its foundational text, the Tanya, the Chabad Lubavitcher sect of Judaism views sublimation of the animal soul as an essential task in life, wherein the goal is to transform animalistic and earthy cravings for physical pleasure into holy desires to connect with God. Different schools of thought describe general sexual urges as carriers of spiritual essence, and give them the varied names of vital energy, vital winds (prana), spiritual energy, ojas, shakti, tummo, or kundalini.
In fiction
One of the best-known examples in Western literature is in Thomas Mann's novella Death in Venice, where the protagonist Gustav von Aschenbach, a famous writer, sublimates his desire for an adolescent boy into writing poetry. In The Diamond Age by Neal Stephenson, sublimation is presented as the source of the Neo-Victorians' dominance: "...it was precisely their emotional repression that made the Victorians the richest and most powerful people in the world. Their ability to submerge their feelings, far from pathological, was rather a kind of mystical art that gave them nearly magical power over Nature and over the more intuitive tribes. Such was also the strength of the Nipponese."
Theory of mind
In psychology, theory of mind refers to the capacity to understand other people by ascribing mental states to them. A theory of mind includes the understanding that others' beliefs, desires, intentions, emotions, and thoughts may be different from one's own. Possessing a functional theory of mind is crucial for success in everyday human social interactions. People utilize a theory of mind when analyzing, judging, and inferring others' behaviors. The discovery and development of theory of mind primarily came from studies done with animals and infants. Factors including drug and alcohol consumption, language development, cognitive delays, age, and culture can affect a person's capacity to display theory of mind. Having a theory of mind is similar to but not identical with having the capacity for empathy or sympathy. It has been proposed that deficits in theory of mind may occur in people with autism, anorexia nervosa, schizophrenia, dysphoria, cocaine addiction, and brain damage caused by alcohol's neurotoxicity. Neuroimaging shows that the medial prefrontal cortex (mPFC), the posterior superior temporal sulcus (pSTS), the precuneus, and the amygdala are associated with theory of mind tasks. Patients with frontal lobe or temporoparietal junction lesions find some theory of mind tasks difficult. One's theory of mind develops in childhood as the prefrontal cortex develops. It has been argued that children in a culture of collectivism develop knowledge access earlier and understand diverse beliefs later than Western children in a culture of individualism. Definition The "theory of mind" is described as a theory, because the behavior of the other person, such as their statements and expressions, is the only thing being directly observed; no one has direct access to the mind of another, and the existence and nature of the mind must be inferred. It is typically assumed others have minds analogous to one's own; this assumption is based on three reciprocal social interactions, as observed in joint attention, the functional use of language, and the understanding of others' emotions and actions. Theory of mind allows one to attribute thoughts, desires, and intentions to others, to predict or explain their actions, and to posit their intentions. It enables one to understand that mental states can be the cause of—and can be used to explain and predict—the behavior of others. Being able to attribute mental states to others and understanding them as causes of behavior implies, in part, one must be able to conceive of the mind as a "generator of representations". If a person does not have a mature theory of mind, it may be a sign of cognitive or developmental impairment. Theory of mind appears to be an innate potential ability in humans that requires social and other experience over many years for its full development. Different people may develop more or less effective theories of mind. Neo-Piagetian theories of cognitive development maintain that theory of mind is a byproduct of a broader hypercognitive ability of the human mind to register, monitor, and represent its own functioning. Empathy—the recognition and understanding of the states of mind of others, including their beliefs, desires, and particularly emotions—is a related concept. Empathy is often characterized as the ability to "put oneself into another's shoes". Recent neuro-ethological studies of animal behavior suggest that rodents may exhibit empathetic abilities. 
While empathy is known as emotional perspective-taking, theory of mind is defined as cognitive perspective-taking. Research on theory of mind, in humans and animals, adults and children, normally and atypically developing, has grown rapidly in the years since Premack and Guy Woodruff's 1978 paper, "Does the chimpanzee have a theory of mind?". The field of social neuroscience has also begun to address this debate by imaging the brains of humans while they perform tasks that require the understanding of an intention, belief, or other mental state in others. An alternative account of theory of mind is given in operant psychology and provides empirical evidence for a functional account of both perspective-taking and empathy. The most developed operant approach is founded on research on derived relational responding and is subsumed within relational frame theory. Derived relational responding relies on the ability to identify derived relations, or relationships between stimuli that are not directly learned or reinforced; for example, if "snake" is related to "danger" and "danger" is related to "fear", people may know to fear snakes even without learning an explicit connection between snakes and fear. According to this view, empathy and perspective-taking comprise a complex set of derived relational abilities based on learning to discriminate and respond verbally to ever more complex relations between self, others, place, and time, and through established relations. Philosophical and psychological roots Discussions of theory of mind have their roots in philosophical debate from the time of René Descartes' Second Meditation, which set the foundations for considering the science of the mind. Two contrasting approaches in the philosophical literature to theory of mind are theory-theory and simulation theory. The theory-theorist posits a veritable theory—"folk psychology"—that people use to reason about others' minds. Such a theory is developed automatically and innately, by concepts and rules we have for ourselves, though it is instantiated through social interactions. It is also closely related to person perception and attribution theory from social psychology. It is common and intuitive to assume that others are minded. People anthropomorphize non-human animals, inanimate objects, and even natural phenomena. Daniel Dennett referred to this tendency as taking an "intentional stance" toward things: we assume they have intentions, to help predict their future behavior. However, there is an important distinction between taking an "intentional stance" toward something and entering a "shared world" with it. The intentional stance is a functional relationship, describing the use of a theory due to its practical utility, rather than the accuracy of its representation of the world. As such, it is something people resort to during interpersonal interactions. A shared world is directly perceived and its existence structures reality itself for the perceiver. It is not just a lens, through which the perceiver views the world; it in many ways constitutes the cognition, as both its object and the blueprint used to structure perception into understanding. The philosophical roots of another perspective, the relational frame theory (RFT) account of theory of mind, arise from contextual psychology, which refers to the study of organisms (both human and non-human) interacting in and with a historical and current situational context. 
It is an approach based on contextualism, a philosophy in which any event is interpreted as an ongoing act inseparable from its current and historical context and in which a radically functional approach to truth and meaning is adopted. As a variant of contextualism, RFT focuses on the construction of practical, scientific knowledge. This scientific form of contextual psychology is virtually synonymous with the philosophy of operant psychology. Development The study of which animals are capable of attributing knowledge and mental states to others, as well as the development of this ability in human ontogeny and phylogeny, identifies several behavioral precursors to theory of mind. Understanding attention, understanding of others' intentions, and imitative experience with others are hallmarks of a theory of mind that may be observed early in the development of what later becomes a full-fledged theory. Simon Baron-Cohen proposed that infants' understanding of attention in others acts as a critical precursor to the development of theory of mind. Understanding attention involves understanding that seeing can be directed selectively as attention, that the looker assesses the seen object as "of interest", and that seeing can induce beliefs. A possible illustration of theory of mind in infants is joint attention. Joint attention refers to when two people look at and attend to the same thing. Parents often use the act of pointing to prompt infants to engage in joint attention; understanding this prompt requires that infants take into account another person's mental state and understand that the person notices an object or finds it of interest. Baron-Cohen speculates that the inclination to spontaneously reference an object in the world as of interest, via pointing, ("Proto declarative pointing") and to likewise appreciate the directed attention of another, may be the underlying motive behind all human communication. Understanding others' intentions is another critical precursor to understanding other minds because intentionality is a fundamental feature of mental states and events. The "intentional stance" was defined by Daniel Dennett as an understanding that others' actions are goal-directed and arise from particular beliefs or desires. Both two and three-year-old children could discriminate when an experimenter intentionally or accidentally marked a box with stickers. Even earlier in development, Andrew N. Meltzoff found that 18-month-old infants could perform target tasks involving the manipulation of objects that adult experimenters attempted and failed, suggesting the infants could represent the object-manipulating behavior of adults as involving goals and intentions. While attribution of intention and knowledge is investigated in young humans and nonhuman animals to detect precursors to a theory of mind, Gagliardi et al. have pointed out that even adult humans do not always act in a way consistent with an attributional perspective (i.e., based on attribution of knowledge to others). In their experiment, adult human subjects attempted to choose the container baited with a small object from a selection of four containers when guided by confederates who could not see which container was baited. Research in developmental psychology suggests that an infant's ability to imitate others lies at the origins of both theory of mind and other social-cognitive achievements like perspective-taking and empathy. 
According to Meltzoff, the infant's innate understanding that others are "like me" allows them to recognize the equivalence between the physical and mental states apparent in others and those felt by the self. For example, the infant uses their own experiences, orienting their head and eyes toward an object of interest to understand the movements of others who turn toward an object; that is, they will generally attend to objects of interest or significance. Some researchers in comparative disciplines have hesitated to put too much weight on imitation as a critical precursor to advanced human social-cognitive skills like mentalizing and empathizing, especially if true imitation is no longer employed by adults. A test of imitation by Alexandra Horowitz found that adult subjects imitated an experimenter demonstrating a novel task far less closely than children did. Horowitz points out that the precise psychological state underlying imitation is unclear and cannot, by itself, be used to draw conclusions about the mental states of humans. While much research has been done on infants, theory of mind develops continuously throughout childhood and into late adolescence as the synapses in the prefrontal cortex develop. The prefrontal cortex is thought to be involved in planning and decision-making. Children seem to develop theory of mind skills sequentially. The first skill to develop is the ability to recognize that others have diverse desires. Children are able to recognize that others have diverse beliefs soon after. The next skill to develop is recognizing that others have access to different knowledge bases. Finally, children are able to understand that others may have false beliefs and that others are capable of hiding emotions. While this sequence represents the general trend in skill acquisition, it seems that more emphasis is placed on some skills in certain cultures, leading to more valued skills to develop before those that are considered not as important. For example, in individualistic cultures such as the United States, a greater emphasis is placed on the ability to recognize that others have different opinions and beliefs. In a collectivistic culture, such as China, this skill may not be as important and therefore may not develop until later. Language There is evidence that the development of theory of mind is closely intertwined with language development in humans. One meta-analysis showed a moderate to strong correlation (r = 0.43) between performance on theory of mind and language tasks. Both language and theory of mind begin to develop around the same time in children (between ages two and five), but many other abilities develop during this same time period as well, and they do not produce such high correlations with one another nor with theory of mind. Pragmatic theories of communication assume that infants must possess an understanding of beliefs and mental states of others to infer the communicative content that proficient language users intend to convey. Since spoken phrases can have different meanings depending on context, theory of mind can play a crucial role in understanding the intentions of others and inferring the meaning of words. Some empirical results suggest that even 13-month-old infants have an early capacity for communicative mind-reading that enables them to infer what relevant information is transferred between communicative partners, which implies that human language relies at least partially on theory of mind skills. Carol A. 
Miller posed further possible explanations for this relationship. Perhaps the extent of verbal communication and conversation involving children in a family could explain theory of mind development. Such language exposure could help introduce a child to the different mental states and perspectives of others. Empirical findings indicate that participation in family discussion predicts scores on theory of mind tasks, and that deaf children who have hearing parents and may not be able to communicate with their parents much during early years of development tend to score lower on theory of mind tasks. Another explanation of the relationship between language and theory of mind development has to do with a child's understanding of mental-state words such as "think" and "believe". Since a mental state is not something that one can observe from behavior, children must learn the meanings of words denoting mental states from verbal explanations alone, requiring knowledge of the syntactic rules, semantic systems, and pragmatics of a language. Studies have shown that understanding of these mental state words predicts theory of mind in four-year-olds. A third hypothesis is that the ability to distinguish a whole sentence ("Jimmy thinks the world is flat") from its embedded complement ("the world is flat") and understand that one can be true while the other can be false is related to theory of mind development. Recognizing these complements as being independent of one another is a relatively complex syntactic skill and correlates with increased scores on theory of mind tasks in children. There is also evidence that the areas of the brain responsible for language and theory of mind are closely connected. The temporoparietal junction (TPJ) is involved in the ability to acquire new vocabulary, as well as to perceive and reproduce words. The TPJ also contains areas that specialize in recognizing faces, voices, and biological motion, and in theory of mind. Since all of these areas are located so closely together, it is reasonable to suspect that they work together. Studies have reported an increase in activity in the TPJ when patients are absorbing information through reading or images regarding other peoples' beliefs but not while observing information about physical control stimuli. Theory of mind in adults Neurotypical adults have theory of mind concepts that they developed as children (concepts such as belief, desire, knowledge, and intention). They use these concepts to meet the diverse demands of social life, ranging from snap decisions about how to trick an opponent in a competitive game, to keeping up with who knows what in a fast-moving conversation, to judging the guilt or innocence of the accused in a court of law. Boaz Keysar, Dale Barr, and colleagues found that adults often failed to use their theory of mind abilities to interpret a speaker's message, and acted as if unaware that the speaker lacked critical knowledge about a task. In one study, a confederate instructed adult participants to rearrange objects, some of which were not visible to the confederate, as part of a communication game. Only objects that were visible to both the confederate and the participant were part of the game. Despite knowing that the confederate could not see some of the objects, a third of the participants still tried to move those objects. 
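As a concrete illustration of the communication game just described, the sketch below contrasts the two interpretation strategies. Everything in it is hypothetical (the object names, sizes, and instruction are invented for illustration, not taken from the study); the point is only the difference between an egocentric reading and one filtered through the speaker's perspective.

```python
# Toy model of a "director task"-style communication game (illustrative only).
# The director can see only some objects; a listener who uses theory of mind
# interprets the instruction against the objects the director can see, while an
# egocentric listener interprets it against everything the listener can see.
from dataclasses import dataclass

@dataclass
class Item:
    name: str
    size: int                  # 1 = smallest
    visible_to_director: bool  # False = hidden from the director's side

shelf = [
    Item("tiny candle", 1, visible_to_director=False),   # only the listener sees this one
    Item("small candle", 2, visible_to_director=True),
    Item("large candle", 3, visible_to_director=True),
]

def egocentric_pick(items):
    """Pick the smallest candle the listener can see, ignoring the director's view."""
    return min(items, key=lambda it: it.size)

def perspective_taking_pick(items):
    """Pick the smallest candle that the director can also see."""
    mutually_visible = [it for it in items if it.visible_to_director]
    return min(mutually_visible, key=lambda it: it.size)

# Instruction from the director: "move the small candle"
print(egocentric_pick(shelf).name)          # tiny candle  -> an error in the game
print(perspective_taking_pick(shelf).name)  # small candle -> respects the director's view
```

The adults described above who reached for hidden objects were, in effect, behaving like egocentric_pick on those trials, even though they knew which objects the confederate could not see.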
Other studies show that adults are prone to egocentric biases, with which they are influenced by their own beliefs, knowledge, or preferences when judging those of other people, or that they neglect other people's perspectives entirely. There is also evidence that adults with greater memory, inhibitory capacity, and motivation are more likely to use their theory of mind abilities. In contrast, evidence about indirect effects of thinking about other people's mental states suggests that adults may sometimes use their theory of mind automatically. Agnes Kovacs and colleagues measured the time it took adults to detect the presence of a ball as it was revealed from behind an occluder. They found that adults' speed of response was influenced by whether another person (the "agent") in the scene thought there was a ball behind the occluder, even though adults were not asked to pay attention to what the agent thought. Dana Samson and colleagues measured the time it took adults to judge the number of dots on the wall of a room. They found that adults responded more slowly when another person standing in the room happened to see fewer dots than they did, even when they had never been asked to pay attention to what the person could see. It has been questioned whether these "altercentric biases" truly reflect automatic processing of what another person is thinking or seeing or, instead, reflect attention and memory effects cued by the other person, but not involving any representation of what they think or see. Different theories seek to explain such results. If theory of mind is automatic, this would help explain how people keep up with the theory of mind demands of competitive games and fast-moving conversations. It might also explain evidence that human infants and some non-human species sometimes appear capable of theory of mind, despite their limited resources for memory and cognitive control. If theory of mind is effortful and not automatic, on the other hand, this explains why it feels effortful to decide whether a defendant is guilty or whether a negotiator is bluffing. Economy of effort would help explain why people sometimes neglect to use their theory of mind. Ian Apperly and Stephen Butterfill suggested that people have "two systems" for theory of mind, in common with "two systems" accounts in many other areas of psychology. In this account, "system 1" is cognitively efficient and enables theory of mind for a limited but useful set of circumstances. "System 2" is cognitively effortful, but enables much more flexible theory of mind abilities. Philosopher Peter Carruthers disagrees, arguing that the same core theory of mind abilities can be used in both simple and complex ways. The account has been criticized by Celia Heyes who suggests that "system 1" theory of mind abilities do not require representation of mental states of other people, and so are better thought of as "sub-mentalizing". Aging In older age, theory of mind capacities decline, irrespective of how exactly they are tested. However, the decline in other cognitive functions is even stronger, suggesting that social cognition is better preserved. In contrast to theory of mind, empathy shows no impairments in aging. There are two kinds of theory of mind representations: cognitive (concerning mental states, beliefs, thoughts, and intentions) and affective (concerning the emotions of others). Cognitive theory of mind is further separated into first order (e.g., I think she thinks that) and second order (e.g. 
he thinks that she thinks that). There is evidence that cognitive and affective theory of mind processes are functionally independent from one another. In studies of Alzheimer's disease, which typically occurs in older adults, patients display impairment with second order cognitive theory of mind, but usually not with first order cognitive or affective theory of mind. However, it is difficult to discern a clear pattern of theory of mind variation due to age. There have been many discrepancies in the data collected thus far, likely due to small sample sizes and the use of different tasks that only explore one aspect of theory of mind. Many researchers suggest that theory of mind impairment is simply due to the normal decline in cognitive function. Cultural variations Researchers propose that five key aspects of theory of mind develop sequentially for all children between the ages of three and five: diverse desires, diverse beliefs, knowledge access, false beliefs, and hidden emotions. Australian, American, and European children acquire theory of mind in this exact order, and studies with children in Canada, India, Peru, Samoa, and Thailand indicate that they all pass the false belief task at around the same time, suggesting that children develop theory of mind consistently around the world. However, children from Iran and China develop theory of mind in a slightly different order. Although they begin the development of theory of mind around the same time, toddlers from these countries understand knowledge access before Western children but take longer to understand diverse beliefs. Researchers believe this swap in the developmental order is related to the culture of collectivism in Iran and China, which emphasizes interdependence and shared knowledge as opposed to the culture of individualism in Western countries, which promotes individuality and accepts differing opinions. Because of these different cultural values, Iranian and Chinese children might take longer to understand that other people have different beliefs and opinions. This suggests that the development of theory of mind is not universal and solely determined by innate brain processes but also influenced by social and cultural factors. Historiography Theory of mind can help historians to more properly understand historical figures' characters, for example Thomas Jefferson. Emancipationists like Douglas L. Wilson and scholars at the Thomas Jefferson Foundation view Jefferson as an opponent of slavery all his life, noting Jefferson's attempts within the limited range of options available to him to undermine slavery, his many attempts at abolition legislation, the manner in which he provided for slaves, and his advocacy of their more humane treatment. This view contrasts with that of revisionists like Paul Finkelman, who criticizes Jefferson for racism, slavery, and hypocrisy. Emancipationist views on this hypocrisy recognize that if he tried to be true to his word, it would have alienated his fellow Virginians. In another example, Franklin D. Roosevelt did not join NAACP leaders in pushing for federal anti-lynching legislation, as he believed that such legislation was unlikely to pass and that his support for it would alienate Southern congressmen, including many of Roosevelt's fellow Democrats. Empirical investigation Whether children younger than three or four years old have a theory of mind is a topic of debate among researchers. 
It is a challenging question, due to the difficulty of assessing what pre-linguistic children understand about others and the world. Tasks used in research into the development of theory of mind must take into account the umwelt of the pre-verbal child. False-belief task One of the most important milestones in theory of mind development is the ability to attribute false belief: in other words, to understand that other people can believe things which are not true. To do this, it is suggested, one must understand how knowledge is formed, that people's beliefs are based on their knowledge, that mental states can differ from reality, and that people's behavior can be predicted by their mental states. Numerous versions of false-belief task have been developed, based on the initial task created by Wimmer and Perner (1983). In the most common version of the false-belief task (often called the Sally-Anne test), children are told a story about Sally and Anne. Sally has a marble, which she places into her basket, and then leaves the room. While she is out of the room, Anne takes the marble from the basket and puts it into the box. The child being tested is then asked where Sally will look for the marble once she returns. The child passes the task if she answers that Sally will look in the basket, where Sally put the marble; the child fails the task if she answers that Sally will look in the box. To pass the task, the child must be able to understand that another's mental representation of the situation is different from their own, and the child must be able to predict behavior based on that understanding. Another example depicts a boy who leaves chocolate on a shelf and then leaves the room. His mother puts it in the fridge. To pass the task, the child must understand that the boy, upon returning, holds the false belief that his chocolate is still on the shelf. The results of research using false-belief tasks have been called into question: most typically developing children are able to pass the tasks from around age four. Yet early studies asserted that 80% of children diagnosed with autism were unable to pass this test, while children with other disabilities like Down syndrome were able to. However this assertion could not be replicated by later studies. It instead was concluded that children fail these tests due to a lack of understanding of extraneous processes and a basic lack of mental processing capabilities. Adults may also struggle with false beliefs, for instance when they show hindsight bias. In one experiment, adult subjects who were asked for an independent assessment were unable to disregard information on actual outcome. Also in experiments with complicated situations, when assessing others' thinking, adults can fail to correctly disregard certain information that they have been given. Unexpected contents Other tasks have been developed to try to extend the false-belief task. In the "unexpected contents" or "smarties" task, experimenters ask children what they believe to be the contents of a box that looks as though it holds Smarties. After the child guesses "Smarties", it is shown that the box in fact contained pencils. The experimenter then re-closes the box and asks the child what she thinks another person, who has not been shown the true contents of the box, will think is inside. 
The child passes the task if he/she responds that another person will think that "Smarties" exist in the box, but fails the task if she responds that another person will think that the box contains pencils. Gopnik & Astington found that children pass this test at age four or five years. Though the use of such implicit tests has yet to reach a consensus on their validity and reproducibility of study results. Other tasks The "false-photograph" task also measures theory of mind development. In this task, children must reason about what is represented in a photograph that differs from the current state of affairs. Within the false-photograph task, either a location or identity change exists. In the location-change task, the examiner puts an object in one location (e.g. chocolate in an open green cupboard), whereupon the child takes a Polaroid photograph of the scene. While the photograph is developing, the examiner moves the object to a different location (e.g. a blue cupboard), allowing the child to view the examiner's action. The examiner asks the child two control questions: "When we first took the picture, where was the object?" and "Where is the object now?" The subject is also asked a "false-photograph" question: "Where is the object in the picture?" The child passes the task if he/she correctly identifies the location of the object in the picture and the actual location of the object at the time of the question. However, the last question might be misinterpreted as "Where in this room is the object that the picture depicts?" and therefore some examiners use an alternative phrasing. To make it easier for animals, young children, and individuals with classical autism to understand and perform theory of mind tasks, researchers have developed tests in which verbal communication is de-emphasized: some whose administration does not involve verbal communication on the part of the examiner, some whose successful completion does not require verbal communication on the part of the subject, and some that meet both of those standards. One category of tasks uses a preferential-looking paradigm, with looking time as the dependent variable. For instance, nine-month-old infants prefer looking at behaviors performed by a human hand over those made by an inanimate hand-like object. Other paradigms look at rates of imitative behavior, the ability to replicate and complete unfinished goal-directed acts, and rates of pretend play. Early precursors Research on the early precursors of theory of mind has invented ways to observe preverbal infants' understanding of other people's mental states, including perception and beliefs. Using a variety of experimental procedures, studies show that infants from their first year of life have an implicit understanding of what other people see and what they know. A popular paradigm used to study infants' theory of mind is the violation-of-expectation procedure, which exploits infants' tendency to look longer at unexpected and surprising events compared to familiar and expected events. The amount of time they look at an event gives researchers an indication of what infants might be inferring, or their implicit understanding of events. One study using this paradigm found that 16-month-olds tend to attribute beliefs to a person whose visual perception was previously witnessed as being "reliable", compared to someone whose visual perception was "unreliable". 
Specifically, 16-month-olds were trained to expect a person's excited vocalization and gaze into a container to be associated with finding a toy in the reliable-looker condition or an absence of a toy in the unreliable-looker condition. Following this training phase, infants witnessed, in an object-search task, the same persons searching for a toy either in the correct or incorrect location after they both witnessed the location of where the toy was hidden. Infants who experienced the reliable looker were surprised and therefore looked longer when the person searched for the toy in the incorrect location compared to the correct location. In contrast, the looking time for infants who experienced the unreliable looker did not differ for either search locations. These findings suggest that 16-month-old infants can differentially attribute beliefs about a toy's location based on the person's prior record of visual perception. Methodological problems With the methods used to test theory of mind, it has been experimentally shown that very simple robots that only react by reflexes and are not built to have any complex cognition at all can pass the tests for having theory of mind abilities that psychology textbooks assume to be exclusive to humans older than four or five years. Whether such a robot passes the test is influenced by completely non-cognitive factors such as placement of objects and the structure of the robot body influencing how the reflexes are conducted. It has therefore been suggested that theory of mind tests may not actually test cognitive abilities. Furthermore, early research into theory of mind in autistic children is argued to constitute epistemological violence due to implicit or explicit negative and universal conclusions about autistic individuals being drawn from empirical data that viably supports other (non-universal) conclusions. Deficits Theory of mind impairment, or mind-blindness, describes a difficulty someone would have with perspective-taking. Individuals with theory of mind impairment struggle to see phenomena from any other perspective than their own. Individuals who experience a theory of mind deficit have difficulty determining the intentions of others, lack understanding of how their behavior affects others, and have a difficult time with social reciprocity. Theory of mind deficits have been observed in people with autism spectrum disorders, schizophrenia, nonverbal learning disorder and along with people under the influence of alcohol and narcotics, sleep-deprived people, and people who are experiencing severe emotional or physical pain. Theory of mind deficits have also been observed in deaf children who are late signers (i.e. are born to hearing parents), but such a deficit is due to the delay in language learning, not any cognitive deficit, and therefore disappears once the child learns sign language. Autism In 1985 Simon Baron-Cohen, Alan M. Leslie, and Uta Frith suggested that children with autism do not employ theory of mind and that autistic children have particular difficulties with tasks requiring the child to understand another person's beliefs. These difficulties persist when children are matched for verbal skills and they have been taken as a key feature of autism. 
However, in a 2019 review, Gernsbacher and Yergeau argued that "the claim that autistic people lack a theory of mind is empirically questionable", noting numerous failed replications of classic ToM studies and the minimal-to-small meta-analytic effect sizes of those replications. Many individuals classified as autistic have severe difficulty assigning mental states to others, and some seem to lack theory of mind capabilities. Researchers who study the relationship between autism and theory of mind attempt to explain the connection in a variety of ways. One account assumes that theory of mind plays a role in the attribution of mental states to others and in childhood pretend play. According to Leslie, theory of mind is the capacity to mentally represent thoughts, beliefs, and desires, regardless of whether the circumstances involved are real. This might explain why some autistic individuals show extreme deficits in both theory of mind and pretend play. However, Hobson proposes a social-affective justification, in which deficits in theory of mind in autistic people result from a distortion in understanding and responding to emotions. He suggests that typically developing individuals, unlike autistic individuals, are born with a set of skills (such as social referencing ability) that later lets them comprehend and react to other people's feelings. Other scholars emphasize that autism involves a specific developmental delay, so that autistic children vary in their deficiencies, because they experience difficulty in different stages of growth. Very early setbacks can alter proper advancement of joint-attention behaviors, which may lead to a failure to form a full theory of mind. It has been speculated that theory of mind exists on a continuum, as opposed to the traditional view of a discrete presence or absence. While some research has suggested that some autistic populations are unable to attribute mental states to others, recent evidence points to the possibility of coping mechanisms that facilitate the attribution of mental states. A binary view of theory of mind contributes to the stigmatization of autistic adults who do possess perspective-taking capacity, as the assumption that autistic people do not have empathy can become a rationale for dehumanization. Tine et al. report that autistic children score substantially lower on measures of social theory of mind (i.e., "reasoning about others' mental states", p. 1) than children diagnosed with Asperger syndrome. Generally, children with more advanced theory of mind abilities display more advanced social skills, greater adaptability to new situations, and greater cooperation with others. As a result, these children are typically well-liked. However, "children may use their mind-reading abilities to manipulate, outwit, tease, or trick their peers." Individuals possessing inferior theory of mind skills, such as children with autism spectrum disorder, may be socially rejected by their peers since they are unable to communicate effectively. Social rejection has been shown to negatively affect a child's development and can put the child at greater risk of developing depressive symptoms. Peer-mediated interventions (PMI) are a school-based treatment approach for children and adolescents with autism spectrum disorder in which peers are trained to be role models in order to promote social behavior. Laghi et al.
studied whether analysis of prosocial (nice) and antisocial (nasty) theory-of-mind behaviors could be used, in addition to teacher recommendations, to select appropriate candidates for PMI programs. Selecting children with advanced theory-of-mind skills who use them in prosocial ways will theoretically make the program more effective. While the results indicated that analyzing the social uses of theory of mind of possible candidates for a PMI program may increase the program's efficacy, it may not be a good predictor of a candidate's performance as a role model. A 2014 Cochrane review on interventions based on theory of mind found that such a theory could be taught to individuals with autism, but reported little evidence of skill maintenance, generalization to other settings, or developmental effects on related skills. Some 21st-century studies have suggested that the results of some theory of mind tests on autistic people may be misinterpreted in light of the double empathy problem, which proposes that, rather than autistic people specifically having trouble with theory of mind, autistic and non-autistic people have equal difficulty understanding one another because of their neurological differences. Studies have shown that autistic adults perform better on theory of mind tests when paired with other autistic adults, and possibly with autistic close family members. Academics who acknowledge the double empathy problem also propose that autistic people likely understand non-autistic people to a greater degree than the reverse, owing to the necessity of functioning in a non-autistic society.
Psychopathy
Psychopathy is another condition of considerable importance when discussing theory of mind. Psychopathic individuals show impaired emotional behavior, including a lack of emotional responsiveness to others and deficient empathy, as well as impaired social behavior, yet findings on their theory of mind remain controversial: different studies provide contradictory evidence on whether theory of mind is impaired in psychopathy. Some have speculated that individuals with autism and psychopathic individuals perform similarly on theory of mind tasks. In a 2008 study, Happé's advanced test of theory of mind was presented to a group of 25 incarcerated psychopaths and 25 incarcerated non-psychopaths. Performance on the task did not differ between the two groups; the psychopaths did, however, perform significantly better than the most highly able autistic adults, suggesting that individuals with autism and psychopathic individuals are not similar in this respect. It has repeatedly been suggested that a deficient or biased grasp of others' mental states, or theory of mind, could contribute to antisocial behavior, aggression, and psychopathy. In one task, 'Reading the Mind in the Eyes', participants view photographs of a person's eye region and must attribute a mental state, or emotion, to that person. Magnetic resonance imaging studies have shown that this task produces increased activity in the dorsolateral prefrontal and the left medial frontal cortices, the superior temporal gyrus, and the left amygdala.
Although there is extensive literature suggesting amygdala dysfunction in psychopathy, both psychopathic and non-psychopathic adults performed equally well on this test, again indicating no clear theory of mind impairment in psychopathic individuals. By contrast, a systematic review and meta-analysis that gathered data from 42 studies found that psychopathic traits are associated with impaired performance on theory of mind tasks, a relationship that was not moderated by age, population, psychopathy measure (self-report versus clinical checklist), or theory of mind task type (cognitive versus affective). A 2009 study tested whether impairment in the emotional aspects of theory of mind, rather than in general theory of mind abilities, might account for some of the impaired social behavior in psychopathy. This study involved criminal offenders diagnosed with antisocial personality disorder who had high psychopathy features, participants with localized lesions in the orbitofrontal cortex, participants with non-frontal lesions, and healthy control subjects. Subjects were tested with a task that examines affective versus cognitive theory of mind. The individuals with psychopathy and those with orbitofrontal cortex lesions were both impaired on affective theory of mind, but not on cognitive theory of mind, when compared to the control group.
Schizophrenia
Individuals diagnosed with schizophrenia can show deficits in theory of mind. Mirjam Sprong and colleagues investigated the impairment by examining 29 different studies, with a total of over 1,500 participants. This meta-analysis showed a significant and stable deficit of theory of mind in people with schizophrenia. They performed poorly on false-belief tasks, which test the ability to understand that others can hold false beliefs about events in the world, and also on intention-inference tasks, which assess the ability to infer a character's intention from reading a short story. Schizophrenia patients with negative symptoms, such as lack of emotion, motivation, or speech, have the most impairment in theory of mind and are unable to represent the mental states of themselves and of others. Paranoid schizophrenic patients also perform poorly because they have difficulty accurately interpreting others' intentions. The meta-analysis additionally showed that IQ, gender, and age of the participants do not significantly affect performance on theory of mind tasks. Research suggests that impairment in theory of mind negatively affects clinical insight, that is, the patient's awareness of their mental illness. Insight requires theory of mind; a patient must be able to adopt a third-person perspective and see the self as others do. A patient with good insight can accurately self-represent, by comparing himself with others and by viewing himself from the perspective of others. Insight allows a patient to recognize and react appropriately to his symptoms. A patient who lacks insight does not realize that he has a mental illness, because of his inability to accurately self-represent. Therapies that teach patients perspective-taking and self-reflection skills can improve abilities in reading social cues and taking the perspective of another person.
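Both the psychopathy and the schizophrenia findings above rest on meta-analyses that pool effect sizes across many studies. Purely as an illustration of what such pooling involves (the effect sizes and variances below are invented, not taken from the cited reviews), a fixed-effect, inverse-variance weighted estimate can be computed like this:

```python
# Toy fixed-effect meta-analysis: inverse-variance weighting of per-study effects.
# All numbers are hypothetical and for illustration only.
import math

# (standardized effect size d, sampling variance) for a few imaginary studies
studies = [(0.45, 0.04), (0.30, 0.02), (0.55, 0.09), (0.20, 0.03)]

weights = [1.0 / var for _, var in studies]                     # inverse-variance weights
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
se = math.sqrt(1.0 / sum(weights))                              # SE of the pooled estimate
low, high = pooled - 1.96 * se, pooled + 1.96 * se              # 95% confidence interval

print(f"pooled d = {pooled:.2f}  (95% CI {low:.2f} to {high:.2f})")
```

A moderator analysis, such as the reported check that the effect did not differ by task type (cognitive versus affective), amounts to computing pooled estimates like this separately for each subgroup and testing whether they differ.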
Research indicates that theory-of-mind deficit is a stable trait-characteristic rather than a state-characteristic of schizophrenia. The meta-analysis conducted by Sprong et al. showed that patients in remission still had impairment in theory of mind. This indicates that the deficit is not merely a consequence of the active phase of schizophrenia. Schizophrenic patients' deficit in theory of mind impairs their interactions with others. Theory of mind is particularly important for parents, who must understand the thoughts and behaviors of their children and react accordingly. Dysfunctional parenting is associated with deficits in the first-order theory of mind, the ability to understand another person's thoughts, and in the second-order theory of mind, the ability to infer what one person thinks about another person's thoughts. Compared with healthy mothers, mothers with schizophrenia are found to be more remote, quiet, self-absorbed, insensitive, unresponsive, and to have fewer satisfying interactions with their children. They also tend to misinterpret their children's emotional cues, and often misunderstand neutral faces as negative. Activities such as role-playing and individual or group-based sessions are effective interventions that help the parents improve on perspective-taking and theory of mind. There is a strong association between theory of mind deficit and parental role dysfunction. Alcohol use disorders Impairments in theory of mind, as well as other social-cognitive deficits, are commonly found in people who have alcohol use disorders, due to the neurotoxic effects of alcohol on the brain, particularly the prefrontal cortex. Depression and dysphoria Individuals in a major depressive episode, a disorder characterized by social impairment, show deficits in theory of mind decoding. Theory of mind decoding is the ability to use information available in the immediate environment (e.g., facial expression, tone of voice, body posture) to accurately label the mental states of others. The opposite pattern, enhanced theory of mind, is observed in individuals vulnerable to depression, including those individuals with past major depressive disorder (MDD), dysphoric individuals, and individuals with a maternal history of MDD. Developmental language disorder Children diagnosed with developmental language disorder (DLD) exhibit much lower scores on reading and writing sections of standardized tests, yet have a normal nonverbal IQ. These language deficits can be any specific deficits in lexical semantics, syntax, or pragmatics, or a combination of multiple problems. Such children often exhibit poorer social skills than normally developing children, and seem to have problems decoding beliefs in others. A recent meta-analysis confirmed that children with DLD have substantially lower scores on theory of mind tasks compared to typically developing children. This strengthens the claim that language development is related to theory of mind. Brain mechanisms In neurotypical people Research on theory of mind in autism led to the view that mentalizing abilities are subserved by dedicated mechanisms that can—in some cases—be impaired while general cognitive function remains largely intact. Neuroimaging research supports this view, demonstrating specific brain regions are consistently engaged during theory of mind tasks. 
Positron emission tomography (PET) research on theory of mind, using verbal and pictorial story comprehension tasks, identifies a set of brain regions including the medial prefrontal cortex (mPFC), and area around posterior superior temporal sulcus (pSTS), and sometimes precuneus and amygdala/temporopolar cortex. Research on the neural basis of theory of mind has diversified, with separate lines of research focusing on the understanding of beliefs, intentions, and more complex properties of minds such as psychological traits. Studies from Rebecca Saxe's lab at MIT, using a false-belief versus false-photograph task contrast aimed at isolating the mentalizing component of the false-belief task, have consistently found activation in the mPFC, precuneus, and temporoparietal junction (TPJ), right-lateralized. In particular, Saxe et al. proposed that the right TPJ (rTPJ) is selectively involved in representing the beliefs of others. Some debate exists, as the same rTPJ region is consistently activated during spatial reorienting of visual attention; Jean Decety from the University of Chicago and Jason Mitchell from Harvard thus propose that the rTPJ subserves a more general function involved in both false-belief understanding and attentional reorienting, rather than a mechanism specialized for social cognition. However, it is possible that the observation of overlapping regions for representing beliefs and attentional reorienting may simply be due to adjacent, but distinct, neuronal populations that code for each. The resolution of typical fMRI studies may not be good enough to show that distinct/adjacent neuronal populations code for each of these processes. In a study following Decety and Mitchell, Saxe and colleagues used higher-resolution fMRI and showed that the peak of activation for attentional reorienting is approximately 6–10 mm above the peak for representing beliefs. Further corroborating that differing populations of neurons may code for each process, they found no similarity in the patterning of fMRI response across space. Using single-cell recordings in the human dorsomedial prefrontal cortex (dmPFC), researchers at MGH identified neurons that encode information about others' beliefs, which were distinct from self-beliefs, across different scenarios in a false-belief task. They further showed that these neurons could provide detailed information about others' beliefs, and could accurately predict these beliefs' verity. These findings suggest a prominent role of distinct neuronal populations in the dmPFC in theory of mind complemented by the TPJ and pSTS. Functional imaging also illuminates the detection of mental state information in animations of moving geometric shapes similar to those used in Heider and Simmel (1944), which typical humans automatically perceive as social interactions laden with intention and emotion. Three studies found remarkably similar patterns of activation during the perception of such animations versus a random or deterministic motion control: mPFC, pSTS, fusiform face area (FFA), and amygdala were selectively engaged during the theory of mind condition. Another study presented subjects with an animation of two dots moving with a parameterized degree of intentionality (quantifying the extent to which the dots chased each other), and found that pSTS activation correlated with this parameter. A separate body of research implicates the posterior superior temporal sulcus in the perception of intentionality in human action. 
This area is also involved in perceiving biological motion, including body, eye, mouth, and point-light display motion. One study found increased pSTS activation while watching a human lift his hand versus having his hand pushed up by a piston (intentional versus unintentional action). Several studies found increased pSTS activation when subjects perceive a human action that is incongruent with the action expected from the actor's context and inferred intention. Examples would be: a human performing a reach-to-grasp motion on empty space next to an object, versus grasping the object; a human shifting eye gaze toward empty space next to a checkerboard target versus shifting gaze toward the target; an unladen human turning on a light with his knee, versus turning on a light with his knee while carrying a pile of books; and a walking human pausing as he passes behind a bookshelf, versus walking at a constant speed. In these studies, actions in the "congruent" case have a straightforward goal, and are easy to explain in terms of the actor's intention. The incongruent actions, on the other hand, require further explanation (why would someone twist empty space next to a gear?), and apparently demand more processing in the STS. This region is distinct from the temporoparietal area activated during false belief tasks. pSTS activation in most of the above studies was largely right-lateralized, following the general trend in neuroimaging studies of social cognition and perception. Also right-lateralized are the TPJ activation during false belief tasks, the STS response to biological motion, and the FFA response to faces. Neuropsychological evidence supports neuroimaging results regarding the neural basis of theory of mind. Studies with patients with a lesion of the frontal lobes and the temporoparietal junction of the brain (between the temporal lobe and parietal lobe) report that they have difficulty with some theory of mind tasks. This shows that theory of mind abilities are associated with specific parts of the human brain. However, the fact that the medial prefrontal cortex and temporoparietal junction are necessary for theory of mind tasks does not imply that these regions are specific to that function. TPJ and mPFC may subserve more general functions necessary for Theory of Mind. Research by Vittorio Gallese, Luciano Fadiga, and Giacomo Rizzolatti shows that some sensorimotor neurons, referred to as mirror neurons and first discovered in the premotor cortex of rhesus monkeys, may be involved in action understanding. Single-electrode recording revealed that these neurons fired when a monkey performed an action, as well as when the monkey viewed another agent performing the same action. fMRI studies with human participants show brain regions (assumed to contain mirror neurons) that are active when one person sees another person's goal-directed action. These data led some authors to suggest that mirror neurons may provide the basis for theory of mind in the brain, and to support simulation theory of mind reading. There is also evidence against a link between mirror neurons and theory of mind. First, macaque monkeys have mirror neurons but do not seem to have a 'human-like' capacity to understand theory of mind and belief. Second, fMRI studies of theory of mind typically report activation in the mPFC, temporal poles, and TPJ or STS, but those brain areas are not part of the mirror neuron system. 
Some investigators, like developmental psychologist Andrew Meltzoff and neuroscientist Jean Decety, believe that mirror neurons merely facilitate learning through imitation and may provide a precursor to the development of theory of mind. Others, like philosopher Shaun Gallagher, suggest that mirror-neuron activation, on a number of counts, fails to meet the definition of simulation as proposed by the simulation theory of mindreading. In autism Several neuroimaging studies have looked at the neural basis for theory of mind impairment in subjects with Asperger syndrome and high-functioning autism (HFA). The first PET study of theory of mind in autism (also the first neuroimaging study using a task-induced activation paradigm in autism) replicated a prior study in neurotypical individuals, which employed a story-comprehension task. This study found displaced and diminished mPFC activation in subjects with autism. However, because the study used only six subjects with autism, and because the spatial resolution of PET imaging is relatively poor, these results should be considered preliminary. A subsequent fMRI study scanned normally developing adults and adults with HFA while performing a "reading the mind in the eyes" task: viewing a photo of a human's eyes and choosing which of two adjectives better describes the person's mental state, versus a gender discrimination control. The authors found activity in orbitofrontal cortex, STS, and amygdala in normal subjects, and found less amygdala activation and abnormal STS activation in subjects with autism. A more recent PET study looked at brain activity in individuals with HFA and Asperger syndrome while viewing Heider-Simmel animations (see above) versus a random motion control. In contrast to normally-developing subjects, those with autism showed little STS or FFA activation, and less mPFC and amygdala activation. Activity in extrastriate regions V3 and LO was identical across the two groups, suggesting intact lower-level visual processing in the subjects with autism. The study also reported less functional connectivity between STS and V3 in the autism group. However, decreased temporal correlation between activity in STS and V3 would be expected simply from the lack of an evoked response in STS to intent-laden animations in subjects with autism. A more informative analysis would be to compute functional connectivity after regressing out evoked responses from all time series (see the sketch after this paragraph). A subsequent study, using the incongruent/congruent gaze-shift paradigm described above, found that in high-functioning adults with autism, posterior STS (pSTS) activation was undifferentiated while they watched a human shift gaze toward a target and then toward adjacent empty space. The lack of additional STS processing in the incongruent state may suggest that these subjects fail to form an expectation of what the actor should do given contextual information, or that feedback about the violation of this expectation does not reach STS. Both explanations involve an impairment or deficit in the ability to link eye gaze shifts with intentional explanations. This study also found a significant anticorrelation between STS activation in the incongruent-congruent contrast and social subscale score on the Autism Diagnostic Interview-Revised, but not scores on the other subscales. 
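As a rough illustration of the analysis suggested above (computing functional connectivity only after task-evoked responses have been removed), the sketch below regresses a modeled evoked response out of two synthetic regions' time series and then correlates the residuals. This is a minimal, hypothetical example on synthetic data; the variable names and the single block-design regressor are assumptions, not the pipeline of any cited study.

```python
import numpy as np

def residual_connectivity(roi_a, roi_b, evoked_regressors):
    """Correlate two ROI time series after regressing out modeled evoked responses."""
    X = np.column_stack([evoked_regressors, np.ones(len(roi_a))])   # evoked model + intercept
    resid_a = roi_a - X @ np.linalg.lstsq(X, roi_a, rcond=None)[0]  # residuals for region A
    resid_b = roi_b - X @ np.linalg.lstsq(X, roi_b, rcond=None)[0]  # residuals for region B
    return np.corrcoef(resid_a, resid_b)[0, 1]

# Synthetic demo: both regions share background noise and an evoked response
rng = np.random.default_rng(1)
evoked = (np.sin(np.arange(300) / 20.0) > 0.8).astype(float)   # crude block design
shared_noise = rng.standard_normal(300)
sts = 3.0 * evoked + shared_noise + 0.5 * rng.standard_normal(300)
v3 = 2.0 * evoked + shared_noise + 0.5 * rng.standard_normal(300)

raw_r = np.corrcoef(sts, v3)[0, 1]
adjusted_r = residual_connectivity(sts, v3, evoked.reshape(-1, 1))
print(f"raw correlation: {raw_r:.2f}; after removing the evoked response: {adjusted_r:.2f}")
```

In this toy case the shared evoked response inflates the raw correlation, so removing it gives a cleaner estimate of the shared ongoing fluctuations, which is the quantity at issue in the connectivity comparison above.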
An fMRI study demonstrated that the right temporoparietal junction (rTPJ) of higher-functioning adults with autism was not more selectively activated for mentalizing judgments when compared to physical judgments about self and other. rTPJ selectivity for mentalizing was also related to individual variation on clinical measures of social impairment: individuals whose rTPJ was more active for mentalizing than for physical judgments were less socially impaired, while those who showed little or no difference in response to mentalizing or physical judgments were the most socially impaired. This evidence builds on work in typical development that suggests rTPJ is critical for representing mental state information, whether it is about oneself or others. It also points to an explanation at the neural level for the pervasive mind-blindness difficulties in autism that are evident throughout the lifespan. In schizophrenia The brain regions associated with theory of mind include the superior temporal sulcus (STS), the temporoparietal junction (TPJ), the medial prefrontal cortex (mPFC), the precuneus, and the amygdala. Reduced activity in the mPFC of individuals with schizophrenia is associated with theory of mind deficits and may explain impairments in social function among people with schizophrenia. Increased neural activity in mPFC is related to better perspective-taking, emotion management, and increased social functioning. Disrupted brain activity in areas related to theory of mind may increase social stress or disinterest in social interaction, and contribute to the social dysfunction associated with schizophrenia. Practical validity Group members' average scores on theory of mind abilities, measured with the Reading the Mind in the Eyes test (RME), may drive successful group performance. High group average scores on the RME are correlated with the collective intelligence factor c, defined as a group's ability to perform a wide range of mental tasks, a group intelligence measure similar to the g factor for general individual intelligence. The RME is a theory of mind test for adults that shows sufficient test-retest reliability and consistently differentiates control groups from individuals with high-functioning autism or Asperger syndrome. It is one of the most widely accepted and well-validated tests for theory of mind abilities in adults. Evolution The evolutionary origin of theory of mind remains obscure. While many theories make claims about its role in the development of human language and social cognition, few of them specify in detail any evolutionary neurophysiological precursors. One theory claims that theory of mind has its roots in two defensive reactions—immobilization stress and tonic immobility—which are implicated in the handling of stressful encounters and also figure prominently in mammalian childrearing practice. Their combined effect seems capable of producing many of the hallmarks of theory of mind, such as eye contact, gaze-following, inhibitory control, and intentional attributions. Non-human An open question is whether non-human animals have a genetic endowment and social environment that allow them to acquire a theory of mind in the same way that human children do. This is a contentious issue because of the difficulty of inferring from animal behavior the existence of thinking or of particular thoughts, or the existence of a concept of self or self-awareness, consciousness, and qualia. 
One difficulty with non-human studies of theory of mind is the lack of sufficient naturalistic observations that would give insight into what the evolutionary pressures on a species' development of theory of mind might be. Non-human research still has a major place in this field. It is especially useful in illuminating which nonverbal behaviors signify components of theory of mind, and in pointing to possible stepping points in the evolution of that aspect of social cognition. While it is difficult to study human-like theory of mind and mental states in species whose potential mental states we understand only incompletely, researchers can focus on simpler components of more complex capabilities. For example, many researchers focus on animals' understanding of intention, gaze, perspective, or knowledge (of what another being has seen). A study that looked at understanding of intention in orangutans, chimpanzees, and children showed that all three species understood the difference between accidental and intentional acts. Individuals exhibit theory of mind by extrapolating another's internal mental states from their observable behavior, so one challenge in this line of research is to distinguish this from more run-of-the-mill stimulus-response learning, in which the other's observable behavior is the stimulus. Recently, most non-human theory of mind research has focused on monkeys and great apes, who are of most interest in the study of the evolution of human social cognition. Other studies relevant to the attribution of theory of mind have been conducted using plovers and dogs; these show preliminary evidence of understanding attention—one precursor of theory of mind—in others. There has been some controversy over the interpretation of evidence purporting to show theory of mind ability—or inability—in animals. For example, Povinelli et al. presented chimpanzees with the choice of two experimenters from whom to request food: one who had seen where food was hidden, and one who, by virtue of one of a variety of mechanisms (having a bucket or bag over his head, a blindfold over his eyes, or being turned away from the baiting), did not know and could only guess. They found that the animals failed in most cases to differentially request food from the "knower". By contrast, Hare, Call, and Tomasello found that subordinate chimpanzees were able to use the knowledge state of dominant rival chimpanzees to determine which container of hidden food they approached. William Field and Sue Savage-Rumbaugh believe that bonobos have developed theory of mind, and cite their communications with a captive bonobo, Kanzi, as evidence. In one experiment, ravens (Corvus corax) took into account the visual access of unseen conspecifics. The researchers argued that "ravens can generalize from their own perceptual experience to infer the possibility of being seen". Evolutionary anthropologist Christopher Krupenye has studied the existence of theory of mind, and particularly the understanding of false beliefs, in non-human primates. Keren Haroush and Ziv Williams outlined the case for a group of neurons in primate brains that uniquely predicted the choice selection of an interacting partner. These neurons, located in the anterior cingulate cortex of rhesus monkeys, were observed using single-unit recording while the monkeys played a variant of the iterated prisoner's dilemma game. 
By identifying cells that represent the yet unknown intentions of a game partner, Haroush & Williams' study supports the idea that theory of mind may be a fundamental and generalized process, and suggests that anterior cingulate cortex neurons may act to complement the function of mirror neurons during social interchange. See also Attribution bias Cephalopod intelligence Cetacean intelligence Eliminative materialism Empathy Grounding in communication Intentional stance Joint attention Mental body Mentalization Mini-SEA Origin of language Perspective-taking Quantum mind Relational frame theory Self-awareness Social neuroscience Embodied cognition Space mapping The Mind of an Ape Turing test Type physicalism Interpersonal accuracy References Further reading Excerpts taken from: Davis, E. (2007) "Mental Verbs in Nicaraguan Sign Language and the Role of Language in Theory of Mind". Undergraduate senior thesis, Barnard College, Columbia University. External links Eye Test Simon Baron Cohen The Computational Theory of Mind The Identity Theory of Mind Sally-Anne and Smarties tests Functional Contextualism Theory of Mind article in the Internet Encyclopedia of Philosophy Research into Theory of mind Cognitive science Concepts in epistemology Concepts in metaphysics Concepts in the philosophy of mind Metaphysics of mind Ontology Psychological theories
Human condition
The human condition can be defined as the characteristics and key events of human life, including birth, learning, emotion, aspiration, reason, morality, conflict, and death. This is a very broad topic that has been and continues to be pondered and analyzed from many perspectives, including those of art, biology, literature, philosophy, psychology, and religion. As a literary term, "human condition" is typically used in the context of ambiguous subjects, such as the meaning of life or moral concerns. Some perspectives Each major religion has definitive beliefs regarding the human condition. For example, Buddhism teaches that existence is a perpetual cycle of suffering, death, and rebirth from which humans can be liberated via the Noble Eightfold Path. Meanwhile, many Christians believe that humans are born in a sinful condition and are doomed in the afterlife unless they receive salvation through Jesus Christ. Philosophers have provided many perspectives. An influential ancient view was that of the Republic in which Plato explored the question "what is justice?" and postulated that it is not primarily a matter among individuals but of society as a whole, prompting him to devise a utopia. Two thousand years later René Descartes declared "I think, therefore I am" because he believed the human mind, particularly its faculty of reason, to be the primary determiner of truth; for this he is often credited as the father of modern philosophy. One such modern school, existentialism, attempts to reconcile an individual's sense of disorientation and confusion in a universe believed to be absurd. Many works of literature provide a perspective on the human condition. One famous example is Shakespeare's monologue "All the world's a stage" which pensively summarizes seven phases of human life. Psychology has many theories, including Maslow's hierarchy of needs and the notions of identity crisis and terror management. It also has various methods, e.g. the logotherapy developed by Holocaust survivor Viktor Frankl to discover and affirm a sense of meaning. Another method, cognitive behavioral therapy, has become a widespread treatment for clinical depression. Charles Darwin established the biological theory of evolution, which posits that the human species is related to all others, living and extinct, and that natural selection is the primary survival factor. This led to subsequent beliefs, such as social Darwinism, which eventually lost its connection to natural selection, and theistic evolution of a creator deity acting through laws of nature, including evolution. See also Human nature Know thyself References Concepts in philosophical anthropology Concepts in social philosophy Concepts in the philosophy of mind Existentialist concepts Humans Personal life Philosophy of life Psychological concepts
Persuasion
Persuasion or persuasion arts is an umbrella term for influence. Persuasion can influence a person's beliefs, attitudes, intentions, motivations, or behaviours. Persuasion is studied in many disciplines. Rhetoric studies modes of persuasion in speech and writing and is often taught as a classical subject. Psychology looks at persuasion through the lens of individual behaviour, and neuroscience studies the brain activity associated with this behaviour. History and political science are interested in the role of propaganda in shaping historical events. In business, persuasion is aimed at influencing a person's (or group's) attitude or behaviour towards some event, idea, object, or other person(s) by using written, spoken, or visual methods to convey information, feelings, or reasoning, or a combination thereof. Persuasion is also often used to pursue personal gain, as in election campaigning, sales pitches, or trial advocacy. Persuasion can also be interpreted as using personal or positional resources to change people. Forms Propaganda is a form of persuasion used to indoctrinate a population towards an individual or a particular agenda. Coercion is a form of persuasion that uses aggressive threats and the provocation of fear and/or shame to influence a person's behavior. Systematic persuasion is the process through which attitudes or beliefs are leveraged by appeals to logic and reason. Heuristic persuasion, on the other hand, is the process through which attitudes or beliefs are leveraged by appeals to habit or emotion. History and philosophy The academic study of persuasion began with the Greeks, who emphasized rhetoric and elocution as the highest standard for a successful politician. All trials were held in front of the Assembly, and the likelihood of success of the prosecution versus the defense rested on the persuasiveness of the speaker. Rhetoric is the art of effective persuasive speaking, often through the use of figures of speech, metaphors, and other techniques. The Greek philosopher Aristotle listed four reasons why one should learn the art of persuasion: Truth and justice are perfect; thus if a case loses, it is the fault of the speaker. It is an excellent tool for teaching. A good rhetorician must be able to argue both sides in order to understand the whole problem. And there is no better way to defend one's self. He described three fundamental ways to communicate persuasively: Ethos (credibility): refers to the effort to convince your audience of your credibility or character. It is not automatic and can be created through actions, deeds, understanding, or expertise by the speaker. Logos (reason): refers to the effort to convince your audience by using logic and reason. This can be formal or non-formal. Formal reasoning uses syllogisms, arguments where two statements validly imply a third statement. Non-formal reasoning uses enthymemes, arguments that have valid reasoning but are informal and assume the audience has prior knowledge. Pathos (emotion): refers to the effort to persuade your audience by making an appeal to their feelings. Ethics of persuasion Many philosophers have commented on the morality of persuasion. Socrates argued that rhetoric was based on appearances rather than the essence of a matter. Thomas Hobbes was critical of the use of rhetoric to create controversy, particularly the use of metaphor. 
Immanuel Kant was critical of rhetoric, arguing that it could cause people to reach conclusions that are at odds with those that they would have reached if they had applied their full judgment. He drew parallels between the function of rhetoric and the deterministic operation of a machine-like mind. Aristotle was critical of persuasion, though he argued that judges would often allow themselves to be persuaded by choosing to apply emotions rather than reason. However, he also argued that persuasion could be used to induce an individual to apply reason and judgment. Writers such as William Keith and Christian O. Lundberg argue that the use of force and threats in trying to influence others does not lead to persuasion; rather, talking to people does. They go further to add, "While Rhetoric certainly has its dark side that deals in tricks and perceptions... the systematic study of rhetoric generally ignores these techniques, in part because they are not very systematic or reliable." In legal disputes there is also the matter of the burden of proof when bringing up an argument: it often falls to the party presenting a case to prove its validity to another person, and where the burden of proof has not been met, presumptions may be made and the argument may be dropped, as in the well-known example of "Innocent until proven guilty", although this line of presumption or burden of proof is not always followed. While Keith and Lundberg go into detail about the different intricacies of persuasion, they explain that lapses in logic and/or reasoning can lead to persuasive arguments with faults. These faults can come as enthymemes, where only certain audiences with specific background knowledge are likely to follow reasoning that leaves part of the logic unstated, or as the more egregious example of fallacies, where conclusions may be drawn (almost always incorrectly) through invalid argument. In contrast to the reasoning behind enthymemes, the use of examples can help prove a person's rhetorical claims through inductive reasoning, which assumes that "if something is true in specific cases, it is true in general". Examples can be split into two categories: real and hypothetical. Real examples come from personal experience or academic/scientific research and can support the argument being made. Hypothetical examples are made up. When arguing something, speakers can put forward a hypothetical situation that illustrates the point they are making to connect better with the audience. These examples must be plausible to properly illustrate a persuasive argument. Theories There are many psychological theories for what influences an individual's behaviour in different situations. These theories have implications for how persuasion works. Attribution theory Humans attempt to explain the actions of others through either dispositional attribution or situational attribution. Dispositional attribution, also referred to as internal attribution, attempts to point to a person's traits, abilities, motives, or dispositions as a cause or explanation for their actions. A citizen criticizing a president by saying the nation is lacking economic progress and health because the president is either lazy or lacking in economic intuition is utilizing a dispositional attribution. 
Situational attribution, also referred to as external attribution, attempts to point to the context around the person and factors of his surroundings, particularly things that are completely out of his control. A citizen claiming that a lack of economic progress is not the fault of the president but rather the result of inheriting a poor economy from the previous president is using situational attribution. The fundamental attribution error occurs when people wrongly attribute either a shortcoming or an accomplishment to internal factors while disregarding all external factors. In general, people use dispositional attribution more often than situational attribution when trying to explain or understand the behavior of others. This happens because we focus more on the individual when we lack information about that individual's situation and context. When trying to persuade others to like us or another person, we tend to explain positive behaviors and accomplishments with dispositional attribution and negative behaviors and shortcomings with situational attributions. Behaviour change theories The Theory of Planned Behavior is the foremost theory of behaviour change. It has support from meta-analyses, which reveal that it can predict around 30% of behaviour. Theories, by nature, however, prioritize internal validity over external validity. They are coherent and therefore make for an easily reappropriated story. On the other hand, they correspond more poorly with the evidence, and with the mechanics of reality, than a straightforward itemization of behaviour change interventions (techniques) by their individual efficacy. These behaviour change interventions have been categorized by behavioral scientists. A mutually exclusive, collectively exhaustive (MECE) translation of this taxonomy, in decreasing order of effectiveness, is: positive and negative consequences (offering/removing incentives, offering/removing threats/punishments), distraction, changing exposure to cues (triggers) for the behaviour, prompts/cues, goal-setting, (increasing the salience of) emotional/health/social/environmental/regret consequences, self-monitoring of the behaviour and outcomes of behaviour, mental rehearsal of successful performance (planning?), self-talk, focus on past success, comparison of outcomes via persuasive argument, pros/cons and comparative imaging of future outcomes, identification of self as role model, self-affirmation, reframing, cognitive dissonance, reattribution, and (increasing salience of) antecedents. A typical instantiation of these techniques in therapy is exposure/response prevention for OCD. Conditioning theories Conditioning plays a huge part in the concept of persuasion. It is more often about leading someone into taking certain actions of their own, rather than giving direct commands. In advertisements, for example, this is done by attempting to connect a positive emotion to a brand/product logo. This is often done by creating commercials that make people laugh, using a sexual undertone, inserting uplifting images and/or music, etc., and then ending the commercial with the brand/product logo. Great examples of this are professional athletes. They are paid to connect themselves to things that can be directly related to their roles: sport shoes, tennis rackets, golf balls, or completely irrelevant things like soft drinks, popcorn poppers and pantyhose. The important thing for the advertiser is to establish a connection to the consumer. 
This conditioning is thought to affect how people view certain products, knowing that most purchases are made on the basis of emotion. Just like you sometimes recall a memory from a certain smell or sound, the objective of some ads is solely to bring back certain emotions when you see their logo in your local store. The hope is that repeating the message several times makes consumers more likely to purchase the product because they already connect it with a good emotion and positive experience. Stefano DellaVigna and Matthew Gentzkow did a comprehensive study on the effects of persuasion in different domains. They discovered that persuasion has little or no effect on advertisement; however, there was a substantial effect of persuasion on voting if there was face-to-face contact. Cognitive dissonance theory Leon Festinger originally proposed the theory of cognitive dissonance in 1957. He theorized that human beings constantly strive for mental consistency. Our cognition (thoughts, beliefs, or attitudes) can be in agreement, unrelated, or in disagreement with each other. Our cognition can also be in agreement or disagreement with our behaviors. When we detect conflicting cognition, or dissonance, it gives us a sense of incompleteness and discomfort. For example, a person who is addicted to smoking cigarettes but also suspects it could be detrimental to their health suffers from cognitive dissonance. Festinger suggests that we are motivated to reduce this dissonance until our cognition is in harmony with itself. We strive for mental consistency. There are four main ways we go about reducing or eliminating our dissonance: changing our minds about one of the facets of cognition reducing the importance of a cognition increasing the overlap between the two, and re-evaluating the cost/reward ratio. Revisiting the example of the smoker, they can either quit smoking, reduce the importance of their health, convince themself they are not at risk, or decide that the reward of smoking is worth the cost of their health. Cognitive dissonance is powerful when it relates to competition and self-concept. The most famous example of how cognitive dissonance can be used for persuasion comes from Festinger and Carlsmith's 1959 experiment in which participants were asked to complete a very dull task for an hour. Some were paid $20, while others were paid $1, and afterwards they were instructed to tell the next waiting participants that the experiment was fun and exciting. Those who were paid $1 were much more likely to convince the next participants that the experiment really was enjoyable than those who received $20. This is because $20 is enough reason to participate in a dull task for an hour, so there is no dissonance. Those who received $1 experienced great dissonance, so they had to truly convince themselves that the task actually was enjoyable to avoid feeling taken advantage of, and therefore reduce their dissonance. Elaboration likelihood model Persuasion has traditionally been associated with two routes: Central route: Whereby an individual evaluates information presented to them based on the pros and cons of it and how well it supports their values Peripheral route: Change is mediated by how attractive the source of communication is and by bypassing the deliberation process. The Elaboration likelihood model (ELM) forms a new facet of the route theory. 
It holds that the probability of effective persuasion depends on how successful the communication is at bringing to mind a relevant mental representation, which is the elaboration likelihood. Thus if the target of the communication is personally relevant, this increases the elaboration likelihood of the intended outcome and would be more persuasive if it were through the central route. Communication which does not require careful thought would be better suited to the peripheral route. Functional theories Functional theorists attempt to understand the divergent attitudes individuals have towards people, objects or issues in different situations. There are four main functional attitudes: Adjustment function: A main motivation for individuals is to increase positive external rewards and minimize the costs. Attitudes serve to direct behavior towards the rewards and away from punishment. Ego Defensive function: The process by which an individual protects their ego from being threatened by their own negative impulses or threatening thoughts. Value-expressive: When an individual derives pleasure from presenting an image of themselves which is in line with their self-concept and the beliefs that they want to be associated with. Knowledge function: The need to attain a sense of understanding and control over one's life. An individual's attitudes therefore serve to help set standards and rules which govern their sense of being. When communication targets an underlying function, its degree of persuasiveness influences whether individuals change their attitude after determining that another attitude would more effectively fulfill that function. Inoculation theory A vaccine introduces a weak form of a virus that can easily be defeated to prepare the immune system should it need to fight off a stronger form of the same virus. In much the same way, the theory of inoculation suggests that a certain party can introduce a weak form of an argument that is easily thwarted in order to make the audience inclined to disregard a stronger, full-fledged form of that argument from an opposing party. This often occurs in negative advertisements and comparative advertisements—both for products and political causes. An example would be a manufacturer of a product displaying an ad that refutes one particular claim made about a rival's product, so that when the audience sees an ad for said rival product, they refute the product claims automatically. Narrative transportation theory Narrative transportation theory proposes that when people lose themselves in a story, their attitudes and intentions change to reflect that story. The mental state of narrative transportation can explain the persuasive effect of stories on people, who may experience narrative transportation when certain contextual and personal preconditions are met, as Green and Brock postulate for the transportation-imagery model. Narrative transportation occurs whenever the story receiver experiences a feeling of entering a world evoked by the narrative because of empathy for the story characters and imagination of the story plot. Social judgment theory Social judgment theory suggests that when people are presented with an idea or any kind of persuasive proposal, their natural reaction is to immediately seek a way to sort the information subconsciously and react to it. We evaluate the information and compare it with the attitude we already have, which is called the initial attitude or anchor point. 
When trying to sort incoming persuasive information, an audience evaluates whether it lands in their latitude of acceptance, latitude of non-commitment or indifference, or the latitude of rejection. The size of these latitudes varies from topic to topic. Our "ego-involvement" generally plays one of the largest roles in determining the size of these latitudes. When a topic is closely connected to how we define and perceive ourselves, or deals with anything we care passionately about, our latitudes of acceptance and non-commitment are likely to be much smaller and our latitude of rejection much larger. A person's anchor point is considered to be the center of their latitude of acceptance, the position that is most acceptable to them. An audience is likely to distort incoming information to fit into their unique latitudes. If something falls within the latitude of acceptance, the subject tends to assimilate the information and consider it closer to his anchor point than it really is. Conversely, if something falls within the latitude of rejection, the subject tends to contrast the information and convince themselves the information is farther away from their anchor point than it really is (a toy numerical illustration of this assimilation/contrast process appears after this paragraph). When trying to persuade an individual target or an entire audience, it is vital to first learn the average latitudes of acceptance, non-commitment, and rejection of your audience. It is ideal to use persuasive information that lands near the boundary of the latitude of acceptance if the goal is to change the audience's anchor point. Repeatedly suggesting ideas on the fringe of the acceptance latitude makes people gradually adjust their anchor points, while suggesting ideas in the rejection latitude or even the non-commitment latitude does not change the audience's anchor point. Methods Persuasion methods are also sometimes referred to as persuasion tactics or persuasion strategies. Use of force Force can be used in persuasion; it does not have any scientific theories behind it, except for its use to make demands, and it typically follows the failure of less direct means of persuasion. Application of this strategy can be interpreted as a threat, since the persuader does not give options to their request. Weapons of influence Robert Cialdini, in Influence, his book on persuasion, defined six "influence cues or weapons of influence", with influence itself understood as the process of changing. Reciprocity The principle of reciprocity states that when a person provides us with something, we attempt to repay them in kind. Reciprocation produces a sense of obligation, which can be a powerful tool in persuasion. The reciprocity rule is effective because it can be overpowering and instill in us a sense of obligation. Generally, we have a dislike for individuals who neglect to return a favor or provide payment when offered a free service or gift. As a result, reciprocation is a widely held principle. This societal standard makes reciprocity an extremely powerful persuasive technique, as it can result in unequal exchanges and can even apply to an uninvited first favor. Reciprocity applies to the marketing field because of its use as a powerful persuasive technique. The marketing tactic of "free samples" demonstrates the reciprocity rule because of the sense of obligation that the rule produces. This sense of obligation comes from the desire to repay the marketer for the gift of a "free sample." 
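As flagged above, the assimilation and contrast effects described by social judgment theory can be given a toy numerical form. The sketch below uses a hypothetical parameterization chosen purely for illustration; the latitude widths, distortion factors, and function name are assumptions, not part of the theory's formal statement.

```python
def perceived_position(anchor, message, accept_width=1.0, reject_start=2.0,
                       assimilation=0.3, contrast=0.3):
    """Toy social-judgment model: distort a message relative to an anchor.

    Positions live on an attitude scale. Messages inside the latitude of
    acceptance are assimilated (judged closer to the anchor); messages in the
    latitude of rejection are contrasted (judged farther away); anything in
    between falls in the latitude of non-commitment and is taken at face value.
    """
    distance = message - anchor
    if abs(distance) <= accept_width:                 # latitude of acceptance
        return anchor + (1 - assimilation) * distance, "assimilated"
    if abs(distance) >= reject_start:                 # latitude of rejection
        return anchor + (1 + contrast) * distance, "contrasted"
    return message, "non-commitment"                  # taken at face value

# Example: a listener anchored at 0 on a -5..+5 attitude scale
for msg in (0.5, 1.5, 3.0):
    print(msg, perceived_position(anchor=0.0, message=msg))
```

With these illustrative settings, a message at 0.5 is pulled toward the anchor, a message at 1.5 is left unchanged, and a message at 3.0 is pushed farther away, mirroring the assimilation and contrast pattern described above.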
Commitment and consistency Consistency is an important aspect of persuasion because it: is highly valued by society, results in a beneficial approach to daily life, and provides a valuable shortcut through the complicated nature of modern existence. Consistency allows us to more effectively make decisions and process information. The concept of consistency states that someone who commits to something, orally or in writing, is more likely to honor that commitment. This is especially true for written commitments, as they appear psychologically more concrete and can create hard proof. Someone who commits to a stance tends to behave according to that commitment. Commitment is an effective persuasive technique, because once you get someone to commit, they are more likely to engage in self-persuasion, providing themselves and others with reasons and justifications to support their commitment in order to avoid dissonance. Cialdini notes the Chinese brainwashing of American prisoners of war in the Korean War to rewrite their self-image and gain automatic, unenforced compliance. Other examples are children being made to repeat the Pledge of Allegiance each morning, and marketers making users dismiss pop-ups by clicking "I'll sign up later" or "No thanks, I prefer not making money". Social proof Social learning, also known as social proof, is a core principle among almost all forms of persuasion. It is based on the idea of peer influence, and is considered essential for audience-centered approaches to persuasive messages. The principle of social proof suggests that what people believe or do is typically learned by observing the norms of those around them. People naturally conform their actions and beliefs to fit what society expects, as the rewards for doing so are usually greater than those for standing out. "The power of the crowd" is thought to be highly involved in the decisions we make. Social proof is often utilized by people in a situation that requires a decision be made. In uncertain or ambiguous situations, when multiple possibilities create choices we must make, people are likely to conform to what others do. We take cues from those around us as to what the appropriate behavior is in that moment. People often feel they will make fewer mistakes "by acting in accord with social evidence than by behaving contrary to it." Likeness This principle is simple and concise. People say "yes" to people that they like. Two major factors contribute to overall likeness. The first is physical attractiveness. People who are physically attractive seem more persuasive. They get what they want and they can easily change others' attitudes. This attractiveness has been shown to send favorable messages/impressions about other traits that a person may have, such as talent, kindness, and intelligence. The second factor is similarity. People are more easily persuaded by others they deem similar to themselves. Authority People are more prone to believing those with authority. They have the tendency to believe that if an expert says something, it must be true. People are more likely to adhere to the opinions of individuals who are knowledgeable and trustworthy. Although a message often stands or falls on the weight of its ideas and arguments, a person's attributes or implied authority can have a large effect on the success of their message. In The True Believer, Eric Hoffer noted, "People whose lives are barren and insecure seem to show a greater willingness to obey than people who are self-sufficient and self-confident. 
To the frustrated, freedom from responsibility is more attractive than freedom from restraint. . . . They willingly abdicate the directing of their lives to those who want to plan, command and shoulder all responsibility." In the Milgram study, a series of experiments begun in 1961, a "teacher" and a "learner" were placed in two different rooms. The "learner" was attached to an electric harness that could administer shock. The "teacher" was told by a supervisor, dressed in a white scientist's coat, to ask the learner questions and punish him when he got a question wrong. The teacher was instructed by the study supervisor to deliver an electric shock from a panel under the teacher's control. After delivery, the teacher had to increase the voltage to the next notch. The voltage went up to 450 volts. The catch to this experiment was that the teacher did not know that the learner was an actor faking the pain sounds he heard and was not actually being harmed. The experiment was being done to see how obedient we are to authority. "When an authority tells ordinary people it is their job to deliver harm, how much suffering will each subject be willing to inflict on an entirely innocent other person if the instructions come 'from above'?" In this study, the results showed that the teachers were willing to give as much pain as was available to them. The conclusion was that people are willing to bring pain upon others when they are directed to do so by some authority figure. Scarcity Scarcity could play an important role in the process of persuasion. When something has limited availability, people assign it more value. As one of the six basic principles behind the science of persuasion, then, "scarcity" can be leveraged to convince people to buy into some suggestions, heed advice, or accept business proposals. According to Robert Cialdini, Regents' Professor of Psychology and Marketing at Arizona State University and Distinguished Professor of Marketing in the W. P. Carey School, whatever is rare, uncommon or dwindling in availability — this idea of scarcity — confers value on objects, or even relationships. There are two major reasons why the scarcity principle works: When things are difficult to get, they are usually more valuable, which can make them seem to be of better quality. When things become less available, we could lose the chance to acquire them. When this happens, people usually assign the scarce item or service more value simply because it is harder to acquire. The principle is that everyone wants things that are out of their reach. Something easily available is not as desirable as something very rare. 
Persuasive technology List of methods By appeal to reason: Logic Logical argument Rhetoric Scientific evidence (proof) Scientific method By appeal to emotion: Cosmetic Advertising Presentation and Imagination Pity Propaganda Manipulation (psychology) Seduction Tradition Aids to persuasion: Body language Communication skill or Rhetoric Personality tests and conflict style inventory help devise strategy based on an individual's preferred style of interaction Sales techniques Other techniques: Deception Hypnosis Power (social and political) Subliminal advertising Coercive techniques, some of which are highly controversial or not scientifically proven effective: Brainwashing Coercive persuasion Force Mind control Torture In culture It is through a basic cultural personal definition of persuasion that everyday people understand how others are attempting to influence them and then how they influence others. The dialogue surrounding persuasion is constantly evolving because of the necessity to use persuasion in everyday life. Persuasion tactics traded in society have influences from researchers, which may sometimes be misinterpreted. To keep evolutionary advantage, in the sense of wealth and survival, you must persuade and not be persuaded. To understand cultural persuasion, researchers gather knowledge from domains such as "buying, selling, advertising, and shopping, as well as parenting and courting." Methods of persuasion vary by culture, both in prevalence and effectiveness. For example, advertisements tend to appeal to different values according to whether they are used in collectivistic or individualistic cultures. Persuasion Knowledge Model (PKM) The Persuasion Knowledge Model (PKM) was created by Friestad and Wright in 1994. This framework allows the researchers to analyze the process of gaining and using everyday persuasion knowledge. The researchers suggest the necessity of including "the relationship and interplay between everyday folk knowledge and scientific knowledge on persuasion, advertising, selling, and marketing in general." To educate the general population about research findings and new knowledge about persuasion, a teacher must draw on their pre-existing beliefs from folk persuasion to make the research relevant and informative to lay people, which creates "mingling of their scientific insights and commonsense beliefs." As a result of this constant mingling, the issue of persuasion expertise becomes messy. Expertise status can be interpreted from a variety of sources like job titles, celebrity, or published scholarship. It is through this multimodal process that we create concepts like, "Stay away from car salesmen, they will try to trick you." The kind of persuasion techniques blatantly employed by car salesmen creates an innate distrust of them in popular culture. According to Psychology Today, they employ tactics ranging from making personal life ties with the customer to altering reality by handing the customer the new car keys before the purchase. Campbell proposed and empirically demonstrated that some persuasive advertising approaches lead consumers to infer manipulative intent on the marketer's part. Once consumers infer manipulative intent, they are less persuaded by the marketer, as indicated by attenuated advertising attitudes, brand attitudes and purchase intentions. Campbell and Kirmani developed an explicit model of the conditions under which consumers use persuasion knowledge in evaluating influence agents such as salespersons. 
Neurobiology An article showed that EEG measures of anterior prefrontal asymmetry might be a predictor of persuasion. Research participants were presented with arguments that favored and arguments that opposed the attitudes they already held. Those whose brain was more active in left prefrontal areas said that they paid the most attention to statements with which they agreed, while those with a more active right prefrontal area said that they paid attention to statements with which they disagreed. This is an example of defensive repression, the avoidance or forgetting of unpleasant information. Research has shown that the trait of defensive repression is related to relative left prefrontal activation. In addition, when pleasant or unpleasant words, probably analogous to agreement or disagreement, were seen incidental to the main task, an fMRI scan showed preferential left prefrontal activation to the pleasant words. One way therefore to increase persuasion would seem to be to selectively activate the right prefrontal cortex. This is easily done by monaural stimulation to the contralateral ear. The effect apparently depends on selective attention rather than merely the source of stimulation. This manipulation had the expected outcome: more persuasion for messages coming from the left. Modern Persuasion and Fallacies Persuasion, traditionally studied through classical frameworks such as Aristotle's appeals of logos, ethos, and pathos, has evolved with modern rhetorical theories. Kenneth Burke, a prominent 20th-century rhetorician, expanded the understanding of persuasion by introducing the concept of identification. According to Burke, effective persuasion is not merely about logical argumentation or emotional appeal but about creating a sense of shared identity and values between the speaker and the audience. In Burke’s view, persuasion works when the audience feels a connection or alignment with the speaker's perspective, thus making the message more compelling. This contrasts with the more transactional nature of classical persuasion and highlights a relational and symbolic aspect of communication. Burke also pointed out that rhetoric is deeply embedded in social interactions, not just in public speeches or debates but in everyday communication. He argued that language shapes how people perceive their social world, influencing actions and decisions. Thus, persuasion is fundamentally about how language constructs and maintains social realities, making it a critical force in both personal and public life. In addition to these modern perspectives, understanding fallacies is crucial for effective persuasion. Fallacies are logical errors that can compromise the integrity of an argument. Kenneth Burke’s emphasis on ethical persuasion highlights the importance of recognizing these fallacies to avoid manipulation and misinformation. Examples of common fallacies include: Ad Hominem: Attacking the character of the speaker rather than the argument itself. This shifts focus from the message to irrelevant personal traits. Appeal to Ignorance: Arguing that something is true because it has not been proven false, which bypasses the need for evidence and logical reasoning. Slippery Slope: Suggesting that a minor action will inevitably lead to a series of negative consequences without providing evidence for this causal chain. Understanding these fallacies allows individuals to critique arguments effectively and ensures that persuasion remains an ethical practice. 
In this way, Burke’s theory not only broadens the scope of persuasion to include identification and shared meaning but also reinforces the need for ethical and transparent communication practices. See also References Further reading Cialdini, Robert B. "Harnessing the Science of Persuasion" (Archive). Harvard Business Review. October 2001. Druckman, James N. 2022. "A Framework for the Study of Persuasion." Annual Review of Political Science. Herbert I. Abelson, Persuasion: How opinions and attitudes are changed, Springer Publishing Company, 1965 Jacquelyn Kegley and Krzysztof Piotr Skowroński, eds. Persuasion and Compulsion in Democracy, Lexington 2013. Richard E. Vatz, The Only Authentic Book of Persuasion, Kendall Hunt, 2013 External links Attitude change Belief
Life skills
Life skills are abilities for adaptive and positive behavior that enable humans to deal effectively with the demands and challenges of life. This concept is also termed psychosocial competency. The subject varies greatly depending on social norms and community expectations, but skills that support well-being and help individuals develop into active and productive members of their communities are considered life skills. Enumeration and categorization The UNICEF Evaluation Office suggests that "there is no definitive list" of psychosocial skills; nevertheless UNICEF enumerates psychosocial and interpersonal skills that are generally well-being oriented, and essential alongside literacy and numeracy skills. Because its meaning changes from culture to culture and across life positions, it is considered an elastic concept. Still, UNICEF acknowledges the social and emotional life skills identified by the Collaborative for Academic, Social and Emotional Learning (CASEL). Life skills are a product of synthesis: many skills are developed simultaneously through practice, like humor, which allows a person to feel in control of a situation and make it more manageable in perspective. Humor allows the person to release fears, anger, and stress and to achieve a better quality of life. For example, decision-making often involves critical thinking ("What are my options?") and values clarification ("What is important to me?", "How do I feel about this?"). Ultimately, the interplay between the skills is what produces powerful behavioral outcomes, especially where this approach is supported by other strategies. Life skills can vary from financial literacy, through substance-abuse prevention, to therapeutic techniques to deal with disabilities such as autism. Core skills The World Health Organization in 1999 identified the following core cross-cultural areas of life skills: decision-making and problem-solving; creative thinking (see also: lateral thinking) and critical thinking; communication and interpersonal skills; self-awareness and empathy; assertiveness and equanimity; and resilience and coping with emotions and coping with stress. UNICEF listed similar skills and related categories in its 2012 report. Life skills curricula designed for K-12 often emphasize communication and practical skills needed for successful independent living, as well as skills for developmental-disabilities/special-education students with an Individualized Education Program (IEP). Various courses based on the WHO's list are run with the support of UNFPA. In Madhya Pradesh, India, the programme is run with the government to teach these skills through government schools. Skills for work and life Skills for work and life, known as technical and vocational education and training (TVET), comprise education, training and skills development relating to a wide range of occupational fields, production, services and livelihoods. TVET, as part of lifelong learning, can take place at secondary, post-secondary and tertiary levels, and includes work-based learning and continuing training and professional development which may lead to qualifications. TVET also includes a wide range of skills development opportunities attuned to national and local contexts. Learning to learn and the development of literacy and numeracy skills, transversal skills and citizenship skills are integral components of TVET. 
Parenting: a venue of life skills nourishment Life skills are often taught in the domain of parenting, either indirectly through the observation and experience of the child, or directly with the purpose of teaching a specific skill. Parenting itself can be considered a set of life skills which can be taught or come naturally to a person. Educating a person in skills for dealing with pregnancy and parenting can also coincide with additional life skills development for the child and enable the parents to guide their children in adulthood. Many life skills programs are offered when traditional family structures and healthy relationships have broken down, whether due to parental lapses, divorce, psychological disorders, or issues with the children (such as substance abuse or other risky behavior). For example, the International Labour Organization is teaching life skills to ex-child laborers and at-risk children in Indonesia to help them avoid, and recover from, the worst forms of child abuse. Models: behavior prevention vs. positive development While certain life skills programs focus on teaching the prevention of certain behaviors, they can be relatively ineffective. Based upon their research, the Family and Youth Services Bureau, a division of the U.S. Department of Health and Human Services, advocates the theory of positive youth development (PYD) as a replacement for the less effective prevention programs. PYD focuses on the strengths of an individual, as opposed to the older deficit models, which tend to focus on the "potential" weaknesses that have yet to be shown. "..life skills education, have found to be an effective psychosocial intervention strategy for promoting positive social, and mental health of adolescents which plays an important role in all aspects such as strengthening coping strategies and developing self-confidence and emotional intelligence..." See also Sources Further reading People Skills & Self-Management (free online guide), Alliances for Psychosocial Advancements in Living: Communication Connections (APAL-CC) Reaching Your Potential: Personal and Professional Development, 4th Edition Life Skills: A Course in Applied Problem Solving., Saskatchewan NewStart Inc., First Ave and River Street East, Prince Albert, Saskatchewan, Canada. References
Praxis (process)
Praxis is the process by which a theory, lesson, or skill is enacted, embodied, realized, applied, or put into practice. "Praxis" may also refer to the act of engaging, applying, exercising, realizing, or practising ideas. This has been a recurrent topic in the field of philosophy, discussed in the writings of Plato, Aristotle, St. Augustine, Francis Bacon, Immanuel Kant, Søren Kierkegaard, Ludwig von Mises, Karl Marx, Antonio Gramsci, Martin Heidegger, Hannah Arendt, Jean-Paul Sartre, Paulo Freire, Murray Rothbard, and many others. It has meaning in the political, educational, spiritual and medical realms. Origins The word praxis comes from Ancient Greek, in which praxis (πρᾶξις) referred to activity engaged in by free people. The philosopher Aristotle held that there were three basic activities of humans: theoria (thinking), poiesis (making), and praxis (doing). Corresponding to these activities were three types of knowledge: theoretical, the end goal being truth; poietical, the end goal being production; and practical, the end goal being action. Aristotle further divided the knowledge derived from praxis into ethics, economics, and politics. He also distinguished between eupraxia (εὐπραξία, "good praxis") and dyspraxia (δυσπραξία, "bad praxis, misfortune"). Marxism Young Hegelian August Cieszkowski was one of the earliest philosophers to use the term praxis to mean "action oriented towards changing society" in his 1838 work Prolegomena zur Historiosophie (Prolegomena to a Historiosophy). Cieszkowski argued that while absolute truth had been achieved in the speculative philosophy of Hegel, the deep divisions and contradictions in man's consciousness could only be resolved through concrete practical activity that directly influences social life. Although there is no evidence that Karl Marx himself read this book, it may have had an indirect influence on his thought through the writings of his friend Moses Hess. Marx uses the term "praxis" to refer to the free, universal, creative and self-creative activity through which man creates and changes his historical world and himself. Praxis is an activity unique to man, which distinguishes him from all other beings. The concept appears in two of Marx's early works: the Economic and Philosophical Manuscripts of 1844 and the Theses on Feuerbach (1845). In the former work, Marx contrasts the free, conscious productive activity of human beings with the unconscious, compulsive production of animals. He also affirms the primacy of praxis over theory, claiming that theoretical contradictions can only be resolved through practical activity. In the latter work, revolutionary practice is a central theme: Marx here criticizes the materialist philosophy of Ludwig Feuerbach for envisaging objects in a contemplative way. Marx argues that perception is itself a component of man's practical relationship to the world. To understand the world does not mean considering it from the outside, judging it morally or explaining it scientifically. Society cannot be changed by reformers who understand its needs, only by the revolutionary praxis of the mass whose interest coincides with that of society as a whole—the proletariat. This will be an act of society understanding itself, in which the subject changes the object by the very fact of understanding it. Seemingly inspired by the Theses, the nineteenth-century socialist Antonio Labriola called Marxism the "philosophy of praxis". 
This description of Marxism would appear again in Antonio Gramsci's Prison Notebooks and the writings of the members of the Frankfurt School. Praxis is also an important theme for Marxist thinkers such as Georg Lukács, Karl Korsch, Karel Kosík and Henri Lefebvre, and was seen as the central concept of Marx's thought by Yugoslavia's Praxis School, which established a journal of that name in 1964. Jean-Paul Sartre In the Critique of Dialectical Reason, Jean-Paul Sartre posits a view of individual praxis as the basis of human history. In his view, praxis is an attempt to negate human need. In a revision of Marxism and his earlier existentialism, Sartre argues that the fundamental relation of human history is scarcity. Conditions of scarcity generate competition for resources, exploitation of one over another and division of labor, which in its turn creates struggle between classes. Each individual experiences the other as a threat to his or her own survival and praxis; it is always a possibility that one's individual freedom limits another's. Sartre recognizes both natural and man-made constraints on freedom: he calls the non-unified practical activity of humans the "practico-inert". Sartre opposes to individual praxis a "group praxis" that fuses each individual to be accountable to each other in a common purpose. Sartre sees a mass movement in a successful revolution as the best exemplar of such a fused group. Hannah Arendt In The Human Condition, Hannah Arendt argues that Western philosophy too often has focused on the contemplative life (vita contemplativa) and has neglected the active life (vita activa). This has led humanity to frequently miss much of the everyday relevance of philosophical ideas to real life. For Arendt, praxis is the highest and most important level of the active life. Thus, she argues that more philosophers need to engage in everyday political action or praxis, which she sees as the true realization of human freedom. According to Arendt, our capacity to analyze ideas, wrestle with them, and engage in active praxis is what makes us uniquely human. In Maurizio Passerin d'Entrèves's estimation, "Arendt's theory of action and her revival of the ancient notion of praxis represent one of the most original contributions to twentieth century political thought. ... Moreover, by viewing action as a mode of human togetherness, Arendt is able to develop a conception of participatory democracy which stands in direct contrast to the bureaucratized and elitist forms of politics so characteristic of the modern epoch." Education Praxis is used by educators to describe a recurring passage through a cyclical process of experiential learning, such as the cycle described and popularised by David A. Kolb. Paulo Freire defines praxis in Pedagogy of the Oppressed as "reflection and action directed at the structures to be transformed." Through praxis, oppressed people can acquire a critical awareness of their own condition, and, with teacher-students and students-teachers, struggle for liberation. In the British Channel 4 television documentary New Order: Play at Home, Factory Records owner Tony Wilson describes praxis as "doing something, and then only afterwards, finding out why you did it". Praxis may be described as a form of critical thinking and comprises the combination of reflection and action.
Praxis can be viewed as a progression of cognitive and physical actions: taking the action; considering the impacts of the action; analysing the results of the action by reflecting upon it; altering and revising conceptions and planning following reflection; and implementing these plans in further actions. This creates a cycle which can be viewed in terms of educational settings, learners and educational facilitators. Scott and Marshall (2009) refer to praxis as "a philosophical term referring to human action on the natural and social world". Furthermore, Gramsci (1999) emphasises the power of praxis in Selections from the Prison Notebooks by stating that "The philosophy of praxis does not tend to leave the simple in their primitive philosophy of common sense but rather to lead them to a higher conception of life". To reveal the inadequacies of religion, folklore, intellectualism and other such 'one-sided' forms of reasoning, Gramsci appeals directly in his later work to Marx's 'philosophy of praxis', describing it as a 'concrete' mode of reasoning. This principally involves the juxtaposition of a dialectical and scientific audit of reality against all existing normative, ideological, and therefore counterfeit accounts. Essentially a 'philosophy' based on 'a practice', Marx's philosophy is described correspondingly in this manner as the only 'philosophy' that is at the same time a 'history in action' or a 'life' itself (Gramsci, Hoare and Nowell-Smith, 1972, p. 332). Spirituality Praxis is also key in meditation and spirituality, where emphasis is placed on gaining first-hand experience of concepts and certain areas, such as union with the Divine, which can only be explored through praxis due to the inability of the finite mind (and its tool, language) to comprehend or express the infinite. In an interview for YES! Magazine, Matthew Fox explained it this way: According to Strong's Concordance, the Hebrew word ta‛am is, properly, a taste. This is, figuratively, perception and, by implication, intelligence; transitively, a mandate: advice, behaviour, decree, discretion, judgment, reason, taste, understanding. Medicine Praxis is the ability to perform voluntary skilled movements. The partial or complete inability to do so in the absence of primary sensory or motor impairments is known as apraxia. See also: Apraxia; Christian theological praxis; Hexis; Lex artis; Orthopraxy; Praxeology; Praxis Discussion Series; Praxis (disambiguation); Praxis intervention; Praxis school; Practice (social theory); Theses on Feuerbach. Further reading: Paulo Freire (1970), Pedagogy of the Oppressed, Continuum International Publishing Group. External links: Entry for "praxis" at the Encyclopaedia of Informal Education; Der Begriff Praxis.
Egoism
Egoism is a philosophy concerned with the role of the self, or ego, as the motivation and goal of one's own action. Different theories of egoism encompass a range of disparate ideas and can generally be categorized into descriptive or normative forms. That is, they may be interested in either describing that people do act in self-interest or prescribing that they should. Other definitions of egoism may instead emphasise action according to one's will rather than one's self-interest, and furthermore posit that this is a truer sense of egoism. The New Catholic Encyclopedia states of egoism that it "incorporates in itself certain basic truths: it is natural for man to love himself; he should moreover do so, since each one is ultimately responsible for himself; pleasure, the development of one's potentialities, and the acquisition of power are normally desirable." The moral censure of self-interest is a common subject of critique in egoist philosophy, with such judgments being examined as means of control and the result of power relations. Egoism may also reject the idea that insight into one's internal motivation can arrive extrinsically, such as from psychology or sociology, though this position is not present in the philosophy of Friedrich Nietzsche, for example. Overview The term egoism is derived from the French égoïsme, from the Latin ego (the first-person singular personal pronoun, "I") with the French suffix -isme ("-ism"). Descriptive theories The descriptive variants of egoism are concerned with self-regard as a factual description of human motivation and, in their furthest application, with the claim that all human motivation stems from the desires and interest of the ego. In these theories, action which is self-regarding may be simply termed egoistic. The position that people tend to act in their own self-interest is called default egoism, whereas psychological egoism is the position that all motivations are rooted in an ultimately self-serving psyche. In its strong form, this means that even seemingly altruistic actions are only disguised as such and are always self-serving. Its weaker form instead holds that, even if altruistic motivation is possible, the willed action necessarily becomes egoistic in serving one's own will. In contrast to this and philosophical egoism, biological egoism (also called evolutionary egoism) describes motivations rooted solely in reproductive self-interest (i.e. reproductive fitness). Furthermore, selfish gene theory holds that it is the self-interest of genetic information that conditions human behaviour. Normative theories Theories which hold egoism to be normative stipulate that the ego ought to promote its own interests above other values. Where this ought is held to be a pragmatic judgment it is termed rational egoism and where it is held to be a moral judgment it is termed ethical egoism. The Stanford Encyclopedia of Philosophy states that "ethical egoism might also apply to things other than acts, such as rules or character traits" but that such variants are uncommon. Furthermore, conditional egoism is a consequentialist form of ethical egoism which holds that egoism is morally right if it leads to morally acceptable ends. John F. Welsh, in his work Max Stirner's Dialectical Egoism: A New Interpretation, coins the term dialectical egoism to describe an interpretation of the egoist philosophy of Max Stirner as being fundamentally dialectical.
Normative egoism, as in the case of Stirner, need not reject that some modes of behavior are to be valued above others—such as Stirner's affirmation that non-restriction and autonomy are to be most highly valued. Contrary theories, however, may just as easily favour egoistic domination of others. Theoreticians Stirner Nietzsche The philosophy of Friedrich Nietzsche has been linked to forms of both descriptive and normative egoism. Nietzsche, in attacking the widely held moral abhorrence for egoistic action, seeks to free higher human beings from their belief that this morality is good for them. He rejects Christian and Kantian ethics as merely the disguised egoism of slave morality. In his On the Genealogy of Morals, Friedrich Nietzsche traces the origins of master–slave morality to fundamentally egoistic value judgments. In the aristocratic valuation, excellence and virtue come as a form of superiority over the common masses, which the priestly valuation, in ressentiment of power, seeks to invert—where the powerless and pitiable become the moral ideal. This upholding of unegoistic actions is therefore seen as stemming from a desire to reject the superiority or excellency of others. He holds that all normative systems which operate in the role often associated with morality favor the interests of some people, often, though not necessarily, at the expense of others. Nevertheless, Nietzsche also states in the same book that there is no "doer" of any acts, be they selfish or not: "...there is no 'being' behind doing, effecting, becoming; 'the doer' is merely a fiction added to the deed—the deed is everything" (On the Genealogy of Morals, §13). Jonas Monte of Brigham Young University argues that Nietzsche doubted whether any "I", which Monte defines as "a conscious Ego who commands mental states", existed in the first place. Other theoreticians include Jeremy Bentham, who is attributed as an early proponent of psychological egoism; Nikolai Gavrilovich Chernyshevskii, a Russian literary critic and philosopher of nihilism and rational egoism; Aleister Crowley, who popularized the expression "Do what thou wilt"; Arthur Desmond, who possibly wrote as Ragnar Redbeard (attribution unproven); Thomas Hobbes, who is attributed as an early proponent of psychological egoism; John Henry Mackay, a British-German egoist anarchist; Bernard de Mandeville, whose materialism has been retroactively described as a form of egoism; Friedrich Nietzsche, whose concept of will to power has both descriptive and prescriptive interpretations; Dmitry Ivanovich Pisarev, a Russian literary critic and philosopher of nihilism and rational egoism; Ayn Rand, who supported an egoistic model of capitalist self-incentive and selfishness; Max Stirner, whose views were described by John F. Welsh as "dialectical egoism"; Benjamin Tucker, an American egoist anarchist; James L. Walker, who independently formulated an egoist philosophy before himself discovering the work of Stirner; and John Fowles, a British writer who laid out an individualist philosophy in his book The Aristos. Relation to altruism In 1851, the French philosopher Auguste Comte coined the term altruism (French: altruisme) as an antonym for egoism. In this sense, altruism defined Comte's position that all self-regard must be replaced with only the regard for others.
While Friedrich Nietzsche does not view altruism as a suitable antonym for egoism, Comte states that only two human motivations exist, egoistic and altruistic, and that the two cannot be mediated; that is, one must always predominate over the other. For Comte, the total subordination of the self to altruism is a necessary condition to both social and personal benefit. Nietzsche, rather than rejecting the practice of altruism, warns that despite there being neither much altruism nor equality in the world, there is almost universal endorsement of their value, notoriously even by those who are their worst enemies in practice. Egoist philosophy commonly views the subordination of the self to altruism as either a form of domination that limits freedom, an unethical or irrational principle, or an extension of some egoistic root cause. In evolutionary theory, biological altruism is the observed occurrence of an organism acting to the benefit of others at the cost of its own reproductive fitness. While biological egoism does grant that an organism may act to the benefit of others, it describes such behavior only when it is in accordance with reproductive self-interest. Kin altruism and selfish gene theory are examples of this division. On biological altruism, the Stanford Encyclopedia of Philosophy states: "Contrary to what is often thought, an evolutionary approach to human behaviour does not imply that humans are likely to be motivated by self-interest alone. One strategy by which 'selfish genes' may increase their future representation is by causing humans to be non-selfish, in the psychological sense." This is a central topic within the contemporary discourse on psychological egoism. Relation to nihilism The history of egoist thought has often overlapped with that of nihilism. For example, Max Stirner's rejection of absolutes and abstract concepts often places him among the first philosophical nihilists. The popular description of Stirner as a moral nihilist, however, may fail to encapsulate certain subtleties of his ethical thought. The Stanford Encyclopedia of Philosophy states, "Stirner is clearly committed to the non-nihilistic view that certain kinds of character and modes of behaviour (namely autonomous individuals and actions) are to be valued above all others. His conception of morality is, in this respect, a narrow one, and his rejection of the legitimacy of moral claims is not to be confused with a denial of the propriety of all normative or ethical judgement." Stirner's nihilism may instead be understood as cosmic nihilism. Likewise, both normative and descriptive theories of egoism developed further under Russian nihilism, soon giving rise to rational egoism. The nihilist philosophers Dmitry Pisarev and Nikolay Chernyshevsky were influential in this regard, compounding such forms of egoism with hard determinism. Max Stirner's philosophy strongly rejects modernity and is highly critical of the increasing dogmatism and oppressive social institutions that embody it. In order that it might be surpassed, egoist principles are upheld as a necessary advancement beyond the modern world. The Stanford Encyclopedia states that Stirner's historical analyses serve to "undermine historical narratives which portray the modern development of humankind as the progressive realisation of freedom, but also to support an account of individuals in the modern world as increasingly oppressed". This critique of humanist discourses especially has linked Stirner to more contemporary poststructuralist thought.
Political egoism Since normative egoism rejects the moral obligation to subordinate the ego to society-at-large or to a ruling class, it may be predisposed to certain political implications. The Internet Encyclopedia of Philosophy states: In contrast with this, however, such an ethic may not morally obligate against the egoistic exercise of power over others. On these grounds, Friedrich Nietzsche criticizes egalitarian morality and political projects as unconducive to the development of human excellence. Max Stirner's own conception, the union of egoists as detailed in his work The Ego and Its Own, proposed a form of societal relations whereby limitations on egoistic action are rejected. When posthumously adopted by the anarchist movement, this became the foundation for egoist anarchism. Stirner's variant of property theory is similarly dialectical, where the concept of ownership is only that personal distinction made between what is one's property and what is not. Consequently, it is the exercise of control over property which constitutes the nonabstract possession of it. In contrast to this, Ayn Rand incorporates capitalist property rights into her egoist theory. Revolutionary politics The egoist philosopher Nikolai Gavrilovich Chernyshevskii was the dominant intellectual figure behind the 1860–1917 revolutionary movement in Russia, which resulted in the assassination of Tsar Alexander II in 1881, eight years before Chernyshevskii's death in 1889. Dmitry Pisarev was a similarly radical influence within the movement, though he did not personally advocate political revolution. Philosophical egoism has also found wide appeal among anarchist revolutionaries and thinkers, such as John Henry Mackay, Benjamin Tucker, Émile Armand, Han Ryner, Gérard de Lacaze-Duthiers, Renzo Novatore, Miguel Giménez Igualada, and Lev Chernyi. Though Max Stirner did not involve himself in any revolutionary movements, the entire school of individualist anarchism owes much of its intellectual heritage to him. Egoist philosophy may be misrepresented as a principally revolutionary field of thought. However, neither Hobbesian nor Nietzschean theories of egoism approve of political revolution. Anarchism and revolutionary socialism were also strongly rejected by Ayn Rand and her followers. Fascism The philosophies of both Nietzsche and Stirner were heavily appropriated (or possibly expropriated) by fascist and proto-fascist ideologies. Nietzsche in particular has infamously been represented as a predecessor to Nazism, and a substantial academic effort was necessary to disassociate his ideas from this appropriation.
Acclimatization
Acclimatization or acclimatisation (also called acclimation or acclimatation) is the process in which an individual organism adjusts to a change in its environment (such as a change in altitude, temperature, humidity, photoperiod, or pH), allowing it to maintain fitness across a range of environmental conditions. Acclimatization occurs in a short period of time (hours to weeks), and within the organism's lifetime (compared to adaptation, which is evolution, taking place over many generations). This may be a discrete occurrence (for example, when mountaineers acclimate to high altitude over hours or days) or may instead represent part of a periodic cycle, such as a mammal shedding heavy winter fur in favor of a lighter summer coat. Organisms can adjust their morphological, behavioral, physical, and/or biochemical traits in response to changes in their environment. While the capacity to acclimate to novel environments has been well documented in thousands of species, researchers still know very little about how and why organisms acclimate the way that they do. Names The nouns acclimatization and acclimation (and the corresponding verbs acclimatize and acclimate) are widely regarded as synonymous, both in general vocabulary and in medical vocabulary. The synonym acclimation is less commonly encountered, and fewer dictionaries enter it. Methods Biochemical In order to maintain performance across a range of environmental conditions, there are several strategies organisms use to acclimate. In response to changes in temperature, organisms can change the biochemistry of cell membranes making them more fluid in cold temperatures and less fluid in warm temperatures by increasing the number of membrane proteins. In response to certain stressors, some organisms express so-called heat shock proteins that act as molecular chaperones and reduce denaturation by guiding the folding and refolding of proteins. It has been shown that organisms which are acclimated to high or low temperatures display relatively high resting levels of heat shock proteins so that when they are exposed to even more extreme temperatures the proteins are readily available. Expression of heat shock proteins and regulation of membrane fluidity are just two of many biochemical methods organisms use to acclimate to novel environments. Morphological Organisms are able to change several characteristics relating to their morphology in order to maintain performance in novel environments. For example, birds often increase their organ size to increase their metabolism. This can take the form of an increase in the mass of nutritional organs or heat-producing organs, like the pectorals (with the latter being more consistent across species). The theory While the capacity for acclimatization has been documented in thousands of species, researchers still know very little about how and why organisms acclimate in the way that they do. Since researchers first began to study acclimation, the overwhelming hypothesis has been that all acclimation serves to enhance the performance of the organism. This idea has come to be known as the beneficial acclimation hypothesis. Despite such widespread support for the beneficial acclimation hypothesis, not all studies show that acclimation always serves to enhance performance (See beneficial acclimation hypothesis). One of the major objections to the beneficial acclimation hypothesis is that it assumes that there are no costs associated with acclimation. 
However, there are likely to be costs associated with acclimation. These include the cost of sensing the environmental conditions and regulating responses, producing structures required for plasticity (such as the energetic costs in expressing heat shock proteins), and genetic costs (such as linkage of plasticity-related genes with harmful genes). Given the shortcomings of the beneficial acclimation hypothesis, researchers are continuing to search for a theory that will be supported by empirical data. The degree to which organisms are able to acclimate is dictated by their phenotypic plasticity, or the ability of an organism to change certain traits. Recent research in the study of acclimation capacity has focused more heavily on the evolution of phenotypic plasticity rather than acclimation responses. Scientists believe that when they understand more about how organisms evolved the capacity to acclimate, they will better understand acclimation. Examples Plants Many plants, such as maple trees, irises, and tomatoes, can survive freezing temperatures if the temperature gradually drops lower and lower each night over a period of days or weeks. The same drop might kill them if it occurred suddenly. Studies have shown that tomato plants that were acclimated to higher temperature over several days were more efficient at photosynthesis at relatively high temperatures than were plants that were not allowed to acclimate. In the orchid Phalaenopsis, phenylpropanoid enzymes are enhanced in the process of plant acclimatisation at different levels of photosynthetic photon flux. Animals Animals acclimatize in many ways. Sheep grow very thick wool in cold, damp climates. Fish are able to adjust only gradually to changes in water temperature and quality. Tropical fish sold at pet stores are often kept in acclimatization bags until this process is complete. Lowe & Vance (1995) were able to show that lizards acclimated to warm temperatures could maintain a higher running speed at warmer temperatures than lizards that were not acclimated to warm conditions. Fruit flies that develop at relatively cooler or warmer temperatures have increased cold or heat tolerance as adults, respectively (See Developmental plasticity). Humans The salt content of sweat and urine decreases as people acclimatize to hot conditions. Plasma volume, heart rate, and capillary activation are also affected. Acclimatization to high altitude continues for months or even years after initial ascent, and ultimately enables humans to survive in an environment that, without acclimatization, would kill them. Humans who migrate permanently to a higher altitude naturally acclimatize to their new environment by developing an increase in the number of red blood cells to increase the oxygen carrying capacity of the blood, in order to compensate for lower levels of oxygen intake. See also: Acclimatisation society; Beneficial acclimation hypothesis; Heat index; Introduced species; Phenotypic plasticity; Wind chill.
Psychological resilience
Psychological resilience is the ability to cope mentally and emotionally with a crisis, or to return to pre-crisis status quickly. The term was popularized in the 1970s and 1980s by psychologist Emmy Werner as she conducted a forty-year-long study of a cohort of Hawaiian children who came from low socioeconomic status backgrounds. Numerous factors influence a person's level of resilience. Internal factors include personal characteristics such as self-esteem, self-regulation, and a positive outlook on life. External factors include social support systems, including relationships with family, friends, and community, as well as access to resources and opportunities. People can leverage psychological interventions and other strategies to enhance their resilience and better cope with adversity. These include cognitive-behavioral techniques, mindfulness practices, building psychosocial factors, fostering positive emotions, and promoting self-compassion. Overview A resilient person uses "mental processes and behaviors in promoting personal assets and protecting self from the potential negative effects of stressors". Psychological resilience is an adaptation in a person's psychological traits and experiences that allows them to regain or remain in a healthy mental state during crises/chaos without long-term negative consequences. It is difficult to measure and test this psychological construct because resilience can be interpreted in a variety of ways. Most psychological paradigms (biomedical, cognitive-behavioral, sociocultural, etc.) have their own perspective of what resilience looks like, where it comes from, and how it can be developed. There are numerous definitions of psychological resilience, most of which center around two concepts: adversity and positive adaptation. Positive emotions, social support, and hardiness can influence a person to become more resilient. A psychologically resilient person can resist adverse mental conditions that are often associated with unfavorable life circumstances. This differs from psychological recovery which is associated with returning to those mental conditions that preceded a traumatic experience or personal loss. Research on psychological resilience has shown that it plays a crucial role in promoting mental health and well-being. Resilient people are better equipped to navigate life's challenges, maintain positive emotions, and recover from setbacks. They demonstrate higher levels of self-efficacy, optimism, and problem-solving skills, which contribute to their ability to adapt and thrive in adverse situations. Resilience is a "positive adaptation" after a stressful or adverse situation. When a person is "bombarded by daily stress, it disrupts their internal and external sense of balance, presenting challenges as well as opportunities." The routine stressors of daily life can have positive impacts which promote resilience. Some psychologists believe that it is not stress itself that promotes resilience but rather the person's perception of their stress and of their level of control. The presence of stress allows people to practice resilience. It is unknown what the correct level of stress is for each person. Some people can handle more stress than others. Stress is experienced in a person's life course at times of difficult life transitions, involving developmental and social change; traumatic life events, including grief and loss; and environmental pressures, encompassing poverty and community violence. 
Resilience is the integrated adaptation of physical, mental, and spiritual aspects to circumstances, and a coherent sense of self that is able to maintain normative developmental tasks that occur at various stages of life. The Children's Institute of the University of Rochester explains that "resilience research is focused on studying those who engage in life with hope and humor despite devastating losses". Resilience is not only about overcoming a deeply stressful situation, but also coming out of such a situation with "competent functioning". Resiliency allows a person to rebound from adversity as a strengthened and more resourceful person. Some characteristics associated with psychological resilience include: an easy temperament, good self-esteem, planning skills, and a supportive environment inside and outside of the family. When an event is appraised as comprehensible (predictable), manageable (controllable), and somehow meaningful (explainable), a resilient response is more likely. Process Psychological resilience is commonly understood as a process. It can also be characterized as a tool a person develops over time, or as a personal trait of the person ("resiliency"). Most research shows resilience as the result of people being able to interact with their environments and participate in processes that either promote well-being or protect them against the overwhelming influence of relative risk. This research supports the model in which psychological resilience is seen as a process rather than a trait—something to develop or pursue, rather than a static endowment or endpoint. When people are faced with an adverse condition, there are three ways in which they may approach the situation: they may respond with anger or aggression; they may become overwhelmed and shut down; or they may feel the emotion about the situation and handle it appropriately. Resilience is promoted through the third approach, which is employed by individuals who adapt and change their current patterns to cope with disruptive states, thereby enhancing their well-being. In contrast, the first and second approaches lead individuals to adopt a victim mentality, blaming others and rejecting coping methods even after a crisis has passed. These individuals tend to react instinctively rather than respond thoughtfully, clinging to negative emotions such as fear, anger, anxiety, distress, helplessness, and hopelessness. Such emotions decrease problem-solving abilities and weaken resilience, making it harder to recover. Resilient people, on the other hand, actively cope, bounce back, and find solutions. Their resilience is further supported by protective environments, including good families, schools, communities, and social policies, which provide cumulative protective factors that bolster their ability to withstand and recover from exposure to risk factors. Resilience can be viewed as a developmental process (the process of developing resilience), or as indicated by a response process. In the latter approach, the effects of an event or stressor on a situationally relevant indicator variable are studied, distinguishing immediate responses, dynamic responses, and recovery patterns. In response to a stressor, more-resilient people show some increase in stress, but less than less-resilient people do. The speed with which this stress response returns to pre-stressor levels is also indicative of a person's resilience.
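As a purely illustrative sketch of the two indicators just described (the size of the initial rise in stress and the speed of the return toward baseline), the following Python snippet summarizes a series of repeated stress ratings; the function name, the example ratings, and the half-recovery threshold are hypothetical rather than taken from any resilience study.

```python
import numpy as np

def stress_response_profile(stress, baseline_window=3, recovery_fraction=0.5):
    """Summarize a stress time series with two resilience-related indices.

    stress          -- equally spaced stress ratings taken before, during,
                       and after a stressor.
    Returns (reactivity, recovery_time):
      reactivity    -- peak increase above the pre-stressor baseline
                       (smaller rises are typical of more-resilient profiles).
      recovery_time -- samples after the peak until the rating falls back to
                       within `recovery_fraction` of that rise, or None if it
                       never does (a faster return suggests greater resilience).
    """
    stress = np.asarray(stress, dtype=float)
    baseline = stress[:baseline_window].mean()   # pre-stressor level
    peak_idx = int(stress.argmax())
    reactivity = stress[peak_idx] - baseline

    threshold = baseline + recovery_fraction * reactivity
    after_peak = stress[peak_idx + 1:]
    below = np.flatnonzero(after_peak <= threshold)
    recovery_time = int(below[0]) + 1 if below.size else None
    return reactivity, recovery_time

# Hypothetical 0-10 self-report ratings around a stressor introduced at t = 3.
more_resilient = [2, 2, 3, 6, 5, 4, 3, 2, 2]
less_resilient = [2, 3, 3, 8, 8, 7, 6, 5, 5]
print(stress_response_profile(more_resilient))   # smaller rise, quicker return
print(stress_response_profile(less_resilient))   # larger rise, slower return
```

The half-recovery threshold is an arbitrary choice made for this sketch; published studies operationalize reactivity and recovery in their own, study-specific ways.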
Biological models From a scientific standpoint, resilience's contested definition is multifaceted in relation to genetics, revealing a complex link between biological mechanisms and resilience: "Resilience, conceptualized as a positive bio-psychological adaptation, has proven to be a useful theoretical context for understanding variables for predicting long-term health and well-being". Three notable bases for resilience—self-confidence, self-esteem and self-concept—each have roots in a different nervous system—respectively, the somatic nervous system, the autonomic nervous system, and the central nervous system. Research indicates that, like trauma, resilience is influenced by epigenetic modifications. Increased DNA methylation of the growth factor GDNF in certain brain regions promotes stress resilience, as do molecular adaptations of the blood–brain barrier. The two neurotransmitters primarily responsible for stress buffering within the brain are dopamine and endogenous opioids, as evidenced by research showing that dopamine and opioid antagonists increased stress response in both humans and animals. Primary and secondary rewards reduce negative reactivity of stress in the brain in both humans and animals. The relationship between social support and stress resilience is thought to be mediated by the oxytocin system's impact on the hypothalamic-pituitary-adrenal axis. Alongside such neurotransmitters, stress-induced alterations in brain structures, such as the prefrontal cortex (PFC) and hippocampus, have been linked to mental health issues like depression and anxiety. The increased activation of the medial prefrontal cortex and glutamatergic circuits has emerged as a potential factor in enhancing resilience, as "environmental enrichment… increases the complexity of… pyramidal neurons in hippocampus and PFC, suggesting… a shared feature of resilience under these two distinct condition[s]." History The first research on resilience was published in 1973. The study used epidemiology—the study of disease prevalence—to uncover the risks and the protective factors that now help define resilience. A year later, the same group of researchers created tools to look at systems that support development of resilience. Emmy Werner was one of the early scientists to use the term resilience. She studied a cohort of children from Kauai, Hawaii. Kauai was quite poor and many of the children in the study grew up with alcoholic or mentally ill parents. Many of the parents were also out of work. Werner noted that of the children who grew up in these detrimental situations, two-thirds exhibited destructive behaviors in their later-teen years, such as chronic unemployment, substance abuse, and out-of-wedlock births (in girls). However, one-third of these youngsters did not exhibit destructive behaviors. Werner called the latter group resilient. Thus, resilient children and their families were those who, by definition, demonstrated traits that allowed them to be more successful than non-resilient children and families. Resilience also emerged as a major theoretical and research topic in the 1980s in studies of children with mothers diagnosed with schizophrenia. A 1989 study showed that children with a schizophrenic parent may not obtain an appropriate level of comforting caregiving—compared to children with healthy parents—and that such situations often had a detrimental impact on children's development.
On the other hand, some children of ill parents thrived and were competent in academic achievement, which led researchers to make efforts to understand such responses to adversity. Since the onset of the research on resilience, researchers have been devoted to discovering protective factors that explain people's adaptation to adverse conditions, such as maltreatment, catastrophic life events, or urban poverty. Researchers endeavor to uncover how some factors (e.g. connection to family) may contribute to positive outcomes. Trait resilience Temperamental and constitutional disposition is a major factor in resilience. It is one of the necessary precursors of resilience along with warmth in family cohesion and accessibility of prosocial support systems. There are three kinds of temperamental systems that play a part in resilience: the appetitive system, defensive system, and attentional system. Trait resilience is negatively correlated with the personality traits of neuroticism and negative emotionality, which represent tendencies to see and react to the world as threatening, problematic, and distressing, and to view oneself as vulnerable. Trait resilience is positively correlated with the personality traits of openness and positive emotionality, which represent tendencies to engage with and confront the world with confidence and self-directedness. Resilience traits are personal characteristics that express how people approach and react to events that they experience as negative. Trait resilience is generally considered via two methods: direct assessment of traits through resilience measures and proxy assessments of resilience in which existing cognate psychological constructs are used to explain resilient outcomes. Typically, trait resilience measures explore how individuals tend to react to and cope with adverse events. Proxy assessments of resilience, sometimes referred to as the buffering approach, view resilience as the antithesis of risk, focusing on how psychological processes interrelate with negative events to mitigate their effects. Possibly an individual perseverance trait, conceptually related to persistence and resilience, could also be measured behaviorally by means of arduous, difficult, or otherwise unpleasant tasks. Developing and sustaining resilience There are several theories or models that attempt to describe subcomponents, prerequisites, predictors, or correlates of resilience. Fletcher and Sarkar found five factors that develop and sustain a person's resilience: the ability to make realistic plans and take the steps necessary to follow through with them; confidence in one's strengths and abilities; communication and problem-solving skills; the ability to manage strong impulses and feelings; and good self-esteem. Among older adults, Kamalpour et al. found that the important factors are external connections, grit, independence, self-care, self-acceptance, altruism, hardship experience, health status, and positive perspective on life. Another study examined thirteen high-achieving professionals who seek challenging situations that require resilience, all of whom had experienced challenges in the workplace and negative life events over the course of their careers but who had also been recognized for their great achievements in their respective fields. Participants were interviewed about everyday life in the workplace as well as their experiences with resilience and thriving.
The study found six main predictors of resilience: positive and proactive personality, experience and learning, sense of control, flexibility and adaptability, balance and perspective, and perceived social support. High achievers were also found to engage in many activities unrelated to their work, such as engaging in hobbies, exercising, and organizing meetups with friends and loved ones. The American Psychological Association, in its popular psychology-oriented Psychology topics publication, suggests the following tactics people can use to build resilience: Prioritize relationships. Join a social group. Take care of your body. Practice mindfulness. Avoid negative coping outlets (like alcohol use). Help others. Be proactive; search for solutions. Make progress toward your goals. Look for opportunities for self-discovery. Keep things in perspective. Accept change. Maintain a hopeful outlook. Learn from your past. The idea that one can build one's resilience implies that resilience is a developable characteristic, and so is perhaps at odds with the theory that resilience is a process. Positive emotions The relationship between positive emotions and resilience has been extensively studied. People who maintain positive emotions while they face adversity are more flexible in their thinking and problem solving. Positive emotions also help people recover from stressful experiences. People who maintain positive emotions are better defended against the physiological effects of negative emotions, and are better equipped to cope adaptively, to build enduring social resources, and to enhance their well-being. The ability to consciously monitor the factors that influence one's mood is correlated with a positive emotional state. This is not to say that positive emotions are merely a by-product of resilience, but rather that feeling positive emotions during stressful experiences may have adaptive benefits in the coping process. Resilient people who have a propensity for coping strategies that concretely elicit positive emotions—such as benefit-finding and cognitive reappraisal, humor, optimism, and goal-directed problem-focused coping—may strengthen their resistance to stress by allocating more access to these positive emotional resources. In one study, social support from caring adults encouraged resilience among participants by providing them with access to conventional activities. Positive emotions have physiological consequences. For example, humor leads to improvements in immune system functioning and increases in levels of salivary immunoglobulin A, a vital system antibody, which serves as the body's first line of defense in respiratory illnesses. Other health outcomes include faster injury recovery rate and lower readmission rates to hospitals for the elderly, and reductions in the length of hospital stay. One study has found early indications that older adults who have increased levels of psychological resilience have decreased odds of death or inability to walk after recovering from hip fracture surgery. In another study, trait-resilient individuals experiencing positive emotions more quickly rebounded from cardiovascular activation that was initially generated by negative emotional arousal. Social support Social support is an important factor in the development of resilience. While many competing definitions of social support exist, they tend to concern one's degree of access to, and use of, strong ties to other people who are similar to oneself.
Social support requires solidarity and trust, intimate communication, and mutual obligation both within and outside the family. Military studies have found that resilience is also dependent on group support: unit cohesion and morale is the best predictor of combat resiliency within a unit or organization. Resilience is highly correlated with peer support and group cohesion. Units with high cohesion tend to experience a lower rate of psychological breakdowns than units with low cohesion and morale. High cohesion and morale enhance adaptive stress reactions. War veterans who had more social support were less likely to develop post-traumatic stress disorder. Cognitive behavioral therapy A number of self-help approaches to resilience-building have been developed, drawing mainly on cognitive behavioral therapy (CBT) and rational emotive behavior therapy (REBT). For example, a group cognitive-behavioral intervention, called the Penn Resiliency Program (PRP), fosters aspects of resilience. A meta-analysis of 17 PRP studies showed that the intervention significantly reduces depressive symptoms over time. In CBT, building resilience is a matter of mindfully changing behaviors and thought patterns. The first step is to change the nature of self-talk—the internal monologue people have that reinforces beliefs about their self-efficacy and self-value. To build resilience, a person needs to replace negative self-talk, such as "I can't do this" and "I can't handle this", with positive self-talk. This helps to reduce psychological stress when a person faces a difficult challenge. The second step is to prepare for challenges, crises, and emergencies. Businesses prepare by creating emergency response plans, business continuity plans, and contingency plans. Similarly, an individual can create a financial cushion to help with economic stressors, maintain supportive social networks, and develop emergency response plans. Language learning and communication Language learning and communication help develop resilience in people who travel, study abroad, work internationally, or in those who find themselves as refugees in countries where their home language is not spoken. Research conducted by the British Council found a strong relationship between language and resilience in refugees. Providing adequate English-learning programs and support for Syrian refugees builds resilience not only in the individual, but also in the host community. Language builds resilience in five ways: home language and literacy development Development of home language and literacy helps create the foundation for a shared identity. By maintaining the home language, even when displaced, a person not only learns better in school, but enhances their ability to learn other languages. This improves resilience by providing a shared culture and sense of identity that allows refugees to maintain close relationships to others who share their identity and sets them up to possibly return one day. access to education, training, and employment This allows refugees to establish themselves in their host country and provides more ease when attempting to access information, apply to work or school, or obtain professional documentation. Securing access to education or employment is largely dependent on language competency, and both education and employment provide security and success that enhance resilience and confidence. learning together and social cohesion Learning together encourages resilience through social cohesion and networks. 
When refugees engage in language-learning activities with host communities, engagement and communication increase. Both refugee and host community are more likely to celebrate diversity, share their stories, build relationships, engage in the community, and provide each other with support. This creates a sense of belonging with the host communities alongside the sense of belonging established with other members of the refugee community through home language. addressing the effects of trauma on learning Additionally, language programs and language learning can help address the effects of trauma by providing a means to discuss and understand. Refugees are more capable of expressing their trauma, including the effects of loss, when they can effectively communicate with their host community. Especially in schools, language learning establishes safe spaces through storytelling, which further reinforces comfort with a new language, and can in turn lead to increased resilience. building inclusivity This is more focused on providing resources. By providing institutions or schools with more language-based learning and cultural material, the host community can learn how to better address the needs of the refugee community. This feeds back into the increased resilience of refugees by creating a sense of belonging and community. Another study shows the impacts of storytelling in building resilience. It aligns with many of the five factors identified by the study completed by the British Council, as it emphasizes the importance of sharing traumatic experiences through language. It showed that those who were exposed to more stories, from family or friends, had a more holistic view of life's struggles, and were thus more resilient, especially when surrounded by foreign languages or attempting to learn a new language. Development programs The Head Start program promotes resilience, as do the Big Brothers Big Sisters Programme, Centered Coaching & Consulting, the Abecedarian Early Intervention Project, and social programs for youth with emotional or behavioral difficulties. The Positive Behavior Supports and Intervention program is a trauma-informed, resilience-based program for elementary age students. It has four components: positive reinforcements such as encouraging feedback; understanding that behavior is a response to unmet needs or a survival response; promoting belonging, mastery, and independence; and creating an environment to support the student through sensory tools, mental health breaks, and play. Tuesday's Children, a family service organization, works to build psychological resilience through programs such as Mentoring and Project Common Bond, an eight-day peace-building and leadership initiative for people aged 15–20, from around the world, who have been directly impacted by terrorism. Military organizations test personnel for the ability to function under stressful circumstances by deliberately subjecting them to stress during training. Those students who do not exhibit the necessary resilience can be screened out of the training. Those who remain can be given stress inoculation training. The process is repeated as personnel apply for increasingly demanding positions, such as special forces. Other factors Another protective factor involves external social support, which helps moderate the negative effects of environmental hazards or stressful situations and guides vulnerable individuals toward optimistic paths.
One study distinguished three contexts for protective factors: Personal Attributes: Traits such as an outgoing personality, perceptiveness, and a positive self-concept. Family Environment: Close and supportive relationships with at least one family member or an emotionally stable parent. Community Support: Support and guidance from peers and community members. A study of the elderly in Zurich, Switzerland, illuminated the role humor plays to help people remain happy in the face of age-related adversity. Research has also been conducted into individual differences in resilience. Self-esteem, ego-control, and ego-resiliency are related to behavioral adaptation. Maltreated children who feel good about themselves may process risk situations differently, thereby avoiding negative internalized self-perceptions. Ego-control is "the threshold or operating characteristics of an individual with regard to the expression or containment" of their impulses, feelings, and desires. Ego-resilience refers to the "dynamic capacity to modify his or her modal level of ego-control, in either direction, as a function of the demand characteristics of the environmental context". Demographic information (e.g., gender) and resources (e.g., social support) also predict resilience. After disasters, women tend to show less resilience than men, and people who were less involved in affinity groups and organisations also showed less resilience. Certain aspects of religions, spirituality, or mindfulness could promote or hinder certain psychological virtues that increase resilience. However, "there has not yet been much direct empirical research looking specifically at the association of religion and ordinary strengths and virtues". In a review of the literature on the relationship between religiosity/spirituality and PTSD, about half of the studies showed a positive relationship and half showed a negative relationship between measures of religiosity/spirituality and resilience. The United States Army was criticized for promoting spirituality in its Comprehensive Soldier Fitness program as a way to prevent PTSD, due to the lack of conclusive supporting data. Forgiveness plays a role in resilience among patients with chronic pain (but not in the severity of the pain). Resilience is also enhanced in people who develop effective coping skills for stress. Coping skills help people reduce stress levels, so they remain functional. Coping skills include using meditation, exercise, socialization, and self-care practices to maintain a healthy level of stress. Bibliotherapy, positive tracking of events, and enhancing psychosocial protective factors with positive psychological resources are other methods for resilience building. Increasing a person's arsenal of coping skills builds resilience. A study of 230 adults, diagnosed with depression and anxiety, showed that emotional regulation contributed to resilience in patients. The emotional regulation strategies focused on planning, positively reappraising events, and reducing rumination. Patients with improved resilience experienced better treatment outcomes than patients with non-resilience-focused treatment plans. This suggests psychotherapeutic interventions may better handle mental disorders by focusing on psychological resilience. Other factors associated with resilience include the capacity to make realistic plans, self-confidence and a positive self-image, communication skills, and the capacity to manage strong feelings and impulses.
Children Adverse childhood experiences (ACEs) are events that occur in a child's life that could lead to maladaptive symptoms such as tension, low mood, and repetitive and recurring thoughts. Maltreated children who experience some risk factors (e.g., single parenting, limited maternal education, or family unemployment) show lower ego-resilience and intelligence than children who were not maltreated. Maltreated children are also more likely to withdraw and demonstrate behavior problems. Ego-resiliency and positive self-esteem predict competent adaptation in maltreated children. Psychological resilience, which helps people overcome adverse events, does not by itself explain why some children experience post-traumatic growth and some do not. Resilience is the product of a number of developmental processes over time that allow children to experience small exposures to adversity or age-appropriate challenges and develop skills to handle those challenges. This gives children a sense of pride and self-worth. Two "protective factors"—characteristics of children or situations that help children in the context of risk—are good cognitive functioning (like cognitive self-regulation and IQ) and positive relationships (especially with competent adults, like parents). Children who have protective factors in their lives tend to do better in some risky contexts. However, children do better when not exposed to high levels of risk or adversity. There are a few protective factors of young children that are consistent over differences in culture and stressors (poverty, war, divorce of parents, natural disasters, etc.): capable parenting; other close relationships; intelligence; self-control; motivation to succeed; self-confidence and self-efficacy; faith, hope, and belief that life has meaning; effective schools; effective communities; and effective cultural practices. Ann Masten calls these protective factors "ordinary magic"—the ordinary human adaptive systems that are shaped by biological and cultural evolution. In her book, Ordinary Magic: Resilience in Development, she discusses the "immigrant paradox", the phenomenon that first-generation immigrant youth are more resilient than their children. Researchers hypothesize that "there may be culturally based resiliency that is lost with succeeding generations as they become distanced from their culture of origin." Another hypothesis is that those who choose to immigrate are more likely to be more resilient. Neurocognitive resilience Trauma is defined as an emotional response to a distressing event, and PTSD is a mental disorder that develops after a person has experienced a dangerous event, for instance a car accident or environmental disaster. The findings of a study conducted on a sample of 226 individuals who had experienced trauma indicate a positive association between resilience and enhanced nonverbal memory, as well as a measure of emotional learning. The findings of the study indicate that individuals who exhibited resilience demonstrated a lower incidence of depressive and post-traumatic stress disorder (PTSD) symptoms. Conversely, those who lacked resilience exhibited a higher likelihood of experiencing unemployment and having a history of suicide attempts. The research additionally revealed that the experience of severe childhood abuse or exposure to trauma was correlated with a lack of resilience. The results indicate that resilience could potentially serve as a substitute measure for emotional learning, a process that is frequently impaired in stress-related mental disorders.
This finding has the potential to enhance our comprehension of resilience. Young adults Sports provide benefits such as social support or a boost in self-confidence. The findings of a study investigating the correlation between resilience and symptom resolution in adolescents and young adults who have experienced sport-related concussions (SRC) indicate that individuals with lower initial resilience ratings tend to exhibit a higher number and severity of post-concussion symptoms (PCSS), elevated levels of anxiety and depression, and a delayed recovery process from SRC. Additionally, the research revealed that those who initially scored lower on resilience assessments were less inclined to describe a sense of returning to their pre-injury state and experienced more pronounced exacerbation of symptoms resulting from both physical and cognitive exertion, even after resuming sports or physical activity. This finding illustrates the significant impact that resilience can have on the process of physical and mental recovery. Role of the family Family environments that are caring and stable, hold high expectations for children's behavior, and encourage participation by children in the life of the family are environments that more successfully foster resilience in children. Most resilient children have a strong relationship with at least one adult (not always a parent), and this relationship helps to diminish risk associated with family discord. Parental resilience—the ability of parents to deliver competent, high-quality parenting despite the presence of risk factors—plays an important role in children's resilience. Understanding the characteristics of quality parenting is critical to the idea of parental resilience. However, resilience research has focused on the well-being of children, with limited academic attention paid to factors that may contribute to the resilience of parents. Even if divorce produces stress, the availability of social support from family and community can reduce this stress and yield positive outcomes. A family that emphasizes the value of assigned chores, caring for brothers or sisters, and the contribution of part-time work in supporting the family helps to foster resilience. Some practices that poor parents utilize help to promote resilience in families. These include frequent displays of warmth, affection, and emotional support; reasonable expectations for children combined with straightforward, not overly harsh discipline; family routines and celebrations; and the maintenance of common values regarding money and leisure. According to sociologist Christopher B. Doob, "Poor children growing up in resilient families have received significant support for doing well as they enter the social world—starting in daycare programs and then in schooling." The Besht model of natural resilience-building through parenting, in an ideal family with positive access and support from family and friends, has four key markers: realistic upbringing; effective risk communication; positivity and restructuring of demanding situations; and building self-efficacy and hardiness. In this model, self-efficacy is the belief in one's ability to organize and execute the courses of action required to achieve goals, and hardiness is a composite of interrelated attitudes of commitment, control, and challenge. Role of school Resilient children in classroom environments work and play well, hold high expectations, and demonstrate locus of control, self-esteem, self-efficacy, and autonomy. 
These things work together to prevent the debilitating behaviors that are associated with learned helplessness. Research on Mexican–American high school students found that a sense of belonging to school was the only significant predictor of academic resilience, though a sense of belonging to family, a peer group, and a culture was also associated with academic resilience. "Although cultural loyalty overall was not a significant predictor of resilience, certain cultural influences nonetheless contribute to resilient outcomes, like familism and cultural pride and awareness." The results "indicate a negative relationship between cultural pride and the ethnic homogeneity of a school." The researchers hypothesize that "ethnicity becomes a salient and important characteristic in more ethnically diverse settings". A strong connection with one's cultural identity is an important protective factor against stress and is indicative of increased resilience. While classroom resources have been created to promote resilience in students, the most effective way to ensure resilience in children is to protect their natural adaptive systems from breaking down or being hijacked. At home, resilience can be promoted through a positive home environment and emphasizing cultural practices and values. In school, this can be done by ensuring that each student develops and maintains a sense of belonging to the school through positive relationships with classroom peers and a caring teacher. A sense of belonging—whether it be in a culture, family, or another group—predicts resiliency against any given stressor. Role of the community Communities play a role in fostering resilience. The clearest sign of a cohesive and supportive community is the presence of social organizations that provide healthy human development. Services are unlikely to be used unless there is good communication about them. Children who are repeatedly relocated do not benefit from these resources, as their opportunities for resilience-building community participation are disrupted with every relocation. Outcomes in adulthood Patients who show resilience to adverse events in childhood may have worse outcomes later in life. A study in the American Journal of Psychiatry interviewed 1,420 participants with a Child and Adolescent Psychiatric Assessment up to 8 times as children. Of those, 1,266 were interviewed as adults, and this group had higher risks for anxiety, depression, and problems with work or education. This was accompanied by worse physical health outcomes. The study authors posit that the goal of public health should be to reduce childhood trauma, and not promote resilience. Specific situations Divorce Cultivating resilience may be beneficial to all parties involved in divorce. The level of resilience a child will experience after their parents have split is dependent on both internal and external variables. Some of these variables include their psychological and physical state and the level of support they receive from their schools, friends, and family friends. Children differ by age, gender, and temperament in their capacity to cope with divorce. About 20–25% of children "demonstrate severe emotional and behavioral problems" when going through a divorce, compared to 10% of children in families with married parents who exhibit similar problems. Despite this, approximately 75–80% of these children will "develop into well-adjusted adults with no lasting psychological or behavioral problems". 
This goes to show that most children have the resilience needed to endure their parents' divorce. The effects of the divorce extend past the separation of the parents. Residual conflict between parents, financial problems, and the re-partnering or remarriage of parents can cause stress. Studies have shown conflicting results about the effect of post-divorce conflict on a child's healthy adjustment. Divorce may reduce children's financial means and associated lifestyle. For example, economizing may mean a child cannot continue to participate in extracurricular activities such as sports and music lessons, which can be detrimental to their social lives. A parent's repartnering or remarrying can add conflict and anger to a child's home environment. One reason re-partnering causes additional stress is because of the lack of clarity in roles and relationships; the child may not know how to react and behave with this new quasi-parent figure in their life. Bringing in a new partner/spouse may be most stressful when done shortly after the divorce. Divorce is not a single event, but encompasses multiple changes and challenges. Internal factors promote resiliency in the child, as do external factors in the environment. Certain programs such as the 14-week Children's Support Group and the Children of Divorce Intervention Program may help a child cope with the changes that occur from a divorce. Bullying Beyond preventing bullying, it is also important to consider interventions based on emotional intelligence when bullying occurs. Emotional intelligence may foster resilience in victims. When a person faces stress and adversity, especially of a repetitive nature, their ability to adapt is an important factor in whether they have a more positive or negative outcome. One study examining adolescents who demonstrated resilience to bullying found higher behavioral resilience in girls and higher emotional resilience in boys. The study's authors suggested the targeting of psychosocial skills as a form of intervention. Emotional intelligence promotes resilience to stress, and the ability to manage stress and other negative emotions can restrain a victim from going on to perpetuate aggression. Emotion regulation is an important factor in resilience. Emotional perception significantly facilitates lower negative emotionality during stress, while emotional understanding facilitates resilience and correlates with positive affect. Natural disasters Resilience after a natural disaster can be gauged on an individual level (each person in the community), a community level (everyone collectively in the affected locality), and on a physical level (the locality's environment and infrastructure). UNESCAP-funded research on how communities show resiliency in the wake of natural disasters found that communities were more physically resilient if community members banded together and made resiliency a collective effort. Social support, especially the ability to pool resources, is key to resilience. Communities that pooled social, natural, and economic resources were more resilient and could overcome disasters more quickly than communities that took a more individualistic approach. The World Economic Forum met in 2014 to discuss resiliency after natural disasters. They concluded that countries that are more economically sound, and whose members can diversify their livelihoods, show higher levels of resiliency. 
This had not been studied in depth, but the ideas discussed in this forum appeared fairly consistent with existing research. Individual resilience in the wake of natural disasters can be predicted by the level of emotion the person experienced and was able to process during and following the disaster. Those who employed emotional styles of coping were able to grow from their experiences and to help others. In these instances, experiencing emotions was adaptive. Those who did not engage with their emotions and who employed avoidant and suppressive coping styles had poorer mental health outcomes following disaster. Death of a family member Little research had been done on the topic of family resilience in the wake of the death of a family member. Clinical attention to bereavement has focused on the individual mourning process rather than on the family unit as a whole. Resiliency in this context is the "ability to maintain a stable equilibrium" that is conducive to balance, harmony, and recovery. Families manage familial distortions caused by the death of a family member by reorganizing relationships and changing patterns of functioning to adapt to their new situation. People who exhibit resilience in the wake of trauma can successfully traverse the bereavement process without long-term negative consequences. One of the healthiest behaviors displayed by resilient families in the wake of a death is honest and open communication. This facilitates an understanding of the crisis. Sharing the experience of the death can promote immediate and long-term adaptation. Empathy is a crucial component in familial resilience because it allows mourners to understand other positions, tolerate conflict, and grapple with differences that may arise. Another crucial component of resilience is the maintenance of a routine that binds the family together through regular contact and order. The continuation of education and a connection with peers and teachers at school is an important support for children struggling with the death of a family member. Professional settings Resilience has been examined in the context of failure and setbacks in workplace settings. Psychological resilience is one of the core constructs of positive organizational behavior and has captured scholars' and practitioners' attention. Research has highlighted certain personality traits, personal resources (e.g., self-efficacy, work-life balance, social competencies), personal attitudes (e.g., sense of purpose, job commitment), positive emotions, and work resources (e.g., social support, positive organizational context) as potential facilitators of workplace resilience. Attention has also been directed to the role of resilience in innovative contexts. Due to high degrees of uncertainty and complexity in the innovation process, failure and setbacks happen frequently in this context. These can harm affected individuals' motivation and willingness to take risks, so their resilience is essential for them to productively engage in future innovative activities. A resilience construct specifically aligned to the peculiarities of the innovation context was needed to diagnose and develop innovators' resilience: Innovator Resilience Potential (IRP). Based on Bandura's social cognitive theory, IRP has six components: self-efficacy, outcome expectancy, optimism, hope, self-esteem, and risk propensity. 
It reflects a process perspective on resilience: IRP can be interpreted either as an antecedent of how a setback affects an innovator, or as an outcome of the process that is influenced by the setback situation. A measurement scale of IRP was developed and validated in 2018. Cultural differences There is controversy about the indicators of good psychological and social development when resilience is studied across different cultures and contexts. The American Psychological Association's Task Force on Resilience and Strength in Black Children and Adolescents, for example, notes that there may be special skills that these young people and families have that help them cope, including the ability to resist racial prejudice. Researchers of indigenous health have shown the impact of culture, history, community values, and geographical settings on resilience in indigenous communities. People who cope may also show "hidden resilience" when they do not conform with society's expectations for how someone is supposed to behave (for example, in some contexts aggression may aid resilience, or less emotional engagement may be protective in situations of abuse). Resilience in individualist and collectivist communities Individualist cultures, such as those of the U.S., Austria, Spain, and Canada, emphasize personal goals, initiatives, and achievements. Independence, self-reliance, and individual rights are highly valued by members of individualistic cultures. The ideal person in individualist societies is assertive, strong, and innovative. People in this culture tend to describe themselves in terms of their unique traits—"I am analytical and curious". Economic, political, and social policies reflect the culture's interest in individualism. Collectivist cultures, such as those of Japan, Sweden, Turkey, and Guatemala, emphasize family and group work goals. The rules of these societies promote unity, brotherhood, and selflessness. Families and communities practice cohesion and cooperation. The ideal person in collectivist societies is trustworthy, honest, sensitive, and generous—emphasizing intrapersonal skills. Collectivists tend to describe themselves in terms of their roles—"I am a good husband and a loyal friend". In a study on the consequences of disaster on a culture's individualism, researchers operationalized these cultures by identifying indicative phrases in a society's literature. Words that showed the theme of individualism include, "able, achieve, differ, own, personal, prefer, and special." Words that indicated collectivism include, "belong, duty, give, harmony, obey, share, together." Differences in response to natural disasters Natural disasters threaten to destroy communities, displace families, degrade cultural integrity, and diminish an individual's level of functioning. Comparing individualist community reactions to collectivist community responses after natural disasters illustrates their differences and respective strengths as tools of resilience. Some suggest that because disasters strengthen the need to rely on other people and social structures, they reduce individual agency and the sense of autonomy, and so regions with heightened exposure to disaster should cultivate collectivism. However, interviews with and experiments on disaster survivors indicate that disaster-induced anxiety and stress decrease one's focus on social-contextual information—a key component of collectivism. So disasters may increase individualism. 
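The word-count operationalization described above, in which indicative words are tallied in a society's literature, can be illustrated with a short sketch. The indicator word lists below are the ones quoted in the study; the tokenizer, the choice of corpus, and the per-1,000-token normalization are illustrative assumptions rather than the study's exact procedure.

import re

# Indicator words quoted in the study described above
INDIVIDUALISM_WORDS = {"able", "achieve", "differ", "own", "personal", "prefer", "special"}
COLLECTIVISM_WORDS = {"belong", "duty", "give", "harmony", "obey", "share", "together"}

def indicator_rates(text: str) -> dict:
    """Count individualism- and collectivism-indicative words and normalize
    per 1,000 tokens (the normalization choice is an assumption)."""
    tokens = re.findall(r"[a-z']+", text.lower())
    total = max(len(tokens), 1)
    individualism = sum(1 for t in tokens if t in INDIVIDUALISM_WORDS)
    collectivism = sum(1 for t in tokens if t in COLLECTIVISM_WORDS)
    return {
        "individualism_per_1000": 1000 * individualism / total,
        "collectivism_per_1000": 1000 * collectivism / total,
    }

# Example with a made-up sentence
sample = "We must share the duty and give together, yet each person can achieve their own special goals."
print(indicator_rates(sample))

Comparing such rates across texts from different periods is one way a cultural-level shift toward individualism or collectivism could, in principle, be tracked.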
In a study of the association between socio-ecological indicators and cultural-level change in individualism, the frequency of disasters was associated with greater (rather than less) individualism across the socio-ecological indicators examined. Supplementary analyses indicated that the frequency of disasters was more strongly correlated with individualism-related shifts than was the magnitude of disasters or the frequency of disasters qualified by the number of deaths. Baby-naming is one indicator of change. Urbanization was linked to preference for uniqueness in baby-naming practices at a one-year lag, secularism was linked to individualist shifts in interpersonal structure at both lags, and disaster prevalence was linked to more unique naming practices at both lags. Secularism and disaster prevalence contributed to shifts in naming practices. Disaster recovery research focuses on psychology and social systems but does not adequately address interpersonal networking or relationship formation and maintenance. One disaster response theory holds that people who use existing communication networks fare better during and after disasters. Moreover, they can play important roles in disaster recovery by organizing and helping others use communication networks and by coordinating with institutions. Building strong, self-reliant communities whose members know each other, know each other's needs, and are aware of existing communication networks is a possible source of resilience in disasters. Individualist societies promote individual responsibility for self-sufficiency; collectivist culture defines self-sufficiency within an interdependent communal context. Even where individualism is salient, a group thrives when its members choose social over personal goals and seek to maintain harmony, and when they value collectivist over individualist behavior. The concept of resilience in language While not all languages have a direct translation for the English word "resilience", nearly every culture has a word that relates to a similar concept, suggesting a common understanding of what resilience is. Even if a word does not directly translate to "resilience" in English, it relays a meaning similar enough to the concept and is used as such within the language. If a specific word for resilience does not exist in a language, speakers of that language typically assign a similar word that insinuates resilience based on context. Many languages use words that translate to "elasticity" or "bounce", which are used in context to capture the meaning of resilience. For example, one of the main words for "resilience" in Chinese literally translates to "rebound", one of the main words for "resilience" in Greek translates to "bounce" (another translates to "cheerfulness"), and one of the main words for "resilience" in Russian translates to "elasticity," just as it does in German. However, this is not the case for all languages. For example, if a Spanish speaker wanted to say "resilience", their main two options translate to "resistance" and "defense against adversity". Many languages have words that translate to "tenacity" or "grit" better than they do to "resilience". While these languages may not have a word that exactly translates to "resilience", English speakers often use the words tenacity or grit when referring to resilience. 
Arabic has a word solely for resilience, but also two other common expressions to relay the concept, which directly translate to "capacity on deflation" or "reactivity of the body", but are better translated as "impact strength" and "resilience of the body" respectively. A few languages, such as Finnish, have words that express resilience in a way that cannot be translated back to English. In Finnish, the word and concept "sisu" has recently been studied using a designated Sisu Scale, which is composed of both beneficial and harmful sides of sisu. Sisu, as measured by the Sisu Scale, has correlations with English-language equivalents, but the harmful side of sisu does not seem to have any corresponding concept in English-language-based scales. Sisu has sometimes been translated to "grit" in English; it blends the concepts of resilience, tenacity, determination, perseverance, and courage into one word that has become a facet of Finnish culture. Measurement Direct measurement Resilience is measured by evaluating personal qualities that reflect people's approach and response to negative experiences. Trait resilience is typically assessed using two methods: direct evaluation of traits through resilience measures, and proxy assessment of resilience, in which related psychological constructs are used to explain resilient outcomes. There are more than 30 resilience measures that assess over 50 different variables related to resilience, but there is no universally accepted "gold standard" for measuring resilience. Five of the established self-report measures of psychological resilience are: Ego Resiliency Scale measures a person's ability to exercise control over their impulses or inhibition in response to environmental demands, with the aim of maintaining or enhancing their ego equilibrium. Hardiness Scale encompasses three main dimensions: (1) commitment (a conviction that life has purpose), (2) control (confidence in one's ability to navigate life), and (3) challenge (aptitude for and pleasure in adapting to change). Psychological Resilience Scale assesses a "resilience core" characterized by five traits (purposeful life, perseverance, self-reliance, equanimity, and existential aloneness) that reflect an individual's physical and mental resilience throughout their lifespan. Connor-Davidson Resilience Scale developed in a clinical treatment setting and conceptualizing resilience as arising from four factors. Brief Resilience Scale assesses resilience as the capacity to bounce back from unfavorable circumstances. The Resilience Systems Scales were produced to investigate and measure the underlying structure of the 115 items from these five most commonly cited trait resilience scales in the literature. Three strong latent factors account for most of the variance captured by the five most popular resilience scales and replicate concepts from ecological systems theory: Engineering resilience The capability of a system to quickly and effortlessly restore itself to a stable equilibrium state after a disruption, as measured by its speed and ease of recovery. Ecological resilience The capacity of a system to endure or resist disruptions while preserving a steady state and adapting to necessary changes in its functioning. Adaptive capacity The ability to continuously adjust functions and processes in order to be ready to adapt to any disruption. 
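The pooling-and-factoring approach described above can be sketched with standard statistical tooling. The example below is only illustrative: it simulates random item responses rather than using the 115 real items from the five scales, and the choice of three factors simply mirrors the structure reported in the text.

import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Simulated item responses: 500 respondents x 115 pooled scale items.
# A real analysis would use actual item-level data from the five scales.
n_respondents, n_items = 500, 115
latent = rng.normal(size=(n_respondents, 3))           # three hypothesized latent factors
loadings = rng.normal(scale=0.8, size=(3, n_items))    # arbitrary loading pattern
items = latent @ loadings + rng.normal(size=(n_respondents, n_items))

fa = FactorAnalysis(n_components=3, random_state=0)
factor_scores = fa.fit_transform(items)   # per-respondent scores on the three factors
print(factor_scores.shape)                # (500, 3)
print(fa.components_.shape)               # (3, 115) estimated item loadings

In a real replication, the estimated loading pattern would be inspected to see whether the three factors correspond to the recovery-like, sustaining, and adaptive groupings of items described above.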
'Proxy' measurement Resilience literature identifies five main trait domains that serve as stress-buffers and can be used as proxies to describe resilience outcomes: personality A resilient personality includes positive expressions of the five-factor personality traits such as high emotional stability, extraversion, conscientiousness, openness, and agreeableness. cognitive abilities and executive functions Resilience is identified through effective use of executive functions and processing of experiential demands, or through an overarching cognitive mapping system that integrates information from current situations, prior experience, and goal-driven processes. affective systems, which include emotional regulation systems Emotion regulation systems are based on the broaden-and-build theory, in which positive emotions broaden momentary thought-action repertoires and build enduring personal resources. eudaimonic well-being Resilience emerges from natural well-being processes (e.g. autonomy, purpose in life, environmental mastery) and underlying genetic and neural substrates and acts as a protective resilient factor across life-span transitions. health systems This also reflects the broaden-and-build theory, where there is a reciprocal relationship between trait resilience and positive health functioning through the promotion of feeling capable to deal with adverse health situations. Mixed model A mixed model of resilience can be derived from direct and proxy measures of resilience. A search for latent factors among 61 direct and proxy resilience assessments suggested four main factors: recovery Resilience scales that focus on recovery, such as engineering resilience, align with reports of stability in emotional and health systems. The most fitting theoretical framework for this is the broaden-and-build theory of positive emotions. This theory highlights how positive emotions can foster resilient health systems and enable individuals to recover from setbacks. sustainability Resilience scales that reflect "sustainability," such as ecological resilience, align with conscientiousness, lower levels of dysexecutive functioning, and five dimensions of eudaimonic well-being. Theoretically, resilience is the effective use of executive functions and processing of experiential demands (also known as resilient functioning), where an overarching cognitive mapping system integrates information from current situations, prior experience, and goal-driven processes (known as the cognitive model of resilience). adaptability Resilience scales that assess adaptability, such as adaptive capacity, are associated with higher levels of extraversion (such as being enthusiastic, talkative, assertive, and gregarious) and openness-to-experience (such as being intellectually curious, creative, and imaginative). These personality factors are often reported to form a higher-order factor known as "beta" or "plasticity", which reflects a drive for growth, agency, and reduced inhibition by preferring new and diverse experiences while reducing fixed patterns of behavior. These findings suggest that adaptability can be seen as a complement to growth, agency, and reduced inhibition. social cohesion Several resilience measures converge to suggest an underlying social cohesion factor, in which social support, care, and cohesion among family and friends (as featured in various scales within the literature) form a single latent factor. 
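One way to picture a mixed model in practice is a profile that combines standardized direct and proxy measures under each of the four factors. The grouping and numbers below are purely illustrative assumptions: the measure names, normative means, and standard deviations are hypothetical, and equal weighting is not a validated scoring scheme.

import statistics

def zscore(raw: float, norm_mean: float, norm_sd: float) -> float:
    """Standardize a raw score against assumed normative mean and SD (hypothetical norms)."""
    return (raw - norm_mean) / norm_sd

# Hypothetical raw scores for one respondent, grouped under the four mixed-model factors.
measures = {
    "recovery":        [zscore(22, 18, 4),    # e.g., a bounce-back style direct scale
                        zscore(30, 28, 6)],   # e.g., an emotional-stability proxy
    "sustainability":  [zscore(40, 35, 7),    # e.g., a conscientiousness proxy
                        zscore(52, 50, 9)],   # e.g., a eudaimonic well-being proxy
    "adaptability":    [zscore(33, 30, 5),    # e.g., an extraversion proxy
                        zscore(29, 27, 5)],   # e.g., an openness proxy
    "social_cohesion": [zscore(18, 15, 3)],   # e.g., a perceived social support measure
}

profile = {factor: round(statistics.mean(values), 2) for factor, values in measures.items()}
print(profile)  # a four-factor resilience profile for this respondent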
These findings point to the possibility of adopting a "mixed model" of resilience in which direct assessments of resilience could be employed alongside cognate psychological measures to improve the evaluation of resilience. Criticism As with other psychological phenomena, there is controversy about how resilience should be defined. Its definition affects research focuses; differing or imprecise definitions lead to inconsistent research. Research on resilience has become more heterogeneous in its outcomes and measures, convincing some researchers to abandon the term altogether due to it being attributed to all outcomes of research where results were more positive than expected. There is also disagreement among researchers as to whether psychological resilience is a character trait or a state of being. However, it is generally agreed upon that resilience is a buildable resource. Adolescents who have a high level of adaptation (i.e. resilience) tend to struggle with dealing with other psychological problems later on in life. This is due to an overload of their stress response systems. There is evidence that the higher one's resilience is, the lower one's vulnerability. Brad Evans and Julian Reid criticize resilience discourse and its rising popularity in their book, Resilient Life. The authors assert that resilience discourse can put the onus of disaster response on individuals rather than on publicly coordinated efforts. Tied to the emergence of neoliberalism, climate change, third-world development, and other discourses, Evans and Reid argue that promoting resilience draws attention away from governmental responsibility and towards self-responsibility and healthy psychological effects such as post-traumatic growth.
Identity formation
Identity formation, also called identity development or identity construction, is a complex process in which humans develop a clear and unique view of themselves and of their identity. Self-concept, personality development, and values are all closely related to identity formation. Individuation is also a critical part of identity formation. Continuity and inner unity are hallmarks of healthy identity formation, while a disruption in either could be viewed and labeled as abnormal development; certain situations, like childhood trauma, can contribute to abnormal development. Specific factors also play a role in identity formation, such as race, ethnicity, and spirituality. The concept of personal continuity, or personal identity, refers to an individual posing questions about themselves that challenge their original perception, like "Who am I?" The process defines individuals to others and themselves. Various factors make up a person's actual identity, including a sense of continuity, a sense of uniqueness from others, and a sense of affiliation based on their membership in various groups like family, ethnicity, and occupation. These group identities demonstrate the human need for affiliation, or for people to define themselves in the eyes of others and themselves. Identities are formed on many levels. The micro-level is self-definition, relations with people, and issues as seen from a personal or an individual perspective. The meso-level pertains to how identities are viewed, formed, and questioned by immediate communities and/or families. The macro-level comprises the connections among individuals and issues from a national perspective. The global level connects individuals, issues, and groups at a worldwide level. Identity is often described as finite and consisting of separate and distinct parts (e.g., family, cultural, personal, professional). Theories Many theories of development have aspects of identity formation included in them. Three theories directly address the process of identity formation: Erik Erikson's stages of psychosocial development (specifically the Identity versus Role Confusion stage), James Marcia's identity status theory, and Jeffrey Arnett's theories of identity formation in emerging adulthood. Erikson's theory of identity vs. role confusion Erikson's theory is that people experience different crises or conflicts throughout their lives in eight stages. Each stage occurs at a certain point in life and must be successfully resolved to progress to the next stage. The particular stage relevant to identity formation takes place during adolescence: Identity versus Role Confusion. The Identity versus Role Confusion stage involves adolescents trying to figure out who they are in order to form a basic identity that they will build on throughout their life, especially concerning social and occupational identities. They ask themselves the existential questions: "Who am I?" and "What can I be?" They face the complexities of determining one's own identity. Erikson stated that this crisis is resolved with identity achievement, the point at which an individual has extensively considered various goals and values, accepting some and rejecting others, and understands who they are as a unique person. When an adolescent attains identity achievement, they are ready to enter the next stage of Erikson's theory, Intimacy versus Isolation, where they will form strong friendships and a sense of companionship with others. 
If the Identity versus Role Confusion crisis is not positively resolved, an adolescent will face confusion about future plans, particularly their roles in adulthood. Failure to form one's own identity leads to failure to form a shared identity with others, which can lead to instability in many areas as an adult. The identity formation stage of Erik Erikson's theory of psychosocial development is a crucial stage in life. Marcia's identity status theory Marcia created a structural interview designed to classify adolescents into one of four statuses of identity. The statuses are used to describe and pinpoint the progression of an adolescent's identity formation process. In Marcia's theory, identity is operationally defined as whether an individual has explored various alternatives and made firm commitments to an occupation, religion, sexual orientation, and a set of political values (a simple sketch of this two-dimensional structure appears after this section). The four identity statuses in James Marcia's theory are: Identity Diffusion (also known as Role Confusion): The opposite of identity achievement. The individual has not resolved their identity crisis yet, failing to commit to any goals or values and to establish a future life direction. In adolescents, this stage is characterized by disorganized thinking, procrastination, and avoidance of issues and actions. Identity Foreclosure: This occurs when teenagers conform to an identity without exploring what suits them best. For instance, teenagers might follow the values and roles of their parents or cultural norms. They might also foreclose on a negative identity, or the direct opposite of their parents' values or cultural norms. Identity Moratorium: This postpones identity achievement by providing temporary shelter. This status provides opportunities for exploration, either in breadth or in depth. Examples of moratoria common in American society include college or the military. Identity Achievement: This status is attained when the person has solved the identity issues by making commitments to goals, beliefs, and values after an extensive exploration of different areas. Jeffrey Arnett's Theories on Identity Formation in Emerging Adulthood Jeffrey Arnett's theory states that identity formation is most prominent in emerging adulthood, consisting of ages 18–25. Arnett holds that identity formation consists of indulging in different life opportunities and possibilities to eventually make important life decisions. He believes this phase of life includes a broad range of opportunities for identity formation, specifically in three different realms. These three realms of identity exploration are: Love: In emerging adulthood, individuals explore love to find a profound sense of intimacy. While trying to find love, individuals often explore their identity by focusing on questions such as: "Given the kind of person I am, what kind of person do I wish to have as a partner through life?" Work: Work opportunities that people get involved in are now centered around the idea that they are preparing for careers that they might have throughout adulthood. Individuals explore their identity by asking themselves questions such as: "What kind of work am I good at?", "What kind of work would I find satisfying for the long term?", or "What are my chances of getting a job in the field that seems to suit me best?" Worldviews: It is common for those in the stage of emerging adulthood to attend college. There they may be exposed to different worldviews, compared to those they were raised in, and become open to altering their previous worldviews. 
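Marcia's four statuses described above follow from crossing the two dimensions in his operational definition: exploration of alternatives and firm commitment. The sketch below reduces each dimension to a simple boolean for illustration, which is a simplification; Marcia's structured interview assesses these dimensions across several life domains rather than as single yes/no flags.

def marcia_status(has_explored: bool, has_committed: bool) -> str:
    """Map the exploration-by-commitment combination to Marcia's four identity statuses."""
    if has_explored and has_committed:
        return "Identity Achievement"
    if has_explored:
        return "Identity Moratorium"
    if has_committed:
        return "Identity Foreclosure"
    return "Identity Diffusion"

# Enumerate the two-by-two structure
for explored in (False, True):
    for committed in (False, True):
        print(f"explored={explored!s:5} committed={committed!s:5} -> {marcia_status(explored, committed)}")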
Individuals who do not attend college also believe that, as adults, they should decide what their beliefs and values are. Self-concept Self-concept, or self-identity, is the set of beliefs and ideas an individual has about themselves. Self-concept is different from self-consciousness, which is an awareness of one's self. Components of the self-concept include physical, psychological, and social attributes, which can be influenced by the individual's attitudes, habits, beliefs, and ideas; they cannot be condensed into the general concepts of self-image or self-esteem. Multiple types of identity come together within an individual and can be broken down into the following: cultural identity, professional identity, ethnic and national identity, religious identity, gender identity, and disability identity. Cultural identity Cultural identity is the set of ideas an individual adopts based on the culture they belong to. Cultural identity relates to but is not synonymous with identity politics. There are modern questions of culture that are transferred into questions of identity. Historical culture also influences individual identity, and as with modern cultural identity, individuals may pick and choose aspects of cultural identity, while rejecting or disowning other associated ideas. Professional identity Professional identity is the identification with a profession, exhibited by an aligning of roles, responsibilities, values, and ethical standards as accepted by the profession. In business, professional identity is the professional self-concept that is founded upon attributes, values, and experiences. A professional identity is developed when there is a philosophy that is manifested in a distinct corporate culture – the corporate personality. A business professional is a person in a profession with certain types of skills that sometimes require formal training or education. Career development encompasses the total dimensions of psychological, sociological, educational, physical, economic, and chance factors that alter a person's career practice across the lifespan. Career development also refers to the practices from a company or organization that enhance someone's career or encourage them to make practical career choices. Training is a form of identity setting, since it not only affects knowledge but also affects a team member's self-concept. On the other hand, knowledge of the position introduces a new path of less effort to the trainee, which prolongs the effects of training and promotes a stronger self-concept. Other forms of identity setting in an organization include business cards, role-specific benefits, and task forwarding. Ethnic and national identity An ethnic identity is an identification with a certain ethnicity, usually on the basis of a presumed common genealogy or ancestry. Recognition by others as a distinct ethnic group is often a contributing factor to developing this identity. Ethnic groups are also often united by common cultural, behavioral, linguistic, ritualistic, or religious traits. Processes that result in the emergence of such identification are summarized as ethnogenesis. Various cultural studies and social theory investigate the question of cultural and ethnic identities. Cultural identity adheres to location, gender, race, history, nationality, sexual orientation, religious beliefs, and ethnicity. National identity is an ethical and philosophical concept where all humans are divided into groups called nations. 
Members of a "nation" share a common identity and usually a common origin, in the sense of ancestry, parentage, or descent. Religious identity A religious identity is the set of beliefs and practices generally held by an individual, involving adherence to codified beliefs and rituals and study of ancestral or cultural traditions, writings, history, mythology, and faith and mystical experience. Religious identity refers to the personal practices related to communal faith along with rituals and communication stemming from such conviction. This identity formation begins with an association in the parents' religious contacts, and individuation requires that the person chooses the same or different religious identity than that of their parents. Gender identity In sociology, gender identity describes the gender with which a person identifies (i.e., whether one perceives oneself to be a man, a woman, outside of the gender binary), but can also be used to refer to the gender that other people attribute to the individual on the basis of what they know from gender role indications (social behavior, clothing, hairstyle, etc.). Gender identity may be affected by a variety of social structures, including the person's ethnic group, employment status, religion or irreligion, and family. It can also be biological in the sense of puberty. Disability identity Disability identity refers to the particular disabilities that an individual identifies with. This may be something as obvious as a paraplegic person identifying as such, or something less prominent such as a deaf person regarding themselves as part of a local, national, or global community of Deaf People Culture. Disability identity is almost always determined by the particular disabilities that an individual is born with, though it may change later in life if an individual later becomes disabled or when an individual later discovers a previously overlooked disability (particularly applicable to mental disorders). In some rare cases, it may be influenced by exposure to disabled people as with body integrity dysphoria. Political identity Political identities often form the basis of public claims and mobilization of material and other resources for collective action. One theory that explores how this occurs is social movement theory. According to Charles Tilly, the interpretation of our relationship to others ("stories") create the rationale and construct of political identity. The capacity for action is constrained by material resources and sometimes perceptions that can be manipulated by using communication strategies that support the creation of illusory ties. Interpersonal identity development Interpersonal identity development comes from Marcia's Identity Status Theory, and refers to friendship, dating, gender roles, and recreation as tools to maturity in a psychosocial aspect of an individual. Social relation can refer to a multitude of social interactions regulated by social norms between two or more people, with each having a social position and performing a social role. In a sociological hierarchy, social relation is more advanced than behavior, action, social behavior, social action, social contact, and social interaction. It forms the basis of concepts like social organization, social structure, social movement, and social system. Interpersonal identity development is composed of three elements: Categorization: Assigning everyone into categories. Identification: Associating others with certain groups. Comparison: Comparing groups. 
Interpersonal identity development allows an individual to question and examine various personality elements, such as ideas, beliefs, and behaviors. The actions or thoughts of others create social influences that change an individual. Examples of social influence can be seen in socialization and peer pressure, which can affect a person's behavior, thinking about one's self, and subsequent acceptance or rejection of how other people attempt to influence the individual. Interpersonal identity development occurs during exploratory self-analysis and self-evaluation, and ends at various times to establish an easy-to-understand and consolidative sense of self or identity. Interaction During interpersonal identity development, an exchange of propositions and counter-propositions occurs, resulting in a qualitative transformation of the individual. The aim of interpersonal identity development is to resolve the undifferentiated facets of an individual, which are found to be indistinguishable from others. Given this, and with other admissions, the individual is led to a contradiction between the self and others, and forces the withdrawal of the undifferentiated self as truth. To resolve the incongruence, the person integrates or rejects the encountered elements, which results in a new identity. During each of these exchanges, the individual must resolve the exchange before facing future ones. The exchanges are endless, since the changing world constantly presents exchanges between individuals and thus allows individuals to redefine themselves constantly. Collective identity Collective identity is a sense of belonging to a group (the collective). If it is strong, an individual who identifies with the group will dedicate their life to the group over their individual identity: they will defend the views of the group and take risks for the group, often with little to no incentive or coercion. Collective identity often forms through a shared sense of interest, affiliation, or adversity. The cohesiveness of the collective identity goes beyond the community, as the collective experiences grief from the loss of a member. Social support Individuals gain a social identity and group identity from their affiliations in various groups, which include: family, ethnicity, education and occupational status, friendship, dating, and religion. Family One of the most important affiliations is that of the family, whether they be biological, extended, or even adoptive families. Each has its own influence on identity through the interaction that takes place between the family members and with the individual. Researchers and theorists state that an individual's identity (more specifically an adolescent's identity) is influenced by the people around them and the environment in which they live. If a family does not have integration, it is likely to cause identity diffusion (one of James Marcia's four identity statuses, where an individual has not made commitments and does not try to make them); this applies to both males and females. Peer relationships Morgan and Korobov performed a study to analyze the influence of same-sex friendships on the development of identity. The study involved 24 same-sex college student friendship triads (12 male and 12 female triads), for a total of 72 participants. Each triad was required to have known each other for a minimum of six months. A qualitative method was chosen, as it is the most appropriate in assessing the development of identity. 
Semi-structured group interviews took place, where the students were asked to reflect on stories and experiences concerning relationship problems. The results showed five common responses when assessing these relationship problems: joking about the relationship's problems, providing support, offering advice, relating others' experiences to their own similar experiences, and providing encouragement. The results concluded that adolescents actively construct their identities through common themes of conversation between same-sex friendships; in this case, involving relationship issues. The common themes of conversation that close peers engage in seem to help further their identity formation in life. Influences on identity Cognitive influences Cognitive development influences identity formation. When adolescents are able to think abstractly and reason logically, they have an easier time exploring and contemplating possible identities. When an adolescent has advanced cognitive development and maturity, they tend to resolve identity issues more readily than age-mates who are less cognitively developed. When identity issues are resolved more quickly and effectively, more time and effort can be put into developing that identity. Scholastic influences Adolescents who have a post-secondary education tend to make more concrete goals and stable occupational commitments. Going to college or university can influence identity formation in a productive way. The opposite can also be true, where identity influences education and academics. Education's effect on identity can be beneficial for the individual's identity; the individual becomes educated on different approaches and paths to take in the process of identity formation. Sociocultural influences Sociocultural influences are those of a broader social and historical context. For example, in the past, adolescents would likely just adopt the job or religious beliefs that were expected of them or that were akin to their parents'. Today, adolescents have more resources to explore identity choices and more options for commitments. This influence is becoming less significant due to the growing acceptance of identity options that were once less accepted. Many of the identity options from the past are less recognized and less popular today. The changing sociocultural situation is forcing individuals to develop a unique identity based on their own aspirations. Sociocultural influences play a different role in identity formation now than they have in the past. Parenting influences The type of relationship that adolescents have with their parents has a significant role in identity formation. For example, when there is a solid and positive relationship between parents and adolescents, adolescents are more likely to feel free to explore identity options for themselves. A study found that for boys and girls, identity formation is positively influenced by parental involvement, specifically in the areas of support, social monitoring, and school monitoring. In contrast, when the relationship is not as close and the fear of rejection or discontentment from the parent or other guardians is present, adolescents are more likely to feel less confident in forming a separate identity from their parents. Cyber-socializing and the Internet The Internet is becoming an extension of the expressive dimension of adolescence. 
On the Internet, youth talk about their lives and concerns, design the content that they make available to others, and assess the reactions of others to it in the form of optimized and electronically mediated social approval. When connected, youth speak of their daily routines and lives. With each post, image or video they upload, they can ask themselves who they are and try out profiles that differ from the ones they practice in the "real" world.
Jungian cognitive functions
Psychological functions, as described by Carl Jung in his book Psychological Types, are particular mental processes within a person's psyche that are present regardless of common circumstances. This is a concept that serves as one of the foundations for his theory on personality type. In his book, he noted four main psychological functions: thinking, feeling, sensation, and intuition. He introduced them as having either an internally focused (introverted) or an externally focused (extraverted) tendency, which he called "attitude". He also categorized the functions as either rational (thinking and feeling) or irrational (intuition and sensation). Psychological functions and attitudes The four psychological functions may be subjugated to the control of consciousness, which can take two attitudes: Extraversion: "a strong, if not exclusive, determination by the object." Consciously, in an extravert, the four basic cognitive functions follow the extraverted 'general attitude of consciousness': "Now, when the orientation to the object and to objective facts is so predominant that the most frequent and essential decisions and actions are determined, not by subjective values but by objective relations, one speaks of an extraverted attitude. When this is habitual, one speaks of an extraverted type. If a man so thinks, feels, and acts, in a word so lives, as to correspond directly with objective conditions and their claims, whether in a good sense or ill, he is extraverted." Introversion: "a turning inwards of the libido, whereby a negative relation of subject to object is expressed. Interest does not move towards the object, but recedes towards the subject." Consciously, in an introvert, the four basic cognitive functions follow the introverted 'general attitude of consciousness'. "Everyone whose attitude is introverted thinks, feels, and acts in a way that clearly demonstrates that the subject is the chief factor of motivation while the object at most receives only a secondary value." The difference between extraversion and introversion comes from the source of the decisive factor in forming motivation and developing ideas, whether it is objective (i.e., the external environment) or subjective (experienced within the mind, or "processes inherent in the psyche"). When discussing function types, Jung ascribed movements of the libido in both directions for each function in each function type, with one direction being the final judge. To summarize Jung's views, as discussed in Psychological Types and maintained until his death, Jung posited that each individual follows a "general attitude of consciousness" in which every conscious act is directed by the tendency to follow introversion for introverts and extraversion for extraverts. Jung's definition of the general attitude was not meant to limit the individual from experiencing the opposing attitude but to offer "decisive determination". The primary, or most developed, differentiated, and conscious function is entirely positioned in the service of the conscious attitude of introversion or extraversion, but even if all other functions can be conscious and made to follow the general attitude, they are of less differentiation and are hence strongly affected by the opposing attitude of the unconscious. Later in the book, Jung describes the auxiliary function as being capable of some significant development or differentiation if it remains less differentiated than the primary. 
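Jung's scheme, as laid out above, pairs each of the four functions with one of the two attitudes and classes each function as rational or irrational. The short enumeration below simply tabulates those pairings; presenting them as a product of two lists is a presentational choice for illustration, not Jung's own notation.

from itertools import product

ATTITUDES = ("extraverted", "introverted")
FUNCTIONS = {
    "thinking":  "rational",
    "feeling":   "rational",
    "sensation": "irrational",
    "intuition": "irrational",
}

# Enumerate the eight attitude-function pairings implied by Jung's scheme.
for attitude, (function, kind) in product(ATTITUDES, FUNCTIONS.items()):
    print(f"{attitude} {function} ({kind})")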
His view that the primary and auxiliary functions are both differentiated enough to be considered conscious, setting them apart from the two most inferior functions, can be noted as early as Psychological Types. The four basic psychological functions—thinking, feeling, sensation, and intuition—are "basic functions" that can be briefly defined as follows. Thinking According to Jung, thinking is "that psychological function which, in accordance with its own laws, brings given presentations into conceptual connection". Jung said that the thinking function should be restricted solely to 'active thinking', in contrast to 'passive thinking'. According to him, active thinking uses concepts to connect information and, as a result, is considered judgement. He writes that passive thinking "lacks any sense of direction", since it is not in accordance with an aim. He refers to it as 'intuitive thinking' instead. Later, some interpreted Jung's extraverted thinking and introverted thinking to mean something other than the function of thought as represented by extraverts and introverts respectively. In Adler and Hull's translation of Jung's Psychological Types, Jung states: "Apart from the qualities I have mentioned, the undeveloped functions possess the further peculiarity that, when the conscious attitude is introverted, they are extraverted and vice versa. One could therefore expect to find extraverted feelings in an introverted intellectual..." Extraverted thinking Extraverted thinking is a thinking function that is objective (being extraverted). Extraverted thinking often places information, such as facts, in high order; it is a process that is concerned with organisation and hierarchy of phenomena. "In accordance with his definition, we must picture a man whose constant aim—in so far, of course, as he is a [p. 435] pure type—is to bring his total life activities into relation to intellectual conclusions, which in the last resort are always oriented by objective data, whether objective facts or generally valid ideas. This type of man gives the deciding voice—not merely for himself alone but also on behalf of his entourage—either to the actual objective reality or to its objectively oriented, intellectual formula. By this formula, good and evil are measured, and beauty and ugliness determined. All is right that corresponds with this formula; all is wrong that contradicts it; and everything that is neutral to it is purely accidental." Introverted thinking Introverted thinking is the thinking function that is subjective (being introverted). The nature of introverted thinking means that it is primarily concerned with its "subjective idea" and insights gained by formulation over facts and objective data. Whereas extraverted thinking is most like empiricism, introverted thinking is most similar to rationalism. "Just as Darwin might possibly represent the normal extraverted thinking type, we might point to Kant as a counter-example of the normal introverted thinking type. The former speaks with facts; the latter appeals to the subjective factor. Darwin ranges over the wide fields of objective facts, while Kant restricts himself to a critique of knowledge in general. But suppose a Cuvier be contrasted with a Nietzsche: the antithesis becomes even sharper." "The introverted thinking type is characterized by a priority of the thinking I have just described. Like his [p. 
485] extraverted parallel, he is decisively influenced by ideas; these, however, have their origins not in the objective data but in the subjective foundation. Like the extravert, he too will follow his ideas, but in the reverse direction: inwardly, not outwardly. Intensity is his aim, not extensity. In these fundamental characters, he differs markedly, indeed quite unmistakably, from his extraverted parallel. Like every introverted type, he is almost completely lacking in that which distinguishes his counter type, namely, the intensive relatedness to the object." Feeling Jung defined feeling as "primarily a process that takes place between the ego and a given content, a process, moreover, that imparts to the content a definite value in the sense of acceptance or rejection [...] Hence, feeling is also a kind of judging, differing, however, from an intellectual judgment in that it does not aim at establishing an intellectual connection but is solely concerned with the setting up of a subjective criterion of acceptance or rejection." Also, Jung made distinctions between feeling as a judging function and emotions (affect): "Feeling is distinguished from affect by the fact that it gives rise to no perceptible physical innervations." Von Franz wrote that there are "clichés" with regard to the feeling function, which are that musicians and people with "good eros" are feeling types. She also wrote that another cliché was the notion that women are better at feeling "just because they are women". Later, some interpreted Jung's extraverted feeling and introverted feeling to mean something other than the function of feeling as represented in extraverts and introverts respectively. Extraverted feeling Overall, extraverted feeling is concerned with phenomena being harmonious with their external environment. Jung writes of extraverted feelers as those where feeling "loses its personal character—it becomes feeling per se; it almost seems as though the personality were wholly dissolved in the feeling of the moment. Now, since actual life situations constantly and successively alternate, in which the feeling-tones released are not only different but are actually mutually contrasting, the personality inevitably becomes dissipated in just so many different feelings." Introverted feeling Introverted feeling is "very hard to elucidate since so little of it is openly displayed." Jung writes of feeling in introverted feelers: "[Introverted feeling] is continually seeking an image which has no existence in reality but which it has seen in a kind of vision. It glides over all objects that do not fit in with its aim. It strives for inner intensity, for which the objects serve at most as a stimulus. The depth of this feeling can only be guessed—it can never be clearly grasped. It makes people silent and difficult to access; it shrinks back like a violet from the brute nature of the object in order to fill the depths of the subject. It comes out with negative judgments or assumes an air of profound indifference as a means of defense." Introverted feelings can therefore be thought of as subjective, personal ideals and values that the person protects and defends against the thoughts and judgements of others. Sensation Jung presented sensation as "that psychological function that transmits a physical stimulus to perception. [...] not only to the outer stimuli but also to the inner, i.e., to changes in the internal organs. 
Primarily, therefore, sensation is sense-perception, i.e., perception transmitted via the sense organs and 'bodily senses' (kinaesthetic, vaso-motor sensation, etc.)." Also, since the process of conscious perception is a psychological phenomenon representing a physical phenomenon, and not the physical phenomenon itself, he adds: "On the one hand, it is an element of presentation, since it transmits to the presenting function the perceived image of the outer object; on the other hand, it is an element of feeling, because through the perception of bodily changes it lends the character of affect to feeling." Extraverted sensation Extraverted sensation is the sensing function that perceives sensations from the external world in an objective manner. For example, since an extraverted sensor type's source of reward gravitates around perceiving and feeling external phenomena, he often has a good sense of aesthetics—whether this be the taste of food or a new trend in clothing. Extraverted sensors may be more attuned to spatial awareness and physical reality. Note that a bodily sensation is still considered extraverted sensing, as the sensation is being perceived in objective reality. For example, drinking caffeine will objectively create a stimulating sensation in the person's physiology. This is contrasted to the subjective sensor, who may be concerned with a subjective response to the same drink (e.g., nostalgia that is tied to that specific cup of coffee, or whether or not they prefer the flavor). Introverted sensation Introverted sensation is the sensing function that perceives phenomena in such a way as extraverted sensation does above, but in a subjective manner. Jung wrote that "the subject perceives the same things as everybody else; he never stops at the purely objective effect but concerns himself with the subjective perception released by the objective stimulus. Subjective perception differs remarkably from the objective. It is either not found at all in the object or, at most, merely suggested by it[...] Subjective sensation apprehends the background of the physical world rather than its surface. The decisive thing is not the reality of the object but the reality of the subjective factor, i.e., the primordial images, which in their totality represent a psychic mirror world. It is a mirror, however, with the peculiar capacity of representing the present contents of consciousness not in their known and customary form but in a certain sense sub specie aeternitatis, somewhat as a million-year old consciousness might see them. Such a consciousness would see the becoming and the passing of things beside their present and momentary existence, and not only that, but at the same time it would also see that Other, which was before their becoming and will be after their passing hence." Introverted sensation also perceives things in a very detailed manner, as per Emma Jung. Intuition Intuition is also presented as a basic psychological function, as hunches and visions provide an alternative means of perception to sensation. "It is that psychological function that transmits perceptions in an unconscious way. Everything, whether outer or inner objects or their associations, can be the object of this perception. Intuition has this peculiar quality: it is neither sensation nor feeling, nor intellectual conclusion, although it may appear in any of these forms." Extraverted intuition Extraverted intuition takes in intuitive information from the world around. 
Whereas introverted intuition refers to Jung's idea of the collective unconscious, extraverted intuition is concerned with the collective conscious. People with high extraverted intuition are attuned to current events, media, trends, and developments. The collective unconscious sees the world in terms of primordial archetypes such as the Hero, the Sage, the Outlaw, etc. The collective conscious used by the extraverted intuitive, however, sees archetypes reflected through the subcultures, celebrities, organizations, events, and ideas of their times. Introverted intuition Introverted intuition is the intuition that acts in an introverted and, thus, subjective manner. Jung wrote: "Intuition, in the introverted attitude, is directed upon the inner object, a term we might justly apply to the elements of the unconscious. The relation of inner objects to consciousness is entirely analogous to that of outer objects, although theirs is a psychological and not a physical reality. Inner objects appear to the intuitive perception as subjective images of things, which, though not met with in external experience, really determine the contents of the unconscious, i.e., the collective unconscious, in the last resort. [...] Although this intuition may receive its impetus from outer objects, it is never arrested by external possibilities but stays with the factor that the outer object releases within. [...] Introverted intuition apprehends the images that arise a priori, i.e., the inherited foundations of the unconscious mind. These archetypes, whose innermost nature is inaccessible to experience, represent the precipitate of psychic functioning of the whole ancestral line, i.e., the heaped-up, or pooled, experiences of organic existence in general, a million times repeated and condensed into types. Hence, in these archetypes all experiences are represented, which since ancient times have happened on this planet. Their archetypal distinctness is more marked, the more frequently and intensely they have been experienced. The archetype would be—to borrow from Kant—the noumenon of the image which intuition perceives and, in perceiving, creates." Jung differentiates between introverted intuition and introverted sensation by writing that introverted sensation is 'confined' to the perception of events, while introverted intuition instead perceives "the image that has really occasioned the innervation", repressing its actual qualities. He uses the example of "a psychogenic attack of giddiness," writing that the sensation will perceive the qualities and sensations of the giddiness without paying attention to the image that caused it. Intuition, on the other hand, does perceive the image that caused the giddiness, which in Jung's example is "the image of a tottering man pierced through the heart by an arrow", and follows it and its course in a very detailed manner rather than attending to the giddiness itself. "For intuition, therefore, the unconscious images attain the dignity of things or objects. But, because intuition excludes the cooperation of sensation, it obtains either no knowledge at all or, at best, a very inadequate awareness of the innervation disturbances or of the physical effects produced by the unconscious images. Accordingly, the images appear as though detached from the subject, as though existing in themselves without relation to the person. Consequently, in the above-mentioned example, the introverted intuitive, when affected by the giddiness, would not imagine that the perceived image might also in some way refer to himself. 
Naturally, to one who is rationally orientated, such a thing seems almost unthinkable, but it is none the less a fact, and I have often experienced it in my dealings with this type." Myers-Briggs Type Indicator Isabel Myers, an early pioneer of psychometrics, formalized these ideas and proposed that the mixture of types in an individual's personality could be measured through responses to a personality test she devised along with her mother, Katharine Cook Briggs, the Myers-Briggs Type Indicator. In this model, four "dichotomies" are defined, each labelled by two letters (one for each of the opposites in question), as shown by the emboldened letters in the table. Individuals' personalities fall into sixteen different categories depending on which side of each dichotomy they belong to, labelled by the four applicable letters (for example, an "ENTP" type is someone whose preferences are extraversion, intuition, thinking and perceiving). Controversy over attitudes Myers interpreted Jung as saying that the auxiliary, tertiary, and inferior functions are always in the opposite attitude of the dominant, though some views differ. In support of Myers' (and/or Briggs') interpretation, in one sentence Jung seems to state that the "three inferior" functions of an (extreme) extravert are introverted. The "most differentiated function is always employed in an extraverted way, whereas the inferior functions are introverted". More recently, typologists such as John Beebe and Linda Berens have introduced theoretical systems in which all people possess eight functions—equivalent to the four functions as defined by Jung and Myers but in each of the two possible attitudes—with the four in the opposite attitude to that measured, known as the "shadow functions", residing largely in the unconscious. Furthermore, the evidence given by Myers for the orientation of the auxiliary function relies on the sentence from Jung: "For all the types met with in practice, the rule holds good that besides the conscious, primary function, there is a relatively unconscious, auxiliary function which is in every respect different from the nature of the primary function." But the sentence justifying this interpretation is in fact a mistranslation, thus rendering this interpretation obsolete. "Unconscious" being in fact "conscious" makes a significant difference, given the importance of these two notions in psychological types. The correct translation is: "For all the types met with in practice, the rule holds good that besides the conscious, primary function, there is a relatively conscious, auxiliary function which is in every respect different from the nature of the primary function." Different models The tables below give different theorists' ideas about personality types in terms of "cognitive functions". Carl Jung Carl Jung developed the theory of cognitive processes in his book Psychological Types, in which he defined only four psychological functions, which can take introverted or extraverted attitudes, as well as a judging (rational) or perceiving (irrational) attitude determined by the primary function (judging if thinking or feeling, and perceiving if sensation or intuition). He used the terms dominant, auxiliary, and inferior, in which there is one dominant function, two auxiliary functions, and one inferior function. Each individual follows a "general attitude of consciousness" in which the function is conscious. The more conscious a function is, the higher the tendency and potential it has to develop. 
The less differentiated functions are hence strongly affected by the opposing attitude of the unconscious and manifest as "happening" to the person rather than being under conscious control. Therefore, there is a significant difference between Jung and the MBTI regarding the orientation of the functions. The following table is a summarized model of Jung's conception of personality types based on the four functions and the attitudes of introversion and extraversion. Myers-Briggs Type Indicator The third edition of the MBTI Manual lists the types' function order according to the table below: John Beebe Though John Beebe has not published a type table, the format that Isabel Myers devised can also be applied to his theory. Beebe describes the different cognitive functions' role in the overall personality in terms of various mythic archetypes. John Beebe's model is based on Jung's theory of the collective unconscious, which is not part of the current scientific consensus and may be unfalsifiable. Linda Berens The layout of Linda Berens's type table is unique, and her terminology differs from that of Beebe; however, the ordering of cognitive processes in her and Beebe's models is the same. Lenore Thomson Lenore Thomson offers yet another model of cognitive functions. In her book, Personality Type: An Owner's Manual, Thomson advances the hypothesis of a modular relationship between the cognitive functions paralleling left-right brain lateralization. In this approach, the judging functions are in the front-left and back-right brains, and the perception functions are in the back-left and front-right brains. The extraverted functions are in the front of the brain, while the introverted functions are in the back of the brain. The order of the cognitive functions is then determined not by an archetypal hierarchy (as supposed by Beebe) but by an innate brain lateralization preference. References Further reading Myers, Isabel Briggs (1995) [1980]. Gifts Differing, Palo Alto, CA: Davies-Black Publishing. Thompson, Henry L (1996). Jung's Function-Attitudes Explained, Watkinsville, GA.: Wormhole Publishing. Nardi, Dario (2005). "8 Keys to Self-Leadership: From Awareness to Action." Huntington Beach, CA: Unite Business Press. Thomson, Lenore (1998). Personality Type: An Owner's Manual, Boston & London: Shambhala Publications, Inc. External links Center for Applications of Psychological Type website 8 Cognitive Processes website Translation of Psychological Types by H. Godwin Baynes Personality tests Cognitive Functions Carl Jung
0.7793
0.994149
0.77474
Psychological drama
Psychological drama, or psychodrama, is a subgenre of drama and psychological fiction that generally focuses upon the emotional, mental, and psychological development of the protagonists and other characters within the narrative, which is highlighted by the drama. It is widely known as one of the main subgenres of psychological fiction; the subgenre is commonly used for films and television series. Discussion of the subgenre The roots of the subgenre can be traced back to the early 20th century, emerging from a rich tapestry of literature that focused on the inner workings of the mind. As cinema evolved, filmmakers began to see the potential for the medium to explore complex psychological themes and narratives. Characteristics Psychological dramas are similar to other psychological genres, but rather than using imagery to provoke fear, suspense, or terror, they utilize dramatic settings to elicit a strong emotional response from audiences. Psychological dramas commonly deal directly with psychological states and mental health, emphasize emotional conflicts, and often serve as portraits of introspective personal struggle. The subgenre can also be characterized as primarily character-driven, in which particular attention is paid to the psychology of the characters and to their intimate problems more than to the storyline context. The characters are confronted with doubts, dilemmas, or inner personality conflicts. The challenges they encounter often force them to react, making them go through a whole psychological process during the film, even a metamorphosis. Related genres It often overlaps with other genres such as crime, fantasy, dark comedy, mystery, and science fiction, and it is closely related to the psychological horror and psychological thriller genres. Psychological dramas use these genres' tropes to focus on the human condition and psychological effects, usually in a mature and serious tone, somewhat similar to melodrama. The difference between "drama" and "psychological drama" lies in where the emphasis is placed: in the latter, the focus is more on the psychology of the characters and on existentialism in general, and not on the context of the narrative itself. So, the ending is not necessarily tragic: the main character can doubt himself and sometimes overcome his intimate problems. Psychological drama can be clearly distinguished from dramedy, such as Good Will Hunting (1997) and The Truman Show (1998), since the subgenre contains little to no humor. Techniques Films in the subgenre utilize a range of techniques to mirror the psychological landscape of their characters. Close-ups and subjective camera angles invite viewers into the character’s personal space, while disjointed editing and surreal imagery can reflect fragmented states of mind. The use of symbolism is also prevalent, with objects, settings, and colors imbued with psychological significance. Music and sound design play crucial roles, often used to heighten the emotional intensity and draw audiences deeper into the psychological experience. Themes The primary themes of the subgenre relate to the depiction of mental illness, psychological trauma, and society, but are not limited to these; other themes such as alienation, self-doubt, and the quest for identity are common, with narratives often blurring the lines between reality and illusion to reflect the turmoil within the characters’ minds. 
Thematic elements explored include denialism, depression, disability, distorted sequences, dysfunctional relationships, existential crisis, human sexuality, identity crisis, mass hysteria, mood swings, odd behaviors, post-traumatic stress disorder, psychological abuse, psychedelic art, and social issues. Examples Films Psychological drama films are generally rooted in the traditional drama genre of the earliest years of the 20th century, with cited examples including The Whispering Chorus (1918) and Greed (1924). Additionally, early examples of popular subgenre films from the 1930s to the 1950s include La vuelta al nido (1938), Death of a Salesman (1951), Johnny Belinda (1948), A Place in the Sun (1951), and The Snake Pit (1948). Several films in the subgenre have controversially employed social issues and/or psychosexual themes, most notably Stanley Kubrick's Lolita (1962), A Clockwork Orange (1971), and Eyes Wide Shut (1999). Other acclaimed films with similar themes include Last Tango in Paris (1972), One Flew Over the Cuckoo's Nest (1975), The Ninth Configuration (1980), Pink Floyd - The Wall, Sophie's Choice (both 1982), Heavenly Creatures (1994), Breaking the Waves (1996), I Stand Alone (1998), Magnolia (1999), Requiem for a Dream (2000), The Piano Teacher (2001), Elephant (2003), Enter the Void (2009), Biutiful (2010), Shame (2011), Jagten and The Master (both 2012), Nymphomaniac (2013), Whiplash (2014), The Power of the Dog (2021), and Blonde and The Whale (both 2022). Some thematically linked franchises or trilogies focus on aspects of the human condition and psychological elements, notably Iñárritu's Death trilogy (consisting of Amores perros (2000), 21 Grams (2003), and Babel (2006)) and Krzysztof Kieślowski's Three Colours trilogy. Asian films have contributed to the subgenre, often employing several psychological and social elements. For example, Akira Kurosawa, a renowned Japanese filmmaker, is known for landmark films in the subgenre, notably Drunken Angel (1948) and Ikiru (1952). Other examples include The Demon (1978), Batch '81 (1982), Silip (1986), Taare Zameen Par (2007), Himizu (2011), Aparisyon (2012), Like Father, Like Son and Norte, the End of History (both 2013), Black Stone (2015), Last Night (2017), and Family History and John Denver Trending (both 2019). Television The Affair The Bear Beef Criminal The Cry A Death in California The Handmaid's Tale Maniac Ray Donovan Riget This Is Us Yellowjackets Animated series Animated series in this subgenre focus on characters' experiences with mental health and psychological trauma; these include BoJack Horseman, Moral Orel, Steven Universe Future, and Undone. Japanese filmmaker and animator Hideaki Anno is best known for creating the anime series Neon Genesis Evangelion, a notorious example of the subgenre that delves into heavy psychological elements in the latter half of the series. The anime series was the subject of both acclaim and controversy, with the controversy centered especially on its final two episodes; this resulted in a feature film being produced as an alternative ending. Additionally, some anime series employing psychological elements include Akagi, The Flowers of Evil, The Fruit of Grisaia, Rascal Does Not Dream of Bunny Girl Senpai, Scum's Wish, The Tatami Galaxy, Welcome to the N.H.K., and Wonder Egg Priority. 
Animated films A Silent Voice, Anomalisa, It's Such a Beautiful Day, Inside Out (and its sequel), The Missing, Puss in Boots: The Last Wish, and When Marnie Was There are among the examples of animated films in the subgenre, usually portraying characters dealing with themes such as anxiety attacks, fear of abandonment and death, and society. Adam Elliot is among the most notable makers of animated psychological drama films, which confront bitterness and the human condition. His films include Harvie Krumpet (2003), Mary and Max (2009), and Memoir of a Snail (2024). Literature Chicago, Yumi Tamura (2000) The Winter Wives, Linden MacIntyre (2021) Final Psychodrama, Davide Roccamo (2021) Saint X, Alexis Schaitkin (2022) Trish, James C. Bennett (2022) Theater The Chinese Lady, Lloyd Suh (2021) Filmmakers Paul Thomas Anderson - An American filmmaker known for his depictions of flawed characters and exploration of subjects and themes such as dysfunctional families, alienation, loneliness and redemption. Sofia Coppola - An American filmmaker, daughter of Francis Ford Coppola, known for psychological drama films about isolation and society. Examples: The Virgin Suicides (1999) and Lost in Translation (2003). Todd Field - An American filmmaker well known for films featuring psychologically minded characters and themes. Examples: In the Bedroom (2001), Little Children (2006), and Tár (2022). Alejandro González Iñárritu - A Mexican filmmaker who has directed numerous films that focus on aspects of the human condition. Charlie Kaufman - An American director and screenwriter, whose work explores universal themes like identity crisis, mortality, and the meaning of life through a metaphysical or parapsychological framework. Examples: Eternal Sunshine of the Spotless Mind (2004). Andrei Tarkovsky - A Soviet and Russian filmmaker, known for his philosophical and psychological movies dealing with existence, faith and dreamlike memories. Examples: Solaris (1972), Mirror (1975) and Stalker (1979). See also List of drama films Cult film Hyperlink cinema Postmodernist film Realism Social issues Slow cinema Tragedy Tragicomedy References Psychological drama films Psychological fiction Drama films Drama television series Drama genres Drama Film genres Television genres
0.778221
0.995526
0.774739
Displacement (psychology)
In psychology, displacement is an unconscious defence mechanism whereby the mind substitutes either a new aim or a new object for things felt in their original form to be dangerous or unacceptable. For example, if a person's boss criticizes them at work, they may feel angry but be unable to express it directly to the boss. Instead, when they get home, they may take out their frustration by yelling at a family member or slamming a door; the family member or the door is a safer target for the anger than the boss. Freud The concept of displacement originated with Sigmund Freud. Initially he saw it as a means of dream-distortion, involving a shift of emphasis from important to unimportant elements, or the replacement of something by a mere illusion. Freud called this “displacement of accent.” Displacement of object: Feelings that are connected with one person are displaced onto another person. A man who has had a bad day at the office and comes home and yells at his wife and children is displacing his anger from the workplace onto his family. Freud thought that when children have animal phobias, they may be displacing fears of their parents onto an animal. Displacement of attribution: A characteristic that one perceives in oneself but seems unacceptable is instead attributed to another person. This is essentially the mechanism of psychological projection; an aspect of the self is projected (displaced) onto someone else. Freud wrote that people commonly displace their own desires onto God’s will. Bodily displacements: A genital sensation may be experienced in the mouth (displacement upward) or an oral sensation may be experienced in the genitals (displacement downward). Novelist John Cleland in Fanny Hill referred to the vagina as “the nethermouth.” Sexual attraction toward a human body can be displaced in sexual fetishism, sometimes onto a particular body part like the foot, or at other times onto an inanimate fetish object. Freud also saw displacement as occurring in jokes, as well as in neuroses – the obsessional neurotic being especially prone to the technique of displacement onto the minute. When two or more displacements occur towards the same idea, the phenomenon is called condensation (from the German Verdichtung). Phobia displacement or repression: Humans are able to express specific unconscious needs through phobias. Needs that are suppressed deep within the self create anxiety and tension; the stress, fear, and anxiety that characterize a phobic disorder are the discharge of that tension. Reaction formation: Conscious behaviors are adopted to overcompensate for the anxiety a person feels regarding their socially unacceptable unconscious thoughts or emotions. Typically, a reaction formation is marked by exaggerated behavior, such as showiness and compulsiveness. An example of reaction formation is the dutiful daughter who loves her mother as a reaction to her Oedipal hatred of her mother. The psychoanalytic mainstream Among Freud's mainstream followers, Otto Fenichel highlighted the displacement of affect, either through postponement or by redirection, or both. More broadly, he considered that "in part the paths of displacement depend on the nature of the drives that are warded off". Freud's daughter, Anna Freud, also played an important role in the development of the theory of defense mechanisms in the twentieth century. She introduced and analyzed ten of her own defense mechanisms, and her work has been used and extended over the years by later psychoanalysts. 
Eric Berne, in his first, psychoanalytic work, maintained that "some of the most interesting and socially useful displacements of libido occur when both the aim and the object are partial substitutions for the biological aim and object...sublimation". Lacan In 1957, psychoanalyst Jacques Lacan, inspired by an article by linguist Roman Jakobson on metaphor and metonymy, argued that the unconscious has the structure of a language, linking displacement to the poetic function of metonymy, and condensation to that of metaphor. As Lacan put it, "in the case of Verschiebung, 'displacement', the German term is closer to the idea of that veering off of signification that we see in metonymy, and which from its first appearance in Freud is represented as the most appropriate means used by the unconscious to foil censorship". Aggression Within the Freudian psychoanalytic framework, the aggressive drives may be displaced in a similar manner to the libidinal drives. Business or athletic competition, or hunting, for instance, may offer opportunities for the expression of displaced aggression. In such scapegoating behavior, aggression may be displaced onto items or people with little to no connection to the cause of the aggressor's frustration. Displacement can also act in what looks like a 'chain-reaction,' with people unwittingly becoming both victims and perpetrators of displacement. For example, a man is angry with his boss, but he cannot express this properly, so he hits his wife. The wife, in turn, hits one of the children, possibly disguising this as "punishment" (rationalization). Ego psychology sought to use displacement in child rearing, a dummy being used as a displaced target for toddler sibling rivalry. In order to understand how the ego uses defense mechanisms, it is important to understand the defense mechanisms themselves and the way they function. Some defense mechanisms are seen as protecting us from internal impulses (e.g., repression); others guard us from external threats (e.g., denial). Transferential displacement The displacement of feelings and attitudes from past significant others onto the present-day ones constitutes a central aspect of the transference, particularly in the case of the neurotic. A subsidiary form of displacement within the transference occurs when the patient disguises transference references by applying them to an apparent third party or to themself. Already encoded in subcortical neural pathways, material from the unconscious mind is pushed into conscious awareness as we attempt to manage mental phenomena – typically painful ones – that we are experiencing. Through this mental movement, we unknowingly re-surface and re-order conflict-ridden experiences as though the past were the present and one setting were another. We transfer thoughts, feelings, and attitudes, particularly onto individuals who resemble others; we assign them roles once played by others and take on old roles ourselves, all unwittingly. Criticism Later writers have objected that whereas Freud only described the displacement of sex into culture, for example, the converse – social conflict being displaced into sexuality – is also true. Freud's theory is good at explaining behaviour but not at predicting it. For this reason, Freud's theory is unfalsifiable – it can neither be proved nor disproved. 
Freud may likewise have shown researcher bias in his interpretations – he may have focused only on data which upheld his hypotheses, and overlooked data and alternative explanations that did not fit them. See also References Further reading Arthur J. Clark, Defense Mechanisms in the Counselling Process (1998), Chap. 3: "Displacement" Mark Krupnick, Displacement: Derrida and After (1983) External links Elsa Schmidt-Kilsikis, "Displacement" Defence mechanisms Psychoanalytic terminology Freudian psychology
0.781293
0.991601
0.774731
Mentalization
In psychology, mentalization is the ability to understand the mental state – of oneself or others – that underlies overt behaviour. Mentalization can be seen as a form of imaginative mental activity that lets us perceive and interpret human behaviour in terms of intentional mental states (e.g., needs, desires, feelings, beliefs, goals, purposes, and reasons). It is sometimes described as "understanding misunderstanding." Another term that David Wallin has used for mentalization is "Thinking about thinking". Mentalization can occur either automatically or consciously. Background While the broader concept of theory of mind has been explored at least since Descartes, the specific term 'mentalization' emerged in psychoanalytic literature in the late 1960s, and became empirically tested in 1983 when Heinz Wimmer and Josef Perner ran the first experiment to investigate when children can understand false belief, inspired by Daniel Dennett's interpretation of a Punch and Judy scene. The field diversified in the early 1990s when Simon Baron-Cohen, Uta Frith, and others, building on the Wimmer and Perner study, merged it with research on the psychological and biological mechanisms underlying autism and schizophrenia. Concomitantly, Peter Fonagy and colleagues applied it to developmental psychopathology in the context of attachment relationships gone awry. More recently, several child mental health researchers such as Arietta Slade, John Grienenberger, Alicia Lieberman, Daniel Schechter, and Susan Coates have applied mentalization both to research on parenting and to clinical interventions with parents, infants, and young children. Implications Mentalization has implications for attachment theory and self-development. According to Peter Fonagy, individuals with disorganized attachment style (e.g., due to physical, psychological, or sexual abuse) can have greater difficulty developing the ability to mentalize. Attachment history partially determines the strength of mentalizing capacity of individuals. Securely attached individuals tend to have had a primary caregiver that has more complex and sophisticated mentalizing abilities. As a consequence, these children possess more robust capacities to represent the states of their own and other people's minds. Early childhood exposure to mentalization can protect the individual from psychosocial adversity. This early childhood exposure to genuine parental mentalization fosters development of mentalizing capabilities in the child themselves. There is also a suggestion that genuine parental mentalization is beneficial to child learning; when a child feels they are being viewed as an intentional agent, they feel contingently responded to, which promotes epistemic trust and triggers learning in the form of natural pedagogy; this increases the quality of learning in the child. This theory needs further empirical support. Research Mentalization, or better mentalizing, has a number of different facets, which can be measured with various methods. A prominent method of assessment of parental mentalization is the Parental Development Interview (PDI), a 45-question semi-structured interview investigating parents’ representations of their children, themselves as parents, and their relationships with their children. An efficient self-report measure of parental mentalization is the Parental Reflective Functioning Questionnaire (PRFQ) created by Patrick Luyten and colleagues. 
The PRFQ is a brief, multidimensional assessment of parental reflective functioning (mentalization), aimed to be easy to administer to parents in a wide range of socioeconomic populations. The PRFQ is recommended for use as a screening tool for studies with large populations and does not aim to replace more comprehensive measures, such as the PDI or observer-based measures. A 2024 study investigated the longitudinal impact of mentalizing on well-being and emotion regulation strategies in a non-clinical sample, finding that impairments in mentalizing negatively predicted well-being and positively predicted emotional suppression over one year. Research has also found a link between dopamine levels and the ability to mentalize. In particular, reducing dopamine activity in healthy individuals using the drug haloperidol impaired their mentalizing abilities, suggesting that dopamine plays a direct role in these social cognitive processes. Fourfold dimensions According to the American Psychiatric Association's Handbook of Mentalizing in Mental Health Practice, mentalization takes place along a series of four parameters or dimensions: Automatic/Controlled, Self/Other, Inner/Outer, and Cognitive/Affective. Each dimension can be exercised in either a balanced or unbalanced way, while effective mentalization also requires a balanced perspective across all four dimensions. Automatic/Controlled. Automatic (or implicit) mentalizing is a fast-processing unreflective process, calling for little conscious effort or input; whereas controlled mentalization (explicit) is slow, effortful, and demanding of full awareness. In a balanced personality, shifts from automatic to controlled smoothly occur when misunderstandings arise in a conversation or social setting, to put things right. Inability to shift from automatic mentalization can lead to a simplistic, one-sided view of the world, especially when emotions run high; while conversely inability to leave controlled mentalization leaves one trapped in a 'heavy', endlessly ruminative thought-mode. Self/Other involves the ability to mentalize about one's own state of mind, as well as about that of another. Lack of balance means an overemphasis on either self or other. Inner/Outer: Here problems can arise from an over-emphasis on external conditions, and a neglect of one's own feelings and experience. Cognitive/Affective are in balance when both dimensions are engaged, as opposed to either an excessive certainty about one's own one-sided ideas, or an overwhelming of thought by floods of emotion. See also References Further reading Apperly, I. (2010). Mindreaders: The Cognitive Basis of "Theory of Mind". Hove, UK: Psychology Press. Doherty, M.J. (2009). Theory of Mind: How Children Understand Others' Thoughts and Feelings. Hove, UK: Psychology Press. External links Anthony Bateman's homepage. Mentalization factoids – compiled by Frederick Leonhardt. A summary of mentalization. Developmental psychology Psychological concepts
0.78406
0.988098
0.774728
Egosyntonic and egodystonic
In psychoanalysis, egosyntonic refers to the behaviors, values, and feelings that are in harmony with or acceptable to the needs and goals of the ego, or consistent with one's ideal self-image. Egodystonic (or ego alien) is the opposite, referring to thoughts and behaviors (dreams, compulsions, desires, etc.) that are conflicting or dissonant with the needs and goals of the ego, or further, in conflict with a person's ideal self-image. Applicability Abnormal psychology has studied egosyntonic and egodystonic concepts in some detail. Many personality disorders are egosyntonic, which makes their treatment difficult as the patients may not perceive anything wrong and view their perceptions and behavior as reasonable and appropriate. For example, a person with narcissistic personality disorder has an excessively positive self-regard and rejects suggestions that challenge this viewpoint. This corresponds to the general concept in psychiatry of poor insight. Anorexia nervosa, a difficult-to-treat disorder (formerly considered an Axis I disorder before the release of the DSM-5) characterized by a distorted body image and fear of gaining weight, is also considered egosyntonic because many of its sufferers deny that they have a problem. Problem gambling, however, is only sometimes seen as egosyntonic, depending partly on the reactions of the individual involved and whether they know that their gambling is problematic. An illustration of the differences between an egodystonic and egosyntonic mental disorder is in comparing obsessive–compulsive disorder (OCD) and obsessive–compulsive personality disorder. OCD is considered to be egodystonic as the thoughts and compulsions experienced or expressed are not consistent with the individual's self-perception, meaning the thoughts are unwanted, distressing, and reflect the opposite of their values, desires, and self-construct. In contrast, obsessive–compulsive personality disorder is egosyntonic, as the patient generally perceives their obsession with orderliness, perfectionism, and control, as reasonable and even desirable. Freudian heritage The words "egosyntonic" and "egodystonic" originated as early-1920s translations of the German words "ichgerecht" and "nicht ichgerecht", "ichfremd", or "ichwidrig", which were introduced in 1914 by Freud in his book On Narcissism and remained an important part of his conceptual inventory. Freud applied these words to the relationship between a person's "instincts" and their "ego." Freud saw psychic conflict arising when "the original lagging instincts ... come into conflict with the ego (or ego-syntonic instincts)". According to him, "ego-dystonic" sexual instincts were bound to be "repressed." Anna Freud stated that psychological "defences" which were "ego-syntonic" were harder to expose than ego-dystonic impulses, because the former are 'familiar' and taken for granted. Later psychoanalytic writers emphasised how direct expression of the repressed was ego-dystonic, and indirect expression more ego-syntonic. Otto Fenichel distinguished between morbid impulses, which he saw as ego-syntonic, and compulsive symptoms which struck their possessors as ego-alien. Heinz Hartmann, and after him ego psychology, also made central use of the twin concepts. See also References Ego psychology Narcissism Personality disorders
0.777409
0.99654
0.774719
Fashion psychology
Fashion psychology, as a branch of applied psychology, applies psychological theories and principles to understand and explain the relationship between fashion and human behavior, including how fashion affects emotions, self-esteem, and identity. It also examines how fashion choices are influenced by factors such as culture, social norms, personal values, and individual differences. Fashion psychologists may use their knowledge and skills to advise individuals, organizations, or the fashion industry on a variety of issues, including consumer behavior, marketing strategies, design, and sustainability. Significance Fashion psychology is an interdisciplinary field that examines the interaction between human behavior, individual psychology, and fashion, as well as the various factors that impact an individual's clothing choice. The fashion industry is actively seeking to establish a connection with fashion psychology, with a focus on areas such as trend prediction and comprehension of consumer behavior. It is important to acknowledge the significance of clothing choices, irrespective of gender. Fashion choices can have a profound impact on self-perception, the image a person projects to others, and consequently, the way people interact. In fact, they can influence a wide range of scenarios, from the result of a sporting event to how an interviewer perceives a candidate's capability to perform well in a job role. Fashion psychology holds significant relevance for marketers as they strive to comprehend the variables that enhance the likelihood of a product's adoption by a consumer group. Additionally, marketers must predict the duration for which the product remains fashionable. Hence, a segment of fashion psychology is dedicated to analyzing the shifts in acceptance of fashion trends over time. Clothing Clothing serves as an extension of identity and provides a tangible reflection of a person's perceptions, dissatisfactions, and desires. The terms "clothing" and "dress" typically denote a type of body covering that can be worn, which is commonly made of textile material but may also utilize other materials or substances to be fashioned and secured in place. Historically, clothing primarily served the purpose of providing warmth and protection against the elements. However, in modern times, it is important to note that clothing serves multiple functions beyond just protection, including identification, modesty, status, and adornment. Clothing is used to identify group membership, cover the body appropriately, indicate rank or position within a group, and facilitate self-expression and creativity. The clothing a person chooses to wear is significant in terms of their image and reputation, as it sends out messages to both familiar and unfamiliar people, showcasing the person's image. When an object is worn on the body, it takes on social significance in relation to the person wearing it. Fashion The prevalent understanding of fashion refers to the prevailing style that is adopted by a significant portion of a particular group, at a given time and location. For example, during the era of cave dwellers, animal skins were considered fashionable, while the sari is a popular style among Indian women, and the miniskirt has become a trend among women in Western cultures. Fashion psychology is typically characterized as the examination of how selections of attire affect perceptions and people's evaluations of one another. 
Psychology of clothing Throughout history, clothing has not held the same degree of importance in conveying personality as it does in present times. Technological advancements over the centuries have resulted in fashion choices becoming a significant aspect of identity. During early civilizations, clothing served the primary purpose of keeping us warm and dry. Today, with the advent of technological facilities such as central heating, we have become less reliant on clothing as a means of survival. Clothes have evolved from being merely a practical necessity to becoming a social marker, influencing self-perception and allowing people to present themselves in the desired light while also showcasing their personalities and social status. In numerous societies, one's dress sense is considered a reflection of personal wealth and taste, as highlighted by economist George Taylor through the hemline index. The fashion impulse is a highly influential and potent social phenomenon owing to its pervasive and expeditious character, its capacity to influence an individual's conduct, and its close association with the societal and economic fabric of a nation. The phrase "You Are What You Wear" implies that people can be judged based on their clothing choices. It suggests that clothing is not just a means of covering the body, but a reflection of a person's identity, values, and social status. The garments we choose to wear serve as a representation of our current thoughts and emotions. Frequently, instances of clothing mishaps can be attributed to underlying internal conflicts manifesting themselves outwardly. Choosing clothing that provides comfort, joy, and a positive self-image can genuinely enhance one's quality of life. Even the slightest modification in one's wardrobe can trigger a sequence of events that leads to new experiences, self-discovery, and cherished moments. Socio-psychological Impact The clothing a person chooses can reflect their mental and emotional state, making clothing mishaps a visible manifestation of internal struggles. According to Mary Lynn Damhorst, a researcher in this field, clothing is a systematic method of conveying information about the person who wears it. This suggests that an individual's selection of attire can significantly impact the impression they convey and, consequently, serves as a potent means of communication. Upbringing and fashion choice Madonna describes her upbringing in a strict Catholic family, where wearing pants to church was strongly discouraged by her father. Reflecting on this experience, she acknowledges the powerful influence that clothing can have and how it inspired her to incorporate a mix of conservative and daring elements in her personal style. She refers to this combination as "combinations of strictness and rebelliousness." Madonna's fashion choices, including her crucifix earrings and rosary bead necklaces, were influenced by this realization. Body image Clothing can be perceived as an extension of an individual's physical self and serves the purpose of modifying the body's appearance. The way in which a person perceives their own physical appearance has a significant impact on their attitudes and preferences towards clothing. Millennial females, also known as Generation Y, are being socialized to begin their fashion consumption at an earlier age than their predecessors, resulting in a shift in the typical starting point of fashion consumption. 
Even though Generation Y consumers play a crucial role in the decision-making process of the market, retailers are finding it increasingly difficult to comprehend the behavior and psychology of these consumers. Brand Consumers purchase fashion-branded products not only to meet their functional requirements but also to fulfill their desires for social recognition, self-image projection, and a desirable lifestyle. The implementation of effective branding strategies is a crucial determinant of success for all types of fashion brands, as it has a direct impact on the welfare of consumers. Marketing strategies The fashion industry is currently shifting towards a data-driven approach, where brands are leveraging analytical services to formulate innovative marketing strategies. The impact of artificial intelligence on marketing strategies is expected to extend to various areas, such as business models, sales processes, customer service options, and even consumer behaviours. Impact of clothing color Psychologists hold the belief that the color of apparel can have an impact on emotional states and stress levels. The presence of color has the potential to augment an individual's perception of their environment. Design Fashion psychology concerns itself with examining the ways in which fashion design can influence a positive body image, utilizing psychological insights to foster a sustainable approach towards clothing production and disposal, and understanding the underlying reasons behind the development of specific shopping behaviors. Men's fashion insecurities Research has shown that the conventional gender stereotype suggesting that females are more fashion-conscious and observant of others' clothing and makeup choices than males is not completely accurate. Instead, these studies have highlighted that men also encounter insecurities linked to their clothing decisions. In fact, research has shown that men often exhibit higher levels of self-consciousness than women when it comes to their personal sense of style and the public perception of their appearance. Dress to impress In research conducted by Joseph Benz from the University of Nebraska, over 90 men and women were surveyed to investigate their behavior of deceiving potential partners during dates. The study revealed that both sexes engage in deceptive behaviors while dating, albeit for distinct reasons. The study findings suggest that men engage in deceptive behavior to create a positive impression on their romantic partners. This can include highlighting their financial resources or showing willingness to provide security and stability in the relationship. Similarly, women tend to exhibit deceptive conduct concerning their physical appearance, amplifying specific bodily attributes to enhance their appeal to their romantic partner. Shopping behavior Compulsive buying disorder Compulsive buying disorder (CBD) is a condition in which an individual experiences distress or impairment due to their excessive shopping thoughts and buying behavior. According to Bleuler, Kraepelin identifies a final category of individuals known as "buying maniacs" or "oniomaniacs." These individuals experience compulsive buying behavior, leading to the accumulation of debt that is often left unpaid and can ultimately result in a catastrophic situation. The oniomaniacs never fully acknowledge their debts and therefore continue to struggle with them. 
In contemporary consumer-oriented societies, purchasing branded fashion apparel has become a significant part of daily routines and of the economy. It is often treated as a form of entertainment and a way of rewarding oneself. When this behavior is overindulged, however, it may develop into a serious psychological condition known as compulsive buying behavior.

Revenge buying and panic buying

In April 2020, when lockdown restrictions were largely lifted and markets resumed operation in China, a phenomenon known as "revenge buying" took place; during this period the French luxury brand Hermès recorded exceptional sales of $2.7 million in a single day. Sociologists posit that compulsive and impulsive purchasing tendencies, including panic buying and revenge buying, function as coping mechanisms that alleviate negative emotions. Panic buying and revenge buying are essentially attempts by consumers to compensate for a situation beyond their personal control: the purchases serve as a therapeutic means of exerting control over external circumstances while offering a sense of comfort, security, and improved well-being.

Fast fashion

The emergence of fast fashion has significantly altered how fashion is conceptualized, manufactured, and consumed, with negative consequences in all three domains. Its popularity among consumers stems from its ability to meet their emotional, financial, and psychological needs by tapping into desires for self-expression, social status, and immediate satisfaction.

See also

Attitude (psychology)
Cognitive dissonance
Feeling
Neuromarketing
Retail marketing
Semiotics of dress
Semiotics of fashion
Sensory branding
Pastoral care
Pastoral care, or cure of souls, refers to emotional, social and spiritual support. The term covers both distinctly religious and non-religious forms of support, provided by religious as well as atheist and humanist communities, and it is an important form of support found in many spiritual and religious traditions.

Definition

Modern context

Pastoral care as a contemporary term is distinguished from traditional pastoral ministry, which is primarily Christian and tied to Christian beliefs. Institutional pastoral care departments in Europe are increasingly multi-faith and inclusive of non-religious, humanist approaches to providing support and comfort. Just as the theory and philosophy behind modern pastoral care do not depend on any one set of beliefs or traditions, pastoral care itself is guided by a broad framework involving personal support and outreach, rooted in a practice of relating to the inner world of individuals from all walks of life. Pastoral care is usually provided by a practitioner sitting with a client: the client shares personal details, which the practitioner keeps confidential while offering guidance and counsel. In many private schools in Australia, usually Catholic schools, homeroom is referred to as "PCG" (pastoral care group), "pastoral period", or simply "pastoral", and the teacher is called a "PCA" (pastoral care advisor). As in Romania, a PCA also performs the role of a counsellor.

In Christianity

Definition

Pastoral care is a Christian approach to relieving mental distress and has been practiced since the formation of the Christian Church. By offering guidance and counsel, it is an accessible and often preferred point of contact for religious people seeking help with psychological problems or personal issues. The model for pastoral care is based on the accounts of Jesus healing people. In the early church the term "Poimenic" was used to describe this task of soul-care. In the New Testament, the interactions now described as "pastoral care" are also described with Paraklesis (Greek: παράκλησις paráklēsis), which broadly means "accompaniment", "encouragement", "admonition" and "consolation" (e.g. Rom 12:8; Phil 2:1; 1 Tim 4:13; 1 Thess 5:14). Pastoral care occurs in various contexts, including congregations, hospital chaplaincy, crisis intervention, prison chaplaincy, psychiatry, telephone helplines, counseling centers, senior care facilities, disability work, hospices, end-of-life care, grief support, and more.

The term pastoral ministry relates to shepherds and their role caring for sheep. Christians were the first to adopt the term for metaphorical usage, although many religions and non-religious traditions place an emphasis on care and social responsibility. In the West, pastoral ministry has since expanded into pastoral care embracing many different religions and non-religious beliefs. The Bible does not explicitly define the role of a pastor but associates it with teaching. Pastoral ministry involves shepherding the flock. …Shepherding involves protection, tending to needs, strengthening the weak, encouragement, feeding the flock, making provision, shielding, refreshing, restoring, leading by example to move people on in their pursuit of holiness, comforting, guiding (Ps 78:52; 23).

History

In the ancient church, pastoral care primarily revolved around the Christian's struggle against sin, which jeopardized their ultimate salvation.
The theologians Clement of Alexandria, Origen and Eusebius of Caesarea mainly understood this as the concern of individuals for their own souls. Increasingly, the role of pastoral caregivers was seen as assisting individual Christians in this endeavor. The first pastoral movement emerged among the Desert Fathers, who were often visited by Christians seeking advice, although this was not yet referred to as pastoral care; the early monastic-like communities likewise served as centers of pastoral care. The letters of Basil of Ancyra, Gregory of Nazianzus, and John Chrysostom contain numerous examples of pastoral counsel, and the term "pastoral care" shifted towards a concern for the souls of others.

At the transition to the Middle Ages, Pope Gregory the Great composed the Liber Regulae Pastoralis (Pastoral Rule), one of the most influential books on pastoral care (cura) ever written. During the Middle Ages, pastoral care was closely tied to the practice of the sacrament of penance, which included confession of sins, making amends, and absolution by the priest. Efforts were made, particularly from within the monastic tradition, to counter the often mechanized routine of penance, for example by Bernard of Clairvaux. The Latin term "cura animarum" (care of souls) came to denote the proper responsibility of the bishop as the pastor responsible for individual Christians, a duty he usually delegated to a priest, typically the parish priest. In this sense, "cura animarum" is still used in today's canon law of the Roman Catholic Church.

Among the Reformers, the emphasis shifted from sin to God's forgiveness and comfort, particularly evident in the works of Martin Luther and Heinrich Bullinger. In many cases, however, church discipline soon replaced pastoral care. In the 19th century, the Protestant theologian Friedrich Schleiermacher established Practical Theology and emphasized that pastoral care should strengthen the freedom and autonomy of individual members within a congregation. As early as 1777, the field of Pastoral Theology was introduced into the curriculum of the University of Vienna (Austria) under Franz Stephan Rautenstrauch and was taught in the national language rather than Latin. In Germany, it was further developed and disseminated primarily by Johann Michael Sailer and is considered a precursor to modern pastoral care. In the United States, Anton Theophilus Boisen, one of the key figures in the American pastoral care movement, developed the concept of "Clinical Pastoral Training" in the 1920s, integrating pastoral care, psychology, and education. In the mid-1960s, the pastoral care movement spread to Germany through the Netherlands, leading to the development of Pastoral Psychology. In the theology of the regional churches (Landeskirchen), pastoral care with a focus on pastoral psychology remains standard practice to this day.

Modern context

The field of pastoral care is nowadays highly specialized. Browning (1993) divided Christian caregiving practices into three categories: pastoral care, pastoral counseling, and pastoral psychotherapy. This distinction is still found today, especially in English-language writing. According to this definition, pastoral care describes the general work of the clergy in caring for the people of their community. This comprises funerals, hospital visits, birthday visits, and conversations that do not focus on one specific problem.
Nowadays there are many approaches to pastoral care, which vary by religious denomination. Many Protestant Christian approaches incorporate contemporary psychological knowledge, which is reflected in the training of pastoral care practitioners. In Germany, for example, the distinctions between the different pastoral care training approaches and their curricula are set out by the German Society for Pastoral Psychology (Deutsche Gesellschaft für Pastoralpsychologie – DGfP). The five approaches are clinical pastoral care (Klinische Seelsorge Ausbildung – KSA), the group-organisation-system approach (Gruppe – Organisation – System), the Gestalt and psychodrama approach (Gestalt und Psychodrama), the person-centered approach (Personenzentriert) and the depth psychology approach (Tiefenpsychologie).

Humanist and non-religious

Humanist groups, which act on behalf of non-religious people, have developed pastoral care offerings in response to growing demand for like-minded support among populations undergoing rapid secularisation, such as the UK. Humanists UK, for example, manages the Non-Religious Pastoral Support Network, a network of trained and accredited volunteers and professionals who operate throughout prisons, hospitals, and universities in the UK. The terms pastoral care and pastoral support are preferred because they sound less religious than terms such as chaplaincy. Surveys have shown that more than two thirds of patients support non-religious pastoral care being available in British institutions. Similar offerings are available from humanist groups around Europe and North America.

Pastoral care vs pastoral ministry

Pastoral ministry

Catholicism

In Catholic theology, pastoral ministry for the sick and infirm is one of the most significant ways that members of the Body of Christ continue the ministry and mission of Jesus. Pastoral ministry is considered to be the responsibility of all the baptized; understood in the broad sense of "helping others", it is the responsibility of all Christians. Sacramental pastoral ministry is the administration of the sacraments (Baptism, Confirmation, Eucharist, Penance, Extreme Unction, Holy Orders, Matrimony), which is reserved to consecrated priests except for Baptism (in an emergency, anyone can baptize) and marriage, where the spouses are the ministers and the priest is the witness. Pastoral ministry was understood differently at different times in history. A significant development occurred after the Fourth Lateran Council in 1215 (see Father Boyle's lecture, linked under External links below). The Second Vatican Council (Vatican II) applied the word "pastoral" to a variety of situations involving care of souls (on this point, see Monsignor Gherardini's lecture, also linked below). Many Catholic parishes employ lay ecclesial ministers as "pastoral associates" or "pastoral assistants": lay people who serve in ministerial or administrative roles, assisting the priest in his work, but who are not ordained clerics. They are responsible, among other things, for the spiritual care of the frail and housebound, as well as for carrying out a multitude of tasks associated with the sacramental life of the Church. If priests have the necessary qualifications in counseling or psychotherapy, they may offer professional psychological services when they give pastoral counseling as part of their pastoral ministry of souls.
However, the church hierarchy under John Paul II and Benedict XVI emphasized that the Sacrament of Penance, or Reconciliation, is for the forgiveness of sins and not for counseling; as such, it should not be confused with or incorporated into any therapy a priest gives to a person, even if the therapist priest is also that person's confessor. The two processes, both of which are privileged and confidential under civil and canon law, are separate by nature. Youth workers and youth ministers are also finding a place within parishes, and this work engages their spirituality. It is common for youth workers and ministers to be involved in pastoral ministry, and they are required to have a qualification in counseling before entering this arm of ministry.

Orthodoxy

The priesthood obligations of Orthodox clergymen are outlined by John Chrysostom (347–407) in his treatise On the Priesthood, perhaps the first pastoral work ever written, although he was only a deacon when he penned it. It stresses the dignity of the priesthood: the priest, it says, is greater than kings, angels, or parents, but priests are for that reason most tempted to pride and ambition. They, more than anyone else, need clear and unshakable wisdom, patience that disarms pride, and exceptional prudence in dealing with souls.

Protestantism

There are many assumptions about what a pastor's ministry involves. The core practices of a pastor's ministry in mainline Protestant churches include leading worship, preaching, pastoral care, outreach, and supporting the work of the congregation, and theological seminaries provide curricula that support these key facets of ministry. Pastors are often expected to be involved in local ministries as well, such as hospital chaplaincy, visitation, funerals, weddings and organizing religious activities. "Pastoral ministry" includes outreach, encouragement, support, counseling and other care for members and friends of the congregation. In many churches, groups such as deacons provide outreach and support, often led and supported by the pastor. For example, the Evangelical Wesleyan Church instructs clergy with the following words: "We should endeavor to assist those under our ministry, and to aid in the salvation of souls by instructing them in their homes. ... Family religion is waning in many branches. And what avails public preaching alone, though we could preach like angels? We must, yea, every traveling preacher must instruct the people from house to house." The Presbyterian Church (USA) is structured so that there is parity between lay leaders and pastors; deacons and elders are ordained, with specific duties.

See also

Clearness committee
Clinical pastoral education
Faith healing
Holistic health

Bibliography

Arnold, Bruce Makoto, "Shepherding a Flock of a Different Fleece: A Historical and Social Analysis of the Unique Attributes of the African American Pastoral Caregiver", The Journal of Pastoral Care and Counseling, Vol. 66, No. 2 (June 2012).
Multi-faith Centre, University of Canberra, 2013.
Henri Nouwen, Spiritual Direction (San Francisco: HarperOne, 2006).
Emmanuel Yartekwei Lartey, Pastoral Theology in an Intercultural World (Cleveland, OH: Pilgrim Press, 2006).
Neil Pembroke, Renewing Pastoral Practice: Trinitarian Perspectives on Pastoral Care and Counselling (Aldershot: Ashgate, 2006) (Explorations in Practical, Pastoral and Empirical Theology).
Beth Allison Barr, The Pastoral Care of Women in Late Medieval England (Rochester, NY: Boydell Press, 2008) (Gender in the Middle Ages, 3).
George R. Ross, Evaluating Models of Christian Counseling (Eugene, OR: Wipf and Stock, 2011).
Hamer, Dean, The God Gene: How Faith is Hardwired into Our Genes (New York: Doubleday, 2004).

External links

St. Thomas Aquinas and the Third Millennium, by Leonard Boyle.
The Pastoral Nature of Vatican II: An Evaluation, by Brunero Gherardini. Translation of: Sull'indole pastorale del Vaticano II: una valutazione, in Concilio Vaticano II, un concilio pastorale (Frigento, Italy: Casa Mariana Editrice, 2011).
Education
Education is the transmission of knowledge, skills, and character traits and manifests in various forms. Formal education occurs within a structured institutional framework, such as public schools, following a curriculum. Non-formal education also follows a structured approach but occurs outside the formal schooling system, while informal education entails unstructured learning through daily experiences. Formal and non-formal education are categorized into levels, including early childhood education, primary education, secondary education, and tertiary education. Other classifications focus on teaching methods, such as teacher-centered and student-centered education, and on subjects, such as science education, language education, and physical education. Additionally, the term "education" can denote the mental states and qualities of educated individuals and the academic field studying educational phenomena. The precise definition of education is disputed, and there are disagreements about the aims of education and the extent to which education differs from indoctrination by fostering critical thinking. These disagreements impact how to identify, measure, and enhance various forms of education.

Essentially, education socializes children into society by instilling cultural values and norms, equipping them with the skills necessary to become productive members of society. In doing so, it stimulates economic growth and raises awareness of local and global problems. Organized institutions play a significant role in education. For instance, governments establish education policies to determine the timing of school classes, the curriculum, and attendance requirements. International organizations, such as UNESCO, have been influential in promoting primary education for all children.

Many factors influence the success of education. Psychological factors include motivation, intelligence, and personality. Social factors, such as socioeconomic status, ethnicity, and gender, are often associated with discrimination. Other factors encompass access to educational technology, teacher quality, and parental involvement.

The primary academic field examining education is known as education studies. It delves into the nature of education, its objectives, impacts, and methods for enhancement. Education studies encompasses various subfields, including philosophy, psychology, sociology, and economics of education. Additionally, it explores topics such as comparative education, pedagogy, and the history of education.

In prehistory, education primarily occurred informally through oral communication and imitation. With the emergence of ancient civilizations, the invention of writing led to an expansion of knowledge, prompting a transition from informal to formal education. Initially, formal education was largely accessible to elites and religious groups. The advent of the printing press in the 15th century facilitated widespread access to books, thus increasing general literacy. In the 18th and 19th centuries, public education gained significance, paving the way for the global movement to provide primary education to all, free of charge, and compulsory up to a certain age. Presently, over 90% of primary-school-age children worldwide attend primary school.

Definitions

The term "education" originates from the Latin words educare, meaning "to bring up," and educere, meaning "to bring forth." The definition of education has been explored by theorists from various fields.
Many agree that education is a purposeful activity aimed at achieving goals like the transmission of knowledge, skills, and character traits. However, extensive debate surrounds its precise nature beyond these general features. One approach views education as a process occurring during events such as schooling, teaching, and learning. Another perspective perceives education not as a process but as the mental states and dispositions of educated individuals resulting from this process. Furthermore, the term may also refer to the academic field that studies the methods, processes, and social institutions involved in teaching and learning. Having a clear understanding of the term is crucial when attempting to identify educational phenomena, measure educational success, and improve educational practices. Some theorists provide precise definitions by identifying specific features exclusive to all forms of education. Education theorist R. S. Peters, for instance, outlines three essential features of education, including imparting knowledge and understanding to the student, ensuring the process is beneficial, and conducting it in a morally appropriate manner. While such precise definitions often characterize the most typical forms of education effectively, they face criticism because less common types of education may occasionally fall outside their parameters. Dealing with counterexamples not covered by precise definitions can be challenging, which is why some theorists prefer offering less exact definitions based on family resemblance instead. This approach suggests that all forms of education are similar to each other but need not share a set of essential features common to all. Some education theorists, such as Keira Sewell and Stephen Newman, argue that the term "education" is context-dependent. Evaluative or thick conceptions of education assert that it is inherent in the nature of education to lead to some form of improvement. They contrast with thin conceptions, which offer a value-neutral explanation. Some theorists provide a descriptive conception of education by observing how the term is commonly used in ordinary language. Prescriptive conceptions, on the other hand, define what constitutes good education or how education should be practiced. Many thick and prescriptive conceptions view education as an endeavor that strives to achieve specific objectives, which may encompass acquiring knowledge, learning to think rationally, and cultivating character traits such as kindness and honesty. Various scholars emphasize the importance of critical thinking in distinguishing education from indoctrination. They argue that indoctrination focuses solely on instilling beliefs in students, regardless of their rationality; whereas education also encourages the rational ability to critically examine and question those beliefs. However, it is not universally accepted that these two phenomena can be clearly distinguished, as some forms of indoctrination may be necessary in the early stages of education when the child's mind is not yet fully developed. This is particularly relevant in cases where young children must learn certain things without comprehending the underlying reasons, such as specific safety rules and hygiene practices. Education can be characterized from both the teacher's and the student's perspectives. Teacher-centered definitions emphasize the perspective and role of the teacher in transmitting knowledge and skills in a morally appropriate manner. 
On the other hand, student-centered definitions analyze education based on the student's involvement in the learning process, suggesting that this process transforms and enriches their subsequent experiences. It's also possible to consider definitions that incorporate both perspectives. In this approach, education is seen as a process of shared experience, involving the discovery of a common world and the collaborative solving of problems. Types There are several classifications of education. One classification depends on the institutional framework, distinguishing between formal, non-formal, and informal education. Another classification involves different levels of education based on factors such as the student's age and the complexity of the content. Further categories focus on the topic, teaching method, medium used, and funding. Formal, non-formal, and informal The most common division is between formal, non-formal, and informal education. Formal education occurs within a structured institutional framework, typically with a chronological and hierarchical order. The modern schooling system organizes classes based on the student's age and progress, ranging from primary school to university. Formal education is usually overseen and regulated by the government and often mandated up to a certain age. Non-formal and informal education occur outside the formal schooling system, with non-formal education serving as a middle ground. Like formal education, non-formal education is organized, systematic, and pursued with a clear purpose, as seen in activities such as tutoring, fitness classes, and participation in the scouting movement. Informal education, on the other hand, occurs in an unsystematic manner through daily experiences and exposure to the environment. Unlike formal and non-formal education, there is typically no designated authority figure responsible for teaching. Informal education unfolds in various settings and situations throughout one's life, often spontaneously, such as children learning their first language from their parents or individuals mastering cooking skills by preparing a dish together. Some theorists differentiate between the three types based on the learning environment: formal education occurs within schools, non-formal education takes place in settings not regularly frequented, such as museums, and informal education unfolds in the context of everyday routines. Additionally, there are disparities in the source of motivation. Formal education tends to be propelled by extrinsic motivation, driven by external rewards. Conversely, in non-formal and informal education, intrinsic motivation, stemming from the enjoyment of the learning process, typically prevails. While the differentiation among the three types is generally clear, certain forms of education may not neatly fit into a single category. In primitive cultures, education predominantly occurred informally, with little distinction between educational activities and other daily endeavors. Instead, the entire environment served as a classroom, and adults commonly assumed the role of educators. However, informal education often proves insufficient for imparting large quantities of knowledge. To address this limitation, formal educational settings and trained instructors are typically necessary. This necessity contributed to the increasing significance of formal education throughout history. 
Over time, formal education led to a shift towards more abstract learning experiences and topics, distancing itself from daily life. There was a greater emphasis on understanding general principles and concepts rather than simply observing and imitating specific behaviors. Levels Types of education are often categorized into different levels or stages. One influential framework is the International Standard Classification of Education, maintained by the United Nations Educational, Scientific and Cultural Organization (UNESCO). This classification encompasses both formal and non-formal education and distinguishes levels based on factors such as the student's age, the duration of learning, and the complexity of the content covered. Additional criteria include entry requirements, teacher qualifications, and the intended outcome of successful completion. The levels are grouped into early childhood education (level 0), primary education (level 1), secondary education (levels 2–3), post-secondary non-tertiary education (level 4), and tertiary education (levels 5–8). Early childhood education, also referred to as preschool education or nursery education, encompasses the period from birth until the commencement of primary school. It is designed to facilitate holistic child development, addressing physical, mental, and social aspects. Early childhood education is pivotal in fostering socialization and personality development, while also imparting fundamental skills in communication, learning, and problem-solving. Its overarching goal is to prepare children for the transition to primary education. While preschool education is typically optional, in certain countries such as Brazil, it is mandatory starting from the age of four. Primary (or elementary) education usually begins between the ages of five and seven and spans four to seven years. It has no additional entry requirements and aims to impart fundamental skills in reading, writing, and mathematics. Additionally, it provides essential knowledge in subjects such as history, geography, the sciences, music, and art. Another objective is to facilitate personal development. Presently, primary education is compulsory in nearly all nations, with over 90% of primary-school-age children worldwide attending such schools. Secondary education succeeds primary education and typically spans the ages of 12 to 18 years. It is normally divided into lower secondary education (such as middle school or junior high school) and upper secondary education (like high school, senior high school, or college, depending on the country). Lower secondary education usually requires the completion of primary school as its entry prerequisite. It aims to expand and deepen learning outcomes, with a greater focus on subject-specific curricula, and teachers often specialize in one or a few specific subjects. One of its goals is to acquaint students with fundamental theoretical concepts across various subjects, laying a strong foundation for lifelong learning. In certain instances, it may also incorporate rudimentary forms of vocational training. Lower secondary education is compulsory in numerous countries across Central and East Asia, Europe, and the Americas. In some nations, it represents the final phase of compulsory education. However, mandatory lower secondary education is less common in Arab states, sub-Saharan Africa, and South and West Asia. 
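Because the ISCED scheme described at the start of this subsection is essentially a small lookup structure (numbered levels 0–8 grouped into broader stages), the following minimal Python sketch encodes that grouping for illustration. The level names follow the summary above; the variable and function names are this example's own and are not part of any official UNESCO API.

# Illustrative sketch of the ISCED levels as summarized in this section.
# The groupings mirror the prose above; nothing here is an official API.

ISCED_LEVELS = {
    0: "Early childhood education",
    1: "Primary education",
    2: "Lower secondary education",
    3: "Upper secondary education",
    4: "Post-secondary non-tertiary education",
    5: "Short-cycle tertiary education",
    6: "Bachelor's or equivalent",
    7: "Master's or equivalent",
    8: "Doctoral or equivalent",
}

def isced_group(level: int) -> str:
    """Map an ISCED level (0-8) to the broad stage named in the text."""
    if level == 0:
        return "early childhood education"
    if level == 1:
        return "primary education"
    if level in (2, 3):
        return "secondary education"
    if level == 4:
        return "post-secondary non-tertiary education"
    if 5 <= level <= 8:
        return "tertiary education"
    raise ValueError(f"unknown ISCED level: {level}")

if __name__ == "__main__":
    for lvl, name in ISCED_LEVELS.items():
        print(f"Level {lvl}: {name} -> {isced_group(lvl)}")

Calling isced_group(3), for example, returns "secondary education", matching the grouping of levels 2–3 described above.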
Upper secondary education typically commences around the age of 15, aiming to equip students with the necessary skills and knowledge for employment or tertiary education. Completion of lower secondary education is normally a prerequisite. The curriculum encompasses a broader range of subjects, often affording students the opportunity to select from various options. Attainment of a formal qualification, such as a high school diploma, is frequently linked to successful completion of upper secondary education. Education beyond the secondary level may fall under the category of post-secondary non-tertiary education, which is akin to secondary education in complexity but places greater emphasis on vocational training to ready students for the workforce. In some countries, tertiary education is synonymous with higher education, while in others, tertiary education encompasses a broader spectrum. Tertiary education builds upon the foundation laid in secondary education but delves deeper into specific fields or subjects. Its culmination results in an academic degree. Tertiary education comprises four levels: short-cycle tertiary, bachelor's, master's, and doctoral education. These levels often form a hierarchical structure, with the attainment of earlier levels serving as a prerequisite for higher ones. Short-cycle tertiary education concentrates on practical aspects, providing advanced vocational and professional training tailored to specialized professions. Bachelor's level education, also known as undergraduate education, is typically longer than short-cycle tertiary education. It is commonly offered by universities and culminates in an intermediary academic credential known as a bachelor's degree. Master's level education is more specialized than undergraduate education and often involves independent research, normally in the form of a master's thesis. Doctoral level education leads to an advanced research qualification, usually a doctor's degree, such as a Doctor of Philosophy (PhD). It usually involves the submission of a substantial academic work, such as a dissertation. More advanced levels include post-doctoral studies and habilitation. Successful completion of formal education typically leads to certification, a prerequisite for advancing to higher levels of education and entering certain professions. Undetected cheating during exams, such as utilizing a cheat sheet, poses a threat to this system by potentially certifying unqualified students. In most countries, primary and secondary education is provided free of charge. However, there are significant global disparities in the cost of tertiary education. Some countries, such as Sweden, Finland, Poland, and Mexico, offer tertiary education for free or at a low cost. Conversely, in nations like the United States and Singapore, tertiary education often comes with high tuition fees, leading students to rely on substantial loans to finance their studies. High education costs can pose a significant barrier for students in developing countries, as their families may struggle to cover school fees, purchase uniforms, and buy textbooks. Others The academic literature explores various types of education, including traditional and alternative approaches. Traditional education encompasses long-standing and conventional schooling methods, characterized by teacher-centered instruction within a structured school environment. Regulations govern various aspects, such as the curriculum and class schedules. 
Alternative education serves as an umbrella term for schooling methods that diverge from the conventional traditional approach. These variances might encompass differences in the learning environment, curriculum content, or the dynamics of the teacher-student relationship. Characteristics of alternative schooling include voluntary enrollment, relatively modest class and school sizes, and customized instruction, fostering a more inclusive and emotionally supportive environment. This category encompasses various forms, such as charter schools and specialized programs catering to challenging or exceptionally talented students, alongside homeschooling and unschooling. Alternative education incorporates diverse educational philosophies, including Montessori schools, Waldorf education, Round Square schools, Escuela Nueva schools, free schools, and democratic schools. Alternative education encompasses indigenous education, which emphasizes the preservation and transmission of knowledge and skills rooted in indigenous heritage. This approach often employs traditional methods such as oral narration and storytelling. Other forms of alternative schooling include gurukul schools in India, madrasa schools in the Middle East, and yeshivas in Jewish tradition. Some distinctions revolve around the recipients of education. Categories based on the age of the learner are childhood education, adolescent education, adult education, and elderly education. Categories based on the biological sex of students include single-sex education and mixed-sex education. Special education is tailored to meet the unique needs of students with disabilities, addressing various impairments on intellectual, social, communicative, and physical levels. Its goal is to overcome the challenges posed by these impairments, providing affected students with access to an appropriate educational structure. In the broadest sense, special education also encompasses education for intellectually gifted children, who require adjusted curricula to reach their fullest potential. Classifications based on the teaching method include teacher-centered education, where the teacher plays a central role in imparting information to students, and student-centered education, where students take on a more active and responsible role in shaping classroom activities. In conscious education, learning and teaching occur with a clear purpose in mind. Unconscious education unfolds spontaneously without conscious planning or guidance. This may occur, in part, through the influence of teachers' and adults' personalities, which can indirectly impact the development of students' personalities. Evidence-based education employs scientific studies to determine the most effective educational methods. Its aim is to optimize the effectiveness of educational practices and policies by ensuring they are grounded in the best available empirical evidence. This encompasses evidence-based teaching, evidence-based learning, and school effectiveness research. Autodidacticism, or self-education, occurs independently of teachers and institutions. Primarily observed in adult education, it offers the freedom to choose what and when to study, making it a potentially more fulfilling learning experience. However, the lack of structure and guidance may lead to aimless learning, while the absence of external feedback could result in autodidacts developing misconceptions and inaccurately assessing their learning progress. 
Autodidacticism is closely associated with lifelong education, which entails continuous learning throughout one's life. Categories of education based on the subject encompass science education, language education, art education, religious education, physical education, and sex education. Special mediums such as radio or websites are utilized in distance education, including e-learning (use of computers), m-learning (use of mobile devices), and online education. Often, these take the form of open education, wherein courses and materials are accessible with minimal barriers, contrasting with traditional classroom or onsite education. However, not all forms of online education are open; for instance, some universities offer full online degree programs that are not part of open education initiatives. State education, also known as public education, is funded and controlled by the government and available to the general public. It typically does not require tuition fees and is therefore a form of free education. In contrast, private education is funded and managed by private institutions. Private schools often have a more selective admission process and offer paid education by charging tuition fees. A more detailed classification focuses on the social institutions responsible for education, such as family, school, civil society, state, and church. Compulsory education refers to education that individuals are legally mandated to receive, primarily affecting children who must attend school up to a certain age. This stands in contrast to voluntary education, which individuals pursue based on personal choice rather than legal obligation. Role in society Education serves various roles in society, spanning social, economic, and personal domains. Socially, education establishes and maintains a stable society by imparting fundamental skills necessary for interacting with the environment and fulfilling individual needs and aspirations. In contemporary society, these skills encompass speaking, reading, writing, arithmetic, and proficiency in information and communications technology. Additionally, education facilitates socialization by instilling awareness of dominant social and cultural norms, shaping appropriate behavior across diverse contexts. It fosters social cohesion, stability, and peace, fostering productive engagement in daily activities. While socialization occurs throughout life, early childhood education holds particular significance. Moreover, education plays a pivotal role in democracies by enhancing civic participation through voting and organizing, while also promoting equal opportunities for all. On an economic level, individuals become productive members of society through education, acquiring the technical and analytical skills necessary for their professions, as well as for producing goods and providing services to others. In early societies, there was minimal specialization, with children typically learning a broad range of skills essential for community functioning. However, modern societies are increasingly complex, with many professions requiring specialized training alongside general education. Consequently, only a relatively small number of individuals master certain professions. Additionally, skills and tendencies acquired for societal functioning may sometimes conflict, with their value dependent on context. 
For instance, fostering curiosity and questioning established teachings promotes critical thinking and innovation, while at times, obedience to authority is necessary to maintain social stability. By facilitating individuals' integration into society, education fosters economic growth and diminishes poverty. It enables workers to enhance their skills, thereby improving the quality of goods and services produced, which ultimately fosters prosperity and enhances competitiveness. Public education is widely regarded as a long-term investment that benefits society as a whole, with primary education showing particularly high rates of return. Additionally, besides bolstering economic prosperity, education contributes to technological and scientific advancements, reduces unemployment, and promotes social equity. Moreover, increased education is associated with lower birth rates, partly due to heightened awareness of family planning, expanded opportunities for women, and delayed marriage. Education plays a pivotal role in equipping a country to adapt to changes and effectively confront new challenges. It raises awareness and contributes to addressing contemporary global issues, including climate change, sustainability, and the widening disparities between the rich and the poor. By instilling in students an understanding of how their lives and actions impact others, education can inspire individuals to strive towards realizing a more sustainable and equitable world. Thus, education not only serves to maintain societal norms but also acts as a catalyst for social development. This extends to evolving economic circumstances, where technological advancements, notably increased automation, impose new demands on the workforce that education can help meet. As circumstances evolve, skills and knowledge taught may become outdated, necessitating curriculum adjustments to include subjects like digital literacy, and promote proficiency in handling new technologies. Moreover, education can embrace innovative forms such as massive open online courses to prepare individuals for emerging challenges and opportunities. On a more individual level, education fosters personal development, encompassing learning new skills, honing talents, nurturing creativity, enhancing self-knowledge, and refining problem-solving and decision-making abilities. Moreover, education contributes positively to health and well-being. Educated individuals are often better informed about health issues and adjust their behavior accordingly, benefit from stronger social support networks and coping strategies, and enjoy higher incomes, granting them access to superior healthcare services. The social significance of education is underscored by the annual International Day of Education on January 24, established by the United Nations, which designated 1970 as the International Education Year. Role of institutions Organized institutions play a pivotal role in multiple facets of education. Entities such as schools, universities, teacher training institutions, and ministries of education comprise the education sector. They interact not only with one another but also with various stakeholders, including parents, local communities, religious groups, non-governmental organizations, healthcare professionals, law enforcement agencies, media platforms, and political leaders. Numerous individuals are directly engaged in the education sector, such as students, teachers, school principals, as well as school nurses and curriculum developers. 
Various aspects of formal education are regulated by the policies of governmental institutions. These policies determine at what age children need to attend school and at what times classes are held, as well as issues pertaining to the school environment, such as infrastructure. Regulations also cover the exact qualifications and requirements that teachers need to fulfill. An important aspect of education policy concerns the curriculum used for teaching at schools, colleges, and universities. A curriculum is a plan of instruction or a program of learning that guides students to achieve their educational goals. The topics are usually selected based on their importance and depend on the type of school. The goals of public school curricula are usually to offer a comprehensive and well-rounded education, while vocational training focuses more on specific practical skills within a field. The curricula also cover various aspects besides the topic to be discussed, such as the teaching method, the objectives to be reached, and the standards for assessing progress. By determining the curricula, governmental institutions have a strong impact on what knowledge and skills are transmitted to the students. Examples of governmental institutions include the Ministry of Education in India, the Department of Basic Education in South Africa, and the Secretariat of Public Education in Mexico. International organizations also play a pivotal role in education. For example, UNESCO is an intergovernmental organization that promotes education through various means. One of its activities is advocating for education policies, such as the treaty Convention on the Rights of the Child, which declares education as a fundamental human right for all children and young people. The Education for All initiative aimed to provide basic education to all children, adolescents, and adults by 2015, later succeeded by the Sustainable Development Goals initiative, particularly goal 4. Related policies include the Convention against Discrimination in Education and the Futures of Education initiative. Some influential organizations are non-governmental rather than intergovernmental. For instance, the International Association of Universities promotes collaboration and knowledge exchange among colleges and universities worldwide, while the International Baccalaureate offers international diploma programs. Institutions like the Erasmus Programme facilitate student exchanges between countries, while initiatives such as the Fulbright Program provide similar services for teachers. Factors of educational success Educational success, also referred to as student and academic achievement, pertains to the extent to which educational objectives are met, such as the acquisition of knowledge and skills by students. For practical purposes, it is often primarily measured in terms of official exam scores, but numerous additional indicators exist, including attendance rates, graduation rates, dropout rates, student attitudes, and post-school indicators such as later income and incarceration rates. Several factors influence educational achievement, such as psychological factors related to the individual student, and sociological factors associated with the student's social environment. Additional factors encompass access to educational technology, teacher quality, and parental involvement. Many of these factors overlap and mutually influence each other. 
Psychological On a psychological level, relevant factors include motivation, intelligence, and personality. Motivation is the internal force propelling people to engage in learning. Motivated students are more likely to interact with the content to be learned by participating in classroom activities like discussions, resulting in a deeper understanding of the subject. Motivation can also help students overcome difficulties and setbacks. An important distinction lies between intrinsic and extrinsic motivation. Intrinsically motivated students are driven by an interest in the subject and the learning experience itself. Extrinsically motivated students seek external rewards such as good grades and recognition from peers. Intrinsic motivation tends to be more beneficial, leading to increased creativity, engagement, and long-term commitment. Educational psychologists aim to discover methods to increase motivation, such as encouraging healthy competition among students while maintaining a balance of positive and negative feedback through praise and constructive criticism. Intelligence significantly influences individuals' responses to education. It is a cognitive trait associated with the capacity to learn from experience, comprehend, and apply knowledge and skills to solve problems. Individuals with higher scores in intelligence metrics typically perform better academically and pursue higher levels of education. Intelligence is often closely associated with the concept of IQ, a standardized numerical measure assessing intelligence based on mathematical-logical and verbal abilities. However, it has been argued that intelligence encompasses various types beyond IQ. Psychologist Howard Gardner posited distinct forms of intelligence in domains such as mathematics, logic, spatial cognition, language, and music. Additional types of intelligence influence interpersonal and intrapersonal interactions. These intelligences are largely autonomous, meaning that an individual may excel in one type while performing less well in another. The learner's personality may also influence educational achievement. For instance, characteristics such as conscientiousness and openness to experience, identified in the Big Five personality traits, are associated with academic success. Other mental factors include self-efficacy, self-esteem, and metacognitive abilities. Sociological Sociological factors center not on the psychological attributes of learners but on their environment and societal position. These factors encompass socioeconomic status, ethnicity, cultural background, and gender, drawing significant interest from researchers due to their association with inequality and discrimination. Consequently, they play a pivotal role in policy-making endeavors aimed at mitigating their impact. Socioeconomic status is influenced by factors beyond just income, including financial security, social status, social class, and various attributes related to quality of life. Low socioeconomic status impacts educational success in several ways. It correlates with slower cognitive development in language and memory, as well as higher dropout rates. Families with limited financial means may struggle to meet their children's basic nutritional needs, hindering their development. Additionally, they may lack resources to invest in educational materials such as stimulating toys, books, and computers. 
Financial constraints may also prevent attendance at prestigious schools, leading to enrollment in institutions located in economically disadvantaged areas. Such schools often face challenges such as teacher shortages and inadequate educational materials and facilities like libraries, resulting in lower teaching standards. Moreover, parents may be unable to afford private lessons for children falling behind academically. In some cases, students from economically disadvantaged backgrounds are compelled to drop out of school to contribute to family income. Limited access to information about higher education and challenges in securing and repaying student loans further exacerbate the situation. Low socioeconomic status is also associated with poorer physical and mental health, contributing to a cycle of social inequality that persists across generations. Ethnic background correlates with cultural distinctions and language barriers, which can pose challenges for students in adapting to the school environment and comprehending classes. Moreover, explicit and implicit biases and discrimination against ethnic minorities further compound these difficulties. Such biases can impact students' self-esteem, motivation, and access to educational opportunities. For instance, teachers may harbor stereotypical perceptions, albeit not overtly racist, leading to differential grading of comparable performances based on a child's ethnicity. Historically, gender has played a pivotal role in education as societal norms dictated distinct roles for men and women. Education traditionally favored men, who were tasked with providing for the family, while women were expected to manage households and care for children, often limiting their access to education. Although these disparities have improved in many modern societies, gender differences persist in education. This includes biases and stereotypes related to gender roles in various academic domains, notably in fields such as science, technology, engineering, and mathematics (STEM), which are often portrayed as male-dominated. Such perceptions can deter female students from pursuing these subjects. In various instances, discrimination based on gender and social factors occurs openly as part of official educational policies, such as the severe restrictions imposed on female education by the Taliban in Afghanistan, and the school segregation of migrants and locals in urban China under the hukou system. One facet of several social factors is characterized by the expectations linked to stereotypes. These expectations operate externally, influenced by how others respond to individuals belonging to specific groups, and internally, shaped by how individuals internalize and conform to them. In this regard, these expectations can manifest as self-fulfilling prophecies by affecting the educational outcomes they predict. Such outcomes may be influenced by both positive and negative stereotypes. Technology and others Technology plays a crucial role in educational success. While educational technology is often linked with modern digital devices such as computers, its scope extends far beyond that. It encompasses a diverse array of resources and tools for learning, including traditional aids like books and worksheets, in addition to digital devices. Educational technology can enhance learning in various ways. 
In the form of media, it often serves as the primary source of information in the classroom, allowing teachers to allocate their time and energy to other tasks such as lesson planning, student guidance, and performance assessment. By presenting information using graphics, audio, and video instead of mere text, educational technology can also enhance comprehension. Interactive elements, such as educational games, further engage learners in the learning process. Moreover, technology facilitates the accessibility of educational materials to a wide audience, particularly through online resources, while also promoting collaboration among students and communication with teachers. The integration of artificial intelligence in education holds promise for providing new learning experiences to students and supporting teachers in their work. However, it also introduces new risks related to data privacy, misinformation, and manipulation. Various organizations advocate for student access to educational technologies, including initiatives such as the One Laptop per Child initiative, the African Library Project, and Pratham. School infrastructure also plays a crucial role in educational success. It encompasses physical aspects such as the school's location, size, and available facilities and equipment. A healthy and safe environment, well-maintained classrooms, appropriate classroom furniture, as well as access to a library and a canteen, all contribute to fostering educational success. Additionally, the quality of teachers significantly impacts student achievement. Skilled teachers possess the ability to motivate and inspire students, and tailor instructions to individual abilities and needs. Their skills depend on their own education, training, and teaching experience. A meta-analysis by Engin Karadağ et al. concludes that, compared to other influences, factors related to the school and the teacher have the greatest impact on educational success. Parent involvement also enhances achievement and can increase children's motivation and commitment when they know their parents are invested in their educational endeavors. This often results in heightened self-esteem, improved attendance rates, and more positive behavior at school. Parent involvement covers communication with teachers and other school staff to raise awareness of current issues and explore potential resolutions. Other relevant factors, occasionally addressed in academic literature, encompass historical, political, demographic, religious, and legal aspects. Education studies The primary field exploring education is known as education studies, also termed education sciences. It seeks to understand how knowledge is transmitted and acquired by examining various methods and forms of education. This discipline delves into the goals, impacts, and significance of education, along with the cultural, societal, governmental, and historical contexts that influence it. Education theorists draw insights from various disciplines, including philosophy, psychology, sociology, economics, history, politics, and international relations. Consequently, some argue that education studies lacks the clear methodological and subject delineations found in disciplines like physics or history. Education studies focuses on academic analysis and critical reflection and differs in this respect from teacher training programs, which show participants how to become effective teachers. 
Furthermore, it encompasses not only formal education but also explores all forms and facets of educational processes. Various research methods are utilized to investigate educational phenomena, broadly categorized into quantitative, qualitative, and mixed-methods approaches. Quantitative research mirrors the methodologies of the natural sciences, employing precise numerical measurements to collect data from numerous observations and utilizing statistical tools for analysis. Its goal is to attain an objective and impartial understanding. Conversely, qualitative research typically involves a smaller sample size and seeks to gain a nuanced insight into subjective and personal factors, such as individuals' experiences within the educational process. Mixed-methods research aims to integrate data gathered from both approaches to achieve a balanced and comprehensive understanding. Data collection methods vary and may include direct observation, test scores, interviews, and questionnaires. Research projects may investigate fundamental factors influencing all forms of education or focus on specific applications, seek solutions to particular problems, or evaluate the effectiveness of educational initiatives and policies. Subfields Education studies encompasses various subfields such as pedagogy, educational research, comparative education, and the philosophy, psychology, sociology, economics, and history of education. The philosophy of education is the branch of applied philosophy that examines many of the fundamental assumptions underlying the theory and practice of education. It explores education both as a process and a discipline while seeking to provide precise definitions of its nature and distinctions from other phenomena. Additionally, it delves into the purpose of education, its various types, and the conceptualization of teachers, students, and their relationship. Furthermore, it encompasses educational ethics, which examines the moral implications of education, such as the ethical principles guiding it and how teachers should apply them to specific situations. The philosophy of education boasts a long history and was a subject of discourse in ancient Greek philosophy. The term "pedagogy" is sometimes used interchangeably with education studies, but in a more specific sense, it refers to the subfield focused on teaching methods. It investigates how educational objectives, such as knowledge transmission or the development of skills and character traits, can be achieved. Pedagogy is concerned with the methods and techniques employed in teaching within conventional educational settings. While some definitions confine it to this context, in a broader sense, it encompasses all forms of education, including teaching methods beyond traditional school environments. In this broader context, it explores how teachers can facilitate learning experiences for students to enhance their understanding of the subject matter and how learning itself occurs. The psychology of education delves into the mental processes underlying learning, focusing on how individuals acquire new knowledge and skills and experience personal development. It investigates the various factors influencing educational outcomes, how these factors vary among individuals, and the extent to which nature or nurture contribute to these outcomes. Key psychological theories shaping education encompass behaviorism, cognitivism, and constructivism. 
Related disciplines include educational neuroscience and the neurology of education, which explore the neuropsychological processes and changes associated with learning. The field of sociology of education delves into how education shapes socialization, examining how social factors and ideologies influence access to education and individual success within it. It explores the impact of education on different societal groups and its role in shaping personal identity. Specifically, the sociology of education focuses on understanding the root causes of inequalities, offering insights relevant to education policy aimed at identifying and addressing factors contributing to inequality. Two prominent perspectives within this field are consensus theory and conflict theory. Consensus theorists posit that education benefits society by preparing individuals for their societal roles, while conflict theorists view education as a tool employed by the ruling class to perpetuate inequalities. The field of economics of education investigates the production, distribution, and consumption of education. It seeks to optimize resource allocation to enhance education, such as assessing the impact of increased teacher salaries on teacher quality. Additionally, it explores the effects of smaller class sizes and investments in new educational technologies. By providing insights into resource allocation, the economics of education aids policymakers in making decisions that maximize societal benefits. Furthermore, it examines the long-term economic implications of education, including its role in fostering a highly skilled workforce and enhancing national competitiveness. A related area of interest involves analyzing the economic advantages and disadvantages of different educational systems. Comparative education is the discipline that examines and contrasts education systems. Comparisons can occur from a general perspective or focus on specific factors like social, political, or economic aspects. Often applied to different countries, comparative education assesses the similarities and differences of their educational institutions and practices, evaluating the consequences of distinct approaches. It can be used to glean insights from other countries on effective education policies and how one's own system may be improved. This practice, known as policy borrowing, presents challenges as policy success can hinge on the social and cultural context of students and teachers. A related and contentious topic concerns whether the educational systems of developed countries are superior and should be exported to less developed ones. Other key topics include the internationalization of education and the role of education in transitioning from authoritarian regimes to democracies. The history of education delves into the evolution of educational practices, systems, and institutions. It explores various key processes, their potential causes and effects, and their interrelations. Aims and ideologies A central topic in education studies revolves around how people should be educated and what goals should guide this process. Various aims have been proposed, including the acquisition of knowledge and skills, personal development, and the cultivation of character traits. Commonly suggested attributes encompass qualities like curiosity, creativity, rationality, and critical thinking, along with tendencies to think, feel, and act morally. 
Scholars diverge on whether to prioritize liberal values such as freedom, autonomy, and open-mindedness, or qualities like obedience to authority, ideological purity, piety, and religious faith. Some education theorists concentrate on a single overarching purpose of education, viewing more specific aims as means to this end. At a personal level, this purpose is often equated with assisting the student in leading a good life. Societally, education aims to cultivate individuals into productive members of society. There is debate regarding whether the primary aim of education is to benefit the educated individual or society as a whole. Educational ideologies encompass systems of fundamental philosophical assumptions and principles utilized to interpret, understand, and assess existing educational practices and policies. They address various aspects beyond the aims of education, including the subjects taught, the structure of learning activities, the role of teachers, methods for assessing educational progress, and the design of institutional frameworks and policies. These ideologies are diverse and often interrelated. Teacher-centered ideologies prioritize the role of teachers in imparting knowledge to students, while student-centered ideologies afford students a more active role in the learning process. Process-based ideologies focus on the methods of teaching and learning, contrasting with product-based ideologies, which consider education in terms of the desired outcomes. Conservative ideologies uphold traditional practices, whereas Progressive ideologies advocate for innovation and creativity. Additional categories are humanism, romanticism, essentialism, encyclopaedism, pragmatism, as well as authoritarian and democratic ideologies. Learning theories Learning theories attempt to elucidate the mechanisms underlying learning. Influential theories include behaviorism, cognitivism, and constructivism. Behaviorism posits that learning entails a modification in behavior in response to environmental stimuli. This occurs through the presentation of a stimulus, the association of this stimulus with the desired response, and the reinforcement of this stimulus-response connection. Cognitivism views learning as a transformation in cognitive structures and emphasizes the mental processes involved in encoding, retrieving, and processing information. Constructivism asserts that learning is grounded in the individual's personal experiences and places greater emphasis on social interactions and their interpretation by the learner. These theories carry significant implications for instructional practices. For instance, behaviorists often emphasize repetitive drills, cognitivists may advocate for mnemonic techniques, and constructivists typically employ collaborative learning strategies. Various theories suggest that learning is more effective when it is based on personal experience. Additionally, aiming for a deeper understanding by connecting new information to pre-existing knowledge is considered more beneficial than simply memorizing a list of unrelated facts. An influential developmental theory of learning is proposed by psychologist Jean Piaget, who outlines four stages of learning through which children progress on their way to adulthood: the sensorimotor, pre-operational, concrete operational, and formal operational stages. 
These stages correspond to different levels of abstraction, with early stages focusing more on simple sensory and motor activities, while later stages involve more complex internal representations and information processing, such as logical reasoning. Teaching methods The teaching method pertains to how the content is delivered by the teacher, such as whether group work is employed rather than focusing on individual learning. There is a wide array of teaching methods available, and the most effective one in a given scenario depends on factors like the subject matter and the learner's age and level of competence. This is reflected in modern school systems, which organize students into different classes based on age, competence, specialization, and native language to ensure an effective learning process. Different subjects often employ distinct approaches; for example, language education frequently emphasizes verbal learning, while mathematical education focuses on abstract and symbolic thinking alongside deductive reasoning. One crucial aspect of teaching methodologies is ensuring that learners remain motivated, either through intrinsic factors like interest and curiosity or through external rewards. The teaching method also includes the utilization of instructional media, such as books, worksheets, and audio-visual recordings, as well as implementing some form of test or evaluation to gauge learning progress. Educational assessment is the process of documenting the student's knowledge and skills, which can happen formally or informally and may take place before, during, or after the learning activity. Another significant pedagogical element in many modern educational approaches is that each lesson is part of a broader educational framework governed by a syllabus, which often spans several months or years. According to Herbartianism, teaching is broken down into phases. The initial phase involves preparing the student's mind for new information. Subsequently, new ideas are introduced to the learner and then linked to concepts already familiar to them. In later phases, understanding transitions to a more general level beyond specific instances, and the ideas are then applied in practical contexts. History The history of education delves into the processes, methods, and institutions entwined with teaching and learning, aiming to elucidate their interplay and influence on educational practices over time. Prehistory Education during prehistory primarily facilitated enculturation, emphasizing practical knowledge and skills essential for daily life, such as food production, clothing, shelter, and safety. Formal schools and specialized instructors were absent, with adults in the community assuming teaching roles, and learning transpiring informally through daily activities, including observation and imitation of elders. In oral societies, storytelling served as a pivotal means of transmitting cultural and religious beliefs across generations. With the advent of agriculture during the Neolithic Revolution around 9000 BCE, a gradual educational shift toward specialization ensued, driven by the formation of larger communities and the demand for increasingly intricate artisanal and technical skills. Ancient era Commencing in the 4th millennium BCE and spanning subsequent eras, a pivotal transformation in educational methodologies unfolded with the advent of writing in regions such as Mesopotamia, ancient Egypt, the Indus Valley, and ancient China. 
This breakthrough profoundly influenced the trajectory of education. Writing facilitated the storage, preservation, and dissemination of information, ushering in subsequent advancements such as the creation of educational aids like textbooks and the establishment of institutions such as schools. Another significant aspect of ancient education was the establishment of formal education. This became necessary as civilizations evolved and the volume of knowledge expanded, surpassing what informal education could effectively transmit across generations. Teachers assumed specialized roles to impart knowledge, leading to a more abstract educational approach less tied to daily life. Formal education remained relatively rare in ancient societies, primarily accessible to the intellectual elite. It covered fields like reading and writing, record keeping, leadership, civic and political life, religion, and technical skills associated with specific professions. Formal education introduced a new teaching paradigm that emphasized discipline and drills over the informal methods prevalent earlier. Two notable achievements of ancient education include the founding of Plato's Academy in Ancient Greece, often regarded as the earliest institution of higher learning, and the establishment of the Great Library of Alexandria in Ancient Egypt, renowned as one of the ancient world's premier libraries. Medieval era Many facets of education during the medieval period were profoundly influenced by religious traditions. In Europe, the Catholic Church wielded considerable authority over formal education. In the Arab world, the rapid spread of Islam led to various educational advancements during the Islamic Golden Age, integrating classical and religious knowledge and establishing madrasa schools. In Jewish communities, yeshivas emerged as institutions dedicated to the study of religious texts and Jewish law. In China, an expansive state educational and examination system, shaped by Confucian teachings, was instituted. As new complex societies emerged in regions like Africa, the Americas, Northern Europe, and Japan, some adopted existing educational practices, while others developed new traditions. Additionally, this era witnessed the establishment of various institutes of higher education and research. Prominent among these were the University of Bologna (the world's oldest university in continuous operation), the University of Paris, and Oxford University in Europe. Other influential centers included the Al-Qarawiyyin University in Morocco, Al-Azhar University in Egypt, and the House of Wisdom in Iraq. Another significant development was the formation of guilds, associations of skilled craftsmen and merchants who regulated their trades and provided vocational education. Prospective members underwent various stages of training on their journey to mastery. Modern era Starting in the early modern period, education in Europe during the Renaissance slowly began to shift from a religious approach towards one that was more secular. This development was tied to an increased appreciation of the importance of education and a broadened range of topics, including a revived interest in ancient literary texts and educational programs. The turn toward secularization was accelerated during the Age of Enlightenment starting in the 17th century, which emphasized the role of reason and the empirical sciences. European colonization affected education in the Americas through Christian missionary initiatives. 
In China, the state educational system was further expanded and focused more on the teachings of neo-Confucianism. In the Islamic world, the outreach of formal education increased and remained under the influence of religion. A key development in the early modern period was the invention and popularization of the printing press in the middle of the 15th century, which had a profound impact on general education. It significantly reduced the cost of producing books, which were hand-written before, and thereby augmented the dissemination of written documents, including new forms like newspapers and pamphlets. The increased availability of written media had a major influence on the general literacy of the population. These alterations paved the way for the advancement of public education during the 18th and 19th centuries. This era witnessed the establishment of publicly funded schools with the goal of providing education for all, in contrast to previous periods when formal education was primarily delivered by private schools, religious institutions, and individual tutors. An exception to this trend was the Aztec civilization, where formal education was compulsory for youth across social classes as early as the 14th century. Closely related changes were to make education compulsory and free of charge for all children up to a certain age. Contemporary era The promotion of public education and universal access to education gained momentum in the 20th and 21st centuries, endorsed by intergovernmental organizations such as the UN. Key initiatives included the Universal Declaration of Human Rights, the Convention on the Rights of the Child, the Education for All initiative, the Millennium Development Goals, and the Sustainable Development Goals. These endeavors led to a consistent increase in all forms of education, particularly impacting primary education. In 1970, 28% of all primary-school-age children worldwide were not enrolled in school; by 2015, this figure had decreased to 9%. The establishment of public education was accompanied by the introduction of standardized curricula for public schools as well as standardized tests to assess the progress of students. Contemporary examples are the Test of English as a Foreign Language, which is a globally used test to assess language proficiency in non-native English speakers, and the Programme for International Student Assessment, which evaluates education systems across the world based on the performance of 15-year-old students in reading, mathematics, and science. Similar shifts impacted teachers, with the establishment of institutions and norms to regulate and oversee teacher training, including certification mandates for teaching in public schools. Emerging educational technologies have significantly influenced modern education. The widespread availability of computers and the internet has notably expanded access to educational resources and facilitated new forms of learning, such as online education. This became particularly pertinent during the COVID-19 pandemic when schools worldwide closed for prolonged periods, prompting many to adopt remote learning methods through video conferencing or pre-recorded video lessons to sustain instruction. Additionally, contemporary education is impacted by the increasing globalization and internationalization of educational practices. See also References Notes Citations Sources External links Education – OECD Education – UNESCO Education – World Bank Main topic articles
0.774624
0.999724
0.77441
Communication studies
Communication studies (or communication science) is an academic discipline that deals with processes of human communication and behavior, patterns of communication in interpersonal relationships, social interactions and communication in different cultures. Communication is commonly defined as giving, receiving or exchanging ideas, information, signals or messages through appropriate media, enabling individuals or groups to persuade, to seek information, to give information or to express emotions effectively. Communication studies is a social science that uses various methods of empirical investigation and critical analysis to develop a body of knowledge that encompasses a range of topics, from face-to-face conversation at the level of individual agency and interaction to social and cultural communication systems at a macro level. Scholarly communication theorists focus primarily on refining the theoretical understanding of communication, examining statistics in order to help substantiate claims. The range of social scientific methods used to study communication has been expanding, and communication researchers draw upon a variety of qualitative and quantitative techniques. The linguistic and cultural turns of the mid-20th century led to increasingly interpretative, hermeneutic, and philosophic approaches to the analysis of communication. Conversely, the end of the 1990s and the beginning of the 2000s saw the rise of new analytically, mathematically, and computationally focused techniques. As a field of study, communication is applied to journalism, business, mass media, public relations, marketing, news and television broadcasting, interpersonal and intercultural communication, education, public administration, the problem of media adequacy, and beyond. As all spheres of human activity and conveyance are affected by the interplay between social communication structure and individual agency, communication studies has gradually expanded its focus to other domains, such as health, medicine, economy, military and penal institutions, the Internet, social capital, and the role of communicative activity in the development of scientific knowledge. History Origins Communication, a natural human behavior, became a topic of study in the 20th century. As communication technologies developed, so did the serious study of communication. During this time, renewed interest in the study of rhetoric, including persuasion and public address, laid the foundation for several of the forms of communication studies known today. The focus of communication studies developed further in the 20th century, eventually including means of communication such as mass communication, interpersonal communication, and oral interpretation. When World War I ended, interest in studying communication intensified. The communication methods used during the war challenged prevailing beliefs about the limits of communication. The period produced innovations never seen before, such as aircraft telephones and throat microphones. New ways of communicating, especially the use of Morse code through portable Morse code machines, allowed troops to communicate far more rapidly than ever before. These developments, in turn, spurred the creation of even more advanced means of communication. 
Communication studies was fully recognized as a legitimate social science discipline after World War II. Prior to being established as its own discipline, communication studies drew on three other major fields: psychology, sociology, and political science. Communication studies focuses on communication as central to the human experience, which involves understanding how people behave in creating, exchanging, and interpreting messages. Today, the discipline also encompasses more specialized areas, such as gender and communication, intercultural communication, political communication, health communication, and organizational communication. Foundations of the academic discipline The institutionalization of communication studies in U.S. higher education and research has often been traced to Columbia University, the University of Chicago, and the University of Illinois Urbana-Champaign, where early pioneers of the field worked after the Second World War. Wilbur Schramm is considered the founder of the field of communication studies in the United States. Schramm was hugely influential in establishing communication as a field of study and in forming departments of communication studies across universities in the United States. He was the first individual to identify himself as a communication scholar; he created the first academic degree-granting programs with communication in their name; and he trained the first generation of communication scholars. Schramm had a background in English literature and developed communication studies partly by merging existing programs in speech communication, rhetoric, and journalism. He also edited the textbook The Process and Effects of Mass Communication (1954), which helped define the field, partly by claiming Paul Lazarsfeld, Harold Lasswell, Carl Hovland, and Kurt Lewin as its founding forefathers. Schramm established three important communication institutes: the Institute of Communications Research (University of Illinois at Urbana-Champaign), the Institute for Communication Research (Stanford University), and the East-West Communication Institute (Honolulu). The patterns of scholarly work in communication studies that were set in motion at these institutes continue to this day. Many of Schramm's students, such as Everett Rogers and David Berlo, went on to make important contributions of their own. The first college of communication was founded at Michigan State University in 1958, led by scholars from Schramm's original ICR and dedicated to studying communication scientifically using a quantitative approach. MSU was soon followed by important departments of communication at Purdue University, University of Texas-Austin, Stanford University, University of Iowa, University of Illinois, University of Pennsylvania, The University of Southern California, and Northwestern University. Associations related to communication studies were founded or expanded during the 1950s. The National Society for the Study of Communication (NSSC) was founded in 1950 to encourage scholars to pursue communication research as a social science. The association launched the Journal of Communication in the same year as its founding. Like many communication associations founded around this decade, the name of the association changed with the field; in 1968 it became the International Communication Association (ICA). 
In the United States Undergraduate curricula aim to prepare students to interrogate the nature of communication in society and the development of communication as a specific field. The National Communication Association (NCA) recognizes several distinct but often overlapping specializations within the broader communication discipline, including technology, critical-cultural, health, intercultural, interpersonal-small group, mass communication, organizational, political, rhetorical, and environmental communication. Students take courses in these subject areas. Other programs and courses often integrated in communication programs include journalism, rhetoric, film criticism, theatre, public relations, political science (e.g., political campaign strategies, public speaking, effects of media on elections), as well as radio, television, computer-mediated communication, film production, and new media. Many colleges in the United States offer a variety of different majors within the realm of communication studies, consisting of programs of study in the areas mentioned above. Communication studies is often perceived as being centered primarily on the media arts; however, communication studies graduates go on to careers ranging from media arts and public advocacy to marketing and non-profit organizations. In Canada With the early influence of federal institutional inquiries, notably the 1951 Massey Commission, which "investigated the overall state of culture in Canada", the study of communication in Canada has frequently focused on the development of a cohesive national culture, and on infrastructural empires of social and material circulation. Although influenced by the American communication tradition and British cultural studies, communication studies in Canada has been more directly oriented toward the state and the policy apparatus, for example the Canadian Radio-television and Telecommunications Commission. Influential thinkers from the Canadian communication tradition include Harold Innis, Marshall McLuhan, Florian Sauvageau, Gertrude Robinson, Marc Raboy, Dallas Smythe, James R. Taylor, François Cooren, Gail Guthrie Valaskakis, and George Grant. Communication studies is a relatively new discipline within Canada; however, about 13 Canadian universities, as well as many colleges, have programs and departments that support and teach the subject. Two Canadian journals in the field are Communication et information, from Laval, and the Canadian Journal of Communication, from McGill University in Montréal. There are also national and Québec-based organizations and associations that serve the specific interests of these academics. These organizations include representatives from the communication industry, government, and the general public. 
Communication research informs politicians and policy makers, educators, strategists, legislators, business magnates, managers, social workers, non-governmental organizations, non-profit organizations, and people interested in resolving communication issues in general. There is often a great deal of crossover between social research, cultural research, market research, and other statistical fields. Recent critiques have been made about the homogeneity of communication scholarship. For example, Chakravartty et al. (2018) find that white scholars comprise the vast majority of publications, citations, and editorial positions. From a post-colonial point of view, this state is problematic because communication studies engages with a wide range of social justice concerns. Business Business communication emerged as a field of study in the late 20th century, due to the centrality of communication within business relationships. The scope of the field is difficult to define because of the various ways in which communication is used between employers, employees, consumers, and brands. Because of this, the focus of the field is usually placed on the demands of employers, an emphasis reflected in the revision of the American Assembly of Collegiate Schools of Business standards to highlight written and oral communication as an important part of the curriculum. Business communication studies therefore revolves around the ever-changing written and oral communication practices directly related to business. The implementation of modern business communication curricula is enhancing the study of business communication as a whole, while preparing students to communicate effectively in the business community. Healthcare Health communication is a multidisciplinary field that applies "communication evidence, strategy, theory, and creativity" in order to advance the well-being of people and populations. The term was coined in 1975 by the International Communication Association and, in 1997, health communication was officially recognized in the broader fields of public health education and health promotion by the American Public Health Association. The discipline integrates components of various theories and models, with a focus on social marketing. It uses marketing to develop "activities and interventions designed to positively change behaviors." This emergence affected several dynamics of the healthcare system. It brought elevated awareness to different avenues, including promotional activities and communication between health professionals and their employees, patients, and constituents. "Efforts to create marketing-oriented organizations called for the widespread dissemination of information", putting a spotlight on theories of "communication, the communication process, and the techniques that were being utilized to communicate in other settings." Today, health care organizations of all types use tools such as social media. "Uses include communicating with the community and patients; enhancing organizational visibility; marketing products and services; establishing a venue for acquiring news about activities, promotions, and fund-raising; providing a channel for patient resources and education; and providing customer service and support." 
Professional associations American Journalism Historians Association (AJHA) Association for Education in Journalism and Mass Communication (AEJMC) Association for Teachers of Technical Writing (ATTW) Black College Communication Association (BCCA) Broadcast Education Association (BEA) Central States Communication Association (CSCA) Council of Communication Associations (CCA) European Association for the Teaching of Academic Writing (EATAW) European Communication Research and Education Association IEEE Professional Communication Society International Association for Media and Communications Research International Association of Business Communicators (IABC) International Communication Association (ICA), an international, academic association for communication studies concerned with all aspects of human and mediated communication National Association of Black Journalists: NABJ National Association for Media Literacy Education (NAMLE) National Communication Association (NCA), professional organization concerned with various aspects of communication studies in the United States Public Relations Society of America (PRSA) Rhetoric Society of America (RSA) Society for Cinema and Media Studies, organization for communication research pertaining to film studies Society for Technical Communication (STC) University Film and Video Association, organization for the study of motion-picture production See also References Bibliography Carey, James. 1988 Communication as Culture. Cohen, Herman. 1994. The History of Speech Communication: The Emergence of a Discipline, 1914-1945. Annandale, VA: Speech Communication Association. Gehrke, Pat J. 2009. The Ethics and Politics of Speech: Communication and Rhetoric in the Twentieth Century. Carbondale, IL: Southern Illinois University Press. Gehrke, Pat J. and William M. Keith, eds. 2014. A Century of Communication Studies: The Unfinished Conversation. New York: Routledge. Packer, J. & Robertson, C, eds. 2006. Thinking with James Carey: Essays on Communications, Transportation, History. Peters, John Durham and Peter Simonson, eds. 2004. Mass Communication and American Social Thought: Key Texts 1919-1968. Wahl-Jorgensen, Karin 2004, 'How Not to Found a Field: New Evidence on the Origins of Mass Communication Research', Journal of Communication, September 2004.
0.777758
0.995695
0.77441
Mentalism (psychology)
In psychology, mentalism refers to those branches of study that concentrate on perception and thought processes, for example: mental imagery, consciousness and cognition, as in cognitive psychology. The term mentalism has been used primarily by behaviorists who believe that scientific psychology should focus on the structure of causal relationships to reflexes and operant responses or on the functions of behavior. Neither mentalism nor behaviorism are mutually exclusive fields; elements of one can be seen in the other, perhaps more so in modern times compared to the advent of psychology over a century ago. Classical mentalism Psychologist Allan Paivio used the term classical mentalism to refer to the introspective psychologies of Edward Titchener and William James. Despite Titchener being concerned with structure and James with function, both agreed that consciousness was the subject matter of psychology, making psychology an inherently subjective field. The rise of behaviorism Concurrently thriving alongside mentalism since the inception of psychology was the functional perspective of behaviorism. However, it was not until 1913, when psychologist John B. Watson published his article "Psychology as the Behaviorist Views It" that behaviorism began to have a dominant influence. Watson's ideas sparked what some have called a paradigm shift in American psychology, emphasizing the objective and experimental study of human behavior, rather than subjective, introspective study of human consciousness. Behaviorists considered that the study of consciousness was impossible to do, or unnecessary, and that the focus on it to that point had only been a hindrance to the field reaching its full potential. For a time, behaviorism would go on to be a dominant force driving psychological research, advanced by the work of scholars including Ivan Pavlov, Edward Thorndike, Watson, and especially B.F. Skinner. The new mentalism Critical to the successful revival of the mind or consciousness as a primary focus of study in psychology (and in related fields such as cognitive neuroscience) were technological and methodological advances, which eventually allowed for brain mapping, among other new techniques. These advances provided an experimental way to begin to study perception and consciousness. However, the cognitive revolution did not kill behaviorism as a research program; in fact, research on operant conditioning actually grew at a rapid pace during the cognitive revolution. In 1994, scholar Terry L. Smith surveyed the history of radical behaviorism and concluded that "even though radical behaviorism may have been a failure, the operant program of research has been a success. Furthermore, operant psychology and cognitive psychology complement one another, each having its own domain within which it contributes something valuable to, but beyond the reach of, the other." See also Cartesianism Cognitivism (psychology) Dualism (philosophy of mind) Property dualism References Further reading See also the six responses to Burgos in volume 44 of Behavior & Philosophy. Cognitive psychology Philosophy of psychology Psychological theories
0.791233
0.978732
0.774405
Earth science
Earth science or geoscience includes all fields of natural science related to the planet Earth. This is a branch of science dealing with the physical, chemical, and biological complex constitutions and synergistic linkages of Earth's four spheres: the biosphere, hydrosphere/cryosphere, atmosphere, and geosphere (or lithosphere). Earth science can be considered to be a branch of planetary science but with a much older history. Geology Geology is broadly the study of Earth's structure, substance, and processes. Geology is largely the study of the lithosphere, or Earth's surface, including the crust and rocks. It includes the physical characteristics and processes that occur in the lithosphere as well as how they are affected by geothermal energy. It incorporates aspects of chemistry, physics, and biology as elements of geology interact. Historical geology is the application of geology to interpret Earth history and how it has changed over time. Geochemistry studies the chemical components and processes of the Earth. Geophysics studies the physical properties of the Earth. Paleontology studies fossilized biological material in the lithosphere. Planetary geology studies geoscience as it pertains to extraterrestrial bodies. Geomorphology studies the origin of landscapes. Structural geology studies the deformation of rocks to produce mountains and lowlands. Resource geology studies how energy resources can be obtained from minerals. Environmental geology studies how pollution and contaminants affect soil and rock. Mineralogy is the study of minerals and includes the study of mineral formation, crystal structure, hazards associated with minerals, and the physical and chemical properties of minerals. Petrology is the study of rocks, including the formation and composition of rocks. Petrography is a branch of petrology that studies the typology and classification of rocks. Earth's interior Plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the Earth's crust. Beneath the Earth's crust lies the mantle, which is heated by the radioactive decay of heavy elements. Although largely solid, the mantle deforms slowly over geological time and is in a state of semi-perpetual convection. This convection process causes the lithospheric plates to move, albeit slowly. The resulting process is known as plate tectonics. Areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the Earth are convergent boundaries, and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform (or conservative) boundaries. Earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the Earth as part of subduction. Plate tectonics might be thought of as the process by which the Earth is resurfaced. As the result of seafloor spreading, new crust and lithosphere are created by the flow of magma from the mantle to the near surface, through fissures, where it cools and solidifies. Through subduction, oceanic crust and lithosphere are returned to the convecting mantle. Volcanoes result primarily from the melting of subducted crust material. Crust material that is forced into the asthenosphere melts, and some portion of the melted material becomes buoyant enough to rise to the surface, giving birth to volcanoes. 
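The boundary taxonomy described above can be made concrete with a minimal sketch. The function name, the threshold, and the sample numbers below are illustrative assumptions, not values taken from any geological dataset or library:

```python
# Toy classification of plate boundaries from the component of relative plate
# motion perpendicular to the boundary. Positive values mean the plates are
# separating, negative values mean they are converging, near-zero means they
# slide past each other. The 1 mm/yr cutoff is an arbitrary illustrative choice.

def classify_boundary(normal_motion_mm_per_yr: float) -> str:
    if normal_motion_mm_per_yr > 1.0:
        return "divergent"    # new crust created, e.g. at mid-ocean ridges
    if normal_motion_mm_per_yr < -1.0:
        return "convergent"   # crust returned to the mantle via subduction
    return "transform"        # crust neither created nor destroyed

# Hypothetical example values:
print(classify_boundary(60.0))   # divergent
print(classify_boundary(-50.0))  # convergent
print(classify_boundary(0.2))    # transform
```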
Atmospheric science Atmospheric science initially developed in the late 19th century as a means to forecast the weather through meteorology, the study of weather. Atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to acid rain. Climatology studies the climate and climate change. The troposphere, stratosphere, mesosphere, thermosphere, and exosphere are the five layers that make up Earth's atmosphere. About 75% of the mass of the atmosphere is located within the troposphere, the lowest layer. In all, the atmosphere is made up of about 78.0% nitrogen, 20.9% oxygen, and 0.92% argon, and small amounts of other gases including CO2 and water vapor. Water vapor and CO2 cause the Earth's atmosphere to capture and hold the Sun's energy through the greenhouse effect. This makes Earth's surface warm enough for liquid water and life. In addition to trapping heat, the atmosphere also protects living organisms by shielding the Earth's surface from cosmic rays. The magnetic field, created by the internal motions of the core, produces the magnetosphere, which protects Earth's atmosphere from the solar wind. As the Earth is 4.5 billion years old, it would have lost its atmosphere by now if there were no protective magnetosphere. Hydrology Hydrology is the study of the hydrosphere and the movement of water on Earth. It emphasizes the study of how humans use and interact with freshwater supplies. Study of water's movement is closely related to geomorphology and other branches of Earth science. Applied hydrology involves engineering to maintain aquatic environments and distribute water supplies. Subdisciplines of hydrology include oceanography, hydrogeology, ecohydrology, and glaciology. Oceanography is the study of oceans. Hydrogeology is the study of groundwater. It includes the mapping of groundwater supplies and the analysis of groundwater contaminants. Applied hydrogeology seeks to prevent contamination of groundwater and mineral springs and make it available as drinking water. The earliest exploitation of groundwater resources dates back to 3000 BC, and hydrogeology as a science was developed by hydrologists beginning in the 17th century. Ecohydrology is the study of ecological systems in the hydrosphere. It can be divided into the physical study of aquatic ecosystems and the biological study of aquatic organisms. Ecohydrology includes the effects that organisms and aquatic ecosystems have on one another as well as how these ecosystems are affected by humans. Glaciology is the study of the cryosphere, including glaciers and coverage of the Earth by ice and snow. Concerns of glaciology include access to glacial freshwater, mitigation of glacial hazards, obtaining resources that exist beneath frozen land, and addressing the effects of climate change on the cryosphere. Ecology Ecology is the study of the biosphere. This includes the study of nature, of how living things interact with the Earth and one another, and of the consequences of those interactions. It considers how living things use resources such as oxygen, water, and nutrients from the Earth to sustain themselves. It also considers how humans and other living creatures cause changes to nature. Physical geography Physical geography is the study of Earth's systems and how they interact with one another as part of a single self-contained system. 
It incorporates astronomy, mathematical geography, meteorology, climatology, geology, geomorphology, biology, biogeography, pedology, and soils geography. Physical geography is distinct from human geography, which studies the human populations on Earth, though it does include human effects on the environment. Methodology Methodologies vary depending on the nature of the subjects being studied. Studies typically fall into one of three categories: observational, experimental, or theoretical. Earth scientists often conduct sophisticated computer analysis or visit locations of particular interest to study Earth phenomena (e.g. Antarctica or hot spot island chains). A foundational idea in Earth science is the notion of uniformitarianism, which states that "ancient geologic features are interpreted by understanding active processes that are readily observed." In other words, any geologic processes at work in the present have operated in the same ways throughout geologic time. This enables those who study Earth history to apply knowledge of how the Earth's processes operate in the present to gain insight into how the planet has evolved and changed throughout its long history. Earth's spheres In Earth science, it is common to conceptualize the Earth's surface as consisting of several distinct layers, often referred to as spheres: the lithosphere, the hydrosphere, the atmosphere, and the biosphere, corresponding respectively to rocks, water, air, and life. This concept of spheres is a useful tool for understanding the Earth's surface and its various processes. Also included by some are the cryosphere (corresponding to ice) as a distinct portion of the hydrosphere and the pedosphere (corresponding to soil) as an active and intermixed sphere. The following fields of science are generally categorized within the Earth sciences: Geology describes the rocky parts of the Earth's crust (or lithosphere) and its historic development. Major subdisciplines are mineralogy and petrology, geomorphology, paleontology, stratigraphy, structural geology, engineering geology, and sedimentology. Physical geography focuses on geography as an Earth science. Physical geography is the study of Earth's seasons, climate, atmosphere, soil, streams, landforms, and oceans. Physical geography can be divided into several branches or related fields, as follows: geomorphology, biogeography, environmental geography, palaeogeography, climatology, meteorology, coastal geography, hydrology, ecology, glaciology. Geophysics and geodesy investigate the shape of the Earth, its reaction to forces, and its magnetic and gravity fields. Geophysicists explore the Earth's core and mantle as well as the tectonic and seismic activity of the lithosphere. Geophysics is commonly used to supplement the work of geologists in developing a comprehensive understanding of crustal geology, particularly in mineral and petroleum exploration. Seismologists use geophysics to understand plate tectonic movement, as well as to predict seismic activity. Geochemistry is defined as the study of the processes that control the abundance, composition, and distribution of chemical compounds and isotopes in geologic environments. Geochemists use the tools and principles of chemistry to study the composition, structure, processes, and other physical aspects of the Earth. Major subdisciplines are aqueous geochemistry, cosmochemistry, isotope geochemistry, and biogeochemistry. Soil science covers the outermost layer of the Earth's crust that is subject to soil formation processes (or pedosphere). 
Major subdivisions in this field of study include edaphology and pedology. Ecology covers the interactions between organisms and their environment. This field of study differentiates the study of Earth from the study of other planets in the Solar System, Earth being its only planet teeming with life. Hydrology, oceanography and limnology are studies which focus on the movement, distribution, and quality of the water and involves all the components of the hydrologic cycle on the Earth and its atmosphere (or hydrosphere). "Sub-disciplines of hydrology include hydrometeorology, surface water hydrology, hydrogeology, watershed science, forest hydrology, and water chemistry." Glaciology covers the icy parts of the Earth (or cryosphere). Atmospheric sciences cover the gaseous parts of the Earth (or atmosphere) between the surface and the exosphere (about 1000 km). Major subdisciplines include meteorology, climatology, atmospheric chemistry, and atmospheric physics. Earth science breakup Atmosphere Atmospheric chemistry Geography Climatology Meteorology Hydrometeorology Paleoclimatology Biosphere Biogeochemistry Biogeography Ecology Landscape ecology Geoarchaeology Geomicrobiology Paleontology Palynology Micropaleontology Hydrosphere Hydrology Hydrogeology Limnology (freshwater science) Oceanography (marine science) Chemical oceanography Physical oceanography Biological oceanography (marine biology) Geological oceanography (marine geology) Paleoceanography Lithosphere (geosphere) Geology Economic geology Engineering geology Environmental geology Forensic geology Historical geology Quaternary geology Planetary geology and planetary geography Sedimentology Stratigraphy Structural geology Geography Human geography Physical geography Geochemistry Geomorphology Geophysics Geochronology Geodynamics (see also Tectonics) Geomagnetism Gravimetry (also part of Geodesy) Seismology Glaciology Hydrogeology Mineralogy Crystallography Gemology Petrology Petrophysics Speleology Volcanology Pedosphere Geography Soil science Edaphology Pedology Systems Earth system science Environmental science Geography Human geography Physical geography Gaia hypothesis Systems ecology Systems geology Others Geography Cartography Geoinformatics (GIScience) Geostatistics Geodesy and Surveying Remote Sensing Hydrography Nanogeoscience See also American Geosciences Institute Earth sciences graphics software Four traditions of geography Glossary of geology terms List of Earth scientists List of geoscience organizations List of unsolved problems in geoscience Making North America National Association of Geoscience Teachers Solid-earth science Science tourism Structure of the Earth References Sources Further reading Allaby M., 2008. Dictionary of Earth Sciences, Oxford University Press, Korvin G., 1998. Fractal Models in the Earth Sciences, Elsvier, Tarbuck E. J., Lutgens F. K., and Tasa D., 2002. Earth Science, Prentice Hall, External links Earth Science Picture of the Day, a service of Universities Space Research Association, sponsored by NASA Goddard Space Flight Center. Geoethics in Planetary and Space Exploration. Geology Buzz: Earth Science Planetary science Science-related lists
0.776337
0.997397
0.774316
Physical education
Physical education, often abbreviated to Phys. Ed. or PE, and sometimes informally referred to as gym class or simply gym, is a subject taught in schools around the world. PE is taught during primary and secondary education and encourages psychomotor, cognitive, and affective learning through physical activity and movement exploration to promote health and physical fitness. When taught correctly and in a positive manner, children and teens can gain a host of health benefits. These include reduced metabolic disease risk, improved cardiorespiratory fitness, and better mental health. In addition, PE classes can produce positive effects on students' behavior and academic performance. Research has shown that there is a positive correlation between brain development and exercise. Researchers in 2007 found a marked gain in English arts standardized test scores among students who had 56 hours of physical education in a year, compared to those who had 28 hours of physical education a year. Many physical education programs also include health education as part of the curriculum. Health education is the teaching of information on the prevention, control, and treatment of diseases. Curriculum in physical education A highly effective physical education program aims to develop physical literacy through the acquisition of skills, knowledge, physical fitness, and confidence. Physical education curricula promote healthy development of children, encourage interest in physical activity and sport, improve learning of health and physical education concepts, and accommodate differences in student populations to ensure that every child receives health benefits. These core principles are implemented through sport participation, sports skill development, knowledge of physical fitness and health, as well as mental health and social adaptation. The physical education curriculum at the secondary level includes a variety of team and individual sports, as well as leisure activities. Some examples of physical activities include basketball, soccer, volleyball, track and field, badminton, tennis, walking, cycling, and swimming. Chess is another activity that is included in the PE curriculum in some parts of the world. Chess helps students develop their cognitive thinking skills and improve focus, while also teaching about sportsmanship and fair play. Gymnastics and wrestling activities offer additional opportunities for students to improve different areas of physical fitness, including flexibility, strength, aerobic endurance, balance, and coordination. Additional activities in PE include football, netball, hockey, rounders, cricket, four square, racing, and numerous other children's games. Physical education also teaches nutrition, healthy habits, and individuality of needs. Pedagogy The main goals in teaching modern physical education are: To expose children and teens to a wide variety of exercise and healthy activities. Because P.E. can be accessible to nearly all children, it is one of the few settings that can guarantee beneficial and healthy activity for children. To teach skills to maintain a lifetime of fitness as well as health. To encourage self-reporting and monitoring of exercise. To individualize duration, intensity, and type of activity. To focus feedback on the work, rather than the result. To provide active role models. 
It is critical for physical educators to foster and strengthen developing motor skills and to provide children and teens with a basic skill set that builds their movement repertoire, which allows students to engage in various forms of games, sports, and other physical activities throughout their lifetime. These goals can be achieved in a variety of ways. National, state, and local guidelines often dictate which standards must be taught in regards to physical education. These standards determine what content is covered, the qualifications educators must meet, and the textbooks and materials which must be used. These various standards include teaching sports education, or the use of sports as exercise; fitness education, relating to overall health and fitness; and movement education, which deals with movement in a non-sport context. These approaches and curricula are based on pioneers in PE, namely, Francois Delsarte, Liselott Diem, and Rudolf von Laban, who, in the 1800s focused on using a child's ability to use their body for self-expression. This, in combination with approaches in the 1960s, (which featured the use of the body, spatial awareness, effort, and relationships) gave birth to the modern teaching of physical education. Recent research has also explored the role of physical education for moral development in support of social inclusion and social justice agendas, where it is under-researched, especially in the context of disability, and the social inclusion of disabled people. Technology use in physical education Many physical education classes utilize technology to assist their pupils in effective exercise. One of the most affordable and popular tools is a simple video recorder. With this, students record themselves, and, upon playback, can see mistakes they are making in activities like throwing or swinging. Studies show that students find this more effective than having someone try to explain what they are doing wrong, and then trying to correct it. Educators may also use technology such as pedometers and heart rate monitors to make step and heart rate goals for students. Implementing pedometers in physical education can improve physical activity participation, motivation and enjoyment. Other technologies that can be used in a physical education setting include video projectors and GPS systems. Gaming systems and their associated games, such as the Kinect, Wii, and Wii Fit can also be used. Projectors are used to show students proper form or how to play certain games. GPS systems can be used to get students active in an outdoor setting, and active exergames can be used by teachers to show students a good way to stay fit in and out of a classroom setting. Exergames, or digital games that require the use of physical movement to participate, can be used as a tool to encourage physical activity and health in young children. Technology integration can increase student motivation and engagement in the Physical Education setting. However, the ability of educators to effectively use technology in the classroom is reliant on a teacher's perceived competence in their ability to integrate technology into the curriculum. Beyond traditional tools, recent AI advancements are introducing new methods for personalizing physical education, especially for adolescents. AI applications like adaptive coaching are starting to show promise in enhancing student motivation and program effectiveness in physical education settings. 
By location
According to the World Health Organization (WHO), it is suggested that young children should be participating in 60 minutes of exercise per day at least 3 times per week in order to maintain a healthy body. This 60-minute recommendation can be achieved by completing different forms of physical activity, including participation in physical education programs at school. A majority of children around the world participate in physical education programs in general education settings. According to data collected from a worldwide survey, 79% of countries require legal implementation of PE in school programming. Physical education programming can vary all over the world.

Asia
Philippines
In the Philippines, P.E. is mandatory for all years in school. Some schools have integrated martial arts training into their physical education curriculum.
Singapore
A biennial compulsory fitness exam, NAPFA, is conducted in every school to assess pupils' physical fitness in Singapore. This includes a series of fitness tests. Students are graded by a system of gold, silver, bronze, or as a fail. NAPFA for pre-enlistees serves as an indicator for an additional two months in the country's compulsory national service training if they attain bronze or fail.

Europe
Ireland
In Ireland, one is expected to do two semesters' worth of 80-minute PE classes, unless the school gives the option for a student to do the Leaving Certificate Vocational Programme instead for their fifth and sixth year. This also includes showering and changing times. So, on average, classes are composed of 60–65 minutes of activity.
Poland
In Poland, pupils are expected to do at least three hours of PE a week during primary and secondary education. Universities must also organise at least 60 hours of physical education classes in undergraduate courses.
Sweden
In Sweden, the time school students spend in P.E. lessons per week varies between municipalities, but generally, years 0 to 2 have 55 minutes of PE a week; years 3 to 6 have 110 minutes a week, and years 7 to 9 have 220 minutes. In upper secondary school, all national programs have an obligatory course containing 100 points of PE, which corresponds to 90–100 hours of PE during the course (one point per hour). Schools can regulate these hours as they like during the three years of school that students attend. Most schools have students take part in this course during the first year and offer a follow-up course, which also contains 100 points/hours.
United Kingdom
In England, pupils in years 7, 8, and 9 are expected to do two hours of exercise per week. Pupils in years 10 and 11 are expected to do one hour of exercise per week. In Wales, pupils are expected to do two hours of PE a week. In Scotland, pupils are expected to have at least two hours of PE per week during primary and lower secondary education. In Northern Ireland, pupils are expected to participate in at least two hours of physical education (PE) per week during years 8 to 10. PE remains part of the curriculum for years 11 and 12, though the time allocated may vary.

North America
Canada
In British Columbia, the government has mandated in the grade one curriculum that students must participate in physical activity daily, five times a week. The educator is also responsible for planning Daily Physical Activity (DPA), which is thirty minutes of mild to moderate physical activity a day (not including curriculum physical education classes).
The curriculum also requires students in grade one to be knowledgeable about healthy living. For example, they must be able to describe the benefits of regular exercise, identify healthy choices in activities, and describe the importance of choosing healthy food. Ontario, Canada has a similar procedure in place. On October 6, 2005, the Ontario Ministry of Education (OME) implemented a DPA policy in elementary schools, for those in grades 1 through 8. The government also requires that all students in grades 1 through 8, including those with special needs, be provided with opportunities to participate in a minimum of twenty minutes of sustained, moderate to vigorous physical activity each school day during instructional time.
United States
The 2012 "Shape of the Nation Report" by the National Association for Sport and Physical Education (part of SHAPE America) and the American Heart Association found that while nearly 75% of states require physical education in elementary through high school, over half of the states permit students to substitute other activities for their required physical education credit, or otherwise fail to mandate a specific amount of instructional time. According to the report, only six states (Illinois, Hawaii, Massachusetts, Mississippi, New York, and Vermont) require physical education at every grade level. A majority of states in 2016 did not require a specific amount of instructional time, and more than half allow exemptions or substitution. These loopholes can lead to reduced effectiveness of physical education programs. Zero Hour is a before-school physical education class first implemented by Naperville Central High School. In the state of Illinois, this program is known as Learning Readiness P.E. (LRPE). The program was based on research indicating that students who are physically fit are more academically alert and experience growth in brain cells and enhanced brain development. NCHS pairs a P.E. class that incorporates cardiovascular exercise, core strength training, and cross-lateral movements with literacy and math strategies that enhance learning and improve achievement.

See also
Recreation
Exercise
Lack of physical education
Sports day
Worldwide Day of Play

References

External links

Education Sports science Education by subject
0.775703
0.998174
0.774287
Typology
Typology is the study of various traits and types, or the systematic classification of the types of something according to their common characteristics. Typology is the act of finding, counting and classifying facts with the help of eyes, other senses and logic.

Typology may refer to:
Typology (anthropology), human anatomical categorization based on morphological traits
Typology (archaeology), classification of artefacts according to their characteristics
Typology (linguistics), study and classification of languages according to their structural features
Morphological typology, a method of classifying languages
Typology (psychology), a model of personality types
Psychological typologies, classifications used by psychologists to describe the distinctions between people
Typology (statistics), a concept in statistics, research design and social sciences
Typology (theology), the Christian interpretation of some figures and events in the Old Testament as foreshadowing the New Testament
Typology (urban planning and architecture), the classification of characteristics common to buildings or urban spaces
Building typology, relating to buildings and architecture
Farm typology, farm classification by the USDA
Sociopolitical typology, four types, or levels, of a political organization

See also
The Bechers' photographic typologies
Blanchard's transsexualism typology, a controversial classification of trans women
Johnson's Typology, a classification of intimate partner violence (IPV)
Topology (disambiguation)
Type (disambiguation)
Typification, a process of creating standard (typical) social construction based on standard assumptions
Typology of Greek vase shapes, classification of Greek vases
Typography, the art and technique of arranging type to make written language legible, readable and appealing when displayed
0.782684
0.989189
0.774222
Functional psychology
Functional psychology or functionalism refers to a psychological school of thought that was a direct outgrowth of Darwinian thinking, which focuses attention on the utility and purpose of behavior that has been modified over years of human existence. Edward L. Thorndike, best known for his experiments with trial-and-error learning, came to be known as the leader of the loosely defined movement. This movement arose in the U.S. in the late 19th century in direct contrast to Edward Titchener's structuralism, which focused on the contents of consciousness rather than the motives and ideals of human behavior. Functionalism denies the principle of introspection, which tends to investigate the inner workings of human thinking rather than understanding the biological processes of the human consciousness. While functionalism eventually became its own formal school, it built on structuralism's concern for the anatomy of the mind and led to greater concern over the functions of the mind and later to the psychological approach of behaviorism.

History
Functionalism opposed the prevailing structuralism of psychology of the late 19th century. Edward Titchener, the main structuralist, gave psychology its first definition as a science of the study of mental experience, of consciousness, to be studied by trained introspection. At the start of the twentieth century, there was a discrepancy between psychologists who were interested in the analysis of the structures of the mind and those who turned their attention to studying the function of mental processes. This resulted in a battle of structuralism versus functionalism. The main goal of structuralism was to study human consciousness within the confines of actual lived experience, but this approach risked making the study of the human mind impossible; functionalism stands in stark contrast to it. Structural psychology was concerned with mental contents, while functionalism is concerned with mental operations. It is argued that structural psychology emanated from philosophy and remained closely allied to it, while functionalism has a close ally in biology. William James is considered to be the founder of functional psychology, although he would not have considered himself a functionalist, nor did he truly like the way science divided itself into schools. John Dewey, George Herbert Mead, Harvey A. Carr, and especially James Rowland Angell were the main proponents of functionalism at the University of Chicago. Another group at Columbia, including notably James McKeen Cattell, Edward L. Thorndike, and Robert S. Woodworth, were also considered functionalists and shared some of the opinions of Chicago's professors. Egon Brunswik represents a more recent, but Continental, version. The functionalists retained an emphasis on conscious experience. Behaviourists also rejected the method of introspection but criticized functionalism because it was not based on controlled experiments and its theories provided little predictive ability. B.F. Skinner was a developer of behaviourism. He did not think that considering how the mind affects behaviour was worthwhile, for he considered behaviour simply as a learned response to an external stimulus. Yet, such behaviourist concepts tend to deny the human capacity for random, unpredictable, sentient decision-making, further blocking the functionalist concept that human behaviour is an active process driven by the individual.
Perhaps a combination of both the functionalist and behaviourist perspectives provides scientists with the most empirical value, but, even so, it remains philosophically (and physiologically) difficult to integrate the two concepts without raising further questions about human behaviour. For instance, consider the interrelationship between three elements: the human environment, the human autonomic nervous system (our fight or flight muscle responses), and the human somatic nervous system (our voluntary muscle control). The behaviourist perspective explains a mixture of both types of muscle behaviour, whereas the functionalist perspective resides mostly in the somatic nervous system. It can be argued that all behavioural origins begin within the nervous system, prompting all scientists of human behaviour to possess basic physiological understandings, something very well understood by the functionalist founder William James.

The main problems with structuralism were the elements and their attributes, their modes of composition, structural characteristics, and the role of attention. Because of these problems, many psychologists began to shift their attention from mental states to mental processes. This change of thought was preceded by a change in the whole conception of what psychology is. Three ideas ushered functional psychology into modern-day psychology. First, drawing on Darwinian thinking, the mind was considered to perform diverse biological functions of its own and to be able to evolve and adapt to varying circumstances. Secondly, the physiological functioning of the organism results in the development of consciousness. Lastly, functional psychology promised to contribute to the improvement of education, mental hygiene, and the treatment of abnormal states.

Notable people
James Angell
James Angell was a leading proponent in the struggle for the emergence of functional psychology. He argued that the mental elements identified by the structuralists were temporary and only existed at the moment of sensory perception. During his American Psychological Association presidential address, Angell laid out three major ideas regarding functionalism. The first was that functional psychology is focused on mental operations and their relationship with biology, and that these mental operations are a way of dealing with the conditions of the environment. Second, mental operations contribute to the relationship between an organism's needs and the environment in which it lives; its mental functions aid in the survival of the organism in unfamiliar situations. Lastly, functionalism does not abide by the rules of dualism because it is the study of how mental functions relate to behavior.
Mary Calkins
Mary Calkins attempted to make strides in reconciling structural and functional psychology during her APA presidential address. It was a goal of Calkins's for her school of self-psychology to be a place where functionalism and structuralism could unite on common ground.
John Dewey
John Dewey, an American psychologist and philosopher, became the organizing principle behind the Chicago school of functional psychology in 1894. His first important contribution to the development of functional psychology was a paper criticizing "the reflex arc" concept in psychology.
Hermann Ebbinghaus
Hermann Ebbinghaus's study of memory was a monumental moment in psychology. He was influenced by Fechner's work on perception and by the Elements of Psychophysics.
He used himself as a subject when he set out to prove that some higher mental processes could be experimentally investigated. His experiment was hailed as an important contribution to psychology by Wundt.
William James
James was the first American psychologist and wrote the first general textbook on psychology. In this approach he reasoned that the mental act of consciousness must be an important biological function. He also noted that it was a psychologist's job to understand these functions so they can discover how the mental processes operate. This idea was an alternative approach to structuralism, which was the first paradigm in psychology (Gordon, 1995). In opposition to Titchener's idea that the mind was simple, William James argued that the mind should be a dynamic concept. James's main contribution to functionalism was his theory of the subconscious. He said there were three ways of looking at how the subconscious may be related to the conscious. First, the subconscious is identical in nature with states of consciousness. Second, it is the same as the conscious but impersonal. Lastly, he said that the subconscious is a simple brain state with no mental counterpart. According to An Illustrated History of American Psychology, James was the most influential pioneer. In 1890, he argued that psychology should be a division of biology and that adaptation should be an area of focus. His main theories that contributed to the development of functional psychology were his ideas about the role of consciousness, the effects of emotions, and the usefulness of instincts and habits.
Joseph Jastrow
In 1901, Joseph Jastrow declared that functional psychology appeared to welcome the other areas of psychology that were neglected by structuralism. In 1905, a wave of acceptance was evident, as there had been widespread acceptance of functionalism over the structural view of psychology.
Edward Titchener
Edward Titchener argued that structural psychology must precede functional psychology because mental structures need to be isolated and understood before their functions can be ascertained. Despite the enthusiasm for functional psychology, Titchener was wary of it and urged other psychologists to avoid its appeal and to continue to embrace rigorous introspective experimental psychology.
James Ward
James Ward was a pioneer of functional psychology in Britain. Once a minister, after experiencing turmoil in his spiritual life he turned to psychology, though not without an attempt at physiology; he eventually settled on philosophy. He later made attempts at establishing a psychological laboratory. Ward believed perception is not passive reception of sensation, but an active grasping of the environment. Ward's presence influenced the adoption of the functionalist view in British psychology and later served as the turning point for the development of cognitive psychology.
Wilhelm Wundt
Later in his life, Dewey neglected to mention Wilhelm Wundt, a German philosopher and psychologist, as an influence on his functional psychology. In fact, Dewey gave all credit to James. At the time it did not seem worthwhile to bring up old theories from a German philosopher who only held a temporary spotlight and whose reputation had declined considerably in America in the early twentieth century. Wundt's major contribution to functional psychology was that he made the will into a structural concept.
Though this is controversial, according to Titchener's definition of structuralism Wundt was actually more of a structuralist than a functionalist. Despite this, it is possibly one of the greatest ironies in the history of psychology that Wundt is deemed responsible for major contributions to functionalism, because he sparked several functionalist rebellions.

Contemporary descendants
Evolutionary psychology is based on the idea that knowledge concerning the function of the psychological phenomena affecting human evolution is necessary for a complete understanding of the human psyche. Even the project of studying the evolutionary functions of consciousness is now an active topic of study. Like evolutionary psychology, James's functionalism was inspired by Charles Darwin's theory of natural selection. Functionalism was the basis of development for several subtypes of psychology, including child and developmental psychology, clinical psychology, psychometrics, and industrial/vocational psychology. Functionalism eventually dropped out of popular favor and was replaced by the next dominant paradigm, behaviourism.

See also
Functionalism (philosophy of mind)

References

External links
"functionalism" – Encyclopædia Britannica Online
Mary Calkins (1906) "A Reconciliation Between Structural And Functional Psychology"
James R. Angell (1907) "The Province of Functional Psychology"
James R. Angell (1906), Psychology: An Introductory Study of the Structure and Function of Human Consciousness

Behaviorism History of psychology William James Psychological theories Consciousness
0.781353
0.990838
0.774194
School psychology
School psychology is a field that applies principles from educational psychology, developmental psychology, clinical psychology, community psychology, and behavior analysis to meet the learning and behavioral health needs of children and adolescents. It is an area of applied psychology practiced by a school psychologist. They often collaborate with educators, families, school leaders, community members, and other professionals to create safe and supportive school environments. They carry out psychological testing, psychoeducational assessment, intervention, prevention, counseling, and consultation in the ethical, legal, and administrative codes of their profession. Historical foundations School psychology dates back to the beginning of American psychology in the late 19th and early 20th centuries. The field is tied to both functional and clinical psychology. School psychology actually came out of functional psychology. School psychologists were interested in childhood behaviors, learning processes, and dysfunction with life or in the brain itself. They wanted to understand the causes of the behaviors and their effects on learning. In addition to its origins in functional psychology, school psychology is also the earliest example of clinical psychology, beginning around 1890. While both clinical and school psychologists wanted to help improve the lives of children, they approached it in different ways. School psychologists were in fact concerned with school learning and childhood behavioral problems, which largely contrasts the mental health focus of clinical psychologists. Another significant event in the foundation of school psychology as it is today was the Thayer Conference. The Thayer Conference was first held in August 1954 in West Point, New York in Hotel Thayer. The 9 day-long conference was conducted by the American Psychological Association (APA). The purpose of the conference was to develop a position on the roles, functions, and necessary training and credentialing of a school psychologist. At the conference, forty-eight participants that represented practitioners and trainers of school psychologists discussed the roles and functions of a school psychologist and the most appropriate way to train them. At the time of the Thayer Conference, school psychology was still a very young profession with only about 1,000 school psychology practitioners. One of the goals of the Thayer Conference was to define school psychologists. The agreed upon definition stated that school psychologists were psychologists who specialize in education and have specific knowledge of assessment and learning of all children. School psychologists use this knowledge to assist school personnel in enriching the lives of all children. This knowledge is also used to help identify and work with children with exceptional needs. It was discussed that a school psychologist must be able to assess and develop plans for children considered to be at risk. A school psychologist is also expected to better the lives of all children in the school; therefore, it was determined that school psychologists should be advisors in the planning and implementation of school curriculum. Participants at the conference felt that since school psychology is a specialty, individuals in the field should have a completed a two-year graduate training program or a four-year doctoral program. Participants felt that states should be encouraged to establish certification standards to ensure proper training. 
It was also decided that a practicum experience be required to help facilitate experiential knowledge within the field. The Thayer Conference is one of the most significant events in the history of school psychology because it was there that the field was initially shaped into what it is today. Before the Thayer Conference defined school psychology, practitioners used seventy-five different professional titles. By providing one title and a definition, the conference helped to get school psychologists recognized nationally. Since a consensus was reached regarding the standards of training and major functions of a school psychologist, the public can now be assured that all school psychologists are receiving adequate information and training to become a practitioner. It is essential that school psychologists meet the same qualifications and receive appropriate training nationwide. These essential standards were first addressed at the Thayer Conference. At the Thayer Conference some participants felt that in order to hold the title of a school psychologist an individual must have earned a doctoral degree. The issues of titles, labels, and degree levels are still debated among psychologists today. However, APA and NASP reached a resolution on this issue for the US in 2010. Social reform in the early 1900s The late 19th century marked the era of social reforms directed at children. It was due to these social reforms that the need for school psychologists emerged. These social reforms included compulsory schooling, juvenile courts, child labor laws as well as a growth of institutions serving children. Society was starting to "change the 'meaning of children' from an economic source of labor to a psychological source of love and affection". Historian Thomas Fagan argues that the preeminent force behind the need for school psychology was compulsory schooling laws. Prior to the compulsory schooling law, only 20% of school aged children completed elementary school and only 8% completed high school. Due to the compulsory schooling laws, there was an influx of students with mental and physical defects who were required by law to be in school. There needed to be an alternative method of teaching for these different children. Between 1910 and 1914, schools in both rural and urban areas created small special education classrooms for these children. From the emergence of special education classrooms came the need for "experts" to help assist in the process of child selection for special education. Thus, school psychology was founded. Important contributors to the founding Lightner Witmer Lightner Witmer has been acknowledged as the founder of school psychology. Witmer was a student of both Wilhelm Wundt and James Mckeen Cattell. While Wundt believed that psychology should deal with the average or typical performance, Cattell's teachings emphasized individual differences. Witmer followed Cattell's teachings and focused on learning about each individual child's needs. Witmer opened the first psychological and child guidance clinic in 1896 at the University of Pennsylvania. Witmer's goal was to prepare psychologists to help educators solve children's learning problems, specifically those with individual differences. Witmer became an advocate for these special children. He was not focused on their deficits per se, but rather helping them overcome them, by looking at the individual's positive progress rather than all they still could not achieve. 
Witmer stated that his clinic helped "to discover mental and moral defects and to treat the child in such a way that these defects may be overcome or rendered harmless through the development of other mental and moral traits". He strongly believed that active clinical interventions could help to improve the lives of the individual children. Since Witmer saw much success through his clinic, he saw the need for more experts to help these individuals. Witmer argued for special training for the experts working with exceptional children in special educational classrooms. He called for a "new profession which will be exercised more particularly in connection with educational problems, but for which the training of the psychologist will be a prerequisite". As Witmer believed in the appropriate training of these school psychologists, he also stressed the importance of appropriate and accurate testing of these special children. The IQ testing movement was sweeping through the world of education after its creation in 1905. However, the IQ test negatively influenced special education. The IQ test creators, Lewis Terman and Henry Goddard, held a nativist view of intelligence, believing that intelligence was inherited and difficult if not impossible to modify in any meaningful way through education. These notions were often used as a basis for excluding children with disabilities from the public schools. Witmer argued against using the standard pencil-and-paper IQ and Binet-type tests to select children for special education. Witmer's child selection process included observations and having children perform certain mental tasks.
Granville Stanley Hall
Another important figure in the origin of school psychology was Granville Stanley Hall. Rather than looking at the individual child as Witmer did, Hall focused more on the administrators, teachers, and parents of exceptional children. He felt that psychology could make a contribution at the administrative, system level of the application of school psychology. Hall created the child study movement, which helped to invent the concept of the "normal" child. Through his child study work, Hall helped to map out child development and focused on the nature and nurture debate surrounding an individual's deficits. The main focus of Hall's movement was still the exceptional child, despite the fact that he worked with atypical children.
Arnold Gesell
Bridging the gap between the child study movement, clinical psychology, and special education, Arnold Gesell was the first person in the United States to officially hold the title of school psychologist. He successfully combined psychology and education by evaluating children and making recommendations for special teaching. Arnold Gesell paved the way for future school psychologists.
Gertrude Hildreth
Gertrude Hildreth was a psychologist with the Lincoln School at Teachers College, Columbia, and then at Brooklyn College in New York. She authored many books, including the first book pertaining to school psychology, titled "Psychological Service for School Problems" and written in 1930. The book discussed applying the science of psychology to address the perceived problems in schools. The main focus of the book was on applied educational psychology to improve learning outcomes.
Hildreth listed 11 problems that can be solved by applying psychological techniques, including: instructional problems in the classroom, assessment of achievement, interpretation of test results, instructional groupings of students for optimal outcomes, vocational guidance, curriculum development, and investigations of exceptional pupils. Hildreth emphasized the importance of collaboration with parents and teachers. She is also known for her development of the Metropolitan Readiness Tests and for her contribution to the Metropolitan Achievement Test. In 1933 and 1939 Hildreth published a bibliography of Mental Tests and Rating Scales encompassing a 50-year time period and over 4,000 titles. She wrote approximately 200 articles and bulletins and had an international reputation for her work in education.

Controversies and debates
Assessment process
Empirical evidence has not confirmed biases in referral, assessment, or identification. Some have claimed that a better assessment process would not necessarily focus on whether a student qualifies for a special education program, but rather on the unique learning style of each student and how best to help them succeed. The National Research Council has called attention to the questionable reliability of educational decision making in special education, as there can be vast numbers of false positives and/or false negatives. Misidentification of students in special education is problematic and can contribute to long-term negative outcomes. The effects of these outcomes differ based on many factors, including race or location. During the identification process, school psychologists must consider ecological factors and environmental context such as socioeconomic status. Socioeconomic status may limit funding and materials, impact curriculum quality, increase teacher-to-student ratios, and perpetuate a negative school climate.
Technology
With the ever-growing use of technology, school psychologists are faced with several issues, both ethical and within the populations they try to serve. As it is so easy to share and communicate over technology, concerns are raised as to just how easy it is for outsiders to get access to the private information that school psychologists deal with every day. Thus, exchanging and storing information digitally may come under scrutiny if precautions, such as password-protecting documents and specifically limiting access to personal files within school systems, are not taken. Another issue is that of how students communicate using this technology. There are concerns both about how to address these virtual communications and about how appropriate it is to access them. Concerns about where the line can be drawn between where intervention methods end and invasion of privacy begins are raised by students, parents, administrators, and faculty. Addressing these behaviors becomes even more complicated when considering the current methods of treatment for problematic behaviors, and implementation of these strategies can become complex, if not impossible, within the use of technology.
Racial disproportionality in special education
Disproportionality refers to a group's under- or overrepresentation in comparison to other groups within a certain context. In the field of school psychology, disproportionality of minority students in special education is a concern. Special education disproportionality has been defined as the relationship between one's membership in a specific group and the probability of being placed in a specific disability category.
Systemic prejudice is believed by some to be one of the root causes of the mischaracterization of minority children as being disabled or problematic. "Research on disproportionality in the U.S. context has posited two overlapping types of rationales: those who believed disproportionate representation is linked to poverty and health outcomes versus those who believed in the systemwide racist practices that contributed to over-representation of minority students." The United States Congress recently received an annual report on the implementation of IDEA which stated that proportionally Native Americans (14.09%) and African Americans (12.61%) were the two most highly represented racial groups within the realm of special education. In particular, African American males have been overidentified as having emotional disturbances and intellectual disabilities. They account for 21% of the special education population with emotional disturbances and 12% with learning disabilities. American Indian and Alaska Native students are also overrepresented in special education. They are shown to be 1.53 times more likely to receive services for various learning disabilities and 2.89 more likely to obtain services targeting developmental delays than all other Non-Native American student groups combined. Overall, Hispanic students are often overidentified for special education in general; however, it is common for them to be under-identified for Autism Spectrum Disorder and speech and language impairments in comparison to White students. Minority populations often have an increased susceptibility to economic, social and cultural disadvantages that can affect academic achievement. According to the US Department of Education, "Black children were three times as likely to live in poor families as white children in 2015. 12 percent of white and Asian children lived in poor families, compared with 36 percent of black children, 30 percent of Hispanic children, 33 percent of American Indian children, and 19 percent of others." There may be other alternative explanations for behavior and academic performance as well. For example, Black children are twice as likely as Whites to experience heightened levels of lead in the blood due to prolonged lead exposure. Lead poisoning can be known to affect a child's behavior by increasing their levels of irritability, hyperactivity, and inattentiveness even in less severe cases. Cultural bias Some school psychologists realize the need to understand and accept their own cultural beliefs and values in order to understand the impact it may have when delivering services to clients and families. For example, these school psychologists ensure that students who are minorities, including African Americans, Hispanics, Asians, and Native Americans are being equally represented at the system level, in the classroom, and receiving a fair education. For staff, it is important to look at one's own culture while seeing the value in diversity. Making sure that each individual student has an equal opportunity for education can greatly increase the general quality of that education. It is also vital to learn how to adapt to diversity and integrate a comprehensive way to understand cultural knowledge. Staff members should keep the terms race, privilege, implicit bias, micro aggression, and cultural relevance in mind when thinking about social justice. 
Services
Intervention
One of the primary roles and responsibilities of school psychologists working in schools is to develop and implement programs geared towards the optimal learning and mental well-being of students. School psychologists call these programs 'interventions' when they are implemented in response to a significant issue affecting one or more students. Interventions in school psychology are typically classified as "direct" when practitioners work with students to rectify their own academic or behavioral problems and as "indirect" when they collaborate with the student's family or teachers to correct academic or behavioral problems. Popular intervention formats include individual meetings, school assemblies, parent-teacher conferences, workshops, and awareness campaigns. After significant developments in related psychological fields over the latter half of the twentieth century, school psychologists have begun to move towards intervention frameworks that center on individually tailored assessments and evidence-based interventions, rather than diagnosed disabilities. This is part of a larger movement to expand the role of school psychologists outside of special education. School psychologists, as researchers and practitioners, can make important contributions to the development and implementation of scientifically based intervention and prevention programs to address learning and behavioral needs of students. Newly designed interventions must be empirically tested through a series of randomized studies conducted by researchers in order to be proven effective for school environments. Evidence-based interventions, known within the field as EBIs, while widely circulated amongst researchers, can be difficult to implement within school environments. This is due in part to the fledgling nature of school psychology as a field, but also to the difference between research settings and clinical or classroom settings, with the latter being generally more unpredictable and vulnerable to outside influences than the former. Thus, practitioners often modify research-based interventions in order to suit the particular needs of a student or student population. Intervention and prevention research needs to address a range of questions related not only to efficacy and effectiveness, but also to feasibility given resources, acceptability, social validity, integrity, and sustainability. A specific example of an intervention that has recently become popular among school psychologists is the School-Wide Positive Behavioral Interventions and Supports (SWPBIS) intervention. The SWPBIS involves a communal effort among school staff to establish school-wide behavioral expectations, which are reinforced by reward systems in order to promote positive forms of coaching and mentorship. Authorized under the Individuals with Disabilities Education Improvement Act (IDEA), the SWPBIS system has been implemented in over 25,000 schools as of 2018. Like other evidence-based interventions, the SWPBIS has a large body of research supporting its effectiveness in promoting positive academic and interpersonal behaviors among students. School psychologists are involved in the implementation of academic, behavioral, and social/emotional interventions within a school across a continuum of supports. These systems and policies should convey clear behavior expectations and promote consistency among educators. Continuous reinforcement of positive behaviors can yield extremely positive results.
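As a purely illustrative sketch of the kind of acknowledgment data a school-wide reward system might generate, the snippet below tallies positive-behavior acknowledgments per student; the behavior categories, point value, and record format are invented for illustration and are not part of the SWPBIS framework itself.

```python
# Purely illustrative sketch of tallying acknowledgment data from a
# school-wide reward system. The behavior categories, point value, and
# record format are invented assumptions, not part of SWPBIS itself.

from collections import Counter

# Each record: (student_id, acknowledged_expectation)
acknowledgments = [
    ("S-101", "respectful"),
    ("S-101", "responsible"),
    ("S-102", "safe"),
    ("S-101", "respectful"),
    ("S-103", "responsible"),
]

points_per_acknowledgment = 1
totals = Counter(student for student, _ in acknowledgments)

# A simple weekly summary a team might review alongside other behavior data.
for student, count in totals.most_common():
    print(f"{student}: {count * points_per_acknowledgment} points this week")
```

In practice, a team would review such tallies alongside the consistently collected behavior and implementation data described in the next subsection, rather than treating the points themselves as an outcome.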
Schoolwide positive behavior supports A systematic approach that proactively promotes constructive behaviors in a school can yield positive outcomes. These programs are designed to improve and support students’ social, behavioral, and learning outcomes by promoting a positive school climate and providing targeted training to students and educators within a school. Data should be collected consistently to assess implementation effectiveness, screen and monitor student behavior, and develop or modify action plans. Check and Connect C&C is a structured mentoring intervention to promote student success and engagement at school with learning through relationship building and systematic use of data. It is structured to maximize personal contact and opportunities to build trusting relationships. It was developed in 1990 at the Institute on Community Integration University of Minnesota in collaboration with the Minneapolis Public School System. It emphasizes school completion, with academic, social, and emotional competencies. Students may be referred to the program if they exhibit signs of withdrawal in academic, emotional, or behavioral areas. The team consists of the student, check and connect coordinator, community services, school staff, monitor, and family. The essential components of this intervention are the mentor component, the check component, and the connect component. The program is implemented by a monitor, who serves multiple roles as a mentor, an advocate, and a service coordinator. These serve to build a strong relationship with the student based on mutual trust and open communication, nurtured through a long-term commitment focused on success at school and with learning. The "check" component is observed from the student levels of engagement. These are things such as attendance, suspension, credits, grades, and behavior that are “checked” for progress regularly by mentors and used to guide their efforts to increase and maintain students’ “connection” with the school. The "connect" component is timely, personalized, data-based interventions designed to provide support tailored to individual student needs, based on the student's level of engagement with school. The monitor’s goal is to make education a priority for withdrawn students. This intervention gives students a person to motivate, encourage, and inform them on how important graduating is. Academic interventions Academic interventions can be conceptualized as a set of procedures and strategies designed to improve student performance with the intent of closing the gap between how a student is currently performing and the expectations of how they should be performing. Short term and long term interventions used within a problem-solving model must be evidence-based. This means the intervention strategies must have been evaluated by research that utilized rigorous data analysis and peer review procedures to determine the effectiveness. Implementing evidence-based interventions for behavior and academic concerns requires significant training, skill development, and supervised practice. Linking assessment and intervention is critical for determining that the correct intervention has been chosen. School psychologists have been specifically trained to ensure that interventions are implemented with integrity to maximize positive outcomes for children in a school setting. 
Assessment Historically, the main role of school psychologists has been to assess and diagnose students with behavioral or learning disabilities and determine their eligibility for exceptional needs programs. Within the contemporary field, the roles and responsibilities of individual practitioners have expanded significantly beyond the service of special needs students; however, assessment remains a central service performed by school psychologists. Current trends in the field of school psychology call for practitioners to move away from IQ-based assessment practices and encourage assessments that consider students’ individual profiles and attainable, more tailored intervention practices. Individualized education programs (IEPs) are reports summarizing the student’s current performance, goals to guide the student’s progress, and proposed resources to meet any special educational needs. School psychologists are equipped to provide assessment and to determine when an assessment is warranted. School psychologist have completed in depth advanced preparation in selecting and administering tests as well as interpreting and evaluating information obtained from assessment. Advanced training allows school psychologists to be extremely familiar with the central principles of measurement employing multi-method, multi-source, and multi-setting approaches that are sensitive to contextual influences. They select and use the most appropriate assessment instruments and techniques, for the purpose for which they were designed, and for which there is supporting psychometric evidence. School psychologists are aware of the limitations of assessment and the information that is collected, interpreted, and reported. School psychologists use assessment information in a manner that minimizes the potential for misunderstanding and misuse. School psychologists play a substantial role in data-driven decision-making in each of the following areas: routine decisions, screening, progress monitoring, problem identification, school-wide decisions, problem analysis for instruction, problem analysis for intervention planning, program evaluation, accountability, eligibility, and diagnostic decisions. Systems-level services Leaders in the field of school psychology recognize the practical challenges that school psychologists face when striving for systems-level change and have highlighted a more manageable domain within a systems-level approach – the classroom. Overall, it makes sense for school psychologists to devote considerable effort to monitoring and improving school and classroom-based performance for all children and youth because it has been shown to be an effective preventive approach. Universal screening School psychologists play an important role in supporting youth mental wellness, but identifying youth who are in distress can be challenging. Some schools have implemented universal mental health screening programs to help school psychologists find and help struggling youth. For instance, schools in King County, Washington are using the Check Yourself digital screening tool designed by Seattle Children's Hospital to measure, understand, and nurture individual students’ well-being. Check Yourself collects information about lifestyle, behaviour, and social determinants of health to identify at-risk youth so that school psychologists can intervene and direct youth to the services they need. 
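As a purely hypothetical sketch of how universal screening results might be turned into a follow-up list, the snippet below flags students whose composite well-being score falls below an assumed cutoff or whose screeners are too incomplete to interpret; the field names, scoring scale, and thresholds are illustrative assumptions and do not describe the actual Check Yourself tool or its scoring.

```python
# Hypothetical sketch of flagging students for follow-up from universal
# mental health screening results. Field names, the 0-100 scale, and the
# cutoffs are illustrative assumptions, not any real instrument's rubric.

from dataclasses import dataclass

@dataclass
class ScreeningResult:
    student_id: str
    wellbeing_score: int   # assumed 0-100 composite, higher = better well-being
    missed_items: int      # number of unanswered screener items

def flag_for_follow_up(results, cutoff=60, max_missed=3):
    """Return student IDs whose results suggest a check-in with the school psychologist."""
    flagged = []
    for r in results:
        incomplete = r.missed_items > max_missed       # too many skipped items to interpret
        low_wellbeing = r.wellbeing_score < cutoff     # below the assumed risk cutoff
        if incomplete or low_wellbeing:
            flagged.append(r.student_id)
    return flagged

# Example: two of the three students below would be flagged for follow-up.
sample = [
    ScreeningResult("S-001", wellbeing_score=82, missed_items=0),
    ScreeningResult("S-002", wellbeing_score=47, missed_items=1),
    ScreeningResult("S-003", wellbeing_score=71, missed_items=5),
]
print(flag_for_follow_up(sample))   # ['S-002', 'S-003']
```

In practice, cutoffs would come from the validated instrument in use, and flagged results would be reviewed by the school psychologist rather than acted on automatically.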
Mental health screening provides school psychologists with valuable insights so that interventions are better fitted to student needs. Crisis intervention Crisis intervention is an integral part of school psychology. School administrators view school psychologists as the school's crisis intervention "experts". Crisis events can significantly affect a student's ability to learn and function effectively. Many school crisis response models suggest that a quick return to normal rituals and routines can be helpful in coping with crises. The primary goal of crisis interventions is to help crisis-exposed students return to their basic abilities of problem-solving so the student can return to their pre-crisis level of functioning. Prevention A way in which school psychologists can help students is by creating primary prevention programs. Information about prevention should also be connected to current events in the community. Social Justice The three major elements that comprise social justice include equity, fairness, and respect. The concept of social justice includes all individuals having equal access to opportunities and resources. A major component behind social justice is the idea of being culturally aware and sensitive. American Psychological Association (APA) and the National Association of School Psychologists (NASP) both have ethical principles and codes of conduct that present aspirational elements of social justice that school psychologists may abide by. Although ethical principles exist, there is federal legislation that acts accordingly to social justice. For example, the Elementary and Secondary Education Act of 1965 (ESEA) and the Individuals with Disabilities Education Improvement Act of 2004 (IDEA) address issues such as poverty and disability to promote the concept of social justice in schools. Schools are becoming increasingly diverse with growing awareness of these differences. Cultural diversity factors that can be addressed through social justice practice include race/ethnicity, gender, socioeconomic status (SES), religion, and sexual orientation. With the various elements that can impact a student's education and become a source of discrimination, there is a greater call for the practice of social justice in schools. School psychologists that consider the framework of social justice know that injustices that low SES students face can sometimes be different when compared to high SES students. Advocacy A major role of school psychologists involves advocating and speaking up for individuals as needed. Advocacy can be done at district, regional, state, or national level. School psychologists advocate for students, parents, and caregivers. Consultation and collaboration are key components of school psychology and advocacy. There may be times when school personnel may not agree with the school psychologist. Differing opinions can be problematic because a school psychologist advocates for what is in the best interest of the student. School psychologists and staff members can help facilitate awareness through courageous conversations. Multicultural competence School psychologists offer many types of services in order to be multiculturally competent. Multicultural competence extends to race, ethnicity, social class, gender, religion, sexual orientation, disability, age, and geographic region. Because the field of school psychology serves such a diverse range of students, maintaining representation for minority groups continues to be a priority. 
Despite such importance, history has seen an underrepresentation of culturally and linguistically diverse (CLD) school psychologists. which may appear alarming given that the diversity of our youth continues to increase exponentially. Thus, current professionals in the field have prioritized the acquisition of CLD school psychologists. School psychologists are trained to use their skills, knowledge, and professional practices in promoting diversity and advocating for services for all students, families, teachers, and schools. School psychologists may also work with teachers and educators to provide an integrated multicultural education classroom and curriculum that allows more students to be represented in learning. Efforts to increase multicultural perspectives among school psychologists have been on the rise to account for the increased diversity within schools. Such efforts include establishing opportunities for individuals representative of minority groups to become school psychologists and implementing a diverse array of CLD training programs within the field. Education In order to become a school psychologist, one must first learn about school psychology by successfully completing a graduate-level training program. A B.A. or B.S. is not sufficient. United States School psychology training programs are housed in university schools of education or departments of psychology. School psychology programs require courses, practica, and internships. Degree requirements Specific degree requirements vary across training programs. School psychology training programs offer masters-level (M.A., M.S., M.Ed.), specialist-level degrees (Ed.S., Psy.S., SSP, CAGS), and doctoral-level degrees (Ph.D., Psy.D. or Ed.D.) degrees. Regardless of degree title, a supervised internship is the defining feature of graduate-level training that leads to certification to practice as a school psychologist. Specialist-level training typically requires 3–4 years of graduate training including a 9-month (1200 hour) internship in a school setting. Doctoral-level training programs typically require 5–7 years of graduate training. Requirements typically include more coursework in core psychology and professional psychology, more advanced statistics coursework, involvement in research endeavors, a doctoral dissertation, and a one-year (1500+ hour) internship (which may be in a school or other settings such as clinics or hospitals). In the past, a master's degree was considered the standard for practice in schools. As of 2017, the specialist-level degree is considered the entry-level degree in school psychology. Masters-level degrees in school psychology may lead to obtaining related credentials (such as Educational Diagnostician, School Psychological Examiner, School Psychometrist) in one or two states. International In the UK, the similar practice and study of School Psychology is more often termed Educational Psychology and requires a doctorate (in Educational Psychology) which then enables individuals to register and subsequently practice as a licensed educational psychologist. Employment in the United States In the United States, job prospects in school psychology are excellent. Across all disciplines of psychology, the abundance of opportunities is considered among the best for both specialist and doctoral level practitioners. They mostly work in schools. Other settings include clinics, hospitals, correctional facilities, universities, and independent practice. 
Demographic information According to the NASP Research Committee, 87.5% of school psychologists are female, with an average age of 42.7. In 2004–05, average earnings for school practitioners ranged from $56,262 for those with a 180-day annual contract to $68,764 for school psychologists with a 220-day contract. In 2009–10, average earnings for school practitioners ranged from $64,168 for those with a 180-day annual contract to $71,320 for school psychologists with a 200-day contract. In 2019–2020, average earnings for full-time practitioners ranged from $65,397 to $81,458, with a median of $74,000. For university faculty in school psychology, the salary estimate is $77,801. Surveys performed by NASP in 2020–2021 show that 85.7% of school psychologists are white, with people of minority races and those who chose not to identify their race making up the remaining 14.3%. Of this remaining percentage, the next largest populations represented in school psychology are African-American and multiracial people, at 3.9% and 2.7%, respectively. Shortages in the Field There is a shortage of trained school psychologists in the field: while jobs are available across the country, there are not enough people to fill them. Because of this low supply and high demand, the role of the school psychologist is very demanding. School psychologists may feel under pressure to supply adequate mental health and intervention services to the students in their care. Burnout is a risk of being a school psychologist. This risk has increased in recent years due to a shortage of school psychologists nationwide, increases in school enrollment in small cities, and the mental health effects that the COVID-19 pandemic had on children of all ages. In January 2022, NASP published updated statistics on the shortages of school psychologists nationwide and per state. NASP recommends a ratio of 500 students per school psychologist; however, the national ratio for the 2020–2021 school year was 1,162 students per school psychologist. The states with the largest ratios of students to school psychologists (more than 2,000 to 1) are Texas, New Mexico, Arkansas, Oklahoma, Louisiana, Mississippi, Alabama, and Georgia. Alabama has the largest disparity of any state: 369,280 students per school psychologist. As of the publication of this data, Connecticut is the only state meeting the NASP standard of no more than 500 students per school psychologist. Bilingual School Psychologists Approximately 21% of school-age children ages 5–7 speak a language other than English. For this reason, there is an enormous demand for bilingual school psychologists in the United States. The National Association of School Psychologists (NASP) does not currently offer bilingual certification in the field. However, there are a number of professional training opportunities that bilingual LSSPs/school psychologists can attend in order to prepare to administer assessments appropriately.
In addition, there are seven NASP-approved school psychology programs that offer a bilingual specialization: Brooklyn College-City University of New York (specialist level); Gallaudet University (specialist level); Queens College-City University of New York (specialist level); San Diego State University (specialist level); Texas State University (specialist level); University of Colorado Denver (doctoral level); and Fordham University-Lincoln Center (doctoral level). New York and Illinois are the only two states that offer a bilingual credential for school psychologists. School psychology internationally The role of a school psychologist in the United States and Canada may differ considerably from the role of a school psychologist elsewhere. Especially in the United States, the role of school psychologist has been closely linked to public law governing the education of students with disabilities. In most other nations, this is not the case. Despite this difference, many of the basic functions of a school psychologist, such as consultation, intervention, and assessment, are shared by most school psychologists worldwide. It is difficult to estimate the number of school psychologists worldwide. Recent surveys indicate there may be around 76,000 to 87,000 school psychologists practicing in 48 countries, including 32,300 in the United States and 3,500 in Canada. Following the United States, Turkey has the next largest estimated number of school psychologists (11,327), followed by Spain (3,600), and then both Canada and Japan (3,500 each). Credentialing in the United States In most states (excluding Texas and Hawaii), a state education agency credentials school psychologists for practice in the schools. The Nationally Certified School Psychologist (NCSP) credential is offered by the National Association of School Psychologists (NASP). The NCSP is an example of a non-practice credential, as holding it does not make one eligible to provide services without first meeting the state requirements to work as a school psychologist. State psychology boards (which may go by different names in each state) also offer credentials for school psychologists in some states. For example, Texas offers the LSSP credential, which permits licensees to deliver school psychological services within public and private schools. Subspecializations in the United States Pediatric School Psychology Pediatric school psychology is a sub-specialty that includes competencies of school, educational, and health psychology. Pediatric school psychologists bring knowledge of human learning and development, as well as understanding of school systems, chronic health conditions, and bio-psycho-social influences. Pediatric school psychologists work across multiple settings and share many roles with traditional school psychologists; both focus on prevention and intervention efforts related to students' behavior, education, and physical health. Additionally, pediatric school psychologists can facilitate collaboration between school systems, healthcare providers, and family systems to address the academic, social-emotional, behavioral, and overall health of students. Pediatric school psychologists also contribute to developing and maintaining Tier 1 prevention activities and to facilitating health-promotion programs structured to address the population they serve. The field of pediatric school psychology is relatively new and requires doctoral-level education.
Traditional school psychology training programs are beginning to endorse pediatric school psychology subspecializations. For example, the University of South Florida requires students in the School Psychology Ph.D. program to have an area of emphasis, one option being pediatric school psychology. Lehigh University in Pennsylvania has a similar option to complete an endorsement in Pediatric School Psychology as part of its doctoral training, which requires 8 credit hours beyond the regular doctoral requirements. Students at Lehigh University enroll in the Pediatric School Psychology endorsement as part of the competitive Leadership Training project supported by the U.S. Department of Education. While the majority of traditional school psychology programs do not offer a subspecialization in pediatric school psychology, this does not necessarily limit students. When a formal subspecialization option is not available, one may request and select field experiences in typical pediatric school psychology settings. Typical settings include hospitals, school-based health clinics, and medical centers. The Pediatric School Psychology Interest Group is an interest group within the National Association of School Psychologists where members can discuss topics related to the subspecialization with experts in the field. The group also holds an annual meeting at the Annual Convention. Behavioral School Psychology Behavioral school psychology uses the same principles as behavioral psychology, which dates back to 1913, when it was established by John B. Watson. Several other thinkers influenced the field of behavioral psychology, which grew substantially from Ivan Pavlov's work on classical conditioning and B.F. Skinner's work on operant conditioning. Operant conditioning uses rewards and punishments to increase and decrease behaviors. School psychologists use these ideas to increase positive behaviors and decrease problem behaviors that interfere with a student's learning. Although behavioral psychology has its critics, many of whom argue that other factors also shape behavior, one of its strengths is that behaviors are observable and therefore easier to measure, track, and evaluate for change (a brief illustration follows at the end of this section). Behavioral psychology in schools has expanded considerably over the last 15 years, for two main reasons. Inclusive schooling has become increasingly prevalent, starting with the "Free and Appropriate Public Education" (FAPE) requirement (1997) that students with developmental disabilities, cognitive impairments, and behavior disorders be given the opportunity to attend their chosen public schools alongside peers who do not have disorders or disabilities. Before the implementation of the regular education initiative (REI), a response to identified problems in the system for educating low-performing children, it was common for this population to be enrolled in private programs or in educational settings outside the public schools, supported by outside resources. Because of the increased enrollment of students with these needs, public schools have recognized the benefits of collaborating with behavior consultants to improve academic instruction and reduce discipline problems. This produces many referrals to professionals who provide consultation as psychologists or behavior specialists affiliated with a private practice, clinic, or human services agency.
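Because behavioral approaches rest on counting observable behavior, a very simple data-collection sketch can illustrate the kind of measurement described above. The following Python snippet is an illustration only: the session counts, phase labels, and 20% decision rule are hypothetical assumptions, not drawn from any cited study or standard protocol. It compares the mean frequency of a target behavior during a baseline phase with the mean during an intervention phase, which is the basic logic behind judging whether an operant intervention is changing behavior.

# Minimal sketch of single-case behavior-frequency tracking (hypothetical data).
from statistics import mean

# Hypothetical counts of a target behavior (e.g., call-outs per class period).
baseline = [9, 11, 10, 12, 9]       # phase A: before the intervention
intervention = [7, 6, 5, 4, 4]      # phase B: reinforcement plan in place

def phase_summary(label, counts):
    # Report and return the mean frequency for one phase.
    avg = mean(counts)
    print(f"{label}: sessions={len(counts)}, mean frequency={avg:.1f}")
    return avg

baseline_mean = phase_summary("Baseline (A)", baseline)
intervention_mean = phase_summary("Intervention (B)", intervention)

change = baseline_mean - intervention_mean
print(f"Mean reduction: {change:.1f} behaviors per session")

# Crude decision rule for this sketch only: call the change meaningful if the
# mean frequency dropped by at least 20% relative to baseline.
if change >= 0.2 * baseline_mean:
    print("Behavior decreased substantially after the intervention.")
else:
    print("No clear decrease; the plan may need revision.")

In practice, practitioners would chart such data over many sessions and use visual analysis or single-case statistics rather than a fixed percentage cutoff; the snippet is meant only to show why countable behavior makes change straightforward to detect.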
School Based Mental Health Mental health in children is an important factor that influences success in school and life. If mental health problems in children go unresolved, negative outcomes such as academic and behavior problems can arise. Mental health is not only the absence of mental illness, but also includes social, emotional, and behavioral health, along with the ability to cope with life's challenges. As the need for mental health services for children and youth grows, schools are becoming an ideal place to provide this form of service. The benefits of addressing mental health problems in the life of a child are significant in a number of ways: quality of life for the child increases, physical health can improve, and the child has a better chance of attaining a quality education and healthy social skills. It is also more cost-effective to address mental health problems in young people than later in life. Professional organizations International International School Psychology Association National American Psychological Association's Division 16: School Psychology Australian Psychologists and Counsellors in Schools Educational Psychology Specialty Group of the German Psychological Society Indian School Psychology Association National Association of School Psychologists (America) New Zealand Psychological Society's Institute of Educational and Developmental Psychology Journals Canadian Journal of School Psychology International Journal of School & Educational Psychology Journal of Psychoeducational Assessment Psychology in the Schools School Psychology Forum: Research in Practice School Psychology International School Psychology Quarterly School Psychology Review See also Applied psychology School pedagogy Educational Psychology School Counselor School Psychological Examiner School Social Worker Special Education Washington County Closed-Circuit Educational Television Project Outline of psychology References Works cited Hosp, John L.; Reschly, Daniel J. (2002). "Regional Differences in School Psychology Practice". School Psychology Review, 31(1), 11–29. Merrell, K. W., Ervin, R. A., & Peacock, G. G. (2012). School psychology for the 21st century: Foundations and practices. Guilford Press. Simon, D. J. (2016). School-centered interventions: Evidence-based strategies for social, emotional, and academic success. American Psychological Association. American Psychological Association. (n.d.). Individuals with Disabilities Education Act (IDEA). American Psychological Association. Retrieved February 27, 2022, from https://www.apa.org/advocacy/education/idea University of Wisconsin Population Health Institute. (n.d.). School-wide positive behavioral interventions and supports (tier 1). County Health Rankings & Roadmaps. Retrieved April 16, 2022, from https://www.countyhealthrankings.org/take-action-to-improve-health/what-works-for-health/strategies/school-wide-positive-behavioral-interventions-and-supports-tier-1 Bradshaw, C. P., Waasdorp, T. E., & Leaf, P. J. (2012). Effects of school-wide positive behavioral interventions and supports on child behavior problems. Pediatrics, 130(5). https://doi.org/10.1542/peds.2012-0243 Further reading American Psychological Association Commission for the Recognition of Specialties and Proficiencies in Professional Psychology (n.d.). Archival description of school psychology. Retrieved on December 29, 2007 from American Psychological Association Curtis, M.J.; Castillo, J.M.; Cohen, R.M. (2009).
"Best practices in systems-level change". Communique Online, 38(2). Archived from the original on 2010-01-23. Retrieved 2012-04-09. Fagan, T. K. (1996). Witmer's contributions to school psychological services. American Psychologist, 51. Fagan, T. K., & Wise, P. S. (2007). School psychology: Past, present, and future (3rd ed.). Bethesda, MD: National Association of School Psychologists. Harrison, P. L., & Thomas, A. (Eds.). (2014). Best practices in school psychology. Bethesda, MD: National Association of School Psychologists. National Association of School Psychologists (July 15, 2000). Standards for Training and Field Placement Programs in School Psychology / Standards for the Credentialing of School Psychologists. National Association of School Psychologists. Ortiz, Samuel O. (2008). Best Practices in School Psychology V: Best Practices in Nondiscriminatory Assessment Practices. National Association of School Psychologists. External links National Association of School Psychologists American Psychological Association Division 16-School Psychology Student Affiliates of School Psychology The Standards for Educational and Psychological Testing International School Psychology Association Global School Psychology Network School Psychology India Education and training occupations Educational psychology
0.784453
0.986863
0.774148
Hardiness (psychology)
Psychological hardiness, alternatively referred to as personality hardiness or cognitive hardiness in the literature, is a personality style first introduced by Suzanne C. Kobasa in 1979. Kobasa described a pattern of personality characteristics that distinguished managers and executives who remained healthy under life stress, as compared to those who developed health problems. In the following years, the concept of hardiness was further elaborated in a book and a series of research reports by Salvatore Maddi, Kobasa, and their graduate students at the University of Chicago. Definitions In early research, hardiness was usually defined as a personality structure that functions as a resistance resource in encounters with stressful conditions. The personality structure is composed of three related general dispositions: commitment, a tendency to involve oneself in activities in life and to have a genuine interest in and curiosity about the surrounding world (activities, things, other people); control, a tendency to believe and act as if one can influence the events taking place around oneself through one's own efforts; and challenge, the belief that change, rather than stability, is the normal mode of life and constitutes motivating opportunities for personal growth rather than threats to security. Maddi characterized hardiness as a combination of three attitudes (commitment, control, and challenge) that provide the courage and motivation needed to turn stressful circumstances from potential calamities into opportunities for personal growth. P.T. Bartone considers hardiness to be something more global than mere attitudes. He conceives of hardiness as a broad personality style or generalized mode of functioning that includes cognitive, emotional, and behavioural qualities. This style of functioning affects how one views oneself and interacts with the surrounding world. Historical roots Early conceptualizations of hardiness are evident in Maddi's work, most notably in his descriptions of the ideal identity and the premorbid personality. In 1967, Maddi argued that chronic states of meaninglessness and alienation from existence were becoming typical features of modern life. Like other existential psychologists before him, Maddi believed that feelings of apathy and boredom, and an inability to believe in the interest-value of the things one is engaged in, feelings that characterised modern living, were caused by upheavals in culture and society, increased industrialization and technological power, and more rigidly differentiated social structures in which people's identities were defined in terms of their social roles. Maddi went on to outline two distinct personality types, based on how people identify or see themselves. The premorbid personality sees him- or herself in fairly simple terms, as nothing more than "a player of social roles and an embodiment of biological needs." This type of identity thus stresses qualities that are the least unique for him or her when compared to other species (biological needs) or other people (social roles). According to Maddi, people with a premorbid identity can continue with their life for a long time and ostensibly feel adequate and reasonably successful. However, this personality type is also prone to being precipitated into a state of chronic existential neurosis under conditions of stress. This existential neurosis is characterized by the belief that one's life is meaningless, by feelings of apathy and boredom, and by a sense that one's activities are not chosen.
In stark contrast to the premorbid personality, one finds the ideal identity. Though still a player of social roles and an expression of the biological sides of man, this personality type also has a deeper and richer understanding of his or her unique psychological side – mental processes like symbolization, imagination, and judgement. Whereas the premorbid personality accepts social roles as given, feels powerless to influence actions, and merely tries to play the roles as well as possible, the ideal identity, through expression of his or her psychological side, does not feel powerless in the face of social pressure. This person can perceive alternatives to mere role-playing, can switch roles more easily, and can even redefine existing roles. As a consequence of this deeper psychological understanding of the self, the ideal identity is actively engaged in and interested in life, is willing to act to influence events, and is interested in new experiences and in learning new things. Resiliency mechanisms Hardiness is often considered an important factor in psychological resilience or an individual-level pathway leading to resilient outcomes. A body of research suggests that hardiness has beneficial effects and buffers the detrimental effect of stress on health and performance. Although early studies relied almost exclusively on male business executives, over the years this buffer effect has been demonstrated in a large variety of occupational groups as well as non-professionals, including military groups, teachers and university staff, firefighters, and students. However, not every investigation has demonstrated such moderating or buffering effects, and there is debate about whether the effects of hardiness are interactive with stress or primarily independent of stress levels. Hardiness appears to confer resiliency by means of a combination of cognitive and behavioural mechanisms, and biophysical processes. In simplified terms: as stressful circumstances mount, so does the physical and mental strain on the person, and if this strain is sufficiently intense and prolonged, breakdowns in health and performance are to be expected. The personality style of hardiness moderates this process by encouraging effective mental and behavioural coping, building and utilizing social support, and engaging in effective self-care and health practices. Cognitive appraisals According to Kobasa, people high in hardiness tend to put stressful circumstances into perspective and interpret them as less threatening. As a consequence of these optimistic appraisals, the impact of the stressful events is reduced and they are less likely to negatively affect the health of the person. Research on self-reported stressors, real-life stressful experiences, and laboratory-induced stress supports this claim. For example, two studies used military cadets undergoing stressful training as participants and found that cadets who scored high on hardiness appraised the combat training in less threatening terms, and at the same time viewed themselves as more capable of coping with the training. Behavioral coping The coping style most commonly associated with hardiness is transformational coping, which transforms stressful events into less stressful ones. At the cognitive level, this involves setting the event into a broader perspective in which it does not seem so terrible.
At the level of action, people high in hardiness are believed to react to stressful events by increasing their interaction with them, trying to turn them into an advantage and opportunity for growth. In the process, they achieve greater understanding. In support of this notion, two studies demonstrated that the effects of hardiness on symptoms of illness were partly mediated through the positive relation of hardiness to presumed beneficial coping styles and the negative relation to presumed harmful styles of coping. Social resources and health-promoting behaviour Transformational coping can also include health-promoting behaviour and recruiting or making adequate use of social resources. One study showed that in relation to work-environment stress, support from the boss but not support from home promoted health among executives high in hardiness. For those executives ranked low in hardiness, support from the boss did not promote health, and family support worsened their health status. These results suggested that hardy people know what type of support to use in a given situation. Another study found support for an indirect effect of hardiness through social support on post-traumatic stress symptomatology in American veterans of the Vietnam War. Although several studies found hardiness to be related to making good use of social resources, some studies failed to support this, finding instead that the two concepts made independent contributions to positive health outcomes. Several investigations found hardiness and physical exercise to be uncorrelated. However, one study examined a broad array of health-protective behaviours, including exercise, and found that hardiness worked indirectly through these behaviours to influence health. Another study found that hardiness was negatively correlated with self-reported alcohol use and with drug use obtained through both urine screens and self-report. Biophysiology Hardiness appears to be related to differences in physiological arousal. It seems to reduce the degree to which stressful events produce arousal in the sympathetic nervous system. Study participants who score high on hardiness exhibit lower cardiovascular reactivity in response to stress. Another study examined the functional efficacy of immune cells in participants who scored low and high on hardiness. It considered in vitro proliferation of lymphocytes in response to invading microorganisms (antigens and mitogens), a process believed to mimic the series of events that occurs in vivo following stimulation by invading microorganisms. Results showed that participants who scored high on hardiness had significantly higher mean antigen- and mitogen-induced proliferative responses. Other studies associated hardiness with variations in cholesterol and hormone levels. Bartone and associates examined hardiness levels against a full lipid profile including high-density lipoprotein, usually considered a beneficial type of cholesterol. This study showed that participants high in hardiness were more than two times as likely to have high levels of high-density lipoprotein compared with participants low in hardiness. Although hardiness might be related to lower levels of the "stress hormone" cortisol, one of the few studies that investigated this found higher hardiness associated with higher levels of cortisol. Measurement Several instruments measure hardiness. The most frequently used are the Personal Views Survey, the Dispositional Resilience Scale, and the Cognitive Hardiness Scale.
Other scales based on hardiness theory have been designed to measure hardiness in specific contexts and in special populations, for example in parental grief and among the chronically ill. Hardiness, like many personality variables in the field of psychology, is measured as a continuous dimension. People vary in their levels of hardiness along a continuum from low to high, with a small percentage scoring at the extreme low and high ends. Given large enough samples, the distribution of scores on hardiness measures approximates a normal, Gaussian distribution. Similarities with other constructs Hardiness has some similarities with other personality constructs. Chief among these are locus of control, sense of coherence (SOC), self-efficacy, and dispositional optimism. Despite their very different theoretical approaches – hardiness arose from existential psychology and philosophy, SOC has its roots in sociology, whereas locus of control, self-efficacy, and dispositional optimism are all based on a learning/social cognitive perspective – some striking similarities are present. People with a strong SOC perceive life as comprehensible, cognitively meaningful, and manageable. Persons with a strong SOC are more likely to adapt to demanding situations and can cope successfully with strenuous life events. Both SOC and the commitment dimension of hardiness emphasize an ability to feel deeply involved in the aspects of our lives. Furthermore, both SOC and the control dimension of hardiness emphasize personal resources in facing the demands of stressful situations. The most notable difference between SOC and hardiness is the challenge facet, with the former highlighting stability whereas the latter emphasizes change. Hardiness and the remaining constructs of locus of control, dispositional optimism, and self-efficacy all emphasize goal-directed behaviour in some form. For instance, in accordance with the theory of dispositional optimism, what we expect the outcomes of our behaviour to be helps determine whether we respond to adversity by continuing our efforts or by disengaging. Holding a positive outlook leads to continuous effort to obtain a goal, whereas negative expectations of the future lead to giving up. Similarly, in Bandura's writings on self-efficacy, our beliefs about our ability to do what is required to manage prospective situations strongly influence the situations we seek out and the goals we set. See also References External links Hardiness-Resilience.com Hardiness Institute 1979 introductions Positive psychology
0.793552
0.975544
0.774146
Narcissistic personality disorder
Narcissistic personality disorder (NPD) is a personality disorder characterized by a life-long pattern of exaggerated feelings of self-importance, an excessive need for admiration, and a diminished ability to empathize with other people's feelings. Narcissistic personality disorder is one of the sub-types of the broader category known as personality disorders. It is often comorbid with other mental disorders and associated with significant functional impairment and psychosocial disability. Personality disorders are a class of mental disorders characterized by enduring and inflexible maladaptive patterns of behavior, cognition, and inner experience, exhibited across many contexts and deviating from those accepted by the individual's culture. These patterns develop by early adulthood and are associated with significant distress or impairment. Criteria for diagnosing personality disorders are listed in the sixth chapter of the International Classification of Diseases (ICD) and in the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders (DSM). There is no standard treatment for NPD. Its high comorbidity with other mental disorders influences treatment choice and outcomes. Psychotherapeutic treatments generally fall into two categories: psychoanalytic/psychodynamic and cognitive behavioral therapy, with growing support for integration of both in therapy. However, there is an almost complete lack of studies determining the effectiveness of treatments. A person's subjective experience of the disorder, as well as their acceptance of and level of engagement with treatment, is highly dependent on their motivation to change. Signs and symptoms Despite outward signs of grandiosity, many people with NPD struggle with symptoms of intense shame, worthlessness, low self-compassion, and self-loathing. Their view of themselves is extremely malleable and dependent on others' opinions of them. They are also hypersensitive to criticism and possess an intense need for admiration. People with NPD gain self-worth and meaning through this admiration. Individuals with NPD are often strongly motivated by goals, status, self-improvement, and perfectionism, and may neglect relationships or avoid situations due to fears of incompetence, failure, worthlessness, inferiority, shame, humiliation, and losing control. People with NPD will try to gain social status and approval in an attempt to avoid and combat these feelings, often by exaggerating their skills, accomplishments, and their degree of intimacy with people they consider high-status. Alongside this, they may have difficulty accepting help, may harbor vengeful fantasies and a sense of entitlement, and may feign humility. They are more likely to pursue plastic surgery out of a desire to gain attention and to be seen as beautiful. A sense of personal superiority may lead them to monopolize conversations, look down on others, or become impatient and disdainful when other people talk about themselves. Drastic shifts in levels of self-esteem can result in a significantly decreased ability to regulate emotions. Patients with NPD have an impaired ability to recognize facial expressions or mimic emotions, as well as a lower capacity for emotional empathy and emotional intelligence. However, they do not display a compromised capacity for cognitive empathy or an impaired theory of mind, which are the abilities to understand others' feelings and attribute mental states to oneself or others, respectively.
They may also have difficulty relating to others' experiences and being emotionally vulnerable. People with NPD are less likely to engage in prosocial behavior. They may still act in seemingly selfless ways to improve others' perceptions of them, to advance their social status, or when explicitly told to. Despite these characteristics, they are more likely to overestimate their capacity for empathy. It is common for people with NPD to have difficult relationships. Narcissists may disrespect others' boundaries or idealize and devalue them. They commonly keep people emotionally distant and rely on defenses such as projection, denial, and splitting. Narcissists respond to rejection with anger and hostility, and can degrade, insult, or blame others who disagree with them. They generally lack self-awareness, and will have a difficult time understanding their own traits and narcissistic tendencies, either due to a belief that NPD characteristics do not apply to them, or due to a refusal to accept or endorse negative characteristics in an attempt to maintain a positive self-image. Narcissists can have difficulty seeing multiple perspectives on issues and might engage in black-and-white thinking. Despite this, people with NPD will often feel as if they are skilled at accurately assessing others' feelings. Diagnosis The DSM-5 indicates that: "Many highly successful individuals display personality traits that might be considered narcissistic. Only when these traits are inflexible, maladaptive, and persisting, and cause significant functional impairment or subjective distress, do they constitute narcissistic personality disorder." Given the high-functioning sociability associated with narcissism, some people with NPD might not view such a diagnosis as a functional impairment to their lives. Although overconfidence tends to make people with NPD very ambitious, such a mindset does not necessarily lead to professional high achievement and success, because they refuse to take risks in order to avoid failure or the appearance of failure. Moreover, the psychological inability to tolerate disagreement, contradiction, and criticism makes it difficult for persons with NPD to work cooperatively or to maintain long-term relationships. DSM-5 The Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) describes NPD as possessing at least five of the following nine criteria: (1) a grandiose sense of self-importance (exaggerates achievements and talents, expects to be recognized as superior without commensurate achievements); (2) preoccupation with fantasies of unlimited success, power, brilliance, beauty, or ideal love; (3) believing that they are "special" and unique and can only be understood by, or should associate with, other special or high-status people (or institutions); (4) requiring excessive admiration; (5) a sense of entitlement (unreasonable expectations of especially favorable treatment or automatic compliance with their expectations); (6) being interpersonally exploitative (taking advantage of others to achieve their own ends); (7) lacking empathy (unwilling to recognize or identify with the feelings and needs of others); (8) often being envious of others or believing that others are envious of them; and (9) showing arrogant, haughty behaviors or attitudes. Within the DSM-5, NPD is a cluster B personality disorder. Individuals with cluster B personality disorders often appear dramatic, emotional, or erratic.
A diagnosis of NPD, like other personality disorders, is made by a qualified healthcare professional in a clinical interview. In narcissistic personality disorder, a fragile sense of self is transformed into a view of oneself as exceptional. Narcissistic personality disorder usually develops either in youth or in early adulthood. True symptoms of NPD are pervasive, are apparent in varied social situations, and are rigidly consistent over time. Severe symptoms of NPD can significantly impair the person's mental capabilities to develop meaningful human relationships, such as friendship, kinship, and marriage. Generally, the symptoms of NPD also impair the person's psychological abilities to function socially, either at work or at school, or within important societal settings. The DSM-5 indicates that, in order to qualify as symptomatic of NPD, the person's manifested personality traits must substantially differ from social norms. ICD-11 and ICD-10 In the World Health Organization's (WHO) International Statistical Classification of Diseases and Related Health Problems, 11th Edition (ICD-11), all personality disorders are diagnosed under a single title called "personality disorder". The criteria for diagnosis are mainly concerned with assessing dysfunction, distress, and maladaptive behavior. Once a diagnosis has been made, the clinician can then draw upon five trait domains to describe the particular causes of dysfunction, as these have major implications for potential treatments. NPD, as it is currently conceptualised, would correspond more or less entirely to the ICD-11 trait domain of Dissociality, which includes self-centredness (grandiosity, attention-seeking, entitlement, and egocentricity) and lack of empathy (callousness, ruthlessness, manipulativeness, interpersonal exploitativeness, and hostility). In the previous edition, the ICD-10, narcissistic personality disorder (NPD) was listed under the category of "other specific personality disorders", meaning that cases otherwise described as NPD in the DSM-5 only needed to meet a general set of diagnostic criteria. Differential diagnosis The occurrence of narcissistic personality disorder presents a high rate of comorbidity with other mental disorders. People with a fragile variant of NPD (see Subtypes) are prone to bouts of psychological depression, often to the degree that meets the clinical criteria for a co-occurring depressive disorder. NPD is associated with the occurrence of bipolar disorder and substance use disorders, especially cocaine use disorder. NPD may also be comorbid with, or need to be differentiated from, other mental disorders, including histrionic personality disorder, borderline personality disorder, antisocial personality disorder, and paranoid personality disorder. NPD should also be differentiated from mania and hypomania, as these conditions can also present with grandiosity but involve different levels of functional impairment. It is common for children and adolescents to display personality traits that resemble NPD, but such occurrences are usually transient, and register below the clinical criteria for a formal diagnosis of NPD.
Subtypes Although the DSM-5 diagnostic criteria for NPD have been viewed as homogeneous, there are a variety of subtypes used for classification of NPD. There is poor consensus on how many subtypes exist, but there is broad acceptance that there are at least two: grandiose or overt narcissism, and vulnerable or covert narcissism. However, none of the subtypes of NPD are recognized in the DSM-5 or in the ICD-11. Empirically verified subtypes Some research has indicated the existence of three subtypes of NPD, which can be distinguished by symptom criteria, comorbidity, and other clinical criteria. These are as follows: Grandiose/Overt: this group exhibits grandiosity, entitlement, interpersonal exploitativeness and manipulation, pursuit of power and control, lack of empathy and remorse, and marked irritability and hostility. This group was noted for high levels of comorbid antisocial and paranoid personality disorders, substance abuse, externalizing, unemployment, and greater likelihood of violence. Of note, Russ et al. observed that this group "do not appear to suffer from underlying feelings of inadequacy or to be prone to negative affect states other than anger", an observation corroborated by recent research which found this variant to show strong inverse associations with depressive, anxious-avoidant, and dependent/victimised features. Vulnerable/Covert: this variant is defined by feelings of shame, envy, resentment, and inferiority (which is occasionally "masked" by arrogance), entitlement, a belief that one is misunderstood or unappreciated, and excessive reactivity to slights or criticism. This variant is associated with elevated levels of neuroticism, psychological distress, depression, and anxiety. In fact, recent research suggests that vulnerable narcissism is mostly the product of dysfunctional levels of neuroticism. Vulnerable narcissism is sometimes comorbid with diagnoses of avoidant, borderline, and dependent personality disorders. High-functioning/Exhibitionistic: a third subtype for classifying people with NPD, initially theorized by psychiatrist Glen Gabbard, is termed high-functioning or exhibitionistic. This variant has been described as "high functioning narcissists [who] were grandiose, competitive, attention-seeking, and sexually provocative; they tended to show adaptive functioning and utilize their narcissistic traits to succeed." This group has been found to have relatively few psychological issues and high rates of obsessive-compulsive personality disorder, with excessive perfectionism posited as a potential cause for their impairment. Others Oblivious/Hypervigilant: Glen Gabbard described two subtypes of NPD in 1989, later regarded as equivalent to the grandiose and vulnerable subtypes. The first was the "oblivious" subtype of narcissist, equivalent to the grandiose subtype. This group was described as being grandiose, arrogant, and thick-skinned, while also exhibiting personality traits of helplessness and emotional emptiness, low self-esteem, and shame. These traits were observed in people with NPD to be expressed as socially avoidant behavior in situations where self-presentation is difficult or impossible, leading to withdrawal from situations where social approval is not given. The second subtype Gabbard described was termed "hypervigilant", equivalent to the vulnerable subtype. People with this subtype of NPD were described as having easily hurt feelings, an oversensitive temperament, and persistent feelings of shame.
Communal narcissism: A fourth type is the communal narcissist. Communal narcissism is a form of narcissism that occurs in group settings. It is characterized by an inflated sense of importance and a need for admiration from others. In relation to the grandiose narcissist, a communal narcissist is arrogant and self-motivating, and shares the sense of entitlement and grandiosity. However, the communal narcissist seeks power and admiration in the communal realm. They see themselves as altruistic, saintly, caring, helpful, and warm. Individuals who display communal narcissism often seek out positions of power and influence within their groups. Millon's subtypes In Disorders of Personality: DSM-IV and Beyond (1996), Theodore Millon suggested five subtypes of NPD, although he did not identify specific treatments per subtype. Masterson's subtypes (exhibitionist and closet) In 1993, James F. Masterson proposed two subtypes for pathological narcissism, exhibitionist and closet. Both fail to adequately develop an age- and phase-appropriate self because of defects in the quality of psychological nurturing provided, usually by the mother. A person with exhibitionist narcissism is similar to NPD described in the DSM-IV and differs from closet narcissism in several ways. A person with closet narcissism is more likely to be described as having a deflated, inadequate self-perception and greater awareness of emptiness within. A person with exhibitionist narcissism would be described as having an inflated, grandiose self-perception with little or no conscious awareness of feelings of emptiness. Such a person would assume that their condition was normal and that others were just like them. A person with closet narcissism is described as seeking constant approval from others and appears similar to those with borderline personality disorder in the need to please others. A person with exhibitionist narcissism seeks perfect admiration all the time from others. Malignant narcissism Malignant narcissism, a term coined in Erich Fromm's 1964 book The Heart of Man: Its Genius for Good and Evil, is a syndrome consisting of a combination of NPD, antisocial personality disorder, and paranoid traits. A person with malignant narcissism was described as deriving higher levels of psychological gratification from accomplishments over time, which is suspected to worsen the disorder. Because a person with malignant narcissism becomes more involved in this psychological gratification, it was suspected to be a risk factor for developing antisocial, paranoid, and schizoid personality disorders. The term malignant is added to the term narcissist to indicate that individuals with this disorder have a severe form of narcissistic disorder that is also characterized by features of paranoia, psychopathy (anti-social behaviors), aggression, and sadism. Historical demarcation of grandiose and vulnerable types Over the years, many clinicians and theorists have described two variants of NPD akin to the grandiose and vulnerable expressions of trait narcissism. Assessment and screening Narcissistic Personality Inventory Risk factors for NPD and its grandiose/overt and vulnerable/covert subtypes are measured using the Narcissistic Personality Inventory (NPI), an assessment tool originally developed in 1979, which has undergone multiple iterations, with new versions in 1984, 2006, and 2014. It captures principally grandiose narcissism, but also seems to capture elements of vulnerability.
A popular three-factor model has it that grandiose narcissism is assessed via the Leadership/Authority and Grandiose/Exhibitionism facets, while a combination of grandiose and vulnerable traits is indexed by the Entitlement/Exploitativeness facet. Pathological Narcissism Inventory The Pathological Narcissism Inventory (PNI) was designed to measure fluctuations in grandiose and vulnerable narcissistic states, similar to what is ostensibly observed by some clinicians (though empirical demonstration of this phenomenon is lacking). Although it has both grandiosity and vulnerability scales, empirically both seem primarily to capture vulnerable narcissism. The PNI scales show significant associations with parasuicidal behavior, suicide attempts, homicidal ideation, and several aspects of psychotherapy utilization. Five-Factor Narcissism Inventory In 2013, the Five-Factor Narcissism Inventory (FFNI) was introduced as a comprehensive assessment of grandiose and vulnerable expressions of trait narcissism. The scale measures 11 traits of grandiose narcissism and 4 traits of vulnerable narcissism, both of which correlate with clinical ratings of NPD (with the grandiose features of arrogance, grandiose fantasies, manipulativeness, entitlement, and exploitativeness showing stronger relations). Later analysis revealed that the FFNI actually measures three factors: Agentic Extraversion: an exaggerated sense of self-importance, grandiose fantasies, striving for greatness and acclaim, social dominance and authoritativeness, and exhibitionistic, charming interpersonal conduct. Self-Centred Antagonism: disdain for others, psychological entitlement, interpersonally exploitative and manipulative behaviour, lack of empathy, anger in response to criticism or rebuke, suspiciousness, and thrill-seeking. Narcissistic Neuroticism: shame-proneness, oversensitivity and negative emotionality in response to criticism and rebuke, and an excessive need for admiration to maintain self-esteem. Grandiose narcissism is a combination of agency and antagonism, and vulnerability is a combination of antagonism and neuroticism (a brief illustrative sketch of these composites follows at the end of this section). The three factors show differential associations with clinically important variables. Agentic traits are associated with high self-esteem, a positive view of others and the future, autonomous and authentic living, commitment to personal growth, a sense of purpose in life, and life satisfaction. Neurotic traits show precisely the opposite correlation with all of these variables, while antagonistic traits show more complex associations; they are associated with a negative view of others (but not necessarily of the self), a sense of alienation from their 'true self', disinterest in personal growth, negative relationships with others, and all forms of aggression. Millon Clinical Multiaxial Inventory The Millon Clinical Multiaxial Inventory (MCMI) is another diagnostic test developed by Theodore Millon. The MCMI includes a scale for narcissism. The NPI and MCMI have been found to be well correlated. Whereas the MCMI measures narcissistic personality disorder (NPD), the NPI measures narcissism as it occurs in the general population; the MCMI is a screening tool. In other words, the NPI measures "normal" narcissism; i.e., most people who score very high on the NPI do not have NPD. Indeed, the NPI does not capture any sort of narcissism taxon as would be expected if it measured NPD. A 2020 study found that females scored significantly higher on vulnerable narcissism than males, but no gender differences were found for grandiose narcissism.
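To make the three-factor FFNI structure described above concrete, the short Python sketch below combines hypothetical facet scores into the three factor scores and then into grandiose and vulnerable composites, following the stated pattern (grandiose as agentic extraversion plus antagonism, vulnerable as antagonism plus neuroticism). The facet groupings are paraphrased from the descriptions above, but the specific values and the simple averaging scheme are assumptions for illustration only; they are not the published FFNI scoring procedure.

# Illustrative-only combination of FFNI-style facet scores (hypothetical values
# and a simplified averaging scheme; not the published scoring rules).
from statistics import mean

# Hypothetical facet scores on a 1-5 scale, grouped by the three factors.
facets = {
    "agentic_extraversion": {"authoritativeness": 4.2, "grandiose_fantasies": 3.8,
                             "exhibitionism": 4.0},
    "antagonism": {"entitlement": 3.5, "exploitativeness": 3.1,
                   "lack_of_empathy": 3.3},
    "neuroticism": {"shame_proneness": 2.2, "need_for_admiration": 2.8,
                    "reactive_anger": 2.5},
}

# Factor score = mean of its facet scores (a simplifying assumption).
factors = {name: mean(scores.values()) for name, scores in facets.items()}

# Composites follow the text: grandiose = agency + antagonism,
# vulnerable = antagonism + neuroticism (here, simple averages of the factors).
grandiose = mean([factors["agentic_extraversion"], factors["antagonism"]])
vulnerable = mean([factors["antagonism"], factors["neuroticism"]])

for name, score in factors.items():
    print(f"{name}: {score:.2f}")
print(f"grandiose composite: {grandiose:.2f}")
print(f"vulnerable composite: {vulnerable:.2f}")

With these made-up numbers the grandiose composite comes out higher than the vulnerable one, which is simply a consequence of the chosen inputs; the point is only to show how the same antagonism factor enters both composites.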
Causes The cause of narcissistic personality disorder (NPD) is unclear, although there is evidence for a strong biological or genetic underpinning. Research has found that NPD has a strong heritable component. It is unclear if or how much a person's upbringing contributes to the development of NPD, although many speculative theories have been proposed. Evidence to support social factors in the development of NPD is limited. Some studies have found NPD correlates with permissive and overindulgent parenting in childhood, while others have found correlations with harsh discipline, neglect, or abuse. Findings have been inconsistent, and scientists do not know if these correlations are causal, as these studies do not control for genetic confounding. This problem of genetic confounding was discussed by psychologist Svenn Torgersen in a 2009 review. Twin studies allow scientists to assess the influence of genes and environment, in particular, how much of the variation in a trait is attributed to the "shared environment" (influences shared by twins, such as parents and upbringing) or the "unshared environment" (measurement error, noise, differing illnesses between twins, randomness in brain growth, and social or non-social experiences that only one twin experienced); a worked illustration of this variance decomposition appears at the end of this section. According to a 2018 review, twin studies of NPD have found little or no influence from the shared environment, and a major contribution of genes and the non-shared environment. According to neurogeneticist Kevin Mitchell, a lack of influence from the shared environment indicates that the non-shared environmental influence may be largely non-social, perhaps reflecting innate processes such as randomness in brain growth. Neuroscientists have also studied the brains of people with NPD using structural imaging technology. A 2021 review concluded that the most consistent finding among NPD patients is lowered gray matter volume in the medial prefrontal cortex. Studies of the occurrence of narcissistic personality disorder identified structural abnormalities in the brains of people with NPD, specifically, a lesser volume of gray matter in the left anterior insular cortex. The results of a 2015 study associated the condition of NPD with a reduced volume of gray matter in the prefrontal cortex. The regions of the brain identified and studied – the insular cortex and the prefrontal cortex – are associated with the human emotions of empathy and compassion, and with the mental functions of cognition and emotional regulation. The neurological findings of the studies suggest that NPD may be related to a compromised capacity for emotional empathy and emotional regulation. Evolutionary models of NPD have also been proposed. According to psychologist Marco Del Giudice, cluster B traits, including NPD, predict increased mating success and fertility. NPD could potentially be an adaptive evolutionary phenomenon, though a risky one that can sometimes result in social rejection and failure to reproduce. Another proposal is that NPD may result from an excess of traits which are only adaptive in moderate amounts (leadership success increases with moderate degrees of narcissism, but declines at the high end of narcissism). Research on NPD is limited because patients are hard to recruit for study. The cause of narcissistic personality disorder requires further research.
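The variance decomposition used in these twin studies can be made concrete with Falconer's classic formulas under the standard ACE model, in which additive genetic variance (a^2), shared-environment variance (c^2), and non-shared-environment variance (e^2) are estimated from the correlations between monozygotic twins (r_MZ) and dizygotic twins (r_DZ). The numbers below are hypothetical round values chosen only to illustrate the pattern described above for NPD (little or no shared-environment influence); they are not estimates from any particular study.

\[
a^2 = 2\,(r_{MZ} - r_{DZ}), \qquad c^2 = 2\,r_{DZ} - r_{MZ}, \qquad e^2 = 1 - r_{MZ}
\]

For example, with hypothetical twin correlations \(r_{MZ} = 0.50\) and \(r_{DZ} = 0.25\):

\[
a^2 = 2\,(0.50 - 0.25) = 0.50, \qquad c^2 = 2(0.25) - 0.50 = 0, \qquad e^2 = 1 - 0.50 = 0.50
\]

That is, about half of the variance would be attributed to genes, none to the shared environment, and half to the non-shared environment (which also absorbs measurement error).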
Management Treatment for NPD is primarily psychotherapeutic; there is no clear evidence that psychopharmacological treatment is effective for NPD, although it can prove useful for treating comorbid disorders. Psychotherapeutic treatment falls into two general categories: psychoanalytic/psychodynamic and cognitive behavioral. Psychoanalytic therapies include schema therapy, transference-focused psychotherapy, mentalization-based treatment, and metacognitive psychotherapy. Cognitive behavioral therapies include cognitive behavioral therapy and dialectical behavior therapy. Formats also include group therapy and couples therapy. The specific choice of treatment varies based on individual presentations. Management of narcissistic personality disorder has not been well studied; however, many treatments tailored to NPD exist. Therapy is complicated by the lack of treatment-seeking behavior in people with NPD, despite mental distress. Additionally, people with narcissistic personality disorder have decreased life satisfaction and lower quality of life, irrespective of diagnosis. People with NPD often present with comorbid mental disorders, complicating diagnosis and treatment. NPD is rarely the primary reason for which people seek mental health treatment. When people with NPD enter treatment (psychologic or psychiatric), they often express seeking relief from a comorbid mental disorder, including major depressive disorder, a substance use disorder (drug addiction), or bipolar disorder. Prognosis No treatment guidelines exist for NPD, and no empirical studies have been conducted on specific NPD groups to determine the efficacy of psychotherapies and pharmacology. Though there is no known single cure for NPD, some approaches can lessen its symptoms. Healthcare providers commonly prescribe medications for comorbid symptoms: antidepressants to treat depression, mood stabilizers to reduce mood swings, and antipsychotic drugs to reduce the prevalence of psychotic episodes. The presence of NPD in patients undergoing psychotherapy for the treatment of other mental disorders is associated with slower treatment progress and higher dropout rates. In psychotherapy, the goals often include examining traits and behaviors that negatively affect life, identifying ways these behaviors cause distress to the person and others, exploring early experiences that contributed to narcissistic defenses, developing new coping mechanisms to replace those defenses, helping the person see themselves and others in more realistic and nuanced ways rather than as wholly good or wholly bad, identifying and practicing more helpful patterns of behavior, developing interpersonal skills, and learning to consider the needs and feelings of others. Epidemiology Overall prevalence of NPD is estimated to range from 0.8% to 6.2%. In 2008, under the DSM-IV, lifetime prevalence of NPD was estimated to be 6.2%, with 7.7% for men and 4.8% for women, with a 2015 study confirming the gender difference. In clinical settings, prevalence estimates range from 1% to 15%. The occurrence of narcissistic personality disorder presents a high rate of comorbidity with other mental disorders. History The term "narcissism" comes from a first-century book (written in the year 8 AD) by the Roman poet Ovid. Metamorphoses Book III contains a myth about two main characters, Narcissus and Echo. Narcissus is a handsome young man who spurns the advances of many potential lovers.
When Narcissus rejects the nymph Echo, named this way because she was cursed to only echo the sounds that others made, the gods punish him by making him fall in love with his own reflection in a pool of water. When Narcissus discovers that the object of his love cannot love him back, he slowly pines away and dies. The concept of excessive selfishness has been recognized throughout history. In ancient Greece, the concept was understood as hubris. It is only since the late 1800s that narcissism has been defined in psychological terms: Havelock Ellis (1898) was the first psychologist to use the term when he linked the myth to the condition in one of his patients. Sigmund Freud (1905–1953) used the term "narcissistic libido" in his Three Essays on the Theory of Sexuality. Ernest Jones (1913/1951) was the first to construe extreme narcissism as a character flaw. Robert Waelder (1925) published the first case study of narcissism. His patient was a successful scientist with an attitude of superiority, an obsession with fostering self-respect, and a lack of normal feelings of guilt. The patient was aloof and independent from others, was unable to empathize with others' situations, and was sexually selfish. Waelder's patient was also overly logical and analytical and valued abstract intellectual thought (thinking for thinking's sake) over the practical application of scientific knowledge. Narcissistic personality was first described by the psychoanalyst Robert Waelder in 1925. The term narcissistic personality disorder (NPD) was coined by Heinz Kohut in 1968. Waelder's initial study has been influential in the way narcissism and the clinical disorder narcissistic personality disorder are defined today. Freudianism and psychoanalysis Much early history of narcissism and NPD originates from psychoanalysis. Regarding the adult neurotic's sense of omnipotence, Sigmund Freud said that "this belief is a frank acknowledgement of a relic of the old megalomania of infancy"; and concluded that: "we can detect an element of megalomania in most other forms of paranoic disorder. We are justified in assuming that this megalomania is essentially of an infantile nature, and that, as development proceeds, it is sacrificed to social considerations." Narcissistic injury and narcissistic scar are terms used by Freud in the 1920s. Narcissistic wound and narcissistic blow are other, almost interchangeable, terms. When a narcissistic person's ego is wounded by a real or perceived criticism, their displays of anger can be disproportionate to the nature of the criticism suffered; but typically, the actions and responses of the NPD person are deliberate and calculated. Despite occasional flare-ups of personal insecurity, the inflated self-concept of the NPD person is primarily stable. In The Psychology of Gambling (1957), Edmund Bergler considered megalomania to be a normal occurrence in the psychology of a child, a condition later reactivated in adult life if the individual takes up gambling. In The Psychoanalytic Theory of Neurosis (1946), Otto Fenichel said that people who, in their later lives, respond with denial to their own narcissistic injury usually undergo a similar regression to the megalomania of childhood. Narcissistic supply Narcissistic supply was a concept introduced by Otto Fenichel in 1938 to describe a type of admiration, interpersonal support, or sustenance drawn by an individual from his or her environment and essential to their self-esteem.
The term is typically used in a negative sense, describing a pathological or excessive need for attention or admiration that does not take into account the feelings, opinions, or preferences of other people. Narcissistic rage Narcissistic rage is a concept introduced by Heinz Kohut in 1972. Narcissistic rage was theorised as a reaction to a perceived threat to a narcissist's self-esteem or self-worth. Narcissistic rage occurs on a continuum from aloofness, to expressions of mild irritation or annoyance, to serious outbursts, including violent attacks. Narcissistic rage reactions are not necessarily limited to narcissistic personality disorder. They may also be seen in catatonic states, paranoid delusions, and depressive episodes. It was later suggested that narcissistic people have two layers of rage; the first layer of rage being constant anger directed towards someone else, with the second layer being self-deprecating. Object relations In the second half of the 20th century, in contrast to Freud's perspective of megalomania as an obstacle to psychoanalysis, in the US and UK Kleinian psychologists used the object relations theory to re-evaluate megalomania as a defence mechanism. This Kleinian therapeutic approach built upon Heinz Kohut's view of narcissistic megalomania as an aspect of normal mental development, by contrast with Otto Kernberg's consideration of such grandiosity as a pathological distortion of normal psychological development. To the extent that a person is pathologically narcissistic, the person with NPD can be a self-absorbed individual who passes blame by psychological projection and is intolerant of contradictory views and opinions; is apathetic towards the emotional, mental, and psychological needs of other people; and is indifferent to the negative effects of their behaviors, whilst insisting that people should see them as an ideal person. The merging of the terms "inflated self-concept" and "actual self" is evident in later research on the grandiosity component of narcissistic personality disorder, along with incorporating the defence mechanisms of idealization and devaluation and of denial. Comparison to other personality disorders NPD shares properties with borderline personality disorder, including social stigma, unclear causes and prevalence rates. In a 2020 study, it was argued that NPD is following a similar historical trend to borderline personality disorder: "In the past three decades, enormous progress has been made to elucidate the psychopathology, longitudinal course, and effective treatment for BPD. NPD, which remains as similarly stigmatized and poorly understood as BPD once was, now carries the potential for a new wave of investigation and treatment development." However, NPD also shares some commonality with the now discredited "multiple personality disorder" (MPD) personality constellation in popular culture and clinical lore. MPD received a high level of mainstream media attention in the 1980s, followed by a nearly complete removal from public discourse within the following two decades; this was in part due to the thorough debunking of many of its propositions and the evident societal harm created by its entry into the legal defence realm. Similar to MPD, NPD has been the subject of high levels of preoccupation in social and popular media forums, without a firm empirical basis despite over a century of description in clinical lore.
The NPD label may be misused colloquially and clinically to disparage a target for the purpose of buttressing one's own self-esteem, or for other motives that are detrimental to the person receiving the label. Finally, the rise in popular interest in NPD is not accompanied by hypothesized increases in narcissism among recent generations, despite widespread assumptions to the contrary. Controversy The extent of controversy about narcissism was on display when the committee on personality disorders for the 5th Edition (2013) of the Diagnostic and Statistical Manual of Mental Disorders recommended the removal of Narcissistic Personality from the manual. A contentious three-year debate unfolded in the clinical community, with one of the sharpest critics being John Gunderson, who led the DSM personality disorders committee for the 4th edition of the manual. The American Psychiatric Association's (APA) formulation, description, and definition of narcissistic personality disorder, as published in the Diagnostic and Statistical Manual of Mental Disorders, Fourth Ed., Text Revision (DSM-IV-TR, 2000), was criticised by clinicians as inadequately describing the range and complexity of the personality disorder that is NPD, and as excessively focused upon "the narcissistic individual's external, symptomatic, or social interpersonal patterns – at the expense of ... internal complexity and individual suffering", which reduced the clinical utility of the NPD definition in the DSM-IV-TR. In revising the diagnostic criteria for personality disorders, the "Personality and Personality Disorders" work group proposed the elimination of narcissistic personality disorder (NPD) as a distinct entry in the DSM-5, thus replacing a categorical approach to NPD with a dimensional approach based upon the severity of dysfunctional personality-trait domains. Clinicians critical of the DSM-5 revision characterized the new diagnostic system as an "unwieldy conglomeration of disparate models that cannot happily coexist", which is of limited usefulness in clinical practice. Despite the reintroduction of the NPD entry, the APA's re-formulation, re-description, and re-definition of NPD towards a dimensional view based upon personality traits remains in the DSM-5's list of personality disorders. A 2011 study concluded that narcissism should be conceived as personality dimensions pertinent to the full range of personality disorders, rather than as a distinct diagnostic category. In a 2012 literature review about NPD, the researchers concluded that narcissistic personality disorder "shows nosological inconsistency, and that its consideration as a trait domain needed further research would be strongly beneficial to the field." In a 2018 latent structure analysis, results suggested that the DSM-5 NPD criteria fail to distinguish some aspects of narcissism relevant to diagnosis of NPD and subclinical narcissism. In popular culture Suzanne Stone-Maretto, Nicole Kidman's character in the film To Die For (1995), wants to appear on television at all costs, even if this involves murdering her husband. A psychiatric assessment of her character noted that she "was seen as a prototypical narcissistic person by the raters: on average, she satisfied 8 of 9 criteria for narcissistic personality disorder... had she been evaluated for personality disorders, she would receive a diagnosis of narcissistic personality disorder". Jay Gatsby, the eponymous character of F.
Scott Fitzgerald's novel The Great Gatsby (1925), "an archetype of self-made American men seeking to join high society", has been described by English professor Giles Mitchell as a "pathological narcissist" for whom the "ego-ideal" has become "inflated and destructive" and whose "grandiose lies, poor sense of reality, sense of entitlement, and exploitive treatment of others" conspire toward his own demise. See also Messiah complex Superiority complex
Biological psychiatry
Biological psychiatry or biopsychiatry is an approach to psychiatry that aims to understand mental disorder in terms of the biological function of the nervous system. It is interdisciplinary in its approach and draws on sciences such as neuroscience, psychopharmacology, biochemistry, genetics, epigenetics and physiology to investigate the biological bases of behavior and psychopathology. Biopsychiatry is the branch of medicine which deals with the study of the biological function of the nervous system in mental disorders. There is some overlap with neurology, which focuses on disorders where gross or visible pathology of the nervous system is apparent, such as epilepsy, cerebral palsy, encephalitis, neuritis, Parkinson's disease and multiple sclerosis. There is also some overlap with neuropsychiatry, which typically deals with behavioral disturbances in the context of apparent brain disorder. In contrast biological psychiatry describes the basic principles and then delves deeper into various disorders. It is structured to follow the organisation of the DSM-IV, psychiatry's primary diagnostic and classification guide. The contributions of this field explore functional neuroanatomy, imaging, and neuropsychology and pharmacotherapeutic possibilities for depression, anxiety and mood disorders, substance abuse and eating disorders, schizophrenia and psychotic disorders, and cognitive and personality disorders. Biological psychiatry and other approaches to mental illness are not mutually exclusive, but may simply attempt to deal with the phenomena at different levels of explanation. Because of the focus on the biological function of the nervous system, however, biological psychiatry has been particularly important in developing and prescribing drug-based treatments for mental disorders. In practice, however, psychiatrists may advocate both medication and psychological therapies when treating mental illness. The therapy is more likely to be conducted by clinical psychologists, psychotherapists, occupational therapists or other mental health workers who are more specialized and trained in non-drug approaches. The history of the field extends back to the ancient Greek physician Hippocrates, but the phrase biological psychiatry was first used in peer-reviewed scientific literature in 1953. The phrase is more commonly used in the United States than in some other countries such as the UK. However the term "biological psychiatry" is sometimes used as a phrase of disparagement in controversial dispute. Scope and detailed definition Biological psychiatry is a branch of psychiatry where the focus is chiefly on researching and understanding the biological basis of major mental disorders such as unipolar and bipolar affective (mood) disorders, schizophrenia and organic mental disorders such as Alzheimer's disease. This knowledge has been gained using imaging techniques, psychopharmacology, neuroimmunochemistry and so on. Discovering the detailed interplay between neurotransmitters and the understanding of the neurotransmitter fingerprint of psychiatric drugs such as clozapine has been a helpful result of the research. On a research level, it includes all possible biological bases of behavior — biochemical, genetic, physiological, neurological and anatomical. On a clinical level, it includes various therapies, such as drugs, diet, avoidance of environmental contaminants, exercise, and alleviation of the adverse effects of life stress, all of which can cause measurable biochemical changes. 
The biological psychiatrist views all of these as possible etiologies of or remedies for mental health disorders. However, the biological psychiatrist typically does not discount talk therapies. Medical psychiatric training generally includes psychotherapy and biological approaches. Accordingly, psychiatrists are usually comfortable with a dual approach: "psychotherapeutic methods […] are as indispensable as psychopharmacotherapy in a modern psychiatric clinic". Basis for biological psychiatry Sigmund Freud developed psychoanalysis, a form of psychotherapy, in the early 1900s, and through the 1950s this technique was prominent in treating mental health disorders. However, in the late 1950s, the first modern antipsychotic and antidepressant drugs were developed: chlorpromazine (also known as Thorazine), the first widely used antipsychotic, was synthesized in 1950, and iproniazid, one of the first antidepressants, came into clinical use in the late 1950s. In 1959 imipramine, the first tricyclic antidepressant, was developed. Based significantly on clinical observations of the above drug results, in 1965 the seminal paper "The catecholamine hypothesis of affective disorders" was published. It articulated the "chemical imbalance" hypothesis of mental health disorders, especially depression. It formed much of the conceptual basis for the modern era in biological psychiatry. The hypothesis has been extensively revised since its advent in 1965. More recent research points to deeper underlying biological mechanisms as the possible basis for several mental health disorders. Modern brain imaging techniques allow noninvasive examination of neural function in patients with mental health disorders; however, this is currently experimental. With some disorders, it appears that proper imaging equipment can reliably detect certain neurobiological problems associated with a specific disorder. If further studies corroborate these experimental results, future diagnosis of certain mental health disorders could be expedited using such methods. Another source of data indicating a significant biological aspect of some mental health disorders is twin studies. Identical twins have the same nuclear DNA, so carefully constructed studies may indicate the relative importance of environmental and genetic factors in the development of a particular mental health disorder. The results from this research and the associated hypotheses form the basis for biological psychiatry and the treatment approaches in a clinical setting. Scope of clinical biological psychiatric treatment Since various biological factors can affect mood and behavior, psychiatrists often evaluate these before initiating further treatment. For example, dysfunction of the thyroid gland may mimic a major depressive episode, or hypoglycemia (low blood sugar) may mimic psychosis. While pharmacological treatments are used to treat many mental disorders, other non-drug biological treatments are used as well, ranging from changes in diet and exercise to transcranial magnetic stimulation and electroconvulsive therapy. Types of non-biological treatments such as cognitive therapy, behavioral therapy, and psychodynamic psychotherapy are often used in conjunction with biological therapies. Biopsychosocial models of mental illness are widely in use, and psychological and social factors play a large role in mental disorders, even those with an organic basis such as schizophrenia.
Diagnostic process Correct diagnosis is important for mental health disorders; otherwise, the condition could worsen, resulting in a negative impact on both the patient and the healthcare system. Another problem with misdiagnosis is that a treatment for one condition might exacerbate other conditions. In other cases, apparent mental health disorders could be a side effect of a serious biological problem such as concussion, brain tumor, or hormonal abnormality, which could require medical or surgical intervention.
Examples of biologic treatments
Seasonal affective disorder: light therapy, SSRIs (like fluoxetine and paroxetine)
Clinical depression: SSRIs, serotonin-norepinephrine reuptake inhibitors (venlafaxine), serotonin modulators and stimulators (vortioxetine), dopamine reuptake inhibitors (bupropion), tricyclic antidepressants, monoamine oxidase inhibitors, electroconvulsive therapy, transcranial magnetic stimulation, fish oil, St. John's wort
Bipolar disorder: lithium carbonate, antipsychotics (like olanzapine or quetiapine), anticonvulsants (like valproic acid, lamotrigine and topiramate)
Schizophrenia: antipsychotics such as haloperidol, clozapine, olanzapine, risperidone and quetiapine
Generalized anxiety disorder: SSRIs, benzodiazepines, buspirone
Obsessive-compulsive disorder: tricyclic antidepressants, SSRIs
ADHD: clonidine, D-amphetamine, methamphetamine, and methylphenidate
History Early 20th century Sigmund Freud was originally focused on the biological causes of mental illness. Freud's professor and mentor, Ernst Wilhelm von Brücke, strongly believed that thought and behavior were determined by purely biological factors. Freud initially accepted this and was convinced that certain drugs (particularly cocaine) functioned as antidepressants. He spent many years trying to "reduce" personality to neurology, a cause he later gave up on before developing his now well-known psychoanalytic theories. Nearly 100 years ago, Harvey Cushing, the father of neurosurgery, noted that pituitary gland problems often cause mental health disorders. He wondered whether the depression and anxiety he observed in patients with pituitary disorders were caused by hormonal abnormalities, the physical tumor itself, or both. Mid 20th century An important point in the modern history of biological psychiatry was the discovery of modern antipsychotic and antidepressant drugs. Chlorpromazine (also known as Thorazine), an antipsychotic, was first synthesized in 1950. In 1952, iproniazid, a drug being trialed against tuberculosis, was serendipitously discovered to have anti-depressant effects, leading to the development of MAOIs as the first class of antidepressants. In 1959 imipramine, the first tricyclic antidepressant, was developed. Research into the action of these drugs led to the first modern biological theory of mental health disorders called the catecholamine theory, later broadened to the monoamine theory, which included serotonin. These theories were popularly referred to as the "chemical imbalance" theory of mental health disorders. Late 20th century Starting with fluoxetine (marketed as Prozac) in 1988, a series of monoamine-based antidepressant medications belonging to the class of selective serotonin reuptake inhibitors were approved. These were no more effective than earlier antidepressants, but generally had fewer side effects. Most operate on the same principle, which is modulation of monoamines (neurotransmitters) in the neuronal synapse. Some drugs modulate a single neurotransmitter (typically serotonin).
Others affect multiple neurotransmitters and are called dual action or multiple action drugs. They are no more effective clinically than single action versions. The fact that most antidepressants invoke the same biochemical mechanism of action may explain why they are all roughly similar in effectiveness. Recent research indicates antidepressants often work but are less effective than previously thought. Problems with catecholamine/monoamine hypotheses The monoamine hypothesis was compelling, especially based on apparently successful clinical results with early antidepressant drugs, but even at the time there were discrepant findings. Only a minority of patients given the serotonin-depleting drug reserpine became depressed; in fact reserpine even acted as an antidepressant in many cases. This was inconsistent with the initial monoamine theory, which said depression was caused by neurotransmitter deficiency. Another problem was the time lag between antidepressant biological action and therapeutic benefit. Studies showed the neurotransmitter changes occurred within hours, yet therapeutic benefit took weeks. To explain these findings, more recent modifications of the monoamine theory describe a synaptic adaptation process which takes place over several weeks. Yet this alone does not appear to explain all of the therapeutic effects. Latest biological hypotheses of mental health disorders New research indicates different biological mechanisms may underlie some mental health disorders, only indirectly related to neurotransmitters and the monoamine chemical imbalance hypothesis. Recent research indicates a biological "final common pathway" may exist which both electroconvulsive therapy and most current antidepressant drugs have in common. These investigations show recurrent depression may be a neurodegenerative disorder, disrupting the structure and function of brain cells, destroying nerve cell connections, even killing certain brain cells, and precipitating a decline in overall cognitive function. In this new biological psychiatry viewpoint, neuronal plasticity is a key element. Increasing evidence points to various mental health disorders as a neurophysiological problem which inhibits neuronal plasticity. This is called the neurogenic hypothesis of depression. It promises to explain pharmacological antidepressant action, including the time lag from taking the drug to therapeutic onset, why downregulation (not just upregulation) of neurotransmitters can help depression, why stress often precipitates mood disorders, and why selective modulation of different neurotransmitters can help depression. It may also explain the neurobiological mechanism of other non-drug effects on mood, including exercise, diet and metabolism. By identifying the neurobiological "final common pathway" into which most antidepressants funnel, it may allow rational design of new medications which target only that pathway. This could yield drugs which have fewer side effects, are more effective and have quicker therapeutic onset. There is significant evidence that oxidative stress plays a role in schizophrenia. Criticism A number of patients, activists, and psychiatrists dispute biological psychiatry as a scientific concept or as having a proper empirical basis, for example arguing that there are no known biomarkers for recognized psychiatric conditions.
This position has been represented in academic journals such as The Journal of Mind and Behavior and Ethical Human Psychology and Psychiatry, which publishes material specifically countering "the idea that emotional distress is due to an underlying organic disease." Alternative theories and models instead view mental disorders as non-biomedical and might explain them in terms of, for example, emotional reactions to negative life circumstances or to acute trauma. Fields such as social psychiatry, clinical psychology, and sociology may offer non-biomedical accounts of mental distress and disorder for certain ailments and are sometimes critical of biopsychiatry. Social critics believe biopsychiatry fails to satisfy the scientific method because they believe there is no testable biological evidence of mental disorders. Thus, these critics view biological psychiatry as a pseudoscience attempting to portray psychiatry as a biological science. R.D. Laing argued that attributing mental disorders to biophysical factors was often flawed due to the diagnostic procedure. The "complaint" is often made by a family member, not the patient; the "history" is provided by someone other than the patient; and the "examination" consists of observing strange, incomprehensible behavior. Ancillary tests (EEG, PET) are often done after diagnosis, when treatment has begun, which makes the tests non-blind and incurs possible confirmation bias. The psychiatrist Thomas Szasz commented frequently on the limitations of the medical approach to psychiatry and argued that mental illnesses are medicalized problems in living. Silvano Arieti, while approving of the use of medication in some cases of schizophrenia, preferred intensive psychotherapy without medication if possible. He was also known for approving the use of electroconvulsive therapy on those with disorganized schizophrenia in order to make them reachable by psychotherapy. The views he expressed in Interpretation of Schizophrenia are nowadays known as the trauma model of mental disorders, an alternative to the biopsychiatric model. See also Biopsychiatry controversy Biological psychology Psychiatry Therapygenetics Pharmacogenetics Neuropsychology Medical genetics
Mental distress
Mental distress or psychological distress encompasses the symptoms and experiences of a person's internal life that are commonly held to be troubling, confusing or out of the ordinary. Mental distress can potentially lead to a change of behavior, affect a person's emotions in a negative way, and affect their relationships with the people around them. Certain traumatic life experiences (such as bereavement following the death of a loved one, stress, lack of sleep, use of drugs, assault, abuse, or accidents) can induce mental distress. Those who are members of vulnerable populations might experience discrimination that places them at increased risk for experiencing mental distress as well. This may be something which resolves without further medical intervention, though people who endure such symptoms longer term are more likely to be diagnosed with mental illness. This definition is not without controversy as some mental health practitioners would use the terms "mental distress" and "mental disorder" interchangeably. Some users of mental health services prefer the term "mental distress" in describing their experience as they feel it better captures that sense of the unique and personal nature of their experience, while also making it easier to relate to, since everyone experiences distress at different times. The term also fits better with the social model of disability. Differences from mental disorder Some psychiatrists may use these two terms "mental distress" and "mental disorder" interchangeably. However, it can be argued that there are fundamental differences between mental distress and mental disorder. "Mental distress" has a wider scope than the related term "mental illness", which refers to a specific set of medically defined conditions. A person in mental distress may exhibit some of the broader symptoms described in psychiatry, without actually being 'ill' in a medical sense. People with mental distress may also exhibit temporary symptoms on a daily basis, while patients diagnosed with a mental disorder may potentially have to be treated by a psychiatrist. Types The following are types of major mental distress:
Anxiety disorder
Post-traumatic stress disorder (PTSD)
Depression
Bipolar disorder
Schizophrenia
Symptoms and causes The symptoms of mental distress include a wide range of physical and mental conditions. Physical symptoms may include sleep disturbance, anorexia (lack of appetite), loss of menstruation in women, headaches, chronic pain, and fatigue. Mental conditions may include difficulty in anger management, compulsive/obsessive behavior, a significant change in social behavior, a diminished sexual desire, and mood swings. Minor cases of mental distress are caused by the stress of daily problems, such as forgetting one's car keys or being late for an event. However, the major types of mental distress described can be caused by other important factors. One such cause is chemical imbalances in the brain, which can lead to irrational decisions and emotional pain. For example, a lack of serotonin, a chemical that helps regulate the brain's functioning, can lead to depression, appetite changes, aggression, and anxiety. Another cause of mental distress can be exposure to severely distressing life-threatening situations and experiences. A third cause, in very rare cases, can be inheritance. Some research has shown that a small number of people have a genetic predisposition to develop mental distress. However, there are many factors that must be accounted for.
Mental distress is not a contagious disease that can be caught like the common cold; it is a psychological condition. In the United States African-Americans The social disparities associated with mental health in the Black community have remained constant over time. According to the Office of Minority Health, African Americans are 30% more likely than European Americans to report serious psychological distress. Moreover, Black people are more likely to have Major Depressive Disorder, and report higher instances of intense symptoms and disability. For this reason, researchers have attempted to examine the sociological causes and systemic inequalities which contribute to these disparities in order to highlight issues for further investigation. Nonetheless, much of the research on the mental well-being of Black people is unable to separate race, culture, socioeconomic status, ethnicity, or behavioural and biological factors. According to Hunter and Schmidt (2010), there are three distinct beliefs embraced by Black people which speak to their socio-cultural experience in the United States: racism, stigma associated with mental illness, and the importance of physical health. African Americans are less likely to report depression due to heavy social stigma within their community and culture. These social aspects of mental health can generate distress. Therefore, discrimination within the healthcare community and larger society, attitudes related to mental health, and general physical health contribute to the mental well-being of Black people. There are also disparities with mental health among Black women. One of the reasons why Black women tend to neglect mental health support and treatment is the aura of the Strong Black Woman (S.B.W.) schema. According to Watson and Hunter, scholars have traced the origins of the S.B.W. race-gender schema to slavery and have suggested that the schema persists because of the struggles that African-American women continue to experience, such as financial hardship, racism, and sexism. Watson and Hunter state that due to the Strong Black Woman schema, Black women have a tendency to handle tough and difficult situations alone. African-American youth Comparable to their adult counterparts, Black adolescents experience mental health disparities. The primary reasons for this have been posited to be discrimination, inadequate treatment, and underutilization of mental health services, though Black youth have been shown to have higher self-esteem than their white counterparts. Similarly, children of immigrants, or second-generation Americans, often encounter barriers to optimal mental well-being. Discrimination and its effects on mental health are evident in adolescents' ability to achieve in school and overall self-esteem. Researchers have been unable to pinpoint exact causes for Black teenagers' underutilization of mental health services. One study attributed it to using alternative methods of support instead of formal treatments. Moreover, Black youth used other means of support such as peers and spiritual leaders. This suggests that Black teens are uncomfortable disclosing personal matters to professionals. It is difficult to decipher if this is cultural or a youth-related issue, as most teens do not choose to access formal providers for their mental health needs.
Common stigma among immigrants "Mental health stigma, particularly personal stigma, is important because those who hold stigma beliefs are less willing to obtain the needed treatment (1-9). Often due to stigma, individuals will avoid treatment until the disorder is nearly incapacitating. This avoidance is particularly pronounced in members of ethnic minority groups because they are less likely to seek mental health treatment than those of European Americans [e.g., Ref. (4, 10–12)]." Expressly, immigrants who hold personal stigma against mental illness are less likely to seek treatment. It is often the case that immigrants feel stigmatized because they are undocumented, which makes them feel embarrassed and causes them to refrain from treatment. Demographic and societal factors There has been a history of disparity and exclusion with regard to the treatment of Black Americans, including slavery, imprisonment in the criminal justice system, and the inability to vote, marry, attend school, or own property, amongst other factors. These factors have contributed to the increase of mental distress in the Black community, and the lack of resources afforded to and known within the community also limits the treatments available for members of the community to seek and receive some form of help. LGBTQ+ Community Those who identify as part of the LGBTQ+ community have a higher risk of experiencing mental distress, most likely as a result of continued discrimination and victimization. Members of this population are often confronted with derogatory and hateful comments (in person and/or through social media). This discrimination has the potential to affect their feelings of self-worth and confidence, leading to anxiety, depression, and even suicidality. It is for this reason that members of the LGBTQ+ community may experience higher rates of mental distress than their cisgender and heterosexual counterparts. Along with the increased risk of experiencing mental distress, members of this community may refrain from seeking mental health care due to past discrimination by medical professionals. In addition to the lack of knowledge of and research on this population, this group is marginalized by a lack of funding, as most funds go to campaigns for the younger LGBTQ+ population. A study published in 2021 found that "LGBTQ+ students experienced more bullying and psychological distress".
Anthropology
Anthropology is the scientific study of humanity, concerned with human behavior, human biology, cultures, societies, and linguistics, in both the present and past, including archaic humans. Social anthropology studies patterns of behavior, while cultural anthropology studies cultural meaning, including norms and values. The term sociocultural anthropology is commonly used today. Linguistic anthropology studies how language influences social life. Biological or physical anthropology studies the biological development of humans. Archaeology, often termed the "anthropology of the past," studies human activity through investigation of physical evidence. It is considered a branch of anthropology in North America and Asia, while in Europe, archaeology is viewed as a discipline in its own right or grouped under other related disciplines, such as history and palaeontology. Etymology The abstract noun anthropology is first attested in reference to history. Its present use first appeared in Renaissance Germany in the works of Magnus Hundt and Otto Casmann. Their Neo-Latin term derived from the combining forms of the Greek words ánthrōpos ("human") and lógos ("study"). Its adjectival form appeared in the works of Aristotle. It began to be used in English, possibly via French, by the early 18th century. Origin and development of the term Through the 19th century In 1647, the Bartholins, early scholars of the University of Copenhagen, gave an early definition of the term. Sporadic use of the term for some of the subject matter occurred subsequently, such as the use by Étienne Serres in 1839 to describe the natural history, or paleontology, of man, based on comparative anatomy, and the creation of a chair in anthropology and ethnography in 1850 at the French National Museum of Natural History by Jean Louis Armand de Quatrefages de Bréau. Various short-lived organizations of anthropologists had already been formed. The Société Ethnologique de Paris, the first to use the term ethnology, was formed in 1839 and focused on methodically studying human races. After the death of its founder, William Frédéric Edwards, in 1842, it gradually declined in activity until it eventually dissolved in 1862. Meanwhile, the Ethnological Society of New York, currently the American Ethnological Society, was founded on its model in 1842, as well as the Ethnological Society of London in 1843, a break-away group of the Aborigines' Protection Society. These anthropologists of the times were liberal, anti-slavery, and pro-human-rights activists. They maintained international connections. Anthropology and many other current fields are the intellectual results of the comparative methods developed in the earlier 19th century. Theorists in diverse fields such as anatomy, linguistics, and ethnology started making feature-by-feature comparisons of their subject matters, and were beginning to suspect that similarities between animals, languages, and folkways were the result of processes or laws unknown to them then. For them, the publication of Charles Darwin's On the Origin of Species was the epiphany of everything they had begun to suspect. Darwin himself arrived at his conclusions through comparison of species he had seen in agronomy and in the wild. Darwin and Wallace unveiled evolution in the late 1850s. There was an immediate rush to bring it into the social sciences.
Paul Broca in Paris was in the process of breaking away from the Société de biologie to form the first of the explicitly anthropological societies, the Société d'Anthropologie de Paris, meeting for the first time in Paris in 1859. When he read Darwin, he became an immediate convert to Transformisme, as the French called evolutionism. His definition now became "the study of the human group, considered as a whole, in its details, and in relation to the rest of nature". Broca, being what today would be called a neurosurgeon, had taken an interest in the pathology of speech. He wanted to localize the difference between man and the other animals, which appeared to reside in speech. He discovered the speech center of the human brain, today called Broca's area after him. His interest was mainly in Biological anthropology, but a German philosopher specializing in psychology, Theodor Waitz, took up the theme of general and social anthropology in his six-volume work, entitled Die Anthropologie der Naturvölker, 1859–1864. The title was soon translated as "The Anthropology of Primitive Peoples". The last two volumes were published posthumously. Waitz defined anthropology as "the science of the nature of man". Following Broca's lead, Waitz points out that anthropology is a new field, which would gather material from other fields, but would differ from them in the use of comparative anatomy, physiology, and psychology to differentiate man from "the animals nearest to him". He stresses that the data of comparison must be empirical, gathered by experimentation. The history of civilization, as well as ethnology, are to be brought into the comparison. It is to be presumed fundamentally that the species, man, is a unity, and that "the same laws of thought are applicable to all men". Waitz was influential among British ethnologists. In 1863, the explorer Richard Francis Burton and the speech therapist James Hunt broke away from the Ethnological Society of London to form the Anthropological Society of London, which henceforward would follow the path of the new anthropology rather than just ethnology. It was the 2nd society dedicated to general anthropology in existence. Representatives from the French Société were present, though not Broca. In his keynote address, printed in the first volume of its new publication, The Anthropological Review, Hunt stressed the work of Waitz, adopting his definitions as a standard. Among the first associates were the young Edward Burnett Tylor, inventor of cultural anthropology, and his brother Alfred Tylor, a geologist. Previously Edward had referred to himself as an ethnologist; subsequently, an anthropologist. Similar organizations in other countries followed: The Anthropological Society of Madrid (1865), the American Anthropological Association in 1902, the Anthropological Society of Vienna (1870), the Italian Society of Anthropology and Ethnology (1871), and many others subsequently. The majority of these were evolutionists. One notable exception was the Berlin Society for Anthropology, Ethnology, and Prehistory (1869) founded by Rudolph Virchow, known for his vituperative attacks on the evolutionists. Not religious himself, he insisted that Darwin's conclusions lacked empirical foundation. During the last three decades of the 19th century, a proliferation of anthropological societies and associations occurred, most independent, most publishing their own journals, and all international in membership and association. The major theorists belonged to these organizations. 
They supported the gradual osmosis of anthropology curricula into the major institutions of higher learning. By 1898, 48 educational institutions in 13 countries had some curriculum in anthropology. None of the 75 faculty members were under a department named anthropology. 20th and 21st centuries Anthropology as a specialized field of academic study developed much through the end of the 19th century. Then it rapidly expanded beginning in the early 20th century to the point where many of the world's higher educational institutions typically included anthropology departments. Thousands of anthropology departments have come into existence, and anthropology has also diversified from a few major subdivisions to dozens more. Practical anthropology, the use of anthropological knowledge and technique to solve specific problems, has arrived; for example, the presence of buried victims might stimulate the use of a forensic archaeologist to recreate the final scene. The organization has also reached a global level. For example, the World Council of Anthropological Associations (WCAA), "a network of national, regional and international associations that aims to promote worldwide communication and cooperation in anthropology", currently contains members from about three dozen nations. Since the work of Franz Boas and Bronisław Malinowski in the late 19th and early 20th centuries, social anthropology in Great Britain and cultural anthropology in the US have been distinguished from other social sciences by their emphasis on cross-cultural comparisons, long-term in-depth examination of context, and the importance they place on participant-observation or experiential immersion in the area of research. Cultural anthropology, in particular, has emphasized cultural relativism, holism, and the use of findings to frame cultural critiques. This has been particularly prominent in the United States, from Boas' arguments against 19th-century racial ideology, through Margaret Mead's advocacy for gender equality and sexual liberation, to current criticisms of post-colonial oppression and promotion of multiculturalism. Ethnography is one of its primary research designs as well as the text that is generated from anthropological fieldwork. In Great Britain and the Commonwealth countries, the British tradition of social anthropology tends to dominate. In the United States, anthropology has traditionally been divided into the four field approach developed by Franz Boas in the early 20th century: biological or physical anthropology; social, cultural, or sociocultural anthropology; archaeological anthropology; and linguistic anthropology. These fields frequently overlap but tend to use different methodologies and techniques. European countries with overseas colonies tended to practice more ethnology (a term coined and defined by Adam F. Kollár in 1783). It is sometimes referred to as sociocultural anthropology in the parts of the world that were influenced by the European tradition. Fields Anthropology is a global discipline involving humanities, social sciences and natural sciences. Anthropology builds upon knowledge from natural sciences, including the discoveries about the origin and evolution of Homo sapiens, human physical traits, human behavior, the variations among different groups of humans, how the evolutionary past of Homo sapiens has influenced its social organization and culture, and from social sciences, including the organization of human social and cultural relations, institutions, social conflicts, etc. 
Early anthropology originated in Classical Greece and Persia and studied and tried to understand observable cultural diversity, as did later scholars such as Al-Biruni of the Islamic Golden Age. As such, anthropology has been central in the development of several new (late 20th century) interdisciplinary fields such as cognitive science, global studies, and various ethnic studies. Sociocultural anthropology has been heavily influenced by structuralist and postmodern theories, as well as a shift toward the analysis of modern societies. During the 1970s and 1990s, there was an epistemological shift away from the positivist traditions that had largely informed the discipline. During this shift, enduring questions about the nature and production of knowledge came to occupy a central place in cultural and social anthropology. In contrast, archaeology and biological anthropology remained largely positivist. Due to this difference in epistemology, the four sub-fields of anthropology have lacked cohesion over the last several decades. Sociocultural Sociocultural anthropology draws together the principal axes of cultural anthropology and social anthropology. Cultural anthropology is the comparative study of the manifold ways in which people make sense of the world around them, while social anthropology is the study of the relationships among individuals and groups. Cultural anthropology is more related to philosophy, literature and the arts (how one's culture affects the experience for self and group, contributing to a more complete understanding of the people's knowledge, customs, and institutions), while social anthropology is more related to sociology and history. In that, it helps develop an understanding of social structures, typically of others and other populations (such as minorities, subgroups, dissidents, etc.). There is no hard-and-fast distinction between them, and these categories overlap to a considerable degree. Inquiry in sociocultural anthropology is guided in part by cultural relativism, the attempt to understand other societies in terms of their own cultural symbols and values. Accepting other cultures in their own terms moderates reductionism in cross-cultural comparison. This project is often accommodated in the field of ethnography. Ethnography can refer to both a methodology and the product of ethnographic research, i.e. an ethnographic monograph. As a methodology, ethnography is based upon long-term fieldwork within a community or other research site. Participant observation is one of the foundational methods of social and cultural anthropology. Ethnology involves the systematic comparison of different cultures. The process of participant-observation can be especially helpful to understanding a culture from an emic (conceptual, vs. etic, or technical) point of view. The study of kinship and social organization is a central focus of sociocultural anthropology, as kinship is a human universal. Sociocultural anthropology also covers economic and political organization, law and conflict resolution, patterns of consumption and exchange, material culture, technology, infrastructure, gender relations, ethnicity, childrearing and socialization, religion, myth, symbols, values, etiquette, worldview, sports, music, nutrition, recreation, games, food, festivals, and language (which is also the object of study in linguistic anthropology).
Comparison across cultures is a key element of method in sociocultural anthropology, including the industrialized (and de-industrialized) West. The Standard Cross-Cultural Sample (SCCS) includes 186 such cultures. Biological Biological anthropology and physical anthropology are synonymous terms to describe anthropological research focused on the study of humans and non-human primates in their biological, evolutionary, and demographic dimensions. It examines the biological and social factors that have affected the evolution of humans and other primates, and that generate, maintain or change contemporary genetic and physiological variation. Archaeological Archaeology is the study of the human past through its material remains. Artifacts, faunal remains, and human-altered landscapes are evidence of the cultural and material lives of past societies. Archaeologists examine material remains in order to deduce patterns of past human behavior and cultural practices. Ethnoarchaeology is a type of archaeology that studies the practices and material remains of living human groups in order to gain a better understanding of the evidence left behind by past human groups, who are presumed to have lived in similar ways. Linguistic Linguistic anthropology (not to be confused with anthropological linguistics) seeks to understand the processes of human communication, verbal and non-verbal, variation in language across time and space, the social uses of language, and the relationship between language and culture. It is the branch of anthropology that brings linguistic methods to bear on anthropological problems, linking the analysis of linguistic forms and processes to the interpretation of sociocultural processes. Linguistic anthropologists often draw on related fields including sociolinguistics, pragmatics, cognitive linguistics, semiotics, discourse analysis, and narrative analysis. Ethnography Ethnography is a method of analysing social or cultural interaction. It often involves participant observation, though an ethnographer may also draw from texts written by participants in social interactions. Ethnography views first-hand experience and social context as important. Tim Ingold distinguishes ethnography from anthropology, arguing that anthropology tries to construct general theories of human experience, applicable in general and novel settings, while ethnography concerns itself with fidelity. He argues that anthropologists must make their writing consistent with their understanding of literature and other theory, but notes that ethnography may be of use to anthropologists and that the two fields inform one another. Key topics by field: sociocultural Art, media, music, dance and film Art One of the central problems in the anthropology of art concerns the universality of 'art' as a cultural phenomenon. Several anthropologists have noted that the Western categories of 'painting', 'sculpture', or 'literature', conceived as independent artistic activities, do not exist, or exist in a significantly different form, in most non-Western contexts. To surmount this difficulty, anthropologists of art have focused on formal features in objects which, without exclusively being 'artistic', have certain evident 'aesthetic' qualities. Boas' Primitive Art, Claude Lévi-Strauss' The Way of the Masks (1982) or Geertz's 'Art as Cultural System' (1983) are some examples in this trend to transform the anthropology of 'art' into an anthropology of culturally specific 'aesthetics'.
Media Media anthropology (also known as the anthropology of media or mass media) emphasizes ethnographic studies as a means of understanding producers, audiences, and other cultural and social aspects of mass media. The types of ethnographic contexts explored range from contexts of media production (e.g., ethnographies of newsrooms in newspapers, journalists in the field, film production) to contexts of media reception, following audiences in their everyday responses to media. Other types include cyber anthropology, a relatively new area of internet research, as well as ethnographies of other areas of research which happen to involve media, such as development work, social movements, or health education. This is in addition to many classic ethnographic contexts, where media such as radio, the press, new media, and television have started to make their presences felt since the early 1990s. Music Ethnomusicology is an academic field encompassing various approaches to the study of music (broadly defined), that emphasize its cultural, social, material, cognitive, biological, and other dimensions or contexts instead of or in addition to its isolated sound component or any particular repertoire. Ethnomusicology can be used in a wide variety of fields, such as teaching, politics, cultural anthropology etc. While the origins of ethnomusicology date back to the 18th and 19th centuries, it was formally termed "ethnomusicology" by Dutch scholar Jaap Kunst . Later, the influence of study in this area spawned the creation of the periodical Ethnomusicology and the Society of Ethnomusicology. Visual Visual anthropology is concerned, in part, with the study and production of ethnographic photography, film and, since the mid-1990s, new media. While the term is sometimes used interchangeably with ethnographic film, visual anthropology also encompasses the anthropological study of visual representation, including areas such as performance, museums, art, and the production and reception of mass media. Visual representations from all cultures, such as sandpaintings, tattoos, sculptures and reliefs, cave paintings, scrimshaw, jewelry, hieroglyphs, paintings, and photographs are included in the focus of visual anthropology. Economic, political economic, applied and development Economic Economic anthropology attempts to explain human economic behavior in its widest historic, geographic and cultural scope. It has a complex relationship with the discipline of economics, of which it is highly critical. Its origins as a sub-field of anthropology begin with the Polish-British founder of anthropology, Bronisław Malinowski, and his French compatriot, Marcel Mauss, on the nature of gift-giving exchange (or reciprocity) as an alternative to market exchange. Economic Anthropology remains, for the most part, focused upon exchange. The school of thought derived from Marx and known as Political Economy focuses on production, in contrast. Economic anthropologists have abandoned the primitivist niche they were relegated to by economists, and have now turned to examine corporations, banks, and the global financial system from an anthropological perspective. Political economy Political economy in anthropology is the application of the theories and methods of historical materialism to the traditional concerns of anthropology, including, but not limited to, non-capitalist societies. Political economy introduced questions of history and colonialism to ahistorical anthropological theories of social structure and culture. 
Three main areas of interest rapidly developed. The first of these areas was concerned with the "pre-capitalist" societies that were subject to evolutionary "tribal" stereotypes. Sahlin's work on hunter-gatherers as the "original affluent society" did much to dissipate that image. The second area was concerned with the vast majority of the world's population at the time, the peasantry, many of whom were involved in complex revolutionary wars such as in Vietnam. The third area was on colonialism, imperialism, and the creation of the capitalist world-system. More recently, these political economists have more directly addressed issues of industrial (and post-industrial) capitalism around the world. Applied Applied anthropology refers to the application of the method and theory of anthropology to the analysis and solution of practical problems. It is a "complex of related, research-based, instrumental methods which produce change or stability in specific cultural systems through the provision of data, initiation of direct action, and/or the formulation of policy". Applied anthropology is the practical side of anthropological research; it includes researcher involvement and activism within the participating community. It is closely related to development anthropology (distinct from the more critical anthropology of development). Development Anthropology of development tends to view development from a critical perspective. The kind of issues addressed and implications for the approach involve pondering why, if a key development goal is to alleviate poverty, is poverty increasing? Why is there such a gap between plans and outcomes? Why are those working in development so willing to disregard history and the lessons it might offer? Why is development so externally driven rather than having an internal basis? In short, why does so much planned development fail? Kinship, feminism, gender and sexuality Kinship Kinship can refer both to the study of the patterns of social relationships in one or more human cultures, or it can refer to the patterns of social relationships themselves. Over its history, anthropology has developed a number of related concepts and terms, such as "descent", "descent groups", "lineages", "affines", "cognates", and even "fictive kinship". Broadly, kinship patterns may be considered to include people related both by descent (one's social relations during development), and also relatives by marriage. Within kinship you have two different families. People have their biological families and it is the people they share DNA with. This is called consanguinity or "blood ties". People can also have a chosen family in which they chose who they want to be a part of their family. In some cases, people are closer with their chosen family more than with their biological families. Feminist Feminist anthropology is a four field approach to anthropology (archeological, biological, cultural, linguistic) that seeks to reduce male bias in research findings, anthropological hiring practices, and the scholarly production of knowledge. Anthropology engages often with feminists from non-Western traditions, whose perspectives and experiences can differ from those of white feminists of Europe, America, and elsewhere. From the perspective of the Western world, historically such 'peripheral' perspectives have been ignored, observed only from an outsider perspective, and regarded as less-valid or less-important than knowledge from the Western world. 
Exploring and addressing that double bias against women from marginalized racial or ethnic groups is of particular interest in intersectional feminist anthropology. Feminist anthropologists have stated that their publications have contributed to anthropology, along the way correcting the systemic biases beginning with the "patriarchal origins of anthropology (and academia)". They note that from 1891 to 1930 more than 85% of doctorates in anthropology went to men, more than 81% of recipients were under 35, and only 7.2% went to anyone over 40 years old, reflecting an age gap in the pursuit of anthropology by first-wave feminists until later in life. This correction of systemic bias may include mainstream feminist theory, history, linguistics, archaeology, and anthropology. Feminist anthropologists are often concerned with the construction of gender across societies, and gender constructs are of particular interest when studying sexism. According to St. Clair Drake, Vera Mae Green was, until "[w]ell into the 1960s", the only African American female anthropologist who was also a Caribbeanist. She studied ethnic and family relations in the Caribbean as well as the United States, and thereby tried to improve the way black life, experiences, and culture were studied. However, Zora Neale Hurston, although often primarily considered to be a literary author, was trained in anthropology by Franz Boas and published Tell My Horse (1938) about her "anthropological observations" of voodoo in the Caribbean. Feminist anthropology also includes the anthropology of birth as a specialization, which is the anthropological study of pregnancy and childbirth within cultures and societies. Medical, nutritional, psychological, cognitive and transpersonal Medical Medical anthropology is an interdisciplinary field which studies "human health and disease, health care systems, and biocultural adaptation". It is believed that William Caudill was the first to identify the field of medical anthropology. Currently, research in medical anthropology is one of the main growth areas in the field of anthropology as a whole. It focuses on the following six basic fields:
The development of systems of medical knowledge and medical care
The patient-physician relationship
The integration of alternative medical systems in culturally diverse environments
The interaction of social, environmental and biological factors which influence health and illness both in the individual and the community as a whole
The critical analysis of interaction between psychiatric services and migrant populations ("critical ethnopsychiatry": Beneduce 2004, 2007)
The impact of biomedicine and biomedical technologies in non-Western settings
Other subjects that have become central to medical anthropology worldwide are violence and social suffering (Farmer, 1999, 2003; Beneduce, 2010), as well as other issues that involve physical and psychological harm and suffering that are not a result of illness. On the other hand, there are fields that intersect with medical anthropology in terms of research methodology and theoretical production, such as cultural psychiatry and transcultural psychiatry or ethnopsychiatry. Nutritional Nutritional anthropology is a synthetic concept that deals with the interplay between economic systems, nutritional status and food security, and how changes in the former affect the latter. 
If economic and environmental changes in a community affect access to food, food security, and dietary health, then this interplay between culture and biology is in turn connected to broader historical and economic trends associated with globalization. Nutritional status affects overall health status, work performance potential, and the overall potential for economic development (either in terms of human development or traditional western models) for any given group of people. Psychological Psychological anthropology is an interdisciplinary subfield of anthropology that studies the interaction of cultural and mental processes. This subfield tends to focus on ways in which humans' development and enculturation within a particular cultural group – with its own history, language, practices, and conceptual categories – shape processes of human cognition, emotion, perception, motivation, and mental health. It also examines how the understanding of cognition, emotion, motivation, and similar psychological processes informs or constrains our models of cultural and social processes. Cognitive Cognitive anthropology seeks to explain patterns of shared knowledge, cultural innovation, and transmission over time and space using the methods and theories of the cognitive sciences (especially experimental psychology and evolutionary biology), often through close collaboration with historians, ethnographers, archaeologists, linguists, musicologists and other specialists engaged in the description and interpretation of cultural forms. Cognitive anthropology is concerned with what people from different groups know and how that implicit knowledge changes the way people perceive and relate to the world around them. Transpersonal Transpersonal anthropology studies the relationship between altered states of consciousness and culture. As with transpersonal psychology, the field is much concerned with altered states of consciousness (ASC) and transpersonal experience. However, the field differs from mainstream transpersonal psychology in taking more cognizance of cross-cultural issues – for instance, the roles of myth, ritual, diet, and text in evoking and interpreting extraordinary experiences. Political and legal Political Political anthropology concerns the structure of political systems, examined from the perspective of the structure of societies. Political anthropology developed as a discipline concerned primarily with politics in stateless societies; a new development, which began in the 1960s and is still unfolding, saw anthropologists increasingly studying more "complex" social settings in which the presence of states, bureaucracies and markets entered both ethnographic accounts and the analysis of local phenomena. The turn towards complex societies meant that political themes were taken up at two main levels. Firstly, anthropologists continued to study political organization and political phenomena that lay outside the state-regulated sphere (as in patron-client relations or tribal political organization). Secondly, anthropologists slowly started to develop a disciplinary concern with states and their institutions (and with the relationship between formal and informal political institutions). An anthropology of the state developed, and it is a thriving field today. Geertz's comparative work on "Negara", the Balinese state, is an early, famous example. Legal Legal anthropology or anthropology of law specializes in "the cross-cultural study of social ordering". 
Earlier legal anthropological research often focused more narrowly on conflict management, crime, sanctions, or formal regulation. More recent applications include issues such as human rights, legal pluralism, and political uprisings. Public Public anthropology was created by Robert Borofsky, a professor at Hawaii Pacific University, to "demonstrate the ability of anthropology and anthropologists to effectively address problems beyond the discipline – illuminating larger social issues of our times as well as encouraging broad, public conversations about them with the explicit goal of fostering social change". Nature, science, and technology Cyborg Cyborg anthropology originated as a sub-focus group within the American Anthropological Association's annual meeting in 1993. The sub-group was very closely related to STS and the Society for Social Studies of Science. Donna Haraway's 1985 Cyborg Manifesto could be considered the founding document of cyborg anthropology, in that it first explored the philosophical and sociological ramifications of the term. Cyborg anthropology studies humankind and its relations with the technological systems it has built, specifically modern technological systems that have reflexively shaped notions of what it means to be human. Digital Digital anthropology is the study of the relationship between humans and digital-era technology and extends to various areas where anthropology and technology intersect. It is sometimes grouped with sociocultural anthropology, and sometimes considered part of material culture. The field is new, and thus has a variety of names with a variety of emphases, including techno-anthropology, digital ethnography, cyberanthropology, and virtual anthropology. Ecological Ecological anthropology is defined as the "study of cultural adaptations to environments". The sub-field is also defined as "the study of relationships between a population of humans and their biophysical environment". The focus of its research concerns "how cultural beliefs and practices helped human populations adapt to their environments, and how their environments change across space and time". The contemporary perspective of environmental anthropology, and arguably at least the backdrop, if not the focus, of most of the ethnographies and cultural fieldwork of today, is political ecology. Many characterize this new perspective as more informed by culture, politics and power, globalization, localized issues, twenty-first-century anthropology, and more. The focus and data interpretation are often used in arguments for or against particular policies, in the creation of policy, and in efforts to prevent corporate exploitation and damage of land. Often, the observer has become an active part of the struggle, either directly (organizing, participation) or indirectly (articles, documentaries, books, ethnographies). Such is the case with environmental justice advocate Melissa Checker and her relationship with the people of Hyde Park. Environment Social sciences, like anthropology, can provide interdisciplinary approaches to the environment. Professor Kay Milton, Director of the Anthropology research network in the School of History and Anthropology, describes anthropology as distinctive, with its most distinguishing feature being its interest in non-industrial indigenous and traditional societies. Anthropological theory is distinct because of the consistent presence of the concept of culture, not as an exclusive topic but as a central position in the study, together with a deep concern for the human condition. 
Milton describes three trends that are causing a fundamental shift in what characterizes anthropology: dissatisfaction with the cultural relativist perspective, reaction against Cartesian dualisms that obstruct progress in theory (the nature–culture divide), and finally an increased attention to globalization (transcending the barriers of time and space). Environmental discourse appears to be characterized by a high degree of globalization. (The troubling problem is borrowing non-indigenous practices and creating standards, concepts, philosophies and practices in western countries.) Environmental discourse has now become a distinct position within anthropology as a discipline. Knowledge about the diversity of human cultures can be important in addressing environmental problems; in this sense, anthropology is also a study of human ecology. Human activity is the most important agent of environmental change, a topic commonly studied in human ecology, which can therefore claim a central place in how environmental problems are examined and addressed. Anthropology also contributes to environmental discourse through the work of theorists and analysts, or by refining definitions to make them more neutral and universal. In exploring environmentalism, the term typically refers to a concern that the environment should be protected, particularly from the harmful effects of human activities. Environmentalism itself can be expressed in many ways. Anthropologists can open the doors of environmentalism by looking beyond industrial society: understanding the opposition between industrial and non-industrial relationships; knowing what ecosystem people and biosphere people are and what affects them; and attending to dependent and independent variables, "primitive" ecological wisdom, diverse environments, resource management, diverse cultural traditions, and the fact that environmentalism is itself a part of culture. Historical Ethnohistory is the study of ethnographic cultures and indigenous customs by examining historical records. It is also the study of the history of various ethnic groups that may or may not exist today. Ethnohistory uses both historical and ethnographic data as its foundation. Its historical methods and materials go beyond the standard use of documents and manuscripts. Practitioners recognize the utility of such source material as maps, music, paintings, photography, folklore, oral tradition, site exploration, archaeological materials, museum collections, enduring customs, language, and place names. Religion The anthropology of religion involves the study of religious institutions in relation to other social institutions, and the comparison of religious beliefs and practices across cultures. Modern anthropology assumes that there is complete continuity between magical thinking and religion, and that every religion is a cultural product, created by the human community that worships it. Urban Urban anthropology is concerned with issues of urbanization, poverty, and neoliberalism. Ulf Hannerz quotes a 1960s remark that traditional anthropologists were "a notoriously agoraphobic lot, anti-urban by definition". Various social processes in the Western world as well as in the "Third World" (the latter being the habitual focus of attention of anthropologists) brought the attention of "specialists in 'other cultures'" closer to their homes. There are two main approaches to urban anthropology: examining the types of cities, or examining the social issues within cities. These two approaches are overlapping and dependent on each other. 
By defining different types of cities, one would use social factors as well as economic and political factors to categorize them. By looking directly at the different social issues, one would also be studying how they affect the dynamic of the city. Key topics by field: archaeological and biological Anthrozoology Anthrozoology (also known as "human–animal studies") is the study of interactions between humans and other animals. It is an interdisciplinary field that overlaps with a number of other disciplines, including anthropology, ethology, medicine, psychology, veterinary medicine and zoology. A major focus of anthrozoologic research is the quantifying of the positive effects of human-animal relationships on either party and the study of their interactions. It includes scholars from a diverse range of fields, including anthropology, sociology, biology, and philosophy. Biocultural Biocultural anthropology is the scientific exploration of the relationships between human biology and culture. Physical anthropologists throughout the first half of the 20th century viewed this relationship from a racial perspective; that is, from the assumption that typological human biological differences lead to cultural differences. After World War II the emphasis began to shift toward an effort to explore the role culture plays in shaping human biology. Evolutionary Evolutionary anthropology is the interdisciplinary study of the evolution of human physiology and human behaviour and of the relation between hominins and non-hominin primates. Evolutionary anthropology is based in natural science and social science, combining human development with socioeconomic factors. It is concerned with both the biological and the cultural evolution of humans, past and present, is based on a scientific approach, and brings together fields such as archaeology, behavioral ecology, psychology, primatology, and genetics. It is a dynamic and interdisciplinary field, drawing on many lines of evidence to understand the human experience, past and present. Forensic Forensic anthropology is the application of the science of physical anthropology and human osteology in a legal setting, most often in criminal cases where the victim's remains are in the advanced stages of decomposition. A forensic anthropologist can assist in the identification of deceased individuals whose remains are decomposed, burned, mutilated or otherwise unrecognizable. The adjective "forensic" refers to the application of this subfield of science to a court of law. Paleoanthropology Paleoanthropology combines the disciplines of paleontology and physical anthropology. It is the study of ancient humans, as found in fossil hominid evidence such as petrified bones and footprints. The genetics and morphology of specimens are crucially important to this field. Markers on specimens, such as enamel fractures and dental decay on teeth, can also give insight into the behaviour and diet of past populations. Organizations Contemporary anthropology is an established science with academic departments at most universities and colleges. The single largest organization of anthropologists is the American Anthropological Association (AAA), which was founded in 1903. Its members are anthropologists from around the globe. In 1989, a group of European and American scholars in the field of anthropology established the European Association of Social Anthropologists (EASA), which serves as a major professional organization for anthropologists working in Europe. 
The EASA seeks to advance the status of anthropology in Europe and to increase the visibility of marginalized anthropological traditions, thereby contributing to the project of a global anthropology or world anthropology. Hundreds of other organizations exist in the various sub-fields of anthropology, sometimes divided up by nation or region, and many anthropologists work with collaborators in other disciplines, such as geology, physics, zoology, paleontology, anatomy, music theory, art history, sociology and so on, belonging to professional societies in those disciplines as well. List of major organizations
American Anthropological Association
American Ethnological Society
Asociación de Antropólogos Iberoamericanos en Red, AIBR
Anthropological Society of London
Center for World Indigenous Studies
Ethnological Society of London
European Association of Social Anthropologists
Max Planck Institute for Evolutionary Anthropology
Network of Concerned Anthropologists
N.N. Miklukho-Maklai Institute of Ethnology and Anthropology
Royal Anthropological Institute of Great Britain and Ireland
Society for Anthropological Sciences
Society for Applied Anthropology
USC Center for Visual Anthropology
Ethics As the field has matured it has debated and arrived at ethical principles aimed at protecting both the subjects of anthropological research and the researchers themselves, and professional societies have generated codes of ethics. Anthropologists, like other researchers (especially historians and scientists engaged in field research), have over time assisted state policies and projects, especially colonialism. Some commentators have contended:
That the discipline grew out of colonialism, perhaps was in league with it, and derives some of its key notions from it, consciously or not. (See, for example, Gough, Pels and Salemink, but cf. Lewis 2004).
That ethnographic work is often ahistorical, writing about people as if they were "out of time" in an "ethnographic present" (Johannes Fabian, Time and Its Other).
In his article "The Misrepresentation of Anthropology and Its Consequence", Herbert S. Lewis critiqued older anthropological works that presented other cultures as if they were strange and unusual. While the findings of those researchers should not be discarded, the field should learn from its mistakes. Cultural relativism As part of their quest for scientific objectivity, present-day anthropologists typically urge cultural relativism, which has an influence on all the sub-fields of anthropology. This is the notion that cultures should not be judged by another's values or viewpoints, but should be examined dispassionately on their own terms. There should be no notion, in good anthropology, of one culture being better or worse than another. Ethical commitments in anthropology include noticing and documenting genocide, infanticide, racism, sexism, mutilation (including circumcision and subincision), and torture. Topics like racism, slavery, and human sacrifice attract anthropological attention, and theories ranging from nutritional deficiencies to genes to acculturation to colonialism have been proposed to explain their origins and continued recurrences. To illustrate the depth of an anthropological approach, one can take just one of these topics, such as "racism", and find thousands of anthropological references stretching across all the major and minor sub-fields. Military involvement Anthropologists' involvement with the U.S. 
government, in particular, has caused bitter controversy within the discipline. Franz Boas publicly objected to US participation in World War I, and after the war he published a brief exposé and condemnation of the participation of several American archaeologists in espionage in Mexico under their cover as scientists. But by the 1940s, many of Boas' anthropologist contemporaries were active in the Allied war effort against the Axis Powers (Nazi Germany, Fascist Italy, and Imperial Japan). Many served in the armed forces, while others worked in intelligence (for example, the Office of Strategic Services and the Office of War Information). At the same time, David H. Price's work on American anthropology during the Cold War provides detailed accounts of the pursuit and dismissal of several anthropologists from their jobs for communist sympathies. Attempts to accuse anthropologists of complicity with the CIA and government intelligence activities during the Vietnam War years have turned up little. Many anthropologists (students and teachers) were active in the antiwar movement, and numerous resolutions condemning the war in all its aspects were passed overwhelmingly at the annual meetings of the American Anthropological Association (AAA). Professional anthropological bodies often object to the use of anthropology for the benefit of the state. Their codes of ethics or statements may prohibit anthropologists from giving secret briefings. The Association of Social Anthropologists of the UK and Commonwealth (ASA) has called certain scholarship ethically dangerous. The "Principles of Professional Responsibility" issued by the American Anthropological Association and amended through November 1986 stated that "in relation with their own government and with host governments ... no secret research, no secret reports or debriefings of any kind should be agreed to or given." The current "Principles of Professional Responsibility" does not make explicit mention of ethics surrounding state interactions. Anthropologists, along with other social scientists, are working with the US military as part of the US Army's strategy in Afghanistan. The Christian Science Monitor reports that "Counterinsurgency efforts focus on better grasping and meeting local needs" in Afghanistan, under the Human Terrain System (HTS) program; in addition, HTS teams are working with the US military in Iraq. In 2009, the American Anthropological Association's Commission on the Engagement of Anthropology with the US Security and Intelligence Communities (CEAUSSIC) released its final report concluding, in part, that: Post-World War II developments Before WWII, British 'social anthropology' and American 'cultural anthropology' were still distinct traditions. After the war, enough British and American anthropologists borrowed ideas and methodological approaches from one another that some began to speak of them collectively as 'sociocultural' anthropology. Basic trends There are several characteristics that tend to unite anthropological work. One of the central characteristics is that anthropology tends to provide a comparatively more holistic account of phenomena and tends to be highly empirical. The quest for holism leads most anthropologists to study a particular place, problem or phenomenon in detail, using a variety of methods, over a more extensive period than is normal in many parts of academia. 
In the 1990s and 2000s, calls were heard for clarification of what constitutes a culture, of how an observer knows where his or her own culture ends and another begins, and of other crucial topics in writing anthropology. These dynamic relationships, between what can be observed on the ground and what can be observed by compiling many local observations, remain fundamental in any kind of anthropology, whether cultural, biological, linguistic or archaeological. Biological anthropologists are interested both in human variation and in the possibility of human universals (behaviors, ideas or concepts shared by virtually all human cultures). They use many different methods of study, but modern population genetics, participant observation and other techniques often take anthropologists "into the field," which means traveling to a community in its own setting, to do something called "fieldwork." On the biological or physical side, human measurements, genetic samples, and nutritional data may be gathered and published as articles or monographs. Along with dividing up their projects by theoretical emphasis, anthropologists typically divide the world up into relevant time periods and geographic regions. Human time on Earth is divided up into relevant cultural traditions based on material, such as the Paleolithic and the Neolithic, of particular use in archaeology. Further cultural subdivisions according to tool types, such as the Oldowan, Mousterian, or Levalloisian, help archaeologists and other anthropologists in understanding major trends in the human past. Anthropologists and geographers share approaches to culture regions as well, since mapping cultures is central to both sciences. By making comparisons across cultural traditions (time-based) and cultural regions (space-based), anthropologists have developed various kinds of comparative method, a central part of their science. Commonalities between fields Because anthropology developed from so many different enterprises (see History of anthropology), including but not limited to fossil-hunting, exploring, documentary film-making, paleontology, primatology, antiquity dealings and curatorship, philology, etymology, genetics, regional analysis, ethnology, history, philosophy, and religious studies, it is difficult to characterize the entire field in a brief article, although attempts to write histories of the entire field have been made. Some authors argue that anthropology originated and developed as the study of "other cultures", both in terms of time (past societies) and space (non-European/non-Western societies). For example, Ulf Hannerz, in the introduction to his seminal classic of urban anthropology, Exploring the City: Inquiries Toward an Urban Anthropology, mentions that the "Third World" had habitually received most of the attention; anthropologists who traditionally specialized in "other cultures" looked for them far away and started to look "across the tracks" only in the late 1960s. Now there exist many works focusing on peoples and topics very close to the author's "home". It is also argued that other fields of study, like history and sociology, on the contrary focus disproportionately on the West. 
In France, the study of Western societies has been traditionally left to sociologists, but this is increasingly changing, starting in the 1970s from scholars like Isac Chiva and journals like Terrain ("fieldwork") and developing with the center founded by Marc Augé (Le Centre d'anthropologie des mondes contemporains, the Anthropological Research Center of Contemporary Societies). Since the 1980s it has become common for social and cultural anthropologists to set ethnographic research in the North Atlantic region, frequently examining the connections between locations rather than limiting research to a single locale. There has also been a related shift toward broadening the focus beyond the daily life of ordinary people; increasingly, research is set in settings such as scientific laboratories, social movements, governmental and nongovernmental organizations and businesses. See also
Christian anthropology, a sub-field of theology
Philosophical anthropology, a sub-field of philosophy
External links
Open Encyclopedia of Anthropology
Self-concept
In the psychology of self, one's self-concept (also called self-construction, self-identity, self-perspective or self-structure) is a collection of beliefs about oneself. Generally, self-concept embodies the answer to the question "Who am I?". The self-concept is distinguishable from self-awareness, which is the extent to which self-knowledge is defined, consistent, and currently applicable to one's attitudes and dispositions. Self-concept also differs from self-esteem: self-concept is a cognitive or descriptive component of one's self (e.g. "I am a fast runner"), while self-esteem is evaluative and opinionated (e.g. "I feel good about being a fast runner"). Self-concept is made up of one's self-schemas, and interacts with self-esteem, self-knowledge, and the social self to form the self as a whole. It includes the past, present, and future selves, where future selves (or possible selves) represent individuals' ideas of what they might become, what they would like to become, or what they are afraid of becoming. Possible selves may function as incentives for certain behaviour. The perception people have about their past or future selves relates to their perception of their current selves. The temporal self-appraisal theory argues that people have a tendency to maintain a positive self-evaluation by distancing themselves from their negative self and paying more attention to their positive one. In addition, people have a tendency to perceive the past self less favourably (e.g. "I'm better than I used to be") and the future self more positively (e.g. "I will be better than I am now"). History Psychologists Carl Rogers and Abraham Maslow had major influence in popularizing the idea of self-concept in the West. According to Rogers, everyone strives to reach an "ideal self." He believed that a person self-actualizes when they prove to themselves that they are capable enough to achieve their goals and desires, but that in order to attain their fullest potential, the person must have been raised in healthy surroundings characterized by "genuineness, acceptance, and empathy". However, a lack of relationships with people who have healthy personalities will stop the person from growing "like a tree without sunlight and water" and will hinder the individual's progress toward self-actualization. Rogers also hypothesized that psychologically healthy people actively move away from roles created by others' expectations, and instead look within themselves for validation. On the other hand, neurotic people have "self-concepts that do not match their experiences. They are afraid to accept their own experiences as valid, so they distort them, either to protect themselves or to win approval from others." According to Carl Rogers, the self-concept has three different components:
The view one has of oneself (self-image)
How much value one places on oneself (self-esteem or self-worth)
What one wishes one were really like (ideal self)
Abraham Maslow applied his concept of self-actualization in his hierarchy of needs theory. In this theory, he explained the process it takes for a person to achieve self-actualization. He argues that for an individual to reach the "higher level growth needs", they must first satisfy the "lower deficit needs". Once the "deficiency needs" have been met, the person's goal is to accomplish the next step, the "being needs". Maslow noticed that once individuals reach this level, they tend to "grow as a person" and reach self-actualization. 
However, negative experiences while at the lower deficit-needs levels may prevent an individual from ascending the hierarchy of needs. The self-categorization theory developed by John Turner states that the self-concept consists of at least two "levels": a personal identity and a social one. In other words, one's self-evaluation relies on self-perceptions and how others perceive them. Self-concept can alternate rapidly between one's personal and social identity. Children and adolescents begin integrating social identity into their own self-concept in elementary school by assessing their position among peers. By age five, acceptance from peers significantly affects children's self-concept, affecting their behaviour and academic success. Model The self-concept is an internal model that uses self-assessments in order to define one's self-schemas. Changes in self-concept can be measured by spontaneous self-report, where a person is prompted by a question like "Who are you?". Often, when measuring changes to the self, what is measured is self-evaluation (whether a person has a positive or negative opinion of themselves) rather than self-concept. Features such as personality, skills and abilities, occupation and hobbies, physical characteristics, gender, etc. are assessed and applied to self-schemas, which are ideas of oneself in a particular dimension (e.g., someone who considers themselves a geek will associate "geek-like" qualities with themselves). A collection of self-schemas makes up one's overall self-concept. For example, the statement "I am lazy" is a self-assessment that contributes to self-concept. Statements such as "I am tired", however, would not be part of someone's self-concept, since being tired is a temporary state and therefore cannot become a part of a self-schema. A person's self-concept may change with time as reassessment occurs, which in extreme cases can lead to identity crises. Parts Various theories identify different parts of the self, including:
Self-image: the view one has of oneself
Self-esteem: how much one values oneself
Ideal self: what one wishes to be
Social identity: the part of the self that is determined by membership in social groups
Development Researchers debate when self-concept development begins. Some assert that gender stereotypes and expectations set by parents for their children affect children's understanding of themselves by approximately age three. However, at this developmental stage, children have a very broad sense of self; typically, they use words such as big or nice to describe themselves to others. While this represents the beginnings of self-concept, others suggest that self-concept develops later, in middle childhood, alongside the development of self-control. At this point, children are developmentally prepared to interpret their own feelings and abilities, as well as receive and consider feedback from peers, teachers, and family. In adolescence, the self-concept undergoes a significant period of change. Generally, self-concept changes gradually, as existing concepts are refined and solidified. However, the development of self-concept during adolescence shows a U-shaped curve, in which general self-concept decreases in early adolescence, followed by an increase in later adolescence. Romantic relationships can also affect people's self-concept throughout a relationship. Self-expansion describes the addition of information to an individual's concept of self. 
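As an aside on the Model subsection above, the following minimal Python sketch shows one crude way its distinction could be made operational: stable, trait-like self-assessments are kept as candidate self-schemas, while statements about temporary states are filtered out. The word list, function name, and string handling here are assumptions invented for illustration, not a measurement instrument from the self-concept literature.

# Assumed, toy list of words that signal a temporary state rather than a stable trait.
TEMPORARY_STATE_WORDS = {"tired", "hungry", "bored", "busy"}

def candidate_self_schemas(self_assessments):
    # Keep only statements that describe stable traits, mirroring the example
    # that "I am lazy" can feed a self-schema while "I am tired" cannot,
    # because being tired is a temporary state.
    return [
        statement
        for statement in self_assessments
        if statement.lower().split()[-1] not in TEMPORARY_STATE_WORDS
    ]

print(candidate_self_schemas(["I am lazy", "I am tired", "I am a fast runner"]))
# -> ['I am lazy', 'I am a fast runner']

Real measures of self-concept rely on spontaneous self-report rather than keyword filters, so this sketch is only a way of restating the trait-versus-state distinction in executable form.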
Expansion of the self-concept can occur during relationships and during new, challenging experiences. Additionally, teens begin to evaluate their abilities on a continuum, as opposed to the "yes/no" evaluation of children. For example, while children might evaluate themselves as "smart", teens might evaluate themselves as "not the smartest, but smarter than average." Despite differing opinions about the onset of self-concept development, researchers agree on the importance of one's self-concept, which influences people's behaviors and cognitive and emotional outcomes including (but not limited to) academic achievement, levels of happiness, anxiety, social integration, self-esteem, and life-satisfaction. Academic Academic self-concept refers to a person's beliefs about their own academic abilities or skills. Some research suggests that it begins developing from ages three to five due to influence from parents and early educators. By age ten or eleven, children assess their academic abilities by comparing themselves to their peers. These social comparisons are also referred to as self-estimates. Self-estimates of cognitive ability are most accurate when evaluating subjects that deal with numbers, such as math; self-estimates are more likely to be poor in other areas, such as reasoning speed. Some researchers suggest that to raise academic self-concept, parents and teachers need to provide children with specific feedback that focuses on their particular skills or abilities. Others also state that learning opportunities should be conducted in groups (both mixed-ability and like-ability) that downplay social comparison, as too much of either type of grouping can have adverse effects on children's academic self-concept and the way they view themselves in relation to their peers. Physical Physical self-concept is the individual's perception of themselves in areas of physical ability and appearance. Physical ability includes concepts such as physical strength and endurance, while appearance refers to attractiveness and body image. Adolescents experience significant changes in general physical self-concept at the onset of puberty, at about eleven years old for girls and about fifteen years old for boys. The bodily changes during puberty, in conjunction with the various psychological changes of this period, make adolescence especially significant for the development of physical self-concept. An important factor in physical self-concept development is participation in physical activities; it has even been suggested that adolescent involvement in competitive sports increases physical self-concept. Gender identity A person's gender identity is a sense of one's own gender. These ideas typically form in young children. According to the International Encyclopedia of Marriage and Family, gender identity is developed at an early age, when the child starts to communicate; by eighteen months to two years of age, the child begins to identify as a girl or a boy. After this stage, some consider gender identity already formed, although others consider non-gendered identities more salient at such a young age. Kohlberg noted that gender constancy occurs by the ages of five to six, when a child becomes well aware of their gender identity. Both biological and social factors may influence identities such as a sense of individuality, identities of place, as well as gendered identities. With regard to environmental attitudes, some suggest that women care about the environment more than men do. 
Forms of gender stereotyping are also important to consider in clinical settings. For example, a study at Kuwait University with a small sample of 102 individuals with gender dysphoria examined self-concept, masculinity and femininity. The findings were that children who grew up with weaker family bonds had a lower self-concept. Clearly, it is important to consider the context of social and political attitudes and beliefs before drawing any conclusions about gender identities in relation to personality, particularly about mental health and issues around acceptable behaviours. Measures Motivational properties Self-concept can have motivational properties. There are four types of motives in particular that are most related to self-concept:
Self-assessment: the desire to receive information about the self that is accurate
Self-enhancement: the desire to receive feedback that informs the self of positive or desirable characteristics
Self-verification: the desire to confirm what one already knows about the self
Self-improvement: the desire to learn things that will help to improve the self
Some of these motives may be more prominent depending on the situation. In Western societies, the most automatic is the self-enhancement motive, which may be dominant in some situations where motives contradict one another. For example, the self-enhancement motive may contradict and dominate the self-assessment motive if one seeks out inaccurate compliments rather than honest feedback. Additionally, self-concept can motivate behavior because people tend to act in ways that reaffirm their self-concept, which is consistent with the idea of the self-verification motive. In particular, if people perceive the self a certain way and receive feedback contrary to this perception, a tension is produced that motivates them to reestablish consistency between environmental feedback and self-concept. For example, if someone believes herself to be outgoing, but someone tells her she is shy, she may be motivated to avoid that person or the environment in which she met that person because it is inconsistent with her self-concept of being an outgoing person. Further, another major motivational property of self-concept comes from the desire to eliminate the discrepancy between one's current self-concept and his or her ideal possible self. This parallels the idea of the self-improvement motive. For example, if one's current self-concept is that she is a novice at piano playing, though she wants to become a concert pianist, this discrepancy will generate motivation to engage in behaviors (like practicing playing piano) that will bring her closer to her ideal possible self (being a concert pianist). Cultural differences Worldviews about one's self in relation to others differ across and within cultures. Western cultures place particular importance on personal independence and on the expression of one's own attributes (i.e. the self is more important than the group). This is not to say that those in an independent culture do not identify with and support their society or culture; there is simply a different type of relationship. Non-Western cultures favor an interdependent view of the self: interpersonal relationships are more important than one's individual accomplishments, and individuals experience a sense of oneness with the group. Such identity fusion can have positive and negative consequences. 
Identity fusion can give people the sense that their existence is meaningful, provided the person feels included within the society (for example, in Japan, the definition of the word for self roughly translates to "one's share of the shared life space"). Identity fusion can also harm one's self-concept, because one's behaviors and thoughts must be able to change to continue to align with those of the overall group. Non-interdependent self-concepts can also differ between cultural traditions. Additionally, one's social norms and cultural identities have a large effect on self-concept and mental well-being. When a person can clearly define their culture's norms and how those play a part in their life, that person is more likely to have a positive self-identity, leading to better self-concept and psychological welfare. One example of this concerns consistency. One of the social norms within a Western, independent culture is consistency, which allows each person to maintain their self-concept over time. The social norm in a non-Western, interdependent culture places a larger focus on one's ability to be flexible and to change as the group and environment change. If this social norm is not followed in either culture, this can lead to a disconnection from one's social identity, which affects personality, behavior, and overall self-concept. Buddhists emphasize the impermanence of any self-concept. Anit Somech, an organizational psychologist and professor who carried out a small study in Israel, showed that the divide between independent and interdependent self-concepts exists within cultures as well. Researchers compared mid-level managers in an urban community with those in a kibbutz (collective community). The managers from the urban community followed the independent culture. When asked to describe themselves, they primarily used descriptions of their own personal traits without comparison to others within their group. When the independent, urban managers gave interdependent-type responses, most were focused on work or school, these being the two biggest groups identified within an independent culture. The kibbutz managers followed the interdependent culture. They used hobbies and preferences to describe their traits, which is more frequently seen in interdependent cultures, as these serve as a means of comparison with others in their society. There was also a large focus on residence, reflecting the fact that they share resources and living space with the others from the kibbutz. These types of differences were also seen in a study done with Swedish and Japanese adolescents. Typically, these would both be considered non-Western cultures, but the Swedes showed more independent traits, while the Japanese followed the expected interdependent traits. Along with viewing one's identity as part of a group, another factor that coincides with self-concept is stereotype threat. Many working names have been used for this term: stigmatization, stigma pressure, stigma vulnerability and stereotype vulnerability. The terminology settled upon by Claude Steele and Joshua Aronson to describe this situational predicament was "stereotype threat", a term that "captures the idea of a situational predicament as a contingency of their [marginalized] group identity, a real threat of judgment or treatment in the person's environment that went beyond any limitations within." 
Steele and Aronson described the idea of stereotype threat in their study of how this socio-psychological notion affected the intellectual performance of African Americans. They tested their hypothesis by administering a diagnostic exam to two different groups: African American and White students. For one group a stereotype threat was introduced, while the other served as a control. The findings were that, after controlling for intellectual ability, the academic performance of the African American students was significantly lower than that of their White counterparts when a stereotype threat was perceived. Since Steele and Aronson introduced the concept of stereotype threat, other research has demonstrated the applicability of this idea to other groups. When one's actions could negatively influence general assumptions of a stereotype, those actions are consciously emphasized. Instead of one's individual characteristics, one's categorization into a social group is what society views objectively – which could be perceived as a negative stereotype, thus creating a threat. "The notion that stereotypes held about a particular group may create psychologically threatening situations associated with fears of confirming judgment about one's group, and in turn, inhibit learning and performance." The presence of stereotype threat perpetuates a "hidden curriculum" that further marginalizes minority groups. Hidden curriculum refers to a covert expression of prejudice where one standard is accepted as the "set and right way to do things". More specifically, the hidden curriculum is an unintended transmission of social constructs that operate in the social environment of an educational setting or classroom. In the United States' educational system, this caters to dominant culture groups in American society. "A primary source of stereotyping is often the teachers education program itself. It is in these programs that teachers learn that poor students and students of color should be expected to achieve less than their 'mainstream' counterparts." These child-deficit assumptions are built into the programs that instruct teachers, and they lead to inadvertently testing all students against a "mainstream" standard that is not necessarily academic and that does not account for the social values and norms of non-"mainstream" students. For example, the model of "teacher as the formal authority" is the orthodox teaching role that was perpetuated for many years, until the 21st-century teaching model arrived. One of the five main teaching styles proposed by Anthony Grasha, a cognitive and social psychologist who died in 2003, the authoritarian style is described as believing that there are "correct, acceptable, and standard ways to do things". Gender issues Some say girls tend to prefer one-on-one (dyadic) interaction, forming tight, intimate bonds, while boys prefer group activities. One study in particular found that boys performed almost twice as well in groups as in pairs, whereas girls did not show such a difference. In early adolescence, the variations in physical self-concepts appear slightly stronger for boys than for girls. This includes self-concepts about movement, body, appearance and other physical attributes. 
Yet during periods of physical change such as infancy, adolescence and ageing, it is particularly useful to compare these self-concepts with measured skills before drawing broad conclusions. Some studies suggest that self-concepts of social behaviours are substantially similar across genders, with specific variations for girls and boys. For instance, girls are more likely than boys to wait their turn to speak, agree with others, and acknowledge the contributions of others. Boys, it seems, see themselves as building larger group relationships based on shared interests, and are more likely to threaten, boast, and call names. In mixed-sex pairs of children aged 33 months, girls were more likely to passively watch a boy play, and boys were more likely to be unresponsive to what the girls were saying. In some cultures, such stereotypical traits are sustained from childhood to adulthood, suggesting a strong influence of other people's expectations in these cultures. The impact of social self-concepts on social behaviours, and of social behaviours on social self-concepts, is a vital area of ongoing research. In contrast, research suggests overall similarities between gender groups in self-concepts about academic work. In general, any variations are systematically gender-based yet small in terms of effect sizes: overall academic self-concept is slightly stronger for men than for women in mathematics, science and technology, and slightly stronger for women than for men with regard to language-related skills. It is important to observe that there is little link between self-concepts and skills (correlations of about r = 0.19 are rather weak, even if statistically significant with large samples). Clearly, even small variations in perceived self-concepts tend to reflect gender stereotypes evident in some cultures. In recent years, more women have been entering the STEM fields, working in predominantly mathematics, technology and science related careers. Many factors play a role in how gender effects on self-concept accumulate into attitudes to mathematics and science; in particular, the impact of other people's expectations, rather than of role models, on our self-concepts. Media A commonly asked question is "why do people choose one form of media over another?" According to the Galileo Model, different forms of media are spread throughout a three-dimensional space: the closer one form of media is to another, the more similar the two are; the farther apart two forms of media are in the space, the less similar they are. For example, mobile and cell phone are located closest in the space, whereas newspaper and texting are farthest apart. The study further explained the relationship between self-concept and the use of different forms of media: the more hours per day an individual uses a form of media, the closer that form of media is to their self-concept. Self-concept is thus related to the form of media most used. If one considers oneself tech savvy, then one will use mobile phones more often than one would use a newspaper. If one considers oneself old fashioned, then one will use a magazine more often than one would instant message. In this day and age, social media is where people experience most of their communication. In developing a sense of self on a psychological level, feeling part of a greater body, such as a social, emotional, or political body, can affect how one feels about oneself. If a person is included in or excluded from a group, that can affect how they form their identities. 
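To make the spatial logic of the Galileo Model concrete, here is a minimal Python sketch. The coordinates, the choice of media labels, and the use of plain Euclidean distance are illustrative assumptions for this example rather than figures or methods taken from the Galileo research; the only point carried over from the text above is that smaller distances stand for more similar forms of media.

import math

# Hypothetical 3-D coordinates for a few media forms (illustrative values only).
media_positions = {
    "mobile phone": (1.0, 0.8, 0.2),
    "texting":      (1.1, 0.9, 0.3),
    "newspaper":    (4.5, 3.2, 2.8),
    "magazine":     (4.2, 3.0, 2.5),
}

def media_distance(a, b):
    # Euclidean distance between two media forms: smaller means more similar.
    return math.dist(media_positions[a], media_positions[b])

print(round(media_distance("mobile phone", "texting"), 2))  # small: similar media
print(round(media_distance("texting", "newspaper"), 2))     # large: dissimilar media

On this picture, the claim that heavier daily use pulls a medium closer to one's self-concept could be modelled by treating the self as one more point in the same space and measuring its distance to each medium.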
Social media is increasingly a place not only for expressing an already formed identity, but also for exploring and experimenting with developing identities. In the United Kingdom, a study about changing identities revealed that some people believe that partaking in online social media was the first time they had felt like themselves, and that they had achieved their true identities; they also revealed that these online identities transferred to their offline identities. A 2007 study of adolescents aged 12 to 18 examined the ways in which social media affects the formation of an identity. The study found that it affected the formation in three different ways: risk taking, communication of personal views, and perceptions of influences. In this particular study, risk-taking behavior was engaging with strangers. When it came to communication about personal views, half of the participants reported that it was easier to express these opinions online, because they felt an enhanced ability to be creative and meaningful. When it came to others' opinions, one subject reported finding out more about themselves, such as their openness to experience, because of receiving differing opinions on things such as relationships.
Thought
In their most common sense, the terms thought and thinking refer to cognitive processes that can happen independently of sensory stimulation. Their most paradigmatic forms are judging, reasoning, concept formation, problem solving, and deliberation. But other mental processes, like considering an idea, memory, or imagination, are also often included. These processes can happen internally independent of the sensory organs, unlike perception. But when understood in the widest sense, any mental event may be understood as a form of thinking, including perception and unconscious mental processes. In a slightly different sense, the term thought refers not to the mental processes themselves but to mental states or systems of ideas brought about by these processes. Various theories of thinking have been proposed, some of which aim to capture the characteristic features of thought. Platonists hold that thinking consists in discerning and inspecting Platonic forms and their interrelations. It involves the ability to discriminate between the pure Platonic forms themselves and the mere imitations found in the sensory world. According to Aristotelianism, to think about something is to instantiate in one's mind the universal essence of the object of thought. These universals are abstracted from sense experience and are not understood as existing in a changeless intelligible world, in contrast to Platonism. Conceptualism is closely related to Aristotelianism: it identifies thinking with mentally evoking concepts instead of instantiating essences. Inner speech theories claim that thinking is a form of inner speech in which words are silently expressed in the thinker's mind. According to some accounts, this happens in a regular language, like English or French. The language of thought hypothesis, on the other hand, holds that this happens in the medium of a unique mental language called Mentalese. Central to this idea is that linguistic representational systems are built up from atomic and compound representations and that this structure is also found in thought. Associationists understand thinking as the succession of ideas or images. They are particularly interested in the laws of association that govern how the train of thought unfolds. Behaviorists, by contrast, identify thinking with behavioral dispositions to engage in public intelligent behavior as a reaction to particular external stimuli. Computationalism is the most recent of these theories. It sees thinking in analogy to how computers work in terms of the storage, transmission, and processing of information. Various types of thinking are discussed in academic literature. A judgment is a mental operation in which a proposition is evoked and then either affirmed or denied. Reasoning, on the other hand, is the process of drawing conclusions from premises or evidence. Both judging and reasoning depend on the possession of the relevant concepts, which are acquired in the process of concept formation. In the case of problem solving, thinking aims at reaching a predefined goal by overcoming certain obstacles. Deliberation is an important form of practical thought that consists in formulating possible courses of action and assessing the reasons for and against them. This may lead to a decision by choosing the most favorable option. Both episodic memory and imagination present objects and situations internally, in an attempt to accurately reproduce what was previously experienced or as a free rearrangement, respectively. 
Unconscious thought is thought that happens without being directly experienced. It is sometimes posited to explain how difficult problems are solved in cases where no conscious thought was employed. Thought is discussed in various academic disciplines. Phenomenology is interested in the experience of thinking. An important question in this field concerns the experiential character of thinking and to what extent this character can be explained in terms of sensory experience. Metaphysics is, among other things, interested in the relation between mind and matter. This concerns the question of how thinking can fit into the material world as described by the natural sciences. Cognitive psychology aims to understand thought as a form of information processing. Developmental psychology, on the other hand, investigates the development of thought from birth to maturity and asks which factors this development depends on. Psychoanalysis emphasizes the role of the unconscious in mental life. Other fields concerned with thought include linguistics, neuroscience, artificial intelligence, biology, and sociology. Various concepts and theories are closely related to the topic of thought. The term "law of thought" refers to three fundamental laws of logic: the law of contradiction, the law of excluded middle, and the principle of identity. Counterfactual thinking involves mental representations of non-actual situations and events in which the thinker tries to assess what would be the case if things had been different. Thought experiments often employ counterfactual thinking in order to illustrate theories or to test their plausibility. Critical thinking is a form of thinking that is reasonable, reflective, and focused on determining what to believe or how to act. Positive thinking involves focusing one's attention on the positive aspects of one's situation and is intimately related to optimism. Definition The terms "thought" and "thinking" refer to a wide variety of psychological activities. In their most common sense, they are understood as conscious processes that can happen independently of sensory stimulation. This includes various different mental processes, like considering an idea or proposition or judging it to be true. In this sense, memory and imagination are forms of thought but perception is not. In a more restricted sense, only the most paradigmatic cases are considered thought. These involve conscious processes that are conceptual or linguistic and sufficiently abstract, like judging, inferring, problem solving, and deliberating. Sometimes the terms "thought" and "thinking" are understood in a very wide sense as referring to any form of mental process, conscious or unconscious. In this sense, they may be used synonymously with the term "mind". This usage is encountered, for example, in the Cartesian tradition, where minds are understood as thinking things, and in the cognitive sciences. But this sense may include the restriction that such processes have to lead to intelligent behavior to be considered thought. A contrast sometimes found in the academic literature is that between thinking and feeling. In this context, thinking is associated with a sober, dispassionate, and rational approach to its topic while feeling involves a direct emotional engagement. The terms "thought" and "thinking" can also be used to refer not to the mental processes themselves but to mental states or systems of ideas brought about by these processes. 
In this sense, they are often synonymous with the term "belief" and its cognates and may refer to the mental states which either belong to an individual or are common among a certain group of people. Discussions of thought in the academic literature often leave it implicit which sense of the term they have in mind. The word thought comes from Old English þoht, or geþoht, from the stem of þencan "to conceive of in the mind, consider". Theories of thinking Various theories of thinking have been proposed. They aim to capture the characteristic features of thinking. The theories listed here are not exclusive: it may be possible to combine some without leading to a contradiction. Platonism According to Platonism, thinking is a spiritual activity in which Platonic forms and their interrelations are discerned and inspected. This activity is understood as a form of silent inner speech in which the soul talks to itself. Platonic forms are seen as universals that exist in a changeless realm different from the sensible world. Examples include the forms of goodness, beauty, unity, and sameness. On this view, the difficulty of thinking consists in being unable to grasp the Platonic forms and to distinguish them as the original from the mere imitations found in the sensory world. This means, for example, distinguishing beauty itself from derivative images of beauty. One problem for this view is to explain how humans can learn and think about Platonic forms belonging to a different realm. Plato himself tries to solve this problem through his theory of recollection, according to which the soul already was in contact with the Platonic forms before and is therefore able to remember what they are like. But this explanation depends on various assumptions usually not accepted in contemporary thought. Aristotelianism and conceptualism Aristotelians hold that the mind is able to think about something by instantiating the essence of the object of thought. So while thinking about trees, the mind instantiates tree-ness. This instantiation does not happen in matter, as is the case for actual trees, but in mind, though the universal essence instantiated in both cases is the same. In contrast to Platonism, these universals are not understood as Platonic forms existing in a changeless intelligible world. Instead, they only exist to the extent that they are instantiated. The mind learns to discriminate universals through abstraction from experience. This explanation avoids various of the objections raised against Platonism. Conceptualism is closely related to Aristotelianism. It states that thinking consists in mentally evoking concepts. Some of these concepts may be innate, but most have to be learned through abstraction from sense experience before they can be used in thought. It has been argued against these views that they have problems in accounting for the logical form of thought. For example, to think that it will either rain or snow, it is not sufficient to instantiate the essences of rain and snow or to evoke the corresponding concepts. The reason for this is that the disjunctive relation between the rain and the snow is not captured this way. Another problem shared by these positions is the difficulty of giving a satisfying account of how essences or concepts are learned by the mind through abstraction. Inner speech theory Inner speech theories claim that thinking is a form of inner speech. This view is sometimes termed psychological nominalism. 
It states that thinking involves silently evoking words and connecting them to form mental sentences. The knowledge a person has of their thoughts can be explained as a form of overhearing one's own silent monologue. Three central aspects are often ascribed to inner speech: it is in an important sense similar to hearing sounds, it involves the use of language and it constitutes a motor plan that could be used for actual speech. This connection to language is supported by the fact that thinking is often accompanied by muscle activity in the speech organs. This activity may facilitate thinking in certain cases but is not necessary for it in general. According to some accounts, thinking happens not in a regular language, like English or French, but has its own type of language with the corresponding symbols and syntax. This theory is known as the language of thought hypothesis. Inner speech theory has a strong initial plausibility since introspection suggests that indeed many thoughts are accompanied by inner speech. But its opponents usually contend that this is not true for all types of thinking. It has been argued, for example, that forms of daydreaming constitute non-linguistic thought. This issue is relevant to the question of whether animals have the capacity to think. If thinking is necessarily tied to language then this would suggest that there is an important gap between humans and animals since only humans have a sufficiently complex language. But the existence of non-linguistic thoughts suggests that this gap may not be that big and that some animals do indeed think. Language of thought hypothesis There are various theories about the relation between language and thought. One prominent version in contemporary philosophy is called the language of thought hypothesis. It states that thinking happens in the medium of a mental language. This language, often referred to as Mentalese, is similar to regular languages in various respects: it is composed of words that are connected to each other in syntactic ways to form sentences. This claim does not merely rest on an intuitive analogy between language and thought. Instead, it provides a clear definition of the features a representational system has to embody in order to have a linguistic structure. On the level of syntax, the representational system has to possess two types of representations: atomic and compound representations. Atomic representations are basic whereas compound representations are constituted either by other compound representations or by atomic representations. On the level of semantics, the semantic content or the meaning of the compound representations should depend on the semantic contents of its constituents. A representational system is linguistically structured if it fulfills these two requirements. The language of thought hypothesis states that the same is true for thinking in general. This would mean that thought is composed of certain atomic representational constituents that can be combined as described above. Apart from this abstract characterization, no further concrete claims are made about how human thought is implemented by the brain or which other similarities to natural language it has. The language of thought hypothesis was first introduced by Jerry Fodor. He argues in favor of this claim by holding that it constitutes the best explanation of the characteristic features of thinking. 
One of these features is productivity: a system of representations is productive if it can generate an infinite number of unique representations based on a low number of atomic representations. This applies to thought since human beings are capable of entertaining an infinite number of distinct thoughts even though their mental capacities are quite limited. Other characteristic features of thinking include systematicity and inferential coherence. Fodor argues that the language of thought hypothesis is true as it explains how thought can have these features and because there is no good alternative explanation. Some arguments against the language of thought hypothesis are based on neural networks, which are able to produce intelligent behavior without depending on representational systems. Other objections focus on the idea that some mental representations happen non-linguistically, for example, in the form of maps or images. Computationalists have been especially interested in the language of thought hypothesis since it provides ways to close the gap between thought in the human brain and computational processes implemented by computers. The reason for this is that processes over representations that respect syntax and semantics, like inferences according to the modus ponens, can be implemented by physical systems using causal relations. The same linguistic systems may be implemented through different material systems, like brains or computers. In this way, computers can think. Associationism An important view in the empiricist tradition has been associationism, the view that thinking consists in the succession of ideas or images. This succession is seen as being governed by laws of association, which determine how the train of thought unfolds. These laws are different from logical relations between the contents of thoughts, which are found in the case of drawing inferences by moving from the thought of the premises to the thought of the conclusion. Various laws of association have been suggested. According to the laws of similarity and contrast, ideas tend to evoke other ideas that are either very similar to them or their opposite. The law of contiguity, on the other hand, states that if two ideas were frequently experienced together, then the experience of one tends to cause the experience of the other. In this sense, the history of an organism's experience determines which thoughts the organism has and how these thoughts unfold. But such an association does not guarantee that the connection is meaningful or rational. For example, because of the association between the terms "cold" and "Idaho", the thought "this coffee shop is cold" might lead to the thought "Russia should annex Idaho". One form of associationism is imagism. It states that thinking involves entertaining a sequence of images where earlier images conjure up later images based on the laws of association. One problem with this view is that we can think about things that we cannot imagine. This is especially relevant when the thought involves very complex objects or infinities, which is common, for example, in mathematical thought. One criticism directed at associationism in general is that its claim is too far-reaching. There is wide agreement that associative processes as studied by associationists play some role in how thought unfolds. But the claim that this mechanism is sufficient to understand all thought or all mental processes is usually not accepted. 
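The associationist mechanism described above can be made concrete with a small toy model. The following Python sketch is purely illustrative and is not drawn from the associationist literature: the example ideas, the numeric association strengths, and the update rule are invented assumptions, used only to show how a law of contiguity might steer a train of thought from one idea to the next.

# Toy illustration of a "law of contiguity": ideas experienced together become
# more strongly linked, and the train of thought follows the strongest link.
# All ideas and numbers are made up for the example.
from collections import defaultdict

strength = defaultdict(float)  # association strength between pairs of ideas

def experience_together(idea_a, idea_b, amount=1.0):
    """Contiguity: co-experienced ideas become more strongly associated."""
    strength[(idea_a, idea_b)] += amount
    strength[(idea_b, idea_a)] += amount

def next_idea(current, ideas):
    """The train of thought moves to the most strongly associated other idea."""
    return max((i for i in ideas if i != current),
               key=lambda i: strength[(current, i)])

ideas = ["coffee shop", "cold", "Idaho", "winter"]
experience_together("coffee shop", "cold")            # frequent co-occurrence
experience_together("cold", "winter", amount=2.0)
experience_together("cold", "Idaho", amount=0.5)

thought = "coffee shop"
train = []
for _ in range(2):
    thought = next_idea(thought, ideas)
    train.append(thought)
print(train)  # ['cold', 'winter']: driven by strength of association, not logic

As the passage notes, a walk of this kind tracks strength of association rather than logical relevance, which is why the resulting train of thought need not be meaningful or rational.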
Behaviorism According to behaviorism, thinking consists in behavioral dispositions to engage in certain publicly observable behavior as a reaction to particular external stimuli. On this view, having a particular thought is the same as having a disposition to behave in a certain way. This view is often motivated by empirical considerations: it is very difficult to study thinking as a private mental process but it is much easier to study how organisms react to a certain situation with a given behavior. In this sense, the capacity to solve problems not through existing habits but through creative new approaches is particularly relevant. The term "behaviorism" is also sometimes used in a slightly different sense when applied to thinking to refer to a specific form of inner speech theory. This view focuses on the idea that the relevant inner speech is a derivative form of regular outward speech. This sense overlaps with how behaviorism is understood more commonly in philosophy of mind since these inner speech acts are not observed by the researcher but merely inferred from the subject's intelligent behavior. This remains true to the general behaviorist principle that behavioral evidence is required for any psychological hypothesis. One problem for behaviorism is that the same entity often behaves differently despite being in the same situation as before. This problem consists in the fact that individual thoughts or mental states usually do not correspond to one particular behavior. So thinking that the pie is tasty does not automatically lead to eating the pie, since various other mental states may still inhibit this behavior, for example, the belief that it would be impolite to do so or that the pie is poisoned. Computationalism Computationalist theories of thinking, often found in the cognitive sciences, understand thinking as a form of information processing. These views developed with the rise of computers in the second part of the 20th century, when various theorists saw thinking in analogy to computer operations. On such views, the information may be encoded differently in the brain, but in principle, the same operations take place there as well, corresponding to the storage, transmission, and processing of information. But while this analogy has some intuitive attraction, theorists have struggled to give a more explicit explanation of what computation is. A further problem consists in explaining the sense in which thinking is a form of computing. The traditionally dominant view defines computation in terms of Turing machines, though contemporary accounts often focus on neural networks for their analogies. A Turing machine is capable of executing any algorithm based on a few very basic principles, such as reading a symbol from a cell, writing a symbol to a cell, and executing instructions based on the symbols read. This way it is possible to perform deductive reasoning following the inference rules of formal logic as well as simulating many other functions of the mind, such as language processing, decision making, and motor control. But computationalism does not only claim that thinking is in some sense similar to computation. Instead, it is claimed that thinking just is a form of computation or that the mind is a Turing machine. Computationalist theories of thought are sometimes divided into functionalist and representationalist approaches. Functionalist approaches define mental states through their causal roles but allow both external and internal events in their causal network. 
Thought may be seen as a form of program that can be executed in the same way by many different systems, including humans, animals, and even robots. According to one such view, whether something is a thought only depends on its role "in producing further internal states and verbal outputs". Representationalism, on the other hand, focuses on the representational features of mental states and defines thoughts as sequences of intentional mental states. In this sense, computationalism is often combined with the language of thought hypothesis by interpreting these sequences as symbols whose order is governed by syntactic rules. Various arguments have been raised against computationalism. In one sense, it seems trivial since almost any physical system can be described as executing computations and therefore as thinking. For example, it has been argued that the molecular movements in a regular wall can be understood as computing an algorithm since they are "isomorphic to the formal structure of the program" in question under the right interpretation. This would lead to the implausible conclusion that the wall is thinking. Another objection focuses on the idea that computationalism captures only some aspects of thought but is unable to account for other crucial aspects of human cognition. Types of thinking A great variety of types of thinking are discussed in the academic literature. A common approach divides them into those forms that aim at the creation of theoretical knowledge and those that aim at producing actions or correct decisions, but there is no universally accepted taxonomy summarizing all these types. Entertaining, judging, and reasoning Thinking is often identified with the act of judging. A judgment is a mental operation in which a proposition is evoked and then either affirmed or denied. It involves deciding what to believe and aims at determining whether the judged proposition is true or false. Various theories of judgment have been proposed. The traditionally dominant approach is the combination theory. It states that judgments consist in the combination of concepts. On this view, to judge that "all men are mortal" is to combine the concepts "man" and "mortal". The same concepts can be combined in different ways, corresponding to different forms of judgment, for example, as "some men are mortal" or "no man is mortal". Other theories of judgment focus more on the relation between the judged proposition and reality. According to Franz Brentano, a judgment is either a belief or a disbelief in the existence of some entity. In this sense, there are only two fundamental forms of judgment: "A exists" and "A does not exist". When applied to the sentence "all men are mortal", the entity in question is "immortal men", of whom it is said that they do not exist. Important for Brentano is the distinction between the mere representation of the content of the judgment and the affirmation or the denial of the content. The mere representation of a proposition is often referred to as "entertaining a proposition". This is the case, for example, when one considers a proposition but has not yet made up one's mind about whether it is true or false. The term "thinking" can refer both to judging and to mere entertaining. This difference is often explicit in the way the thought is expressed: "thinking that" usually involves a judgment whereas "thinking about" refers to the neutral representation of a proposition without an accompanying belief. 
In this case, the proposition is merely entertained but not yet judged. Some forms of thinking may involve the representation of objects without any propositions, as when someone is thinking about their grandmother. Reasoning is one of the most paradigmatic forms of thinking. It is the process of drawing conclusions from premises or evidence. Types of reasoning can be divided into deductive and non-deductive reasoning. Deductive reasoning is governed by certain rules of inference, which guarantee the truth of the conclusion if the premises are true. For example, given the premises "all men are mortal" and "Socrates is a man", it follows deductively that "Socrates is mortal". Non-deductive reasoning, also referred to as defeasible reasoning or non-monotonic reasoning, is still rationally compelling but the truth of the conclusion is not ensured by the truth of the premises. Induction is one form of non-deductive reasoning, for example, when one concludes that "the sun will rise tomorrow" based on one's experiences of all the previous days. Other forms of non-deductive reasoning include the inference to the best explanation and analogical reasoning. Fallacies are faulty forms of thinking that go against the norms of correct reasoning. Formal fallacies concern faulty inferences found in deductive reasoning. Denying the antecedent is one type of formal fallacy, for example, "If Othello is a bachelor, then he is male. Othello is not a bachelor. Therefore, Othello is not male". Informal fallacies, on the other hand, apply to all types of reasoning. The source of their flaw is to be found in the content or the context of the argument. This is often caused by ambiguous or vague expressions in natural language, as in "Feathers are light. What is light cannot be dark. Therefore, feathers cannot be dark". An important aspect of fallacies is that they seem to be rationally compelling on the first look and thereby seduce people into accepting and committing them. Whether an act of reasoning constitutes a fallacy does not depend on whether the premises are true or false but on their relation to the conclusion and, in some cases, on the context. Concept formation Concepts are general notions that constitute the fundamental building blocks of thought. They are rules that govern how objects are sorted into different classes. A person can only think about a proposition if they possess the concepts involved in this proposition. For example, the proposition "wombats are animals" involves the concepts "wombat" and "animal". Someone who does not possess the concept "wombat" may still be able to read the sentence but cannot entertain the corresponding proposition. Concept formation is a form of thinking in which new concepts are acquired. It involves becoming familiar with the characteristic features shared by all instances of the corresponding type of entity and developing the ability to identify positive and negative cases. This process usually corresponds to learning the meaning of the word associated with the type in question. There are various theories concerning how concepts and concept possession are to be understood. The use of metaphor may aid in the processes of concept formation. According to one popular view, concepts are to be understood in terms of abilities. On this view, two central aspects characterize concept possession: the ability to discriminate between positive and negative cases and the ability to draw inferences from this concept to related concepts. 
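The abilities view of concept possession just described can be pictured with a brief sketch. The Python below is only an illustration, not a claim about how concepts are mentally implemented; the example concept "wombat", its classification test, and its inference links are hypothetical stand-ins for the two abilities of discriminating cases and drawing inferences.

# Illustrative sketch: possessing a concept modeled as (1) a way to discriminate
# positive from negative cases and (2) links licensing inferences to related
# concepts. All details are invented for the example.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Concept:
    name: str
    is_instance: Callable[[Dict[str, str]], bool]         # ability 1: discrimination
    inferences: List[str] = field(default_factory=list)   # ability 2: inference links

wombat = Concept(
    name="wombat",
    is_instance=lambda thing: thing.get("kind") == "wombat",
    inferences=["animal", "marsupial"],  # e.g. "wombats are animals"
)

candidate = {"kind": "wombat", "name": "Willow"}
if wombat.is_instance(candidate):  # a positive case is recognized
    print(f"{candidate['name']} falls under: {', '.join(wombat.inferences)}")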
Concept formation corresponds to acquiring both of these abilities: discriminating positive from negative cases and drawing the relevant inferences. It has been suggested that animals are also able to learn concepts to some extent, due to their ability to discriminate between different types of situations and to adjust their behavior accordingly.

Problem solving
In the case of problem solving, thinking aims at reaching a predefined goal by overcoming certain obstacles. This process often involves two different forms of thinking. On the one hand, divergent thinking aims at coming up with as many alternative solutions as possible. On the other hand, convergent thinking tries to narrow down the range of alternatives to the most promising candidates. Some researchers identify various steps in the process of problem solving. These steps include recognizing the problem, trying to understand its nature, identifying general criteria the solution should meet, deciding how these criteria should be prioritized, monitoring the progress, and evaluating the results.

An important distinction concerns the type of problem that is faced. For well-structured problems, it is easy to determine which steps need to be taken to solve them, but executing these steps may still be difficult. For ill-structured problems, on the other hand, it is not clear what steps need to be taken, i.e. there is no clear formula that would lead to success if followed correctly. In this case, the solution may sometimes come in a flash of insight in which the problem is suddenly seen in a new light.

Another way to categorize different forms of problem solving is by distinguishing between algorithms and heuristics. An algorithm is a formal procedure in which each step is clearly defined. It guarantees success if applied correctly. The long multiplication usually taught in school is an example of an algorithm for solving the problem of multiplying big numbers. Heuristics, on the other hand, are informal procedures. They are rough rules of thumb that tend to bring the thinker closer to the solution, but success is not guaranteed in every case even if they are followed correctly. Examples of heuristics are working forward and working backward. These approaches involve planning one step at a time, either starting at the beginning and moving forward or starting at the end and moving backward. So when planning a trip, one could plan the different stages of the trip from origin to destination in the chronological order in which the trip will be realized, or in the reverse order. Obstacles to problem solving can arise from the thinker's failure to take certain possibilities into account by fixating on one specific course of action. There are important differences between how novices and experts solve problems. For example, experts tend to allocate more time to conceptualizing the problem and work with more complex representations, whereas novices tend to devote more time to executing putative solutions.

Deliberation and decision
Deliberation is an important form of practical thinking. It aims at formulating possible courses of action and assessing their value by considering the reasons for and against them. This involves foresight to anticipate what might happen. Based on this foresight, different courses of action can be formulated in order to influence what will happen. Decisions are an important part of deliberation. They are about comparing alternative courses of action and choosing the most favorable one. Decision theory is a formal model of how ideal rational agents would make decisions.
It is based on the idea that they should always choose the alternative with the highest expected value. Each alternative can lead to various possible outcomes, each of which has a different value. The expected value of an alternative consists in the sum of the values of each outcome associated with it multiplied by the probability that this outcome occurs. According to decision theory, a decision is rational if the agent chooses the alternative associated with the highest expected value, as assessed from the agent's own perspective. Various theorists emphasize the practical nature of thought, i.e. that thinking is usually guided by some kind of task it aims to solve. In this sense, thinking has been compared to trial-and-error seen in animal behavior when faced with a new problem. On this view, the important difference is that this process happens inwardly as a form of simulation. This process is often much more efficient since once the solution is found in thought, only the behavior corresponding to the found solution has to be outwardly carried out and not all the others. Episodic memory and imagination When thinking is understood in a wide sense, it includes both episodic memory and imagination. In episodic memory, events one experienced in the past are relived. It is a form of mental time travel in which the past experience is re-experienced. But this does not constitute an exact copy of the original experience since the episodic memory involves additional aspects and information not present in the original experience. This includes both a feeling of familiarity and chronological information about the past event in relation to the present. Memory aims at representing how things actually were in the past, in contrast to imagination, which presents objects without aiming to show how things actually are or were. Because of this missing link to actuality, more freedom is involved in most forms of imagination: its contents can be freely varied, changed, and recombined to create new arrangements never experienced before. Episodic memory and imagination have in common with other forms of thought that they can arise internally without any stimulation of the sensory organs. But they are still closer to sensation than more abstract forms of thought since they present sensory contents that could, at least in principle, also be perceived. Unconscious thought Conscious thought is the paradigmatic form of thinking and is often the focus of the corresponding research. But it has been argued that some forms of thought also happen on the unconscious level. Unconscious thought is thought that happens in the background without being experienced. It is therefore not observed directly. Instead, its existence is usually inferred by other means. For example, when someone is faced with an important decision or a difficult problem, they may not be able to solve it straight away. But then, at a later time, the solution may suddenly flash before them even though no conscious steps of thinking were taken towards this solution in the meantime. In such cases, the cognitive labor needed to arrive at a solution is often explained in terms of unconscious thoughts. The central idea is that a cognitive transition happened and we need to posit unconscious thoughts to be able to explain how it happened. It has been argued that conscious and unconscious thoughts differ not just concerning their relation to experience but also concerning their capacities. 
According to unconscious thought theorists, for example, conscious thought excels at simple problems with few variables but is outperformed by unconscious thought when complex problems with many variables are involved. This is sometimes explained through the claim that the number of items one can consciously think about at the same time is rather limited whereas unconscious thought lacks such limitations. But other researchers have rejected the claim that unconscious thought is often superior to conscious thought. Other suggestions for the difference between the two forms of thinking include that conscious thought tends to follow formal logical laws while unconscious thought relies more on associative processing and that only conscious thinking is conceptually articulated and happens through the medium of language. In various disciplines Phenomenology Phenomenology is the science of the structure and contents of experience. The term "cognitive phenomenology" refers to the experiential character of thinking or what it feels like to think. Some theorists claim that there is no distinctive cognitive phenomenology. On such a view, the experience of thinking is just one form of sensory experience. According to one version, thinking just involves hearing a voice internally. According to another, there is no experience of thinking apart from the indirect effects thinking has on sensory experience. A weaker version of such an approach allows that thinking may have a distinct phenomenology but contends that thinking still depends on sensory experience because it cannot occur on its own. On this view, sensory contents constitute the foundation from which thinking may arise. An often-cited thought experiment in favor of the existence of a distinctive cognitive phenomenology involves two persons listening to a radio broadcast in French, one who understands French and the other who does not. The idea behind this example is that both listeners hear the same sounds and therefore have the same non-cognitive experience. In order to explain the difference, a distinctive cognitive phenomenology has to be posited: only the experience of the first person has this additional cognitive character since it is accompanied by a thought that corresponds to the meaning of what is said. Other arguments for the experience of thinking focus on the direct introspective access to thinking or on the thinker's knowledge of their own thoughts. Phenomenologists are also concerned with the characteristic features of the experience of thinking. Making a judgment is one of the prototypical forms of cognitive phenomenology. It involves epistemic agency, in which a proposition is entertained, evidence for and against it is considered, and, based on this reasoning, the proposition is either affirmed or rejected. It is sometimes argued that the experience of truth is central to thinking, i.e. that thinking aims at representing how the world is. It shares this feature with perception but differs from it in the way how it represents the world: without the use of sensory contents. One of the characteristic features often ascribed to thinking and judging is that they are predicative experiences, in contrast to the pre-predicative experience found in immediate perception. On such a view, various aspects of perceptual experience resemble judgments without being judgments in the strict sense. 
For example, the perceptual experience of the front of a house brings with it various expectations about aspects of the house not directly seen, like the size and shape of its other sides. This process is sometimes referred to as apperception. These expectations resemble judgments and can be wrong. This would be the case when it turns out upon walking around the "house" that it is no house at all but only a front facade of a house with nothing behind it. In this case, the perceptual expectations are frustrated and the perceiver is surprised. There is disagreement as to whether these pre-predicative aspects of regular perception should be understood as a form of cognitive phenomenology involving thinking. This issue is also important for understanding the relation between thought and language. The reason for this is that the pre-predicative expectations do not depend on language, which is sometimes taken as an example for non-linguistic thought. Various theorists have argued that pre-predicative experience is more basic or fundamental since predicative experience is in some sense built on top of it and therefore depends on it. Another way how phenomenologists have tried to distinguish the experience of thinking from other types of experiences is in relation to empty intentions in contrast to intuitive intentions. In this context, "intention" means that some kind of object is experienced. In intuitive intentions, the object is presented through sensory contents. Empty intentions, on the other hand, present their object in a more abstract manner without the help of sensory contents. So when perceiving a sunset, it is presented through sensory contents. The same sunset can also be presented non-intuitively when merely thinking about it without the help of sensory contents. In these cases, the same properties are ascribed to objects. The difference between these modes of presentation concerns not what properties are ascribed to the presented object but how the object is presented. Because of this commonality, it is possible for representations belonging to different modes to overlap or to diverge. For example, when searching one's glasses one may think to oneself that one left them on the kitchen table. This empty intention of the glasses lying on the kitchen table are then intuitively fulfilled when one sees them lying there upon arriving in the kitchen. This way, a perception can confirm or refute a thought depending on whether the empty intuitions are later fulfilled or not. Metaphysics The mind–body problem concerns the explanation of the relationship that exists between minds, or mental processes, and bodily states or processes. The main aim of philosophers working in this area is to determine the nature of the mind and mental states/processes, and how—or even if—minds are affected by and can affect the body. Human perceptual experiences depend on stimuli which arrive at one's various sensory organs from the external world and these stimuli cause changes in one's mental state, ultimately causing one to feel a sensation, which may be pleasant or unpleasant. Someone's desire for a slice of pizza, for example, will tend to cause that person to move his or her body in a specific manner and in a specific direction to obtain what he or she wants. The question, then, is how it can be possible for conscious experiences to arise out of a lump of gray matter endowed with nothing but electrochemical properties. A related problem is to explain how someone's propositional attitudes (e.g. 
beliefs and desires) can cause that individual's neurons to fire and his muscles to contract in exactly the correct manner. These comprise some of the puzzles that have confronted epistemologists and philosophers of mind from at least the time of René Descartes. The above reflects a classical, functional description of how we work as cognitive, thinking systems. However the apparently irresolvable mind–body problem is said to be overcome, and bypassed, by the embodied cognition approach, with its roots in the work of Heidegger, Piaget, Vygotsky, Merleau-Ponty and the pragmatist John Dewey. This approach states that the classical approach of separating the mind and analysing its processes is misguided: instead, we should see that the mind, actions of an embodied agent, and the environment it perceives and envisions, are all parts of a whole which determine each other. Therefore, functional analysis of the mind alone will always leave us with the mind–body problem which cannot be solved. Psychology Psychologists have concentrated on thinking as an intellectual exertion aimed at finding an answer to a question or the solution of a practical problem. Cognitive psychology is a branch of psychology that investigates internal mental processes such as problem solving, memory, and language; all of which are used in thinking. The school of thought arising from this approach is known as cognitivism, which is interested in how people mentally represent information processing. It had its foundations in the Gestalt psychology of Max Wertheimer, Wolfgang Köhler, and Kurt Koffka, and in the work of Jean Piaget, who provided a theory of stages/phases that describes children's cognitive development. Cognitive psychologists use psychophysical and experimental approaches to understand, diagnose, and solve problems, concerning themselves with the mental processes which mediate between stimulus and response. They study various aspects of thinking, including the psychology of reasoning, and how people make decisions and choices, solve problems, as well as engage in creative discovery and imaginative thought. Cognitive theory contends that solutions to problems either take the form of algorithms: rules that are not necessarily understood but promise a solution, or of heuristics: rules that are understood but that do not always guarantee solutions. Cognitive science differs from cognitive psychology in that algorithms that are intended to simulate human behavior are implemented or implementable on a computer. In other instances, solutions may be found through insight, a sudden awareness of relationships. In developmental psychology, Jean Piaget was a pioneer in the study of the development of thought from birth to maturity. In his theory of cognitive development, thought is based on actions on the environment. That is, Piaget suggests that the environment is understood through assimilations of objects in the available schemes of action and these accommodate to the objects to the extent that the available schemes fall short of the demands. As a result of this interplay between assimilation and accommodation, thought develops through a sequence of stages that differ qualitatively from each other in mode of representation and complexity of inference and understanding. That is, thought evolves from being based on perceptions and actions at the sensorimotor stage in the first two years of life to internal representations in early childhood. 
Subsequently, representations are gradually organized into logical structures which first operate on the concrete properties of the reality, in the stage of concrete operations, and then operate on abstract principles that organize concrete properties, in the stage of formal operations. In recent years, the Piagetian conception of thought was integrated with information processing conceptions. Thus, thought is considered as the result of mechanisms that are responsible for the representation and processing of information. In this conception, speed of processing, cognitive control, and working memory are the main functions underlying thought. In the neo-Piagetian theories of cognitive development, the development of thought is considered to come from increasing speed of processing, enhanced cognitive control, and increasing working memory. Positive psychology emphasizes the positive aspects of human psychology as equally important as the focus on mood disorders and other negative symptoms. In Character Strengths and Virtues, Peterson and Seligman list a series of positive characteristics. One person is not expected to have every strength, nor are they meant to fully capsulate that characteristic entirely. The list encourages positive thought that builds on a person's strengths, rather than how to "fix" their "symptoms". Psychoanalysis The "id", "ego" and "super-ego" are the three parts of the "psychic apparatus" defined in Sigmund Freud's structural model of the psyche; they are the three theoretical constructs in terms of whose activity and interaction mental life is described. According to this model, the uncoordinated instinctual trends are encompassed by the "id", the organized realistic part of the psyche is the "ego", and the critical, moralizing function is the "super-ego". For psychoanalysis, the unconscious does not include all that is not conscious, rather only what is actively repressed from conscious thought or what the person is averse to knowing consciously. In a sense this view places the self in relationship to their unconscious as an adversary, warring with itself to keep what is unconscious hidden. If a person feels pain, all he can think of is alleviating the pain. Any of his desires, to get rid of pain or enjoy something, command the mind what to do. For Freud, the unconscious was a repository for socially unacceptable ideas, wishes or desires, traumatic memories, and painful emotions put out of mind by the mechanism of psychological repression. However, the contents did not necessarily have to be solely negative. In the psychoanalytic view, the unconscious is a force that can only be recognized by its effects—it expresses itself in the symptom. The collective unconscious, sometimes known as collective subconscious, is a term of analytical psychology, coined by Carl Jung. It is a part of the unconscious mind, shared by a society, a people, or all humanity, in an interconnected system that is the product of all common experiences and contains such concepts as science, religion, and morality. While Freud did not distinguish between "individual psychology" and "collective psychology", Jung distinguished the collective unconscious from the personal subconscious particular to each human being. The collective unconscious is also known as "a reservoir of the experiences of our species". 
In the "Definitions" chapter of Jung's seminal work Psychological Types, under the definition of "collective" Jung references representations collectives, a term coined by Lucien Lévy-Bruhl in his 1910 book How Natives Think. Jung says this is what he describes as the collective unconscious. Freud, on the other hand, did not accept the idea of a collective unconscious. Related concepts and theories Laws of thought Traditionally, the term "laws of thought" refers to three fundamental laws of logic: the law of contradiction, the law of excluded middle, and the principle of identity. These laws by themselves are not sufficient as axioms of logic but they can be seen as important precursors to the modern axiomatization of logic. The law of contradiction states that for any proposition, it is impossible that both it and its negation are true: . According to the law of excluded middle, for any proposition, either it or its opposite is true: . The principle of identity asserts that any object is identical to itself: . There are different conceptions of how the laws of thought are to be understood. The interpretations most relevant to thinking are to understand them as prescriptive laws of how one should think or as formal laws of propositions that are true only because of their form and independent of their content or context. Metaphysical interpretations, on the other hand, see them as expressing the nature of "being as such". While there is a very wide acceptance of these three laws among logicians, they are not universally accepted. Aristotle, for example, held that there are some cases in which the law of excluded middle is false. This concerns primarily uncertain future events. On his view, it is currently "not ... either true or false that there will be a naval battle tomorrow". Modern intuitionist logic also rejects the law of excluded middle. This rejection is based on the idea that mathematical truth depends on verification through a proof. The law fails for cases where no such proof is possible, which exist in every sufficiently strong formal system, according to Gödel's incompleteness theorems. Dialetheists, on the other hand, reject the law of contradiction by holding that some propositions are both true and false. One motivation of this position is to avoid certain paradoxes in classical logic and set theory, like the liar's paradox and Russell's paradox. One of its problems is to find a formulation that circumvents the principle of explosion, i.e. that anything follows from a contradiction. Some formulations of the laws of thought include a fourth law: the principle of sufficient reason. It states that everything has a sufficient reason, ground, or cause. It is closely connected to the idea that everything is intelligible or can be explained in reference to its sufficient reason. According to this idea, there should always be a full explanation, at least in principle, to questions like why the sky is blue or why World War II happened. One problem for including this principle among the laws of thought is that it is a metaphysical principle, unlike the other three laws, which pertain primarily to logic. Counterfactual thinking Counterfactual thinking involves mental representations of non-actual situations and events, i.e. of what is "contrary to the facts". It is usually conditional: it aims at assessing what would be the case if a certain condition had obtained. In this sense, it tries to answer "What if"-questions. 
For example, thinking after an accident that one would be dead if one had not used the seatbelt is a form of counterfactual thinking: it assumes, contrary to the facts, that one had not used the seatbelt and tries to assess the result of this state of affairs. In this sense, counterfactual thinking is normally counterfactual only to a small degree since just a few facts are changed, like concerning the seatbelt, while most other facts are kept in place, like that one was driving, one's gender, the laws of physics, etc. When understood in the widest sense, there are forms of counterfactual thinking that do not involve anything contrary to the facts at all. This is the case, for example, when one tries to anticipate what might happen in the future if an uncertain event occurs and this event actually occurs later and brings with it the anticipated consequences. In this wider sense, the term "subjunctive conditional" is sometimes used instead of "counterfactual conditional". But the paradigmatic cases of counterfactual thinking involve alternatives to past events. Counterfactual thinking plays an important role since we evaluate the world around us not only by what actually happened but also by what could have happened. Humans have a greater tendency to engage in counterfactual thinking after something bad happened because of some kind of action the agent performed. In this sense, many regrets are associated with counterfactual thinking in which the agent contemplates how a better outcome could have been obtained if only they had acted differently. These cases are known as upward counterfactuals, in contrast to downward counterfactuals, in which the counterfactual scenario is worse than actuality. Upward counterfactual thinking is usually experienced as unpleasant, since it presents the actual circumstances in a bad light. This contrasts with the positive emotions associated with downward counterfactual thinking. But both forms are important since it is possible to learn from them and to adjust one's behavior accordingly to get better results in the future. Thought experiments Thought experiments involve thinking about imaginary situations, often with the aim of investigating the possible consequences of a change to the actual sequence of events. It is a controversial issue to what extent thought experiments should be understood as actual experiments. They are experiments in the sense that a certain situation is set up and one tries to learn from this situation by understanding what follows from it. They differ from regular experiments in that imagination is used to set up the situation and counterfactual reasoning is employed to evaluate what follows from it, instead of setting it up physically and observing the consequences through perception. Counterfactual thinking, therefore, plays a central role in thought experiments. The Chinese room argument is a famous thought experiment proposed by John Searle. It involves a person sitting inside a closed-off room, tasked with responding to messages written in Chinese. This person does not know Chinese but has a giant rule book that specifies exactly how to reply to any possible message, similar to how a computer would react to messages. The core idea of this thought experiment is that neither the person nor the computer understands Chinese. This way, Searle aims to show that computers lack a mind capable of deeper forms of understanding despite acting intelligently. 
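The rule-book setup of the Chinese room can be sketched as a simple lookup procedure. The Python below is only an illustration of the thought experiment's structure, not of any real system: the particular messages and scripted replies are invented. The point mirrors Searle's: the procedure maps inputs to outputs in a seemingly competent way without any understanding of what the symbols mean.

# Minimal sketch of the Chinese-room setup: replies are produced purely by
# symbol lookup, with no grasp of the messages' meaning. Entries are invented.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",
    "今天天气怎么样？": "今天天气很好。",
}

def room_reply(message: str) -> str:
    """Return the scripted reply for a message, or a stock fallback.
    Nothing here represents the meaning of the symbols."""
    return RULE_BOOK.get(message, "请再说一遍。")

print(room_reply("你好吗？"))  # a fluent-looking answer produced without understanding

Whether such symbol manipulation could ever amount to genuine understanding is exactly what the thought experiment is meant to call into question.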
Thought experiments are employed for various purposes, for example, for entertainment, education, or as arguments for or against theories. Most discussions focus on their use as arguments. This use is found in fields like philosophy, the natural sciences, and history. It is controversial since there is a lot of disagreement concerning the epistemic status of thought experiments, i.e. how reliable they are as evidence supporting or refuting a theory. Central to the rejection of this usage is the fact that they pretend to be a source of knowledge without the need to leave one's armchair in search of any new empirical data. Defenders of thought experiments usually contend that the intuitions underlying and guiding the thought experiments are, at least in some cases, reliable. But thought experiments can also fail if they are not properly supported by intuitions or if they go beyond what the intuitions support. In the latter sense, sometimes counter thought experiments are proposed that modify the original scenario in slight ways in order to show that initial intuitions cannot survive this change. Various taxonomies of thought experiments have been suggested. They can be distinguished, for example, by whether they are successful or not, by the discipline that uses them, by their role in a theory, or by whether they accept or modify the actual laws of physics. Critical thinking Critical thinking is a form of thinking that is reasonable, reflective, and focused on determining what to believe or how to act. It holds itself to various standards, like clarity and rationality. In this sense, it involves not just cognitive processes trying to solve the issue at hand but at the same time meta-cognitive processes ensuring that it lives up to its own standards. This includes assessing both that the reasoning itself is sound and that the evidence it rests on is reliable. This means that logic plays an important role in critical thinking. It concerns not just formal logic, but also informal logic, specifically to avoid various informal fallacies due to vague or ambiguous expressions in natural language. No generally accepted standard definition of "critical thinking" exists but there is significant overlap between the proposed definitions in their characterization of critical thinking as careful and goal-directed. According to some versions, only the thinker's own observations and experiments are accepted as evidence in critical thinking. Some restrict it to the formation of judgments but exclude action as its goal. A concrete everyday example of critical thinking, due to John Dewey, involves observing foam bubbles moving in a direction that is contrary to one's initial expectations. The critical thinker tries to come up with various possible explanations of this behavior and then slightly modifies the original situation in order to determine which one is the right explanation. But not all forms of cognitively valuable processes involve critical thinking. Arriving at the correct solution to a problem by blindly following the steps of an algorithm does not qualify as critical thinking. The same is true if the solution is presented to the thinker in a sudden flash of insight and accepted straight away. Critical thinking plays an important role in education: fostering the student's ability to think critically is often seen as an important educational goal. 
In this sense, it is important to convey not just a set of true beliefs to the student but also the ability to draw one's own conclusions and to question pre-existing beliefs. The abilities and dispositions learned this way may profit not just the individual but also society at large. Critics of the emphasis on critical thinking in education have argued that there is no universal form of correct thinking. Instead, they contend that different subject matters rely on different standards and that education should focus on imparting these subject-specific skills instead of trying to teach universal methods of thinking. Other objections are based on the idea that critical thinking and the attitude underlying it involve various unjustified biases, like egocentrism, distanced objectivity, indifference, and an overemphasis on the theoretical in contrast to the practical.

Positive thinking
Positive thinking is an important topic in positive psychology. It involves focusing one's attention on the positive aspects of one's situation and thereby withdrawing one's attention from its negative sides. This is usually seen as a global outlook that applies especially to thinking but includes other mental processes, like feeling, as well. In this sense, it is closely related to optimism. It includes expecting positive things to happen in the future. This positive outlook makes it more likely for people to seek to attain new goals. It also increases the probability of continuing to strive towards pre-existing goals that seem difficult to reach instead of just giving up.

The effects of positive thinking are not yet thoroughly researched, but some studies suggest that there is a correlation between positive thinking and well-being. For example, students and pregnant women with a positive outlook tend to be better at dealing with stressful situations. This is sometimes explained by pointing out that stress is not inherent in stressful situations but depends on the agent's interpretation of the situation. Reduced stress may therefore be found in positive thinkers because they tend to see such situations in a more positive light. But the effects also extend to the practical domain in that positive thinkers tend to employ healthier coping strategies when faced with difficult situations. This affects, for example, the time needed to fully recover from surgery and the tendency to resume physical exercise afterward.

But it has been argued that whether positive thinking actually leads to positive outcomes depends on various other factors. Without these factors, it may lead to negative results. For example, the tendency of optimists to keep striving in difficult situations can backfire if the course of events is outside the agent's control. Another danger associated with positive thinking is that it may remain only on the level of unrealistic fantasies and thereby fail to make a positive practical contribution to the agent's life. Pessimism, on the other hand, may have positive effects since it can mitigate disappointments by anticipating failures.

Positive thinking is a recurrent topic in the self-help literature. Here the claim is often made that one can significantly improve one's life by trying to think positively, even if this means fostering beliefs that are contrary to the evidence. Such claims and the effectiveness of the suggested methods are controversial and have been criticized for their lack of scientific evidence.
In the New Thought movement, positive thinking figures in the law of attraction, the pseudoscientific claim that positive thoughts can directly influence the external world by attracting positive outcomes.
See also
Animal cognition
Freethought
Outline of human intelligence – topic tree presenting the traits, capacities, models, and research fields of human intelligence, and more
Outline of thought – topic tree that identifies many types of thoughts, types of thinking, aspects of thought, related fields, and more
Rethinking
Further reading
Bayne, Tim (21 September 2013), "Thoughts", New Scientist. A 7-page feature article on the topic.
Fields, R. Douglas, "The Brain Learns in Unexpected Ways: Neuroscientists have discovered a set of unfamiliar cellular mechanisms for making fresh memories", Scientific American, vol. 322, no. 3 (March 2020), pp. 74–79. "Myelin, long considered inert insulation on axons, is now seen as making a contribution to learning by controlling the speed at which signals travel along neural wiring." (p. 79.)
Rajvanshi, Anil K. (2010), Nature of Human Thought.
Simon, Herbert, Models of Thought, Vol. I (1979) and Vol. II (1989), Yale University Press.
Psychometrics
Psychometrics is a field of study within psychology concerned with the theory and technique of measurement. Psychometrics generally covers specialized fields within psychology and education devoted to testing, measurement, assessment, and related activities. Psychometrics is concerned with the objective measurement of latent constructs that cannot be directly observed. Examples of latent constructs include intelligence, introversion, mental disorders, and educational achievement. The levels of individuals on nonobservable latent variables are inferred through mathematical modeling based on what is observed from individuals' responses to items on tests and scales. Practitioners are described as psychometricians, although not all who engage in psychometric research go by this title. Psychometricians usually possess specific qualifications, such as degrees or certifications, and most are psychologists with advanced graduate training in psychometrics and measurement theory. In addition to traditional academic institutions, practitioners also work for organizations such as the Educational Testing Service and Psychological Corporation. Some psychometric researchers focus on the construction and validation of assessment instruments, including surveys, scales, and open- or close-ended questionnaires. Others focus on research relating to measurement theory (e.g., item response theory, intraclass correlation) or specialize as learning and development professionals. Historical foundation Psychological testing has come from two streams of thought: the first, from Darwin, Galton, and Cattell, on the measurement of individual differences and the second, from Herbart, Weber, Fechner, and Wundt and their psychophysical measurements of a similar construct. The second set of individuals and their research is what has led to the development of experimental psychology and standardized testing. Victorian stream Charles Darwin was the inspiration behind Francis Galton, a scientist who advanced the development of psychometrics. In 1859, Darwin published his book On the Origin of Species. Darwin described the role of natural selection in the emergence, over time, of different populations of species of plants and animals. The book showed how individual members of a species differ among themselves and how they possess characteristics that are more or less adaptive to their environment. Those with more adaptive characteristics are more likely to survive to procreate and give rise to another generation. Those with less adaptive characteristics are less likely. These ideas stimulated Galton's interest in the study of human beings and how they differ one from another and how to measure those differences. Galton wrote a book entitled Hereditary Genius which was first published in 1869. The book described different characteristics that people possess and how those characteristics make some more "fit" than others. Today these differences, such as sensory and motor functioning (reaction time, visual acuity, and physical strength), are important domains of scientific psychology. Much of the early theoretical and applied work in psychometrics was undertaken in an attempt to measure intelligence. Galton often referred to as "the father of psychometrics," devised and included mental tests among his anthropometric measures. James McKeen Cattell, a pioneer in the field of psychometrics, went on to extend Galton's work. 
Cattell coined the term mental test, and is responsible for research and knowledge that ultimately led to the development of modern tests. German stream The origin of psychometrics also has connections to the related field of psychophysics. Around the same time that Darwin, Galton, and Cattell were making their discoveries, Herbart was also interested in "unlocking the mysteries of human consciousness" through the scientific method. Herbart was responsible for creating mathematical models of the mind, which were influential in educational practices for years to come. E.H. Weber built upon Herbart's work and tried to prove the existence of a psychological threshold, saying that a minimum stimulus was necessary to activate a sensory system. After Weber, G.T. Fechner expanded upon the knowledge he gleaned from Herbart and Weber, to devise the law that the strength of a sensation grows as the logarithm of the stimulus intensity. A follower of Weber and Fechner, Wilhelm Wundt is credited with founding the science of psychology. It is Wundt's influence that paved the way for others to develop psychological testing. 20th century In 1936, the psychometrician L. L. Thurstone, founder and first president of the Psychometric Society, developed and applied a theoretical approach to measurement referred to as the law of comparative judgment, an approach that has close connections to the psychophysical theory of Ernst Heinrich Weber and Gustav Fechner. In addition, Spearman and Thurstone both made important contributions to the theory and application of factor analysis, a statistical method developed and used extensively in psychometrics. In the late 1950s, Leopold Szondi made a historical and epistemological assessment of the impact of statistical thinking on psychology during previous few decades: "in the last decades, the specifically psychological thinking has been almost completely suppressed and removed, and replaced by a statistical thinking. Precisely here we see the cancer of testology and testomania of today." More recently, psychometric theory has been applied in the measurement of personality, attitudes, and beliefs, and academic achievement. These latent constructs cannot truly be measured, and much of the research and science in this discipline has been developed in an attempt to measure these constructs as close to the true score as possible. Figures who made significant contributions to psychometrics include Karl Pearson, Henry F. Kaiser, Carl Brigham, L. L. Thurstone, E. L. Thorndike, Georg Rasch, Eugene Galanter, Johnson O'Connor, Frederic M. Lord, Ledyard R Tucker, Louis Guttman, and Jane Loevinger. Definition of measurement in the social sciences The definition of measurement in the social sciences has a long history. A current widespread definition, proposed by Stanley Smith Stevens, is that measurement is "the assignment of numerals to objects or events according to some rule." This definition was introduced in a 1946 Science article in which Stevens proposed four levels of measurement. Although widely adopted, this definition differs in important respects from the more classical definition of measurement adopted in the physical sciences, namely that scientific measurement entails "the estimation or discovery of the ratio of some magnitude of a quantitative attribute to a unit of the same attribute" (p. 358) Indeed, Stevens's definition of measurement was put forward in response to the British Ferguson Committee, whose chair, A. Ferguson, was a physicist. 
The committee was appointed in 1932 by the British Association for the Advancement of Science to investigate the possibility of quantitatively estimating sensory events. Although its chair and other members were physicists, the committee also included several psychologists. The committee's report highlighted the importance of the definition of measurement. While Stevens's response was to propose a new definition, which has had considerable influence in the field, this was by no means the only response to the report. Another, notably different, response was to accept the classical definition, as reflected in the following statement: Measurement in psychology and physics are in no sense different. Physicists can measure when they can find the operations by which they may meet the necessary criteria; psychologists have to do the same. They need not worry about the mysterious differences between the meaning of measurement in the two sciences (Reese, 1943, p. 49). These divergent responses are reflected in alternative approaches to measurement. For example, methods based on covariance matrices are typically employed on the premise that numbers, such as raw scores derived from assessments, are measurements. Such approaches implicitly entail Stevens's definition of measurement, which requires only that numbers are assigned according to some rule. The main research task, then, is generally considered to be the discovery of associations between scores, and of factors posited to underlie such associations. On the other hand, when measurement models such as the Rasch model are employed, numbers are not assigned based on a rule. Instead, in keeping with Reese's statement above, specific criteria for measurement are stated, and the goal is to construct procedures or operations that provide data that meet the relevant criteria. Measurements are estimated based on the models, and tests are conducted to ascertain whether the relevant criteria have been met.
Instruments and procedures
The first psychometric instruments were designed to measure intelligence. One early approach to measuring intelligence was the test developed in France by Alfred Binet and Theodore Simon, known as the Binet–Simon test. That test was adapted for use in the U.S. by Lewis Terman of Stanford University and named the Stanford–Binet IQ test. Another major focus in psychometrics has been on personality testing. There has been a range of theoretical approaches to conceptualizing and measuring personality, though there is no widely agreed upon theory. Some of the better-known instruments include the Minnesota Multiphasic Personality Inventory, the Five-Factor Model (or "Big 5") and tools such as the Personality and Preference Inventory and the Myers–Briggs Type Indicator. Attitudes have also been studied extensively using psychometric approaches. One method involves the application of unfolding measurement models, the most general being the Hyperbolic Cosine Model (Andrich & Luo, 1993).
Theoretical approaches
Psychometricians have developed a number of different measurement theories. These include classical test theory (CTT) and item response theory (IRT). An approach that seems mathematically to be similar to IRT but also quite distinctive, in terms of its origins and features, is represented by the Rasch model for measurement. The development of the Rasch model, and the broader class of models to which it belongs, was explicitly founded on requirements of measurement in the physical sciences.
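To make the contrast concrete, the dichotomous Rasch model expresses the probability of a correct response purely as a function of the difference between a person's location (ability) and an item's location (difficulty) on a common logit scale. The following sketch is illustrative only; the function name and the parameter values are hypothetical and not drawn from any standard psychometric library.

```python
import math

def rasch_probability(ability: float, difficulty: float) -> float:
    """Probability of a correct response under the dichotomous Rasch model.

    Both parameters are expressed on the same logit scale; only their
    difference matters.
    """
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# A person whose ability equals an item's difficulty succeeds half the time;
# easier items (lower difficulty) yield higher success probabilities.
for difficulty in (-1.0, 0.0, 1.0):
    p = rasch_probability(ability=0.0, difficulty=difficulty)
    print(f"difficulty={difficulty:+.1f}  P(correct)={p:.2f}")
```

In practice the person and item locations are not assumed but estimated from response data (for example, by conditional or marginal maximum likelihood), and fit statistics are then used to check whether the data meet the model's measurement criteria, as described above.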
Psychometricians have also developed methods for working with large matrices of correlations and covariances. Techniques in this general tradition include: factor analysis, a method of determining the underlying dimensions of data. One of the main challenges faced by users of factor analysis is a lack of consensus on appropriate procedures for determining the number of latent factors. A usual procedure is to stop factoring when eigenvalues drop below one, on the grounds that a factor with an eigenvalue below one accounts for less variance than a single standardized variable; the absence of agreed cut-off points affects other multivariate methods as well. Multidimensional scaling is a method for finding a simple representation for data with a large number of latent dimensions. Cluster analysis is an approach to finding objects that are like each other. Factor analysis, multidimensional scaling, and cluster analysis are all multivariate descriptive methods used to distill simpler structures from large amounts of data. More recently, structural equation modeling and path analysis represent more sophisticated approaches to working with large covariance matrices. These methods allow statistically sophisticated models to be fitted to data and tested to determine if they are adequate fits. Because at a granular level psychometric research is concerned with the extent and nature of multidimensionality in each of the items of interest, a relatively new procedure known as bi-factor analysis can be helpful. Bi-factor analysis can decompose "an item's systematic variance in terms of, ideally, two sources, a general factor and one source of additional systematic variance."
Key concepts
Key concepts in classical test theory are reliability and validity. A reliable measure is one that measures a construct consistently across time, individuals, and situations. A valid measure is one that measures what it is intended to measure. Reliability is necessary, but not sufficient, for validity. Both reliability and validity can be assessed statistically. Consistency over repeated measures of the same test can be assessed with the Pearson correlation coefficient, and is often called test-retest reliability. Similarly, the equivalence of different versions of the same measure can be indexed by a Pearson correlation, and is called equivalent forms reliability or a similar term. Internal consistency, which addresses the homogeneity of a single test form, may be assessed by correlating performance on two halves of a test, which is termed split-half reliability; the value of this Pearson product-moment correlation coefficient for two half-tests is adjusted with the Spearman–Brown prediction formula to correspond to the correlation between two full-length tests. Perhaps the most commonly used index of reliability is Cronbach's α, which is equivalent to the mean of all possible split-half coefficients. Other approaches include the intra-class correlation, which is the ratio of variance of measurements of a given target to the variance of all targets. A brief numerical sketch of these indices is given below, after the section on testing standards. There are a number of different forms of validity. Criterion-related validity refers to the extent to which a test or scale predicts a sample of behavior, i.e., the criterion, that is "external to the measuring instrument itself."
That external sample of behavior can be many things including another test; college grade point average as when the high school SAT is used to predict performance in college; and even behavior that occurred in the past, for example, when a test of current psychological symptoms is used to predict the occurrence of past victimization (which would accurately represent postdiction). When the criterion measure is collected at the same time as the measure being validated the goal is to establish concurrent validity; when the criterion is collected later the goal is to establish predictive validity. A measure has construct validity if it is related to measures of other constructs as required by theory. Content validity is a demonstration that the items of a test do an adequate job of covering the domain being measured. In a personnel selection example, test content is based on a defined statement or set of statements of knowledge, skill, ability, or other characteristics obtained from a job analysis. Item response theory models the relationship between latent traits and responses to test items. Among other advantages, IRT provides a basis for obtaining an estimate of the location of a test-taker on a given latent trait as well as the standard error of measurement of that location. For example, a university student's knowledge of history can be deduced from his or her score on a university test and then be compared reliably with a high school student's knowledge deduced from a less difficult test. Scores derived by classical test theory do not have this characteristic, and assessment of actual ability (rather than ability relative to other test-takers) must be assessed by comparing scores to those of a "norm group" randomly selected from the population. In fact, all measures derived from classical test theory are dependent on the sample tested, while, in principle, those derived from item response theory are not. Standards of quality The considerations of validity and reliability typically are viewed as essential elements for determining the quality of any test. However, professional and practitioner associations frequently have placed these concerns within broader contexts when developing standards and making overall judgments about the quality of any test as a whole within a given context. A consideration of concern in many applied research settings is whether or not the metric of a given psychological inventory is meaningful or arbitrary. Testing standards In 2014, the American Educational Research Association (AERA), American Psychological Association (APA), and National Council on Measurement in Education (NCME) published a revision of the Standards for Educational and Psychological Testing, which describes standards for test development, evaluation, and use. The Standards cover essential topics in testing including validity, reliability/errors of measurement, and fairness in testing. The book also establishes standards related to testing operations including test design and development, scores, scales, norms, score linking, cut scores, test administration, scoring, reporting, score interpretation, test documentation, and rights and responsibilities of test takers and test users. Finally, the Standards cover topics related to testing applications, including psychological testing and assessment, workplace testing and credentialing, educational testing and assessment, and testing in program evaluation and public policy. 
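The reliability indices introduced in the Key concepts section above can be illustrated with a small worked example. The sketch below uses made-up item scores and plain Python; the function names and data are hypothetical and are only meant to show how a split-half correlation, its Spearman–Brown correction, and Cronbach's α reduce to simple operations on observed scores.

```python
from statistics import pvariance

def pearson(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return cov / (pvariance(x) ** 0.5 * pvariance(y) ** 0.5)

def spearman_brown(r_half):
    """Step a half-test correlation up to a full-length estimate."""
    return 2 * r_half / (1 + r_half)

def cronbach_alpha(items):
    """items: one list of scores per item, all for the same respondents."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    sum_item_var = sum(pvariance(scores) for scores in items)
    return k / (k - 1) * (1 - sum_item_var / pvariance(totals))

# Hypothetical scores of five respondents on four test items.
items = [
    [3, 4, 2, 5, 4],  # item 1
    [2, 4, 2, 5, 3],  # item 2
    [3, 5, 1, 4, 4],  # item 3
    [2, 4, 2, 5, 5],  # item 4
]
odd_half = [a + b for a, b in zip(items[0], items[2])]
even_half = [a + b for a, b in zip(items[1], items[3])]
r_half = pearson(odd_half, even_half)
print("split-half r:", round(r_half, 2))
print("Spearman-Brown corrected:", round(spearman_brown(r_half), 2))
print("Cronbach's alpha:", round(cronbach_alpha(items), 2))
```

Real psychometric work would use dedicated packages and far larger samples; the point here is only that each index is computed from variances and covariances of observed scores, in keeping with the classical test theory framework described above.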
Evaluation standards In the field of evaluation, and in particular educational evaluation, the Joint Committee on Standards for Educational Evaluation has published three sets of standards for evaluations. The Personnel Evaluation Standards was published in 1988, The Program Evaluation Standards (2nd edition) was published in 1994, and The Student Evaluation Standards was published in 2003. Each publication presents and elaborates a set of standards for use in a variety of educational settings. The standards provide guidelines for designing, implementing, assessing, and improving the identified form of evaluation. Each of the standards has been placed in one of four fundamental categories to promote educational evaluations that are proper, useful, feasible, and accurate. In these sets of standards, validity and reliability considerations are covered under the accuracy topic. For example, the student accuracy standards help ensure that student evaluations will provide sound, accurate, and credible information about student learning and performance. Controversy and criticism Because psychometrics is based on latent psychological processes measured through correlations, there has been controversy about some psychometric measures. Critics, including practitioners in the physical sciences, have argued that such definition and quantification is difficult, and that such measurements are often misused by laymen, such as with personality tests used in employment procedures. The Standards for Educational and Psychological Measurement gives the following statement on test validity: "validity refers to the degree to which evidence and theory support the interpretations of test scores entailed by proposed uses of tests". Simply put, a test is not valid unless it is used and interpreted in the way it is intended. Two types of tools used to measure personality traits are objective tests and projective measures. Examples of such tests are the: Big Five Inventory (BFI), Minnesota Multiphasic Personality Inventory (MMPI-2), Rorschach Inkblot test, Neurotic Personality Questionnaire KON-2006, or Eysenck Personality Questionnaire. Some of these tests are helpful because they have adequate reliability and validity, two factors that make tests consistent and accurate reflections of the underlying construct. The Myers–Briggs Type Indicator (MBTI), however, has questionable validity and has been the subject of much criticism. Psychometric specialist Robert Hogan wrote of the measure: "Most personality psychologists regard the MBTI as little more than an elaborate Chinese fortune cookie." Lee Cronbach noted in American Psychologist (1957) that, "correlational psychology, though fully as old as experimentation, was slower to mature. It qualifies equally as a discipline, however, because it asks a distinctive type of question and has technical methods of examining whether the question has been properly put and the data properly interpreted." He would go on to say, "The correlation method, for its part, can study what man has not learned to control or can never hope to control ... A true federation of the disciplines is required. Kept independent, they can give only wrong answers or no answers at all regarding certain important problems." Non-human: animals and machines Psychometrics addresses human abilities, attitudes, traits, and educational evolution. 
Notably, the study of behavior, mental processes, and abilities of non-human animals is usually addressed by comparative psychology, or, where a continuum between non-human animals and humans is assumed, by evolutionary psychology. Nonetheless, there are some advocates for a more gradual transition between the approach taken for humans and the approach taken for (non-human) animals. The evaluation of abilities, traits and learning evolution of machines has been mostly unrelated to the case of humans and non-human animals, with specific approaches in the area of artificial intelligence. A more integrated approach, under the name of universal psychometrics, has also been proposed.
Bibliography
Michell, J. (1999). Measurement in Psychology. Cambridge: Cambridge University Press.
Rasch, G. (1960/1980). Probabilistic models for some intelligence and attainment tests. Copenhagen: Danish Institute for Educational Research; expanded edition (1980) with foreword and afterword by B.D. Wright. Chicago: The University of Chicago Press.
Reese, T.W. (1943). The application of the theory of physical measurement to the measurement of psychological magnitudes, with three experimental examples. Psychological Monographs, 55, 1–89.
Thurstone, L.L. (1929). The Measurement of Psychological Value. In T.V. Smith and W.K. Wright (Eds.), Essays in Philosophy by Seventeen Doctors of Philosophy of the University of Chicago. Chicago: Open Court.
Thurstone, L.L. (1959). The Measurement of Values. Chicago: The University of Chicago Press.
External links
APA Standards for Educational and Psychological Testing
International Personality Item Pool
Joint Committee on Standards for Educational Evaluation
The Psychometrics Centre, University of Cambridge
Psychometric Society and Psychometrika homepage
London Psychometric Laboratory
Attitude (psychology)
An attitude "is a summary evaluation of an object of thought. An attitude object can be anything a person discriminates or holds in mind." Attitudes include beliefs (cognition), emotional responses (affect) and behavioral tendencies (intentions, motivations). In the classical definition an attitude is persistent, while in more contemporary conceptualizations, attitudes may vary depending upon situations, context, or moods. While different researchers have defined attitudes in various ways, and may use different terms for the same concepts or the same term for different concepts, two essential attitude functions emerge from empirical research. For individuals, attitudes are cognitive schema that provide a structure to organize complex or ambiguous information, guiding particular evaluations or behaviors. More abstractly, attitudes serve higher psychological needs: expressive or symbolic functions (affirming values), maintaining social identity, and regulating emotions. Attitudes influence behavior at individual, interpersonal, and societal levels. Attitudes are complex and are acquired through life experience and socialization. Key topics in the study of attitudes include attitude strength, attitude change, and attitude-behavior relationships. The decades-long interest in attitude research is due to the interest in pursuing individual and social goals, an example being the public health campaigns to reduce cigarette smoking. Definitions The term attitude with the psychological meaning of an internal state of preparedness for action was not used until the 19th century. The American Psychological Association (APA) defines attitude as "a relatively enduring and general evaluation of an object, person, group, issue, or concept on a dimension ranging from negative to positive. Attitudes provide summary evaluations of target objects and are often assumed to be derived from specific beliefs, emotions, and past behaviors associated with those objects." For much of the 20th century, the empirical study of attitudes was at the core of social psychology. Attitudes can be derived from affective information (feelings), cognitive information (beliefs), and behavioral information (experiences), often predicting subsequent behavior. Alice H. Eagly and Shelly Chaiken, for example, define an attitude as "a psychological tendency that is expressed by evaluating a particular entity with some degree of favor or disfavor." Though it is sometimes common to define an attitude as affect toward an object, affect (i.e., discrete emotions or overall arousal) is generally understood as an evaluative structure used to form an attitude object. Attitude may influence the attention to attitude objects, the use of categories for encoding information and the interpretation, judgement and recall of attitude-relevant information. These influences tend to be more powerful for strong attitudes which are accessible and based on elaborate supportive knowledge structure. The durability and impact of influence depend upon the strength formed from the consistency of heuristics. Attitudes can guide encoding information, attention and behaviors, even if the individual is pursuing unrelated goals. Past research reflected the traditional notion that attitudes are simple tendencies to like or dislike attitude objects, while contemporary research has begun to adopt more complex perspectives. 
Recent advances on the mental structure of attitudes have suggested that attitudes (and their components) might not always be simply positive or negative, but may include both positivity and negativity. In addition, strong and weak attitudes are associated with many different outcomes. Methodological advances have allowed researchers to consider with greater precision the existence and implications of possessing implicit (unconscious) and explicit (conscious) attitudes. A sociological approach relates attitudes to concepts of values and ideologies that conceptualize the relationship of thought to action at higher levels of analysis. Values represent the social goals which are used by individuals to orient their behaviors. Cross-cultural studies seek to understand cultural differences in terms of differences in values. For example, the individualism-collectivism dimension suggests that Western and Eastern societies differ fundamentally in the priority given to individual vs. group goals. Ideologies represent more generalized orientations that seek to make sense of related attitudes and values, and are the basis for moral judgements. Most contemporary perspectives on attitudes permit that people can also be conflicted or ambivalent toward an object by holding both positive and negative beliefs or feelings toward the same object. Additionally, measures of attitude may include intentions, but are not always predictive of behaviors. Explicit measures are of attitudes at the conscious level that are deliberately formed and easy to self-report. Implicit measures are of attitudes at an unconscious level, that function out of awareness. Both explicit and implicit attitudes can shape an individual's behavior. Implicit attitudes, however, are most likely to affect behavior when the demands are steep and an individual feels stressed or distracted.
Measurement
An attitude is a latent psychological construct, which consequently can only be measured indirectly. Commonly used measures include Likert scales, which record agreement or disagreement with a series of belief statements. The semantic differential uses bipolar adjectives to measure the meaning associated with attitude objects. The Guttman scale focuses on items that vary in their degree of psychological difficulty. Supplementing these are several techniques that do not depend on deliberate responses, such as unobtrusive, standard physiological, and neuroscientific measures. Following the explicit-implicit dichotomy, attitudes can be examined with different kinds of measures.
Explicit
Explicit measures tend to rely on self-reports or easily observed behaviors. These tend to involve bipolar scales (e.g., good-bad, favorable-unfavorable, support-oppose, etc.). Explicit measures can also be used by measuring the straightforward attribution of characteristics to nominated groups. Whereas explicit attitudes were thought to develop in response to recent information, automatic evaluations were thought to reflect mental associations formed through early socialization experiences. Once formed, these associations are highly robust and resistant to change, as well as stable across both context and time. Hence the impact of contextual influences was assumed to obfuscate assessment of a person's "true" and enduring evaluative disposition as well as to limit the capacity to predict subsequent behavior.
Implicit
Implicit measures are not consciously directed and are assumed to be automatic, which may make implicit measures more valid and reliable than explicit measures (such as self-reports).
For example, people can be motivated such that they find it socially desirable to appear to have certain attitudes. An example of this is that people can hold implicit prejudicial attitudes, but express explicit attitudes that report little prejudice. Implicit measures help account for these situations and look at attitudes that a person may not be aware of or want to show. Implicit measures therefore usually rely on an indirect measure of attitude. For example, the Implicit Association Test (IAT) examines the strength between the target concept and an attribute element by considering the latency in which a person can examine two response keys when each has two meanings. With little time to carefully examine what the participant is doing they respond according to internal keys. This priming can show attitudes the person has about a particular object. People are often unwilling to provide responses perceived as socially undesirable and therefore tend to report what they think their attitudes should be rather than what they know them to be. More complicated still, people may not even be consciously aware that they hold biased attitudes. Over the past few decades, scientists have developed several measures to avoid these unconscious biases. Structure Intra-attitudinal and inter-attitudinal structures There is also considerable interest in intra-attitudinal and inter-attitudinal structure, which is how an attitude is made (expectancy and value) and how different attitudes relate to one another. Intra-attitudinal structures are how underlying attitudes are consistent with one another. This connects different attitudes to one another and to more underlying psychological structures, such as values or ideology. Unlike intra-attitudinal structures, inter-attitudinal structures involve the strength of relations of more than one attitude within a network. Components The classic, tripartite view offered by Rosenberg and Hovland in 1960 is that an attitude contains cognitive, affective, and behavioral components. Empirical research, however, fails to support clear distinctions between thoughts, emotions, and behavioral intentions associated with a particular attitude. A criticism of the tripartite view of attitudes is that it requires cognitive, affective, and behavioral associations of an attitude to be consistent, but this may be implausible. Thus some views of attitude structure see the cognitive and behavioral components as derivative of affect or affect and behavior as derivative of underlying beliefs. "The cognitive component refers to the beliefs, thoughts, and attributes associated with an object". "The affective component refers to feelings or emotions linked to an attitude object". "The behavioral component refers to behaviors or experiences regarding an attitude object". An influential model of attitude is the multi-component model, where attitudes are evaluations of an object that have affective (relating to moods and feelings), behavioral, and cognitive components (the ABC model). The affective component of attitudes refers to feelings or emotions linked to an attitude object. Affective responses influence attitudes in a number of ways. For example, many people are afraid or scared of spiders. So this negative affective response is likely to cause someone to have a negative attitude towards spiders. The behavioral component of attitudes refers to the way an attitude influences how a person acts or behaves. 
The cognitive component of attitudes refers to the beliefs, thoughts, and attributes that a person associates with an object. Many times a person's attitude might be based on the negative and positive attributes they associate with an object. As a result of assigning negative or positive attributes to a person, place, or object, individuals may behave negatively or positively towards them.
Beliefs
Beliefs are cognitive states about the world—subjective probabilities that an object has a particular attribute or that an action will lead to a particular outcome. Beliefs can be patently and unequivocally false. For example, surveys show that a third of U.S. adults think that vaccines cause autism, despite the preponderance of scientific research to the contrary. It was found that beliefs like these are tenaciously held and are highly resistant to change. Another important factor that affects attitudes is symbolic interactionism: symbolic interactions are rife with powerful symbols and charged with affect, which can lead to selective perception. Persuasion theories hold that, in politics, successful persuaders move initially noncommittal message recipients toward selective perception or attitude polarization against the opposing candidate through a repetitive process, portraying that candidate as unacceptable and as lacking any moral basis; to achieve this, they need only keep the persuading message within the realm of plausibility. Despite debate about the particular structure of attitudes, there is considerable evidence that attitudes reflect more than evaluations of a particular object that vary from positive to negative.
Behaviors
The study of the effects of attitudes on behaviors is a growing research enterprise within psychology. Icek Ajzen has led research and helped develop two prominent theoretical approaches within this field: the theory of reasoned action and its theoretical descendant, the theory of planned behavior. Both theories help explain the link between attitude and behavior as a controlled and deliberative process.
Models
Theory of reasoned action
The theory of reasoned action (TRA) is a model for the prediction of behavioral intention, spanning predictions of attitude and predictions of behavior. The theory of reasoned action was developed by Martin Fishbein and Icek Ajzen, derived from previous research that started out as the theory of attitude, which led to the study of attitude and behavior.
Theory of planned behavior
The theory of planned behavior suggests that behavior is driven primarily by behavioral intentions, which are in turn shaped by attitudes together with other factors. The theory of planned behavior was proposed by Icek Ajzen in 1985 through his article "From intentions to actions: A theory of planned behavior." The theory was developed from the theory of reasoned action, which was proposed by Martin Fishbein together with Icek Ajzen in 1975. The theory of reasoned action was in turn grounded in various theories of attitude such as learning theories, expectancy-value theories, consistency theories, and attribution theory. According to the theory of reasoned action, if people evaluate the suggested behavior as positive (attitude), and if they think their significant others want them to perform the behavior (subjective norm), this results in a higher intention (motivation) and they are more likely to do so. A high correlation of attitudes and subjective norms to behavioral intention, and subsequently to behavior, has been confirmed in many studies.
The theory of planned behavior contains the same component as the theory of reasoned action, but adds the component of perceived behavioral control to account for barriers outside one's own control. Motivation and Opportunity as Determinants (MODE) Russell H. Fazio proposed an alternative theory called "Motivation and Opportunity as Determinants" or MODE. Fazio believes that because there is deliberative process happening, individuals must be motivated to reflect on their attitudes and subsequent behaviors. Simply put, when an attitude is automatically activated, the individual must be motivated to avoid making an invalid judgement as well as have the opportunity to reflect on their attitude and behavior. The MODE (motivation and opportunity as determinants of the attitude-behavior relation) model was developed by Fazio. The MODE model, in short is a theory of attitude evaluation that attempts to predict and explain behavioral outcomes of attitudes. When both are present, behavior will be deliberate. When one is absent, impact on behavior will be spontaneous. A person's attitude can be measured explicitly and implicitly. The model suggests whether attitude activation occurs and, therefore, whether selective perception occurs depends on attitude accessibility. More accessible attitudes are more likely to be activated in a behavioral situation and, therefore, are more likely to influence perceptions and behavior A counter-argument against the high relationship between behavioral intention and actual behavior has also been proposed, as the results of some studies show that, because of circumstantial limitations, behavioral intention does not always lead to actual behavior. Namely, since behavioral intention cannot be the exclusive determinant of behavior where an individual's control over the behavior is incomplete, Ajzen introduced the theory of planned behavior by adding a new component, "perceived behavioral control." By this, he extended the theory of reasoned action to cover non-volitional behaviors for predicting behavioral intention and actual behavior. Function Another classic view of attitudes is that attitudes serve particular functions for individuals. That is, researchers have tried to understand why individuals hold particular attitudes or why they hold attitudes in general by considering how attitudes affect the individuals who hold them. Daniel Katz, for example, writes that attitudes can serve "instrumental, adjustive or utilitarian," "ego-defensive," "value-expressive," or "knowledge" functions. This functional attitude theory suggests that in order for attitudes to change (e.g., via persuasion), appeals must be made to the function(s) that a particular attitude serves for the individual. As an example, the ego-defensive function might be used to influence the racially prejudicial attitudes of an individual who sees themselves as open-minded and tolerant. By appealing to that individual's image of themselves as tolerant and open-minded, it may be possible to change their prejudicial attitudes to be more consistent with their self-concept. Similarly, a persuasive message that threatens self-image is much more likely to be rejected. Daniel Katz classified attitudes into four different groups based on their functions. 
Utilitarian: provides general approach or avoidance tendencies Knowledge: organizes and interprets new information Ego-defensive: protects self-esteem Value-expressive: expresses central values or beliefs Utilitarian People adopt attitudes that are rewarding and that help them avoid punishment. In other words, any attitude that is adopted in a person's own self-interest is considered to serve a utilitarian function. For example, a person who has a condo would pay property taxes. If that leads to an attitude that "increases in property taxes are bad", then the attitude is serving a utilitarian function. Knowledge Several studies have shown that knowledge increases are associated with heightened attitudes that influence behavior. The framework for knowledge is based on significant values and general principles. Attitudes achieve this goal by making things fit together and make sense. As a result, people can maintain a sense of stability and meaning within their worldview. For example: I believe that I am a good person. I believe that good things happen to good people. Something bad happens to Bob. So, I believe Bob must not be a good person. When a person is relying on a single dimension of knowledge and that dimension is not directly related to their behavior goal, that person might conclude that the attitude is wrong. Ego-Defensive This function involves psychoanalytic principles where people use defense mechanisms to protect themselves from psychological harm. Mechanisms include denial, repression, projection, and rationalization. The ego-defensive notion correlates with Downward Comparison Theory, which argues that derogating a less fortunate other increases a person's own subjective well-being. A person is more likely to use the ego-defensive function when they suffer a frustration or misfortune. Value-Expressive Identity and social approval are established by central values that reveal who we are and what we stand for. Individuals define and interpret situations based on their central values. An example would be attitudes toward a controversial political issue. Formation According to Doob in 1947, learning can account for most of the attitudes a person holds. The study of attitude formation is the study of how people form evaluations of persons, places or things. Theories of classical conditioning, instrumental conditioning and social learning are mainly responsible for formation of attitude. Unlike personality, attitudes are expected to change as a function of experience. In addition, exposure to the 'attitude' objects may have an effect on how a person forms his or her attitude. This concept was seen as the mere-exposure effect. Robert Zajonc showed that people were more likely to have a positive attitude on 'attitude objects' when they were exposed to it frequently than if they were not. Mere repeated exposure of the individual to a stimulus is a sufficient condition for the enhancement of his attitude toward it. Tesser in 1993 argued that hereditary variables may affect attitudes - but believes that they do so indirectly. For example, consistency theories, which imply that beliefs and values must be consistent. As with any type of heritability, to determine if a particular trait has a basis in genetics, twin studies are used. 
The most famous example of such a theory is dissonance-reduction theory, associated with Leon Festinger, which explains that when the components of an attitude (including belief and behavior) are at odds an individual may adjust one to match the other (for example, adjusting a belief to match a behavior). Other theories include balance theory, originally proposed by Heider in 1958, and the self-perception theory, originally proposed by Daryl Bem.
Change
Attitudes can be changed through persuasion, and an important domain of research on attitude change focuses on responses to communication. Experimental research into the factors that can affect the persuasiveness of a message includes:
Target characteristics: These are characteristics that refer to the person who receives and processes a message. One such trait is intelligence: it seems that more intelligent people are less easily persuaded by one-sided messages. Another variable that has been studied in this category is self-esteem. Although it is sometimes thought that those higher in self-esteem are less easily persuaded, there is some evidence that the relationship between self-esteem and persuasibility is actually curvilinear, with people of moderate self-esteem being more easily persuaded than both those of high and low self-esteem levels.
Source characteristics: The major source characteristics are expertise, trustworthiness and interpersonal attraction or attractiveness. The credibility of a perceived message has been found to be a key variable here; if one reads a report about health and believes it came from a professional medical journal, one may be more easily persuaded than if one believes it is from a popular newspaper. Some psychologists have debated whether this is a long-lasting effect, and Hovland and Weiss found that the effect of telling people that a message came from a credible source disappeared after several weeks (the so-called sleeper effect). Whether there is a sleeper effect is controversial. Received wisdom is that if people are informed of the source of a message before hearing it, there is less likelihood of a sleeper effect than if they are told a message and then told its source.
Message characteristics: The nature of the message plays a role in persuasion. Sometimes presenting both sides of a story is useful to help change attitudes. When people are not motivated to process the message, simply the number of arguments presented in a persuasive message will influence attitude change, such that a greater number of arguments will produce greater attitude change.
Emotion and attitude change
Emotion is a common component in persuasion, social influence, and attitude change. Much of attitude research has emphasized the importance of affective or emotion components. Emotion works hand-in-hand with the cognitive, or thought, process about an issue or situation. Emotional appeals are commonly found in advertising, health campaigns and political messages. Recent examples include no-smoking health campaigns and political campaign advertising emphasizing the fear of terrorism. Attitudes and attitude objects are functions of cognitive, affective and behavioral components. Attitudes are part of the brain's associative networks, the spider-like structures residing in long-term memory that consist of affective and cognitive nodes. By activating an affective or emotion node, attitude change may be possible, though affective and cognitive components tend to be intertwined.
People may be able to change their attitudes with attitude correctness, which varies with the level of confidence they have in their attitude's validity and accuracy. In general, the higher the confidence level, the more the person believes others around them should share the same attitude. As we learn that other people share those attitudes and that the attitudes are socially acceptable, the importance of attitude correctness becomes even more apparent. Our attitudes can greatly impact our behavior and the manner in which we treat those around us. In primarily affective networks, it is more difficult to produce cognitive counterarguments in the resistance to persuasion and attitude change. The idea of attitude clarity refers to a feeling of security or uncertainty about a particular attitude, a feeling strengthened by the act of reporting one's particular attitude towards an issue or thing, which will make that attitude more crystallized. Affective forecasting, otherwise known as intuition or the prediction of emotion, also impacts attitude change. Research suggests that predicting emotions is an important component of decision making, in addition to the cognitive processes. How a person feels about an outcome may override purely cognitive rationales. In terms of research methodology, the challenge for researchers is measuring emotion and its subsequent impact on attitude. Various models and measurement tools have been constructed to obtain emotion and attitude information. Measures may include the use of physiological cues like facial expressions, vocal changes, and other body rate measures. For instance, fear is associated with raised eyebrows, increased heart rate and increased body tension. Other methods include concept or network mapping, and the use of primes or word cues.
Components of emotional appeals
Any discrete emotion can be used in a persuasive appeal; this may include jealousy, disgust, indignation, fear, sadness, unease, and anger. Fear is one of the most studied emotional appeals in communication and social influence research. Important consequences of fear appeals and other emotional appeals include the possibility of reactance, which may lead to either message rejection or source rejection and the absence of attitude change. As the extended parallel process model (EPPM) suggests, there is an optimal emotion level in motivating attitude change. If there is not enough motivation, an attitude will not change; if the emotional appeal is overdone, the motivation can be paralyzed, thereby preventing attitude change. Emotions perceived as negative or containing threat are often studied more than perceived positive emotions like humor. Though the inner workings of humor are not agreed upon, humor appeals may work by creating incongruities in the mind. Recent research has looked at the impact of humor on the processing of political messages. While evidence is inconclusive, there appears to be potential for targeted attitude change in receivers with low political message involvement. Important factors that influence the impact of emotional appeals include self-efficacy, attitude accessibility, issue involvement, and message/source features. Self-efficacy is a person's perception of their agency or ability to deal with a situation. It is an important variable in emotional appeal messages because it dictates a person's ability to deal with both the emotion and the situation.
For example, if a person is not self-efficacious about their ability to impact the global environment, they are not likely to change their attitude or behavior about global warming. Dillard in 1994 suggested that message features such as source non-verbal communication, message content, and receiver differences can affect the emotional impact of fear appeals. The characteristics of a message are important because one message can elicit different levels of emotion for different people. Thus, in terms of emotional appeal messages, one size does not fit all. Attitude accessibility refers to the activation of an attitude from memory; in other words, how readily available an attitude about an object, issue, or situation is. Issue involvement is the relevance and salience of an issue or situation to an individual. Issue involvement has been correlated with both attitude access and attitude strength. Past studies conclude that accessible attitudes are more resistant to change.
Animal psychopathology
Animal psychopathology is the study of mental or behavioral disorders in non-human animals. Historically, there has been an anthropocentric tendency to emphasize the study of animal psychopathologies as models for human mental illnesses. But animal psychopathologies can, from an evolutionary point of view, be more properly regarded as non-adaptive behaviors due to some sort of a cognitive disability, emotional impairment or distress. This article provides a non-exhaustive list of animal psychopathologies. Eating disorders Animals in the wild appear to be relatively free from eating disorders although their body composition fluctuates depending on seasonal and reproductive cycles. However, domesticated animals including farm, laboratory, and pet animals are prone to disorders. Evolutionary fitness drives feeding behavior in wild animals. The expectation is that farm animals also display this behavior, but questions arise if the same principles apply to laboratory and pet animals. Activity anorexia Activity anorexia (AA) is a condition where rats begin to exercise excessively while simultaneously cutting down on their food intake, similar to human anorexia nervosa or hypergymnasia. When given free access to food and an exercise wheel, rats normally develop a balanced routine between exercise and food intake, which turns them into fit rats. However, if food intake is restricted and wheel access is unrestricted, rats begin to exercise more and eat less, resulting in excessive weight loss and, ultimately, death. The running cycles shift so that most of the running is done in hours before feeding is scheduled. In other conditions, AA does not develop. Unrestricted food access and restricted wheel access will not cause any significant change in either feeding or exercise routine. Also, if rats are restricted both in food intake and wheel access, they will adjust accordingly. In fact, if rats are first trained to the feeding schedule and then given unrestricted access to a running wheel, they will not develop AA behavior. Results support the notion that the running interferes with adaptation to the new feeding schedule and is associated with the reward system in the brain. One theory is that running simulates foraging, a natural behavior in wild rats. Laboratory rats therefore run (forage) more in response to food shortages. The effect of semi-starvation on activity has also been studied in primates. Rhesus macaque males become hyperactive in response to long-term chronic food restriction. Thin sow syndrome Thin sow syndrome (TSS) is a behavior observed in stalled sows that is similar to AA where some sows after early pregnancy are extremely active, eat little, and waste away, resulting very often in death. They experience emaciation, hypothermia, a depraved appetite, restlessness, and hyperactivity. The syndrome may mainly be related to social and environmental stressors. Stress in stalled sows is often perceived as the consequence of the restraint of animals that happens in intensive production units. The sows that experience the most restraining conditions are those lactating or pregnant as they have very little room to move around because they are kept in barred gestation crates or tethered for the 16 weeks of pregnancy which prevents natural and social behaviors. However, increased movement and freedom is also stressful for adult sows, which is usually the case after weaning. When placed into groups they fight vigorously, with one dominant sow emerging that eats voraciously. 
It is also likely that two subordinate sows make up part of the group who actively avoid competitive feeding situations and are bullied by the dominant sow. Affected sows have poor appetite but often show pica, excessive water intake (polydipsia) and are anemic. Studies on the effects of overcrowding were conducted in the 1940s by placing pregnant Norway rats in a room with plenty of water and food and observing the population growth. The population reached a number of individuals and did not grow thereafter; overcrowding produced stress and psychopathologies. Even though there was plenty of water and food, the rats stopped eating and reproducing. Similar effects have also been observed in dense populations of beetles. When overcrowding occurs, female beetles destroy their eggs and turn cannibalistic, eating each other. Male beetles lose interest in the females and although there is plenty of water and food, there is no population growth. Similar effects have been observed in overcrowded situations in jack rabbits and deer. Pica Pica is the ingestion of non-nutritive substances and has so far been poorly documented. In non-human animals in the laboratory it has been examined through the ingestion of kaolin (a clay mineral) by rats. Rats were induced to intake kaolin by administering various emetic stimuli such as copper sulfate, apomorphine, cisplatin, and motion. Rats are unable to vomit when they ingest a substance that is harmful thus pica in rats is analogous to vomiting in other species; it is a way for rats to relieve digestive distress. In some animals pica seems to be an adaptive trait but in others it seems to be a true psychopathology like in the case of some chickens. Chickens can display a type of pica when they are feed-deprived (feeding restriction has been adopted by the egg industry to induce molting). They increase their non-nutritive pecking, such as pecking structural features of their environment like wood or wire on fences or the feathers of other birds. It is a typical response that occurs when feeding is restricted or is completely withdrawn. Some of the non-nutritive pecking may be due to a redirection of foraging related behavior. Another animal that has displayed a more complex pica example are cattle. Cattle eat bones when they have a phosphorus deficiency. However, in some cases they persist on eating bones even after their phosphorus levels have stabilized and they are getting adequate doses of phosphorus in their diet. In this case evidence supports both a physical and psychological adaptive response. Cattle that continue to eat bones after their phosphorus levels are adequate do it because of a psychological reinforcer. "The persistence of pica in the seeming absence of a physiological cause might be due to the fortuitous acquisition of a conditioned illness during the period of physiological insult." Cats also display pica behavior in their natural environments and there is evidence to support that this behavior has a psychological aspect to it. Some breeds (such as the Siamese cat) are more predisposed to showing this type of behavior than other breeds, but several types of breeds have been documented to show pica. Cats have been observed to start by chewing and sucking on non-nutritive substances like wool, cotton, rubber, plastic and even cardboard and then progress into ingestion of these substances. 
This type of behavior occurs through the first four years of a cat's life, but it is primarily observed during the first two months of life, when cats are introduced into new homes. Theories explaining why the behavior emerges during this period point to early weaning, stress caused by separation from the mother and litter-mates, and exposure to a new environment. Eating wool or other substances may be a soothing mechanism that cats develop to cope with these changes. Pica is also observed predominantly at 6–8 months of a cat's life, when territorial and sexual behaviors emerge, and it may be induced by these social stressors. Other theories propose that pica is a redirection of prey-catching and ingestion behavior resulting from indoor confinement, which is especially common among oriental breeds because of the risk of theft. In natural environments pica has been observed in parrots (such as macaws) and other birds and mammals. Charles Munn has studied Amazon macaws that lick clay from riverbeds in the Amazon to detoxify the seeds they eat. Amazon macaws spend two to three hours a day licking clay. Munn has found that the clay helps counter the tannins and alkaloids in the seeds the macaws ingest, a strategy that is also used by native cultures in the Andes Mountains in Peru. Pica also affects domesticated animals. While drugs like Prozac are often able to diminish troublesome behaviors in pet dogs, they do not seem to help with this eating disorder. The story of Bumbley, a wire fox terrier who appeared on the TV show 20/20 as a result of his eating disorder, is taken from a book by Dr. Nicholas Dodman. Dodman discusses new research relating bulimia and compulsive overeating to seizure-like activity in human patients, and he suggests that anti-epileptic medication might be a possible treatment for some cases of pica in animals. Behavioral disorders Behavioral disorders are difficult to study in animal models because it is difficult to know what animals are thinking and because the animal models used to assess psychopathologies are experimental preparations developed to study a condition. Because animals cannot use language, the validity of studies of behavioral disorders such as depression and stress is open to question, and it can be difficult to attribute human conditions to non-human animals. Obsessive compulsive disorder (OCD) Obsessive-compulsive behavior in animals, often called "stereotypy" or "stereotypical behavior", can be defined as a specific, unnecessary action (or series of actions) repeated more often than would normally be expected. It is unknown whether animals are able to 'obsess' in the same way as humans, and because the motivation for compulsive acts in non-human animals is unknown, the term "abnormal repetitive behavior" is less misleading. A wide variety of animals exhibit behaviors that can be considered abnormally repetitive. Ritualized and stereotyped behaviors Though obsessive-compulsive behaviors are often considered to be pathological or maladaptive, some ritualized and stereotyped behaviors are beneficial. These are usually known as "fixed action patterns". These behaviors sometimes share characteristics with obsessive-compulsive behavior, including a high degree of similarity in form and use among many individuals and a repetitive dimension. There are many observable animal behaviors with characteristic, highly conserved patterns. One example is grooming behavior in rats.
This behavior is defined by a specific sequence of actions that does not normally differ between individual rats. The rat begins by stroking its whiskers, then expands the stroking motion to include the eyes and the ears, and finally moves on to lick both sides of its body. Other behaviors may be added to the end of this chain, but these four actions themselves are fixed. Its ubiquity and high degree of stereotypy suggest that this is a beneficial behavior pattern which has been maintained throughout evolutionary history. Although humans and animals both have pathological stereotyped behaviors, they do not necessarily provide a similar model of OCD. Feather picking in orange-winged amazon parrots has both a genetic component, the behavior being more likely in one sibling if the other shows it, and an environmental one, being more common in parrots housed in groups close to a door. The same study found that feather picking was more common in females and that there was no social transmission of the behavior; neighbors of feather-picking birds were only more likely to show the behavior as well if they were related. An evolutionary basis Some researchers believe that disadvantageous obsessive compulsive behaviors can be thought of as a normally beneficial process gone too far. Brüne (2006) suggests that changes of various origins in striatal and frontal brain circuits, which play a role in predicting needs and threats that may arise in the future, may result in a hyperactive cognitive harm avoidance system, in which a person becomes consciously and unreasonably fearful of an unlikely or impossible event. This may also be true in other animals. Genetic factors Canine compulsions are more common in some breeds, and behavioral dispositions are often shared within the same litter. This suggests that there is a genetic factor to the disorder. A questionnaire given to dog owners, together with blood samples from 181 dogs of four breeds (miniature and standard bull terriers, German shepherds, and Staffordshire bull terriers), showed these breeds to be more susceptible to compulsive and repetitive behaviors. It is suggested that the more we learn through studying OCD in dogs, the better we can understand human biology and the genetics involved in the heredity of susceptibility to disorders such as OCD. A chromosome has been located in dogs that confers a high risk of susceptibility to OCD. Canine chromosome 7 has been found to be most significantly associated with obsessive compulsive disorder in dogs, or more specifically, canine compulsive disorder (CCD). This breakthrough helped further relate OCD in humans to CCD in canines. The implicated region of canine chromosome 7 is expressed in the hippocampus of the brain, the same area in which obsessive compulsive disorder is expressed in human patients. Similar pathways are involved in drug treatment responses for both humans and dogs, offering further evidence that the two species exhibit symptoms and respond to treatment in similar ways. These data can help scientists discover more effective and efficient ways to treat OCD in humans through what they find by studying CCD in dogs. Animal models Animals exhibiting obsessive and compulsive behaviors that resemble OCD in humans have been used as tools for elucidating possible genetic influences on the disease, evaluating potential treatments, and better understanding the pathology of this behavior in general. While such models are useful, they are also limited; it is unclear whether the behavior is ego dystonic in animals.
That is, it is difficult to evaluate whether an animal is aware that its behavior is excessive and unreasonable and whether this awareness is a source of anxiety. One study, by Simon Vermeier, used neuroimaging to investigate serotonergic and dopaminergic neurotransmission and to measure serotonin 2A receptor availability in 9 dogs with canine compulsive disorder (CCD). When compared to the 15 non-compulsive dogs used as a control group, the dogs with CCD were found to have lower receptor availability as well as lower subcortical perfusion and hypothalamic availability. The results of this study provide evidence of imbalanced serotonergic and dopaminergic pathways in dogs with CCD. Similarities with findings from studies of human OCD provide construct validity for this work, suggesting that it will be useful for continuing to investigate brain activity and drug treatment in obsessive compulsive disorder. Treatments have been given to dogs with CCD to observe how their responses resemble or differ from human responses to the same pharmaceutical or behavioral treatment. A combination of the two approaches has been found to be most effective in reducing the intensity and frequency of symptoms in both canines and humans. Pharmaceutically, clomipramine was found to be more effective than an alternative drug, amitriptyline, in treating dogs. One study by Karen Overall found that combining behavioral therapy with the more effective clomipramine decreased the symptoms of canine compulsive disorder by over 50% for all of the dogs involved in the study. Overall acknowledges that OCD cannot be completely cured, but studies like this are still important because the disorder can often be controlled effectively enough that it does not interfere with daily life, a valuable outcome for those who have it. Alicia Graef's article makes the bold claim that dogs are the key to better diagnosing, recognizing, and treating obsessive compulsive disorder in humans. There is evidence supporting her statements, but the connection between CCD and OCD is not yet clearly understood. So far, studies have shown that treatments effective in dogs tend to be similarly effective in humans, but much remains unknown. Obsessive compulsive disorder cannot be fully cured, but it can be controlled and understood, and one possible route to doing so better is the study of CCD in canines. Studying dogs that exhibit compulsive behaviors has led scientists to genetic breakthroughs in understanding how biology and genetics factor into obsessive compulsive disorder. By observing how CCD manifests in the brain activity, behaviors, and genes of diagnosed canines, scientists have been able to use this information to develop better diagnostic tests, recognize symptoms more readily, and identify susceptible individuals. The similar brain functions and behaviors of dogs with CCD and humans with OCD suggest a connection not only in behavior and symptoms but also in responses to treatment. Understanding canine compulsive disorder has helped scientists apply what they have learned to developing new and more effective ways to treat obsessive compulsive disorder in humans.
Some examples of ways in which rats and mice, two of the most common animal models, have been used to represent human OCD are provided below. Lever pressing in rats Certain laboratory rat strains that have been created by controlled breeding for many generations show a higher tendency towards compulsive behaviors than other strains. Lewis rats show more compulsive lever-pressing behavior than Sprague Dawley or Wistar rats and are less responsive to the anti-compulsive drug paroxetine. In this study, rats were taught to press a lever to receive food in an operant conditioning task. Once food was no longer provided when they pressed the lever, rats were expected to stop pressing it. Lewis rats pressed the lever more often than the other two strains, even though they had presumably learned that they would not receive food, and continued to press it more often even after treatment with the drug. An analysis of the genetic differences between the three rat strains might help to identify genes responsible for the compulsive behavior. Rats have also been used to test the possibility of a problem with dopamine levels in the brains of animals that exhibit compulsive checking behavior. After treating rats with quinpirole, a chemical that specifically stimulates dopamine D2/D3 receptors, compulsive checking of certain locations in an open field increased. Some components of the checking behavior, such as the level of stereotypy in the path animals took to checked locations, the number of checks, and the length of the checks, indicated an increase in compulsivity as doses of quinpirole increased; other components, such as the time taken to return from the checked location to the starting point and the time taken to make that trip, remained constant after the initial injection throughout the experiment. This means that there might be both an all-or-none and a sensitization aspect in the biology of the dopamine deficiency model of OCD. In addition, quinpirole might reduce a sense of satisfaction in the rats after they check a location, causing them to return to that location again and again. Estrogen deficiency in male mice Based on findings of changes in OCD symptoms in menstruating women and differences in the development of the disease between men and women, Hill and colleagues set out to research the effect of estrogen deprivation on the development of compulsive behavior in mice. Male mice with an aromatase gene knockout, which were unable to produce estrogen, showed excessive grooming and wheel-running behaviors, but female mice did not. When treated with 17β-estradiol, which replaced estrogen in these mice, the behaviors disappeared. This study also found that COMT protein levels decreased in mice that did not produce estrogen and increased in the hypothalamus after estrogen-replacement treatment. Briefly, the COMT protein is involved in degrading some neurotransmitters, including dopamine, norepinephrine and epinephrine. These findings suggest that a hormonal component and a hormone-gene interaction may contribute to obsessive behaviors.
Lick granuloma, or licking repeatedly until ulcers form on the skin, mostly affects large dogs such as Labradors, golden retrievers, Great Danes, and Dobermans, while bull terriers, German shepherds, Old English sheepdogs, Rottweilers, wire-haired fox terriers, and springer spaniels are more likely to snap at imaginary flies or chase light and shadows. These associations probably have an evolutionary basis, although Dodman does not clearly explain that aspect of the behaviors. Louis Shuster and Nicholas Dodman noticed that dogs often demonstrate obsessive and compulsive behaviors similar to those of humans. Canine compulsive disorder (CCD) is not specific to certain breeds of dog, but breed may affect the specific types of compulsions. For example, bull terriers frequently exhibit obsessively predatory or aggressive behaviors. Breed may factor into the types of compulsions, but some behaviors are common across the canine spectrum. Most commonly, CCD is seen in canines that repeat behaviors such as chasing their tails, compulsively chewing on objects, or licking their paws excessively, similar to the common hand-washing compulsion many people with obsessive compulsive disorder have. Attacking the air around the head, as if hallucinating a bug there, is another compulsion that has been seen in some dogs. Circling, hair biting, staring, and sometimes even barking are other examples of behaviors that are considered compulsions in dogs when taken to extremes and repeated. Treatment (pharmaceutical) Dodman advocates the use of exercise, an enriched environment (such as providing noises for dogs to listen to while owners are at work), and often Prozac (an SSRI used to treat OCD in humans) as treatments. Shuster and Dodman tested pharmaceutical treatment on canines with CCD to see if it would work as effectively as it does in humans. They used glutamate receptor blockers (memantine) and fluoxetine, commonly known as the antidepressant Prozac, to treat and observe the reactions of 11 dogs with compulsions. Seven of the 11 dogs showed significant reductions in the intensity and frequency of their compulsions after receiving medication. Dodman includes a story about Hogan, a castrated deaf male Dalmatian, and his compulsive behavior. Hogan had a history of neglect and abuse before he was adopted by Connie and Jim, who attempted to improve his behavior by teaching him to respond to American Sign Language; Dodman's book includes excerpts from Hogan's file. Addiction Sugar addiction has been examined in laboratory rats, and it develops in the same way that drug addiction develops. Eating sugary foods causes the brain to release natural chemicals called opioids and dopamine in the limbic system. Tasty food can activate opioid receptors in the ventral tegmental area and thereby stimulate cells that release dopamine in the nucleus accumbens (NAc). The brain recognizes the intense pleasure derived from the release of dopamine and opioids and learns to crave more sugar. Dependence is created through these natural rewards, the sugary treats, and the opioids and dopamine released into the synapses of the mesolimbic system. The hippocampus, the insula and the caudate activate when rats crave sugar; these are the same areas that become active when drug addicts crave a drug.
Sugar is beneficial because it provides energy, but if the nervous system changes and the body becomes dependent on sugar intake, somatic signs of withdrawal, such as chattering teeth, forepaw tremors, and head shakes, begin to appear when sugar is not ingested. Morphine tolerance, a measure of addiction, has been observed in rats, and their tolerance was attributed to environmental cues as well as the systemic effects of the drug. Morphine tolerance does not depend merely on the frequency of pharmacological stimulation, but also on the number of pairings of a drug-predictive cue with the systemic effects of the drug. Rats that received morphine paired with a drug-predictive cue became significantly more tolerant than rats given morphine without such a cue. Depression Using dogs, Martin Seligman and his colleagues pioneered the study of depression in the animal model of learned helplessness at the University of Pennsylvania. Dogs were separated into three groups: a control group, group A, which had control over when it was shocked, and group B, which had no control over when it was shocked. After the shocking condition, the dogs were tested in a shuttle box where they could escape shock by jumping over a partition. To rule out an interference effect – the possibility that the dogs had learned responses during the shocks that would interfere with their normal escape behavior – the dogs were immobilized with curare, a paralyzing drug, while they were being shocked. Both the control group and group A tended to jump over the partition to escape shock, while group B dogs did not jump and would passively take the shock. The dogs in group B perceived that the outcome was not related to their efforts. Consequently, a theory emerged that attributed the behavior of the animals to the effects of the shock as a stressor so extreme that it depleted a neurochemical needed by the animals for movement. Since the dog studies, the effects of helplessness have been tested in species ranging from fish to cats. More recently, learned helplessness has been studied in rhesus macaques using inescapable shock and stress situations such as forced swimming, behavioral despair tasks, tail suspension and pinch-induced catalepsy – situations that render the monkey incapable of controlling the environment. Depression and low mood were found to be of a communicative nature: they signal yielding in a hierarchy conflict or a need for help. Low mood or extreme low mood (also known as depression) can regulate a pattern of engagement and foster disengagement from unattainable goals. "Low mood increases an organism's ability to cope with the adaptive challenges characteristic of unpropitious situations in which effort to pursue a major goal will likely result in danger, loss, bodily damage, or wasted effort." Being apathetic can therefore have a fitness advantage for the organism. Depression has also been studied as a behavioral strategy used by vertebrates to increase their personal or inclusive fitness under the threat of parasites and pathogens. A lack of neurogenesis has been linked to depression. Stressed animals (for example, animals that are socially isolated or have elevated cortisol levels) show a decrease in neurogenesis, and antidepressants have been found to promote neurogenesis. Rene Hen and his colleagues at Columbia University ran a study on rats in which they blocked neurogenesis by applying radiation to the hippocampal area in order to test the efficacy of antidepressants.
Results suggested that antidepressants failed to work when neurogenesis was inhibited. Stress Robert Sapolsky has extensively studied baboons in their natural environment in the Serengeti in Africa. He noticed that baboon societies have hierarchies very similar to those of humans. Baboons spend very few hours searching for food and fulfilling their primary needs, leaving them with time to develop their social networks. In primates, mental stresses show up in the body: primates experience psychological stresses that can elicit physiological responses which, over time, can make them sick. Sapolsky observed the baboons' ranks, personalities and social affiliations, then collected blood samples from the baboons to measure their cortisol (stress hormone) levels, and then matched social position to cortisol levels. Most of the data have been collected from male baboons, because at any given time 80 percent of the females were pregnant. Three factors influenced a baboon's cortisol levels: friendships, perspective, and rank. Baboons had lower levels of cortisol if they (1) played with infants and cultivated friendships, (2) could tell whether a situation was a real threat and whether they were going to win or lose, and (3) were top ranking. Cortisol levels rise with age, and hippocampal cells express fewer hormone receptors on their surface to protect themselves from the excess, making it harder to control stress levels. Cortisol levels are elevated in half of people with major depression, and it is the hippocampal region that is affected in both cases. Stress can have negative effects on gastrointestinal function, causing ulcers, and it can also decrease sex drive, affect sleeping patterns and elevate blood pressure; but it can also stimulate and motivate. When animals experience stress, they are generally more alert than when they are not stressed, which may make them better aware of unfamiliar environments and of possible threats to their lives in those environments. Yerkes and Dodson described an empirical relationship between arousal and performance that is illustrated by an inverted U-shaped graph. According to the Yerkes-Dodson law, performance increases with cognitive arousal, but only up to a certain point; when arousal or stress becomes too great, performance and efficiency decline, producing the downward arm of the inverted U (a simple illustrative curve is sketched below). Sapolsky has also studied stress in rats, and his results indicate that early experiences in young rats have strong, lasting effects. Rats that were exposed to human handling (a stressful situation) had finely tuned stress responses that may have lowered their lifetime exposure to stress hormones compared to those that were not handled. In short, stress can be adaptive: the more exposure a rat has had to a stressful situation, the better it can handle that situation. Stereotypies Stereotypies are repetitive, sometimes abnormal behaviors, such as pacing on the perch in birds. There are adaptive stereotypic behaviors such as grooming in cats and preening in birds. Captive parrots commonly perform a range of stereotypies. These behaviors are repeated identically and lack any function or goal. Captive parrots perform striking oral and locomotor stereotypies such as pacing on the perch or repetitive play with a certain toy.
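The Yerkes-Dodson relationship mentioned in the stress discussion above is often summarized as an inverted-U curve. The following is a minimal, purely illustrative sketch of such a curve; the Gaussian-shaped function and its parameters are assumptions chosen for illustration, not a fitted empirical model.

```python
# Illustrative only: a toy inverted-U curve in the spirit of the Yerkes-Dodson law.
# The functional form and its parameters are assumptions for illustration.
import numpy as np

def performance(arousal, optimum=0.5, peak=1.0, width=0.25):
    """Toy inverted-U: performance peaks at `optimum` arousal and falls off on both sides."""
    return peak * np.exp(-((arousal - optimum) ** 2) / (2 * width ** 2))

for a in np.linspace(0.0, 1.0, 11):
    bar = "#" * int(40 * performance(a))
    print(f"arousal={a:0.1f}  performance={performance(a):0.2f}  {bar}")
```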
Feather picking and loud vocalizations can be stereotypies, but they are not as rigid and may be reactions to confinement, stress, boredom and loneliness; studies have shown that parrots in cages closest to the door are the most prone to feather picking or screaming. Feather picking is not a true stereotypy and is more like hair pulling in humans; loud vocalization or screaming can be a stereotypy, but vocalization is also part of a parrot's natural behavior. Captive parrots lack sufficient stimulation; presumably they suffer from a lack of companionship and opportunities to forage. Stereotypies can arise from the social environment, for example the presence or absence of certain social stimuli, social isolation, low feeder space and high stocking density (especially for tail biting in pigs). These behaviors can also be transmitted through social learning: bank voles, pigeons and pigs housed next to animals that show stereotypies can pick them up, and the behaviors can also spread through stimulus enhancement, which is what happens with tail biting in pigs and feather pecking by hens. Stereotypies may be coping mechanisms, as results from studies on tethered and stalled sows suggest. Sows that were tethered or stalled exhibited more stereotypies, such as licking and rubbing, than sows kept in groups outdoors. This abnormal behavior seems to be related to the density of opioid receptors, which are part of the reward system. In sows, prolonged confinement, such as being tethered or kept in gestation crates, results in abnormal behaviors and stereotypies. Mu and kappa receptors are associated with aversion behaviors, and mu receptor density is greater in tethered sows than in sows kept in groups outdoors. However, sows showing stereotypies had lower densities of both mu and kappa receptors in the brain, suggesting that inactivity increases mu receptor density while the development of stereotypies decreases both kappa and mu receptor density. It is suggested that careful design of the captive environment can help prevent stereotypies, by creating an enclosure as similar as possible to the animal's natural environment and providing enrichment to stimulate natural behavior. Self-aggression Rhesus macaques have been observed to display self-aggression (SA), including self-biting, self-clasping, self-slapping, self-rubbing and threatening of body parts. The rhesus macaques observed were individually caged and free of disease. Their self-aggression levels rose in stressful and stimulating conditions, such as moving from one cage to another. Stump-tailed macaques were studied to examine the source of their SA. SA increased in an impoverished environment, and the results support the idea that SA may serve to increase sensory input in impoverished environments. Captive macaques do not socialize the way wild macaques do, which may affect SA. When macaques are allowed to socialize, either by placing another macaque in the cage or by not caging them individually, SA levels decrease. Results indicate that SA is a form of redirected social aggression. SA is related to frustration and social status, especially in macaques that have an intermediate dominance rank. See also Anthrozoology List of abnormal behaviors in animals References Further reading Anxiety and compulsive disorders in dogs. (2013). PetMD. http://www.petmd.com/dog/conditions/behavioral. Graef, A. (October 2013). Can dogs lead us to a cure for obsessive-compulsive disorder? Care 2 Make a Difference.
http://www.care2.com/causes/can-dogs-lead-us-to-a-cure-for-obsessive-compulsive-disorder.html
Physiognomy
Physiognomy (from the Greek physis, meaning "nature", and gnomon, meaning "judge" or "interpreter") or face reading is the practice of assessing a person's character or personality from their outer appearance, especially the face. The term can also refer to the general appearance of a person, object, or terrain without reference to its implied characteristics, as in the physiognomy of an individual plant (see plant life-form) or of a plant community (see vegetation). Physiognomy as a practice meets the contemporary definition of pseudoscience and is regarded as such by academics because of its unsupported claims; popular belief in the practice of physiognomy is nonetheless still widespread, and modern advances in artificial intelligence have sparked renewed interest in the field of study. The practice was well-accepted by ancient Greek philosophers, but fell into disrepute in the 16th century while practised by vagabonds and mountebanks. It revived and was popularised by Johann Kaspar Lavater, before falling from favour in the late 19th century. Physiognomy in the 19th century is particularly noted as a basis for scientific racism. Physiognomy as it is understood today is a subject of renewed scientific interest, especially as it relates to machine learning and facial recognition technology. The main interest for scientists today is the risks, including privacy concerns, of physiognomy in the context of facial recognition algorithms. Physiognomy is sometimes referred to as anthroposcopy, a term originating in the 19th century. Ancient Notions of the relationship between an individual's outward appearance and inner character date back to antiquity, and occasionally appear in early Greek poetry. Siddhars from ancient India described a practice of identifying personal characteristics with body features. Chinese physiognomy or Chinese face reading dates back to at least the Spring and Autumn period. Early indications of a developed physiognomic theory appear in 5th century BC Athens, with the works of Zopyrus (featured in a dialogue by Phaedo of Elis), an expert in the art. By the 4th century BC, the philosopher Aristotle frequently referred to theory and literature concerning the relationship of appearance to character, and he was receptive to such an idea, as evidenced by a passage in his Prior Analytics. The first systematic physiognomic treatise is a slim volume, Physiognomonics, ascribed to Aristotle but probably the work of his "school" rather than of Aristotle himself. The volume is divided into two parts, conjectured to have originally been two separate works. The first section discusses arguments drawn from nature, describes other (non-Greek) races, and concentrates on human behavior. The second section focuses on animal behavior, dividing the animal kingdom into male and female types; from these are deduced correspondences between human form and character. After Aristotle, the major extant works in physiognomy are those of Polemo of Laodicea (2nd century AD), in Greek; Adamantius the Sophist (4th century), in Greek; and an anonymous Latin author (about 4th century). The ancient Greek mathematician, astronomer, and scientist Pythagoras, who some believe originated physiognomics, once rejected a prospective follower named Cylon because, to Pythagoras, his appearance indicated bad character. After inspecting Socrates, a physiognomist announced that he was given to intemperance, sensuality, and violent bursts of passion, which was so contrary to Socrates's image that his students accused the physiognomist of lying.
Socrates put the issue to rest by saying that, originally, he had been given to all these vices, but had particularly strong self-discipline. Middle Ages and Renaissance The term 'physiognomy' was common in Middle English, often written in variant spellings such as 'fisnamy', as in the Tale of Beryn, a spurious addition to The Canterbury Tales. Physiognomy's validity was once widely accepted. Michael Scot, a court scholar for Frederick II, Holy Roman Emperor, wrote on the subject in the early 13th century. English universities taught physiognomy until Henry VIII of England outlawed "beggars and vagabonds playing 'subtile, crafty and unlawful games such as physnomye or palmestrye'" in 1530 or 1531. Around this time, scholastic leaders settled on the more erudite Greek form 'physiognomy' and began to discourage the entire concept of 'fisnamy'. Leonardo da Vinci dismissed physiognomy in the early 16th century as "false", a chimera with "no scientific foundation". Nevertheless, da Vinci believed that facial lines caused by facial expressions could indicate personality traits. For example, he wrote that "those who have deep and noticeable lines between the eyebrows are irascible". Modern Johann Kaspar Lavater The principal promoter of physiognomy in modern times was the Swiss pastor Johann Kaspar Lavater (1741–1801), who was briefly a friend of Goethe. Lavater's essays on physiognomy were first published in German in 1772 and gained great popularity. These influential essays were translated into French and English, and influenced early criminological theory. Lavater's critics Lavater received mixed reactions from scientists, with some accepting his research and others criticizing it. His harshest critic was the scientist Georg Christoph Lichtenberg, who said that pathognomy, discovering the character of a person by observing their behavior, was more effective. The English religious writer Hannah More (1745–1833) complained to her contemporary, the writer Horace Walpole, "In vain do we boast ... that philosophy had broken down all the strongholds of prejudice, ignorance, and superstition; and yet, at this very time ... Lavater's physiognomy books sell at fifteen guineas a set." Thomas Browne Lavater found confirmation of his ideas in the work of the English physician-philosopher Sir Thomas Browne (1605–1682) and the Italian Giambattista della Porta (1535–1615). Browne, in his Religio Medici (1643), discussed the possibility of discerning inner qualities from the outer appearance of the face, and he reaffirmed his physiognomic beliefs in Christian Morals (circa 1675). Browne also introduced the word caricature into the English language, through which much physiognomical belief attempted to entrench itself by illustrative means, in particular through visual political satire. The works of the Italian scholar Giambattista della Porta are well represented in the library of Sir Thomas Browne, including Of Celestial Physiognomy, in which della Porta argued that it was not the stars but a person's temperament that influences their facial appearance and character. In De humana physiognomia (1586), della Porta used woodcuts of animals to illustrate human characteristics. Both della Porta and Browne adhered to the 'doctrine of signatures', that is, the belief that the physical structures of nature, such as a plant's roots, stem, and flower, were indicative keys (or 'signatures') to their medicinal potentials. Period of popularity The popularity of physiognomy grew throughout the first quarter of the 18th century and into the 19th century.
It was discussed seriously by academics, who believed in its potential. Use in fiction and art Many European novelists, notably Balzac and Chaucer, used physiognomy in the descriptions of their characters, as did portrait artists such as Joseph Ducreux. A host of 19th-century English authors were influenced by the idea, notably evident in the detailed physiognomic descriptions of characters in the novels of Charles Dickens, Thomas Hardy, and Charlotte Brontë. In addition to Thomas Browne, other literary authors associated with Norwich who made physiognomical observations in their writings include the romantic novelist Amelia Opie and the travelogue author George Borrow. Physiognomy is a central, implicit assumption underlying the plot of Oscar Wilde's The Picture of Dorian Gray. In 19th-century American literature, physiognomy figures prominently in the short stories of Edgar Allan Poe. Phrenology Phrenology, a form of physiognomy that measures the bumps on the skull in order to determine mental and personality characteristics, was created around 1800 by the German physician Franz Joseph Gall and Johann Spurzheim and was widely popular in the 19th century in Europe and the United States. In the U.S., the physician James W. Redfield published his Comparative Physiognomy in 1852, illustrating with 330 engravings the "Resemblances between Men and Animals". He found these resemblances in appearance and (often metaphorically) in character, likening, for example, Germans to lions, Negroes to elephants and fishes, Chinamen to hogs, Yankees to bears, and Jews to goats. In the late 19th century, phrenology became associated with physiognomy and was consequently discredited and rejected. Nevertheless, the German physiognomist Carl Huter (1861–1912) became popular in Germany with his concept of physiognomy, called "psycho-physiognomy". Criminology During the late 19th century, the English psychometrician Sir Francis Galton attempted to define physiognomic characteristics of health, disease, beauty, and criminality via a method of composite photography. Galton's process involved the photographic superimposition of two or more faces by multiple exposures. After averaging together photographs of violent criminals, he found that the composite image appeared "more respectable" than any of the faces comprising it; this was likely due to the irregularities of the skin across the constituent images being averaged out in the final blend. With the advent of computer technology during the early 1990s, Galton's composite technique was adopted and greatly improved using computer graphics software (an illustrative sketch of this kind of pixel averaging is given at the end of this section). Physiognomy also came into use in the field of criminology through the efforts of the Italian army doctor and scientist Cesare Lombroso. Lombroso, during the mid-19th century, championed the notion that "criminality was inherited and that criminals could be identified by physical attributes such as hawk-like noses and bloodshot eyes". Lombroso took inspiration from Charles Darwin's recently published theories of evolution and carried many of his misunderstandings of evolution into his promotion of physiognomy in criminology. His logic stemmed from the idea that "criminals were 'throwbacks' in the phylogenetic tree to early phases of evolution". It is reasonable to conclude that "according to Lombroso, a regressive characteristic united the genius, the madman and the delinquent; they differed in the intensity of this characteristic and, naturally in the degree of development of the positive qualities".
He believed that one could determine whether a person was of a savage nature just from their physical characteristics. Based on his findings, Lombroso proposed that the "born criminal" could be distinguished by physical atavistic stigmata such as large jaws and forward projection of the jaw; a low, sloping forehead; high cheekbones; a flattened or upturned nose; handle-shaped ears; hawk-like noses or fleshy lips; hard, shifty eyes; a scanty beard or baldness; insensitivity to pain; and long arms relative to the lower limbs. This interest in the relationship between criminology and physiognomy began upon Lombroso's first interaction with "a notorious Calabrian thief and arsonist" named Giuseppe Villella. Lombroso was particularly struck by several of Villella's personality characteristics, among them agility and cynicism. Villella's alleged crimes are disputed, and Lombroso's research is seen by many as northern Italian racism toward southern Italians. Upon Villella's death, Lombroso "conducted a post-mortem and discovered that his subject had an indentation at the back of his skull, which resembled that found in apes". He later referred to this anomaly as the "median occipital depression". Lombroso used the term "atavism" to describe the primitive, ape-like behaviors that he found in many of those whom he deemed prone to criminality. As he continued analyzing the data he gathered from Villella's autopsy and compared those results with previous cases, he inferred that certain physical characteristics gave some individuals a greater "propensity to offend and were also savage throwbacks to early man". These sorts of examinations yielded far-reaching consequences for various scientific and medical communities at the time, and he wrote that "the natural genesis of crime implied that the criminal personality should be regarded as a particular form of psychiatric disease", an idea still seen today in psychiatry's diagnostic manual, the DSM-5, in its description of antisocial personality disorder. Furthermore, these ideas promoted the concept that a crime should no longer be seen as an act of "free will" but instead as the result of a genetic predisposition to savagery. Lombroso had numerous case studies to corroborate many of his findings because he was the head of an insane asylum at Pesaro; he was easily able to study people from various walks of life and was thus able to further define criminal types. Because his theories primarily focused on anatomy and anthropological information, the idea of degeneracy as a source of atavism was not explored until later in his criminological work. These "new and improved" theories led to the notion "that the born criminal had pathological symptoms in common with the moral imbecile and the epileptic, and this led him to expand his typology to include the insane criminal and the epileptic criminal". In addition, "the insane criminal type [was said to] include the alcoholic, the mattoid, and the hysterical criminal". Lombroso's ideologies are now recognized as flawed and regarded as pseudo-science. Many have remarked on the overtly sexist and racist overtones of his research, and denounce it for those reasons alone. In spite of many of his theories being discredited, he is still hailed as the father of "scientific criminology".
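Modern software versions of the composite-photography technique attributed to Galton above essentially average pixel values across roughly aligned face images. The following is a minimal sketch of that idea only; the file names are hypothetical, and the images are assumed to be the same size, grayscale-convertible, and already roughly aligned.

```python
# Illustrative only: naive pixel averaging of pre-aligned face photographs,
# in the spirit of Galton's composite portraits. File names are hypothetical;
# images are assumed to share the same dimensions and rough alignment.
import numpy as np
from PIL import Image

paths = ["face1.png", "face2.png", "face3.png"]  # hypothetical input images

stack = np.stack(
    [np.asarray(Image.open(p).convert("L"), dtype=np.float64) for p in paths]
)
composite = stack.mean(axis=0)  # per-pixel average, analogous to multiple exposure

Image.fromarray(composite.astype(np.uint8)).save("composite.png")
```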
Contemporary usage In France, the concept was further developed in the 20th century under the name morphopsychology by Louis Corman (1901–1995), a French psychiatrist who argued that the workings of vital forces within the human body resulted in different facial shapes and forms. The term "morphopsychology" is a translation of the French word morphopsychologie, which Corman coined in 1937 when he wrote his first book on the subject, Quinze leçons de morphopsychologie (Fifteen Lessons of Morphopsychology). Social media Discourse around physiognomy has been resurgent on social media among both male and female users, particularly with regard to memes, face filters, and anti-feminist and incel communities. Such content has raised concern about the normalization of pseudoscience and of the idea that physical characteristics are inherently associated with one's actions and social status. Examples include the perception of leftists as being unattractive and of women's femininity as dependent on their skull shape. Scientific investigation Due to its legacy of racism and junk science masquerading as criminology, scientific study or discussion of the relationship between facial features and character has become taboo. Earlier research had nonetheless posited many links. For example, there is evidence that character can influence facial appearance. Also, facial characteristics influence first impressions of others, which shape our expectations and behavior, which in turn influence character. Lastly, there are several biological factors that influence both facial appearance and character traits, such as pre- and post-natal hormone levels and gene expression. Recent progress in AI and computer vision has been largely driven by the widespread adoption of deep neural networks (DNNs). DNNs are effective at recognizing patterns in large unstructured data such as digital images, sound, or text, and at analyzing such patterns to make predictions. DNNs offer an opportunity to identify links between characteristics and facial features that might be missed or misinterpreted by the human brain. The relationship between facial features and character traits such as political or sexual orientation is complex, but it involves the fact that facial features can shape social behavior, partially as a result of the self-fulfilling prophecy effect. The self-fulfilling prophecy effect asserts that people perceived to have a certain attribute will be treated accordingly, and over time may engage in behaviors consistent with others' expectations of them. Conversely, social behavior, such as addiction to drugs or alcohol, can shape facial features. Research in the 1990s indicated that three elements of personality in particular – power, warmth and honesty – can be reliably inferred by looking at facial features. Some evidence indicated that the pattern of whorls in the scalp had some correlation with male homosexuality, though subsequent research has largely refuted the findings on hair whorl patterns. A February 2009 article in New Scientist magazine reported that physiognomy is undergoing a small revival, with research papers trying to find links between personality traits and facial traits. A study of 90 ice hockey players found a statistically significant correlation between a wider face – a greater than average cheekbone-to-cheekbone distance relative to the distance between brow and upper lip – and the number of penalty minutes a player received for violent acts like slashing, elbowing, checking from behind, and fighting.
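The width-to-height ratio used in the hockey-player study above is, as the text describes, cheekbone-to-cheekbone width relative to the distance between brow and upper lip. The following is a minimal, purely illustrative sketch of such a calculation and of correlating the ratio with penalty minutes; the landmark coordinates and player values are hypothetical, not data from the cited study.

```python
# Illustrative only: a toy facial width-to-height ratio (fWHR) calculation and a
# correlation with penalty minutes. All coordinates and values are hypothetical.
import numpy as np

def fwhr(landmarks):
    """Cheekbone-to-cheekbone width divided by upper-face height (brow to upper lip)."""
    width = abs(landmarks["right_cheekbone"][0] - landmarks["left_cheekbone"][0])
    height = abs(landmarks["upper_lip"][1] - landmarks["brow"][1])
    return width / height

example_face = {  # hypothetical (x, y) pixel coordinates
    "left_cheekbone": (30.0, 80.0), "right_cheekbone": (130.0, 80.0),
    "brow": (80.0, 60.0), "upper_lip": (80.0, 115.0),
}
print(f"example fWHR: {fwhr(example_face):.2f}")

# Hypothetical fWHR values and penalty minutes for a handful of players.
ratios = np.array([1.8, 1.9, 2.1, 2.0, 2.3, 1.7])
penalty_minutes = np.array([20, 35, 60, 40, 75, 15])
r = np.corrcoef(ratios, penalty_minutes)[0, 1]
print(f"Pearson correlation between fWHR and penalty minutes: {r:.2f}")
```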
This revival has continued in the 2010s with the rise of machine learning for facial recognition. For instance, researchers have claimed that it is possible to predict upper body strength and some personality traits (such as a propensity to aggression) only by looking at the width of the face. Political orientation can also be reliably predicted. In a study that used facial recognition technology to analyze the faces of over one million individuals, political orientation was predicted correctly 74% of the time – considerably better than chance (50%), human ability (55%) or even personality questionnaires (68%). Other studies have used AI and machine learning techniques to identify facial characteristics that predict honesty, personality, and intelligence. In 2017, a controversial study claimed that an AI algorithm could detect sexual orientation 'more accurately than humans' (in 81% of the tested cases for men and 71% for women). A director of research at the Human Rights Campaign (HRC) told the BBC that the study was "junk science". The director, an 'equity and inclusion strategist' with no scientific background, was criticized by the researchers for "premature judgement". In early 2018, researchers, among them two AI specialists working at Google (one of them working on face recognition), issued a study that reportedly contradicted those findings, based on a survey of 8,000 Americans using Amazon's Mechanical Turk crowd-sourcing platform. Using a series of yes/no questions, the survey yielded many traits that discriminated between gay and straight respondents. These traits had less to do with morphology than with grooming, presentation, and lifestyle (makeup, facial hair, glasses, the angle of pictures taken of oneself, etc.). For more information on this sexual orientation issue in general, see gaydar. In 2020, a study on the use of consumer facial images for marketing research purposes concluded that deep learning on facial images can extract a variety of personal information relevant to marketers, and so users' facial images could become a basis for ad targeting on Tinder and Facebook. According to the study, while most of the facial images' predictive power is attributable to basic demographics (age, gender, race) extracted from the face, image artifacts, observable facial characteristics, and other image features extracted by deep learning all contribute to prediction quality beyond demographics. In media In 2011, the South Korean news agency Yonhap published a physiognomical analysis of the current leader of North Korea, Kim Jong-un. In the TV series Doctor Who, as the Fourth Doctor examines his new face after regenerating in Robot, he comments on his physiognomy, saying "As for the physiognomy, well, nothing's perfect." The newspaper Ukrainska Pravda reported, "The fact that Putin uses [body] doubles is suggested by the intelligence data of the Ukrainian secret services and conclusions made by several specialists, in particular physiognomists." Related disciplines Anthropological criminology Anthropometry Characterology Mien Shiang Metoposcopy Onychomancy Palmistry Pathognomy Phrenology Somatotype and constitutional psychology References Further reading Claudia Schmölders, Hitler's Face: The Biography of an Image. Translated by Adrian Daub. University of Pennsylvania Press, 2006. Liz Gerstein, About Face. SterlingHouse Publisher, Inc. Rüdiger Campe and Manfred Schneider, Geschichten der Physiognomik. Text-Bild-Wissen (Freiburg: Rombach, 1996).
External links Johann Kaspar Lavater On The Nature of Man, Which is the Foundation of the Science Which is called Physiognomy 1775 Selected images from: Della Porta, Giambattista: De humana physiognomonia libri IIII (Vico Equense, 1586). Historical Anatomies on the Web. National Library of Medicine. Women's traits 'written on face' (BBC News Wednesday, 11 February 2009) "On Physiognomy" – An Essay by Arthur Schopenhauer "Composite Portraits", by Francis Galton, 1878 (as published in the Journal of the Anthropological Institute of Great Britain and Ireland, volume 8). "Enquiries into Human Faculty and its Development", book by Francis Galton, 1883. French Society for Morphopsychology
Apophenia
Apophenia is the tendency to perceive meaningful connections between unrelated things. The term (German: Apophänie, from the Greek verb ἀποφαίνειν, apophaínein) was coined by the psychiatrist Klaus Conrad in his 1958 publication on the beginning stages of schizophrenia. He defined it as "unmotivated seeing of connections [accompanied by] a specific feeling of abnormal meaningfulness". He described the early stages of delusional thought as self-referential over-interpretations of actual sensory perceptions, as opposed to hallucinations. Apophenia has also come to describe a human propensity to unreasonably seek definite patterns in random information, such as can occur in gambling. Introduction Apophenia can be considered a commonplace effect of brain function. Taken to an extreme, however, it can be a symptom of psychiatric dysfunction, for example as a symptom in schizophrenia, where a patient sees hostile patterns (for example, a conspiracy to persecute them) in ordinary actions. Apophenia is also typical of conspiracy theories, where coincidences may be woven together into an apparent plot. Examples Pareidolia Pareidolia is a type of apophenia involving the perception of images or sounds in random stimuli. A common example is the perception of a face within an inanimate object: the headlights and grill of an automobile may appear to be "grinning". People around the world see the "Man in the Moon". People sometimes see the face of a religious figure in a piece of toast or in the grain of a piece of wood. There is strong evidence that psychedelic drugs tend to induce or enhance pareidolia. Pareidolia usually occurs as a result of the fusiform face area, the part of the human brain responsible for seeing faces, mistakenly interpreting an object, shape or configuration with some kind of perceived "face-like" features as being a face. Gambling Gamblers may imagine that they see patterns in the numbers that appear in lotteries, card games, or roulette wheels, where no such patterns exist. A common example of this is the gambler's fallacy. Statistics In statistics, apophenia is an example of a type I error – the false identification of patterns in data. It may be compared to a so-called false positive in other test situations (a brief simulation of this effect is given below, before the discussion of the clustering illusion). Finance The problem of apophenia in finance has been addressed in academic articles. More specifically, within the world of finance itself, the areas most prone to apophenia are trading, structuring, sales, and compensation. Related terms In contrast to an epiphany, an apophany (i.e., an instance of apophenia) does not provide insight into the nature of reality or its interconnectedness, but is a "process of repetitively and monotonously experiencing abnormal meanings in the entire surrounding experiential field". Such meanings are entirely self-referential, solipsistic, and paranoid: "being observed, spoken about, the object of eavesdropping, followed by strangers". Thus the English term "apophenia" has a somewhat different meaning from that which Conrad defined when he coined the term "Apophänie". Synchronicity Synchronicity can be considered synonymous with correlation, without any statement about the veracity of various causal inferences. Patternicity In 2008, Michael Shermer coined the word patternicity, defining it as "the tendency to find meaningful patterns in meaningless noise". Agenticity In The Believing Brain (2011), Shermer wrote that humans have "the tendency to infuse patterns with meaning, intention, and agency", which he called agenticity.
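Before turning to the clustering illusion, the statistical point above (apophenia as a type I error) can be made concrete with a small simulation: if many pairs of purely random series are tested for correlation at the conventional 0.05 threshold, roughly 5% of them will look "significant" even though no real pattern exists. This is an illustrative sketch, not drawn from any particular study.

```python
# Illustrative only: "patterns" found in pure noise (type I errors).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_tests, n_points, alpha = 1000, 30, 0.05

false_positives = 0
for _ in range(n_tests):
    x = rng.normal(size=n_points)   # random series with no real relationship
    y = rng.normal(size=n_points)
    _, p_value = stats.pearsonr(x, y)
    if p_value < alpha:
        false_positives += 1

print(f"{false_positives} of {n_tests} random pairs looked 'significant' "
      f"({false_positives / n_tests:.1%}; about {alpha:.0%} expected by chance)")
```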
Clustering illusion A clustering illusion is a type of cognitive bias in which a person sees a pattern in a random sequence of numbers or events. Many theories have been disproved as a result of this bias being highlighted. One case, during the early 2000s, involved the occurrence of breast cancer among employees of ABC Studios in Queensland. A study found that the incidence of breast cancer at the studios was six times the rate in the rest of Queensland, but an examination found no correlation between the heightened incidence and any factors related to the site, or any genetic or lifestyle factors of the employees. Causes Although there is no confirmed reason as to why apophenia occurs, there are some respected theories. Models of pattern recognition Pattern recognition is a cognitive process that involves retrieving information from long-term, short-term, or working memory and matching it with information from stimuli. There are three different ways in which this may happen and go wrong, resulting in apophenia. Template matching The stimulus is compared to templates, which are abstracted or partial representations of previously seen stimuli. These templates are stored in long-term memory as a result of past learning or educational experiences. For example, the letter forms 'D' and 'd', written in different styles, are all recognized as the same letter. Template-matching detection processes, when applied to more complex data sets (such as a painting or clusters of data), can result in the wrong template being matched, and a false positive detection will result in apophenia. Prototype matching This is similar to template matching, except that prototypes are complete representations of a stimulus. The prototype need not be something that has been previously seen; for example, it might be an average or amalgam of previous stimuli. Crucially, an exact match is not needed. An example of prototype matching would be to look at an animal such as a tiger and, instead of recognizing that it has features that match the definition of a tiger (template matching), recognizing that it is similar to a particular mental image one has of a tiger (prototype matching). This type of pattern recognition can result in apophenia because, since the brain is not looking for exact matches, it can pick up on some characteristics of a match and assume that it fits. This is more common with pareidolia than with data collection. Feature analysis The stimulus is first broken down into its features and then processed. This model of pattern recognition says that the processing goes through four stages: detection, pattern dissection, feature comparison in memory, and recognition. Evolution One explanation put forth by evolutionary psychologists for apophenia is that it is not a flaw in human cognition but rather something that has come about through years of need; the study of this topic is referred to as error management theory. One of the most widely cited studies in this field is Skinner's box experiment, which involved placing a hungry pigeon in a box and releasing food pellets at random times. The pigeon receives a food pellet while performing some action, and so, rather than attributing the arrival of the pellet to randomness, the pigeon repeats that action and continues to do so until another pellet falls.
As the pigeon increases the number of times it performs the action, it gains the impression that it is also increasing the number of times it is "rewarded" with a pellet, although the release in fact remains entirely random. See also Alignments of random points Anthropomorphism Barnum effect Causality Clustering illusion Confirmation bias False equivalence Ideas and delusions of reference Ideomotor phenomenon Magical thinking Schizotypal personality disorder Synesthesia Texas sharpshooter fallacy Post hoc ergo propter hoc
Psychophysiology
Psychophysiology (from Greek ψυχή, psȳkhē, "breath, life, soul"; φύσις, physis, "nature, origin"; and -λογία, -logia) is the branch of psychology that is concerned with the physiological bases of psychological processes. While psychophysiology was a broad field of research in the 1960s and 1970s, it has now become quite specialized, based on methods, topics of study, and scientific traditions. Methods vary across combinations of electrophysiological methods (such as EEG), neuroimaging (MRI, PET), and neurochemistry. Topics have branched into subspecializations such as social, sport, cognitive, cardiovascular, clinical and other branches of psychophysiology. Background Some people have difficulty distinguishing a psychophysiologist from a physiological psychologist, although the two take very different perspectives. Psychologists are interested in why we may fear spiders, while physiologists may be interested in the input/output system of the amygdala; a psychophysiologist will attempt to link the two. Psychophysiologists generally study the psychological/physiological link in intact human subjects. While early psychophysiologists almost always examined the impact of psychological states on physiological system responses, since the 1970s psychophysiologists have also frequently studied the impact of physiological states and systems on psychological states and processes. It is this perspective of studying the interface of mind and body that makes psychophysiologists most distinct. Historically, most psychophysiologists tended to examine the physiological responses and organ systems innervated by the autonomic nervous system. More recently, psychophysiologists have been equally, or potentially more, interested in the central nervous system, exploring cortical brain potentials such as the many types of event-related potentials (ERPs) and brain waves, and utilizing advanced technology such as functional magnetic resonance imaging (fMRI), MRI, PET, MEG, and other neuroimaging techniques. A psychophysiologist may look at how exposure to a stressful situation produces a change in the cardiovascular system, such as a change in heart rate (HR), vasodilation/vasoconstriction, myocardial contractility, or stroke volume. Overlaps in areas of interest between psychophysiologists and physiological psychologists may consist of observing how one cardiovascular event influences another cardiovascular or endocrine event, or how activation of one brain structure exerts excitatory activity in another structure which then induces an inhibitory effect in some other system. Often, physiological psychologists examine the effects that they study in infrahuman subjects using surgical or invasive techniques and processes. Psychophysiology is closely related to the field of neuroscience, which primarily concerns itself with relationships between psychological events and brain processes. Psychophysiology is also related to medical disciplines such as endocrinology, psychosomatics and psychopharmacology. While psychophysiology was a discipline off the mainstream of psychological and medical science prior to roughly the 1940s, more recently it has found itself positioned at the intersection of psychological and medical science, and its popularity and importance have expanded commensurately with the realization of the inter-relatedness of mind and body.
Measures Psychophysiological measures exist in multiple domains: reports, electrophysiological studies, studies in neurochemistry, neuroimaging, and behavioral methods. Evaluative reports involve participant introspection and self-ratings of internal psychological states or physiological sensations, such as self-report of arousal levels on the self-assessment manikin, or measures of interoceptive visceral awareness such as heartbeat detection. The merits of self-report are an emphasis on accurately understanding the participants' subjective experience and their perception; its pitfalls include the possibility of participants misunderstanding a scale or incorrectly recalling events. Physiological responses can also be measured via instruments that read bodily events such as heart rate change, electrodermal activity (EDA), muscle tension, and cardiac output. Many indices are part of modern psychophysiology, including brain waves (electroencephalography, EEG), fMRI (functional magnetic resonance imaging), electrodermal activity (a standardized term encompassing skin conductance response, SCR, and galvanic skin response, GSR), cardiovascular measures (heart rate, HR; beats per minute, BPM; heart rate variability, HRV; vasomotor activity), muscle activity (electromyography, EMG), the electrogastrogram (EGG), changes in pupil diameter with thought and emotion (pupillometry), eye movements, recorded via the electro-oculogram (EOG) and direction-of-gaze methods, cardiodynamics, recorded via impedance cardiography, and grip force. These measures are beneficial because they provide accurate, perceiver-independent objective data recorded by machinery. The downsides, however, are that any physical activity or motion can alter responses, and basal levels of arousal and responsiveness can differ among individuals and even between situations. Neurochemical methods are used to study functionality and processes associated with neurotransmitters and neuropeptides. Finally, one can measure overt action or behavior, which involves the observation and recording of actual actions, such as running, freezing, eye movement, and facial expression. These are good response measures and easy to record in animals, but they are not as frequently used in human studies. Uses Psychophysiological measures are often used to study emotion and attention responses to stimuli, during exertion, and, increasingly, to better understand cognitive processes. Physiological sensors have been used to detect emotions in schools and intelligent tutoring systems. Emotions as an example of psychophysiological studies Psychophysiology studies multiple aspects of behavior, and emotions are the most common example. It has long been recognized that emotional episodes are partly constituted by physiological responses. Early work linking emotions to psychophysiology started with research on mapping consistent autonomic nervous system (ANS) responses to discrete emotional states. For example, anger might be constituted by a certain set of physiological responses, such as increased cardiac output and high diastolic blood pressure, which would allow researchers to better understand patterns and predict emotional responses.
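To make the cardiovascular indices listed under Measures above concrete, here is a minimal Python sketch, with invented data, showing how mean heart rate (BPM) and one common time-domain heart rate variability statistic (RMSSD) might be computed once a recording has been reduced to inter-beat intervals in milliseconds; in practice the intervals would be extracted from ECG R-peaks and artifact-corrected before any such index is calculated.

```python
from math import sqrt

def heart_rate_bpm(ibi_ms):
    """Mean heart rate in beats per minute from inter-beat intervals (ms)."""
    return 60000.0 / (sum(ibi_ms) / len(ibi_ms))

def rmssd(ibi_ms):
    """RMSSD: root mean square of successive differences between inter-beat
    intervals, a common time-domain heart rate variability index (ms)."""
    diffs = [b - a for a, b in zip(ibi_ms, ibi_ms[1:])]
    return sqrt(sum(d * d for d in diffs) / len(diffs))

# Hypothetical inter-beat intervals (ms) from a short resting recording.
ibis = [812, 798, 830, 845, 801, 790, 822, 836, 808, 815]
print(f"HR    = {heart_rate_bpm(ibis):.1f} BPM")
print(f"RMSSD = {rmssd(ibis):.1f} ms")
```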
Some studies were able to detect consistent patterns of ANS responses that corresponded to specific emotions in certain contexts, such as an early study by Paul Ekman and colleagues in 1983: "Emotion-specific activity in the autonomic nervous system was generated by constructing facial prototypes of emotion muscle by muscle and by reliving past emotional experiences. The autonomic activity produced distinguished not only between positive and negative emotions, but also among negative emotions". However, as more studies were conducted, more variability was found in ANS responses to discrete emotion inductions, not only among individuals but also over time in the same individuals, and between social groups. Some of these differences can be attributed to variables like induction technique, context of the study, or classification of stimuli, which can alter a perceived scenario or emotional response. However, it was also found that characteristics of the participant could alter ANS responses. Factors such as basal level of arousal at the time of experimentation or between-test recovery, learned or conditioned responses to certain stimuli, range and maximal level of effect of ANS action, and individual attentiveness can all alter physiological responses in a lab setting. Even supposedly discrete emotional states fail to show specificity. For example, some emotional typologists consider fear to have subtypes, which might involve fleeing or freezing, both of which can have distinct physiological patterns and potentially distinct neural circuitry. As such, no definitive correlation can be drawn linking specific autonomic patterns to discrete emotions, causing emotion theorists to rethink classical definitions of emotions. Psychophysiological inference and physiological computer games Physiological computing represents a category of affective computing that incorporates real-time software adaptation to the psychophysiological activity of the user. The main goal is to build a computer that responds to user emotion, cognition and motivation. The approach is to enable implicit and symmetrical human-computer communication by granting the software access to a representation of the user's psychological status. There are several possible methods to represent the psychological state of the user (discussed on the affective computing page). The advantages of using psychophysiological indices are that their changes are continuous, that the measures are covert and implicit, and that they are the only data source available when the user interacts with the computer without any explicit communication or input device. These systems rely upon the assumption that the psychophysiological measure is an accurate one-to-one representation of a relevant psychological dimension such as mental effort, task engagement or frustration. Physiological computing systems all contain an element that may be termed an adaptive controller, which is used to represent the player. This adaptive controller represents the decision-making process underlying software adaptation. In their simplest form, adaptive controllers are expressed in Boolean statements. Adaptive controllers encompass not only the decision-making rules, but also the psychophysiological inference that is implicit in the quantification of the trigger points used to activate those rules. The representation of the player within an adaptive controller can become very complex, yet it is often only one-dimensional. The loop used to describe this process is known as the biocybernetic loop.
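As a concrete illustration of an adaptive controller "expressed in Boolean statements", here is a minimal Python sketch of a few passes through such a loop; the arousal signal, the 0.8/0.2 thresholds, and the difficulty-adjustment rules are hypothetical choices made for illustration, not taken from any published system.

```python
from dataclasses import dataclass

@dataclass
class GameState:
    difficulty: int = 5  # hypothetical 1-10 difficulty scale

def adaptive_controller(arousal: float, state: GameState) -> GameState:
    """Adaptive controller in its simplest form: Boolean trigger rules.

    `arousal` stands in for a normalized psychophysiological estimate
    (e.g., derived from skin conductance); the 0.8 / 0.2 thresholds are
    illustrative assumptions, not empirically derived values."""
    if arousal > 0.8 and state.difficulty > 1:
        state.difficulty -= 1   # over-aroused player: ease the game off
    elif arousal < 0.2 and state.difficulty < 10:
        state.difficulty += 1   # under-engaged player: push harder
    return state

# Simulated runs of the biocybernetic loop: the software senses the player's
# state, applies the Boolean rules, and the adaptation in turn shapes the
# player's next response (here faked with canned values).
state = GameState()
for arousal in [0.9, 0.85, 0.5, 0.1, 0.15]:
    state = adaptive_controller(arousal, state)
    print(f"arousal={arousal:.2f} -> difficulty={state.difficulty}")
```

A real system would additionally contain the psychophysiological inference step described above, i.e., the justification for treating a particular sensor-derived value as "arousal" and for placing the trigger points where they are.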
The biocybernetic loop describes the closed-loop system that receives psychophysiological data from the player and transforms those data into a computerized response, which in turn shapes the player's future psychophysiological response. A positive control loop tends towards instability, as the player-software loop strives towards an ever higher standard of desirable performance. A physiological computer game may need to incorporate both positive and negative loops into the adaptive controller. See also Karl U. Smith Vladimir Nebylitsyn Jemma B. King Physiological psychology Search activity concept Behavior change References External links Society for Psychophysiological Research. The primary American professional organization of psychophysiological research. British Society for Clinical Psychophysiology (BSCP) Clinical Psychophysiology The International Society for the Advancement of Respiratory Psychophysiology (ISARP) The Medipsych Institute Clinical Psychophysiology Brain, Body and Bytes: Psychophysiological User Interaction CHI 2010 Workshop (10-15, April 2010) Neuropsychology
Psychological anthropology
Psychological anthropology is an interdisciplinary subfield of anthropology that studies the interaction of cultural and mental processes. This subfield tends to focus on ways in which humans' development and enculturation within a particular cultural group—with its own history, language, practices, and conceptual categories—shape processes of human cognition, emotion, perception, motivation, and mental health. It also examines how the understanding of cognition, emotion, motivation, and similar psychological processes informs or constrains our models of cultural and social processes. Each school within psychological anthropology has its own approach. History Psychological anthropology emerged during the 20th century as a subfield of anthropology. The formal development of the sub-discipline is often attributed to anthropologist Franz Boas and some of his students, among whom were Margaret Mead, Ruth Benedict, and Edward Sapir. Boas, a founding influence of cultural anthropology, is one of the most important figures in the history of American anthropology. Like many of his contemporaries, Boas was intrigued by questions about the human mind. He likely read and engaged with psychoanalytical theory, such as that of Sigmund Freud, whose work was considered both controversial and groundbreaking during that era. Wilhelm Wundt was a German psychologist and pioneer in folk psychology. His objectives were to form psychological explanations using the reports of ethnologists. He proposed a series of contrasting developmental stages, such as the 'totemic' stage, the 'age of heroes and gods', and the 'enlightened age of humanity'. Unlike most, Wundt believed that the minds of both 'primitive' and civilised groups had equivalent learning capabilities but that they simply used that capacity in different ways. Though intimately connected in many ways, the fields of anthropology and psychology have remained two distinct disciplines, in part because of their differing methodologies and disciplinary objectives. Where anthropology was traditionally geared towards historical and evolutionary trends, psychology concerned itself with phenomena treated as ahistorical and acultural in nature. Psychoanalysis joined the two fields together. In 1972 Francis L. K. Hsu suggested that the field of culture and personality be renamed 'psychological anthropology'. Hsu considered the original title old-fashioned given that many anthropologists regarded personality and culture as the same, or in need of better explanations. During the 1970s and 1980s, psychological anthropology began to shift its focus towards the study of human behaviour in a natural setting. Schools Psychoanalytic anthropology This school is based upon the insights of Sigmund Freud and other psychoanalysts as applied to social and cultural phenomena. Adherents of this approach often assumed that techniques of child-rearing shaped adult personality and that cultural symbols (including myths, dreams, and rituals) could be interpreted using psychoanalytical theories and techniques. The latter included interviewing techniques based on clinical interviewing, the use of projective tests such as the TAT and the Rorschach, and a tendency towards including case studies of individual interviewees in their ethnographies. A major example of this approach was the Six Cultures Study under John and Beatrice Whiting in Harvard's Department of Social Relations.
This study examined child-rearing in six very different cultures (New England Baptist community; a Philippine barrio; an Okinawan village; an Indian village in Mexico; a northern Indian caste group; and a rural tribal group in Kenya). Some practitioners look specifically at mental illness cross-culturally (George Devereux) or at the ways in which social processes such as the oppression of ethnic minorities affect mental health (Abram Kardiner), while others focus on the ways in which cultural symbols or social institutions provide defense mechanisms (Melford Spiro) or otherwise alleviate psychological conflicts (Gananath Obeyesekere). Some have also examined the cross-cultural applicability of psychoanalytic concepts such as the Oedipus complex (Melford Spiro). Others who might be considered part of this school are a number of scholars who, although psychoanalysts, conducted fieldwork (Erich Fromm) or used psychoanalytic techniques to analyze materials gathered by anthropologists (Sigmund Freud, Erik Erikson, Géza Róheim). Because many American social scientists during the first two-thirds of the 20th century had at least a passing familiarity with psychoanalytic theory, it is hard to determine precisely which ones should be considered primarily as psychoanalytic anthropologists. Many anthropologists who studied personality (Cora DuBois, Clyde Kluckhohn, Geoffrey Gorer) drew heavily on psychoanalysis; most members of the "culture and personality school" of psychological anthropology did so. In recent years, psychoanalytic and more broadly psychodynamic theory continues to influence some psychological anthropologists (such as Gilbert Herdt, Douglas Hollan, and Robert LeVine) and have contributed significantly to such approaches as person-centered ethnography and clinical ethnography. It thus may make more sense to consider psychoanalytic anthropology since the latter part of the 20th century as more a style or a set of research agendas that cut across several other approaches within anthropology. See also: Robert I. Levy, Ari Kiev. Jeannette Mageo. Culture and personality Personality is the overall characteristics that a person possesses. All of these characteristics are acquired within a culture. However, when a person changes his or her culture, his or her personality automatically changes because the person learns to follow the norms and values of the new culture, and this, in turn, influences the individual's personal characteristics. Configurationalist approach This approach describes a culture as a personality; that is, interpretation of experiences, guided by symbolic structure, creates personality which is "copied" into the larger culture. Leading figures include Ruth Benedict, A. Irving Hallowell, and Margaret Mead. Basic and modal personality Major figures include John Whiting and Beatrice Whiting, Cora DuBois, and Florence Kluckhohn. National character Leading figures include sociologist Alex Inkeles and anthropologist Clyde Kluckhohn. Ethnopsychology Major figures: Vincent Crapanzano, Georges Devereux, Tobie Nathan, Catherine Lutz, Michelle Zimbalist Rosaldo, Renato Rosaldo, Charles Nuckolls, Bradd Shore, and Dorinne K. Kondo Cognitive anthropology Cognitive anthropology takes a number of methodological approaches, but generally draws on the insights of cognitive science in its model of the mind. 
A basic premise is that people think with the aid of schemas, units of culturally shared knowledge that are hypothesized to be represented in the brain as networks of neural connections. This entails certain properties of cultural models, and may explain both part of the observed inertia of cultural models (people's assumptions about the way the world works are hard to change) and patterns of association. Roy D'Andrade (1995) sees the history of cognitive anthropology proper as divisible into four phases. The first began in the 1950s with the explicit formulation of culture as knowledge by anthropologists such as Ward Goodenough and Anthony Wallace. From the late 1950s through the mid-1960s, attention focused on categorization, componential analysis (a technique borrowed from structuralist linguistics), and native or folk systems of knowledge (ethnoscience e.g., ethnobotany, ethnolinguistics and so on), as well as discoveries in patterns of color naming by Brent Berlin and Paul Kay. During the 1950s and 1960s, most of the work in cognitive anthropology was carried out at Yale, University of Pennsylvania, Stanford, Berkeley, University of California, Irvine, and the Harvard Department of Social Relations. The third phase looked at types of categories (Eleanor Rosch) and cultural models, drawing on schema theory, linguistic work on metaphor (George Lakoff, Mark Johnson). The current phase, beginning in the 1990s, has seen more focus on the problem of how cultural models are shared and distributed, as well as on motivation, with significant work taking place at UC San Diego, UCLA, UC Berkeley, University of Connecticut, and Australian National University, among others. Currently, different cognitive anthropologists are concerned with how groups of individuals are able to coordinate activities and "thinking" (Edwin Hutchins); with the distribution of cultural models (who knows what, and how people access knowledge within a culture: Dorothy Holland, A. Kimball Romney, Dan Sperber, Marc Swartz); with conflicting models within a culture (Naomi Quinn, Holly Mathews); or the ways in which cultural models are internalized and come to motivate behavior (Roy D'Andrade, Naomi Quinn, Charles Nuckolls, Bradd Shore, Claudia Strauss). Some cognitive anthropologists continue work on ethnoscience (Scott Atran), most notably in collaborative field projects with cognitive and social psychologists on culturally universal versus culturally particular models of human categorization and inference and how these mental models hinder or help social adaptations to natural environments. Others focus on methodological issues such as how to identify cultural models. Related work in cognitive linguistics and semantics also carries forward research on the Sapir–Whorf hypothesis and looks at the relationship between language and thought (Maurice Bloch, John Lucy, Anna Wierzbicka). Psychiatric anthropology While not forming a school in the sense of having a particular methodological approach, a number of prominent psychological anthropologists have addressed significant attention to the interaction of culture and mental health or mental illness (see Janis H. 
Jenkins), ranging through the description and analysis of culture-bound syndromes (Pow-Meng Yap, Ronald Simons, Charles Hughes); the relationship between cultural values or culturally mediated experiences and the development or expression of mental illness, among immigrants for instance (Thomas Csordas, George Devereux, Robert Edgerton, Sue Estroff, Arthur Kleinman, Janis H. Jenkins, Roberto Beneduce, Robert Lemelson, Theresa O'Nell, Marvin Opler); to the training of mental health practitioners and the cultural construction of mental health as a profession (Charles W. Nuckolls, Tanya Luhrmann), and more recently what Janis H. Jenkins refers to as the cultural creation of a "pharmaceutical self" in a globalizing world (Jenkins 2011). Recent research focuses on specific relationships between history, conscience, the cultural self, and suffering (Roberto Beneduce, Etnopsichiatria. Sofferenza mentale e alterità fra Storia, dominio e cultura, 2007). Some of these have been primarily trained as psychiatrists rather than anthropologists: Abram Kardiner, Arthur Kleinman, Robert I. Levy, Roberto Beneduce, Roland Littlewood. Further research has been done on genetic predisposition, the family's contribution to the genesis of psychopathology, and the contribution of environmental factors such as tropical diseases, natural catastrophes, and occupational hazards. Today During most of the history of modern anthropology (with the possible exception of the 1930s through the 1950s, when it was an influential approach within American social thought), psychological anthropology has been a relatively small though productive subfield. D'Andrade, for instance, estimates that the core group of scholars engaged in active research in cognitive anthropology (one of the smaller sub-subfields) has numbered some 30 anthropologists and linguists, with the total number of scholars identifying with this subfield likely being less than 200 at any one time. At present, relatively few universities have active graduate training programs in psychological anthropology. These include: Centre Georges Devereux, Paris 8 University Australian National University - Linguistics and Applied Linguistics Program Brunel University, West London - MSc program in psychological and psychiatric anthropology Case Western Reserve University - MA, PhD in cultural anthropology Duke University - Cultural Anthropology Emory University - Anthropology London School of Economics - Anthropology University of Bergen, Norway - Social Anthropology University of California, Berkeley - Anthropology and Linguistics University of California, Irvine - Anthropology University of California, Los Angeles - Anthropology University of California, San Diego - Anthropology and Cognitive Science University of Chicago - Human Development University of Connecticut - Anthropology University of North Carolina, Chapel Hill - Anthropology Also, social medicine and cross-cultural/transcultural psychiatry programs at: Harvard - Department of Global Health & Social Medicine McGill - Division of Social and Transcultural Psychiatry Pontificia Universidad Catolica de Valparaiso - Master in Ethnopsychology Università degli Studi di Trieste - Department of Ethnopsychology See also Cognitive anthropology Cognitive science Cultural psychology Egocentrism Enculturation Development of religion Harvard Department of Social Relations Social psychology Symbolic interactionism References Bibliography Selected historical works and textbooks Bock, Philip K.
(1999) Rethinking Psychological Anthropology, 2nd Ed., New York: W. H. Freeman D'Andrade, Roy G. (1995). The Development of Cognitive Anthropology. Cambridge, UK: Cambridge University Press. Hsu, Francis L. K., ed. (1972) Psychological Anthropology. Cambridge: Schenkman Publishing Company, Inc. Wilhelm Max Wundt, Völkerpsychologie: Eine Untersuchung der Entwicklungsgesetze von Sprache, Mythus und Sitte, Leipzig (1917); 2002 reprint: . Selected theoretical works in psychological anthropology Bateson, Gregory (1956) Steps to an Ecology of Mind. New York: Ballantine Books. Kilborne, Benjamin and L. L. Langness, eds. (1987). Culture and Human Nature: Theoretical papers of Melford E. Spiro. Chicago: University of Chicago Press. Nuckolls, Charles W. (1996) The Cultural Dialectics of Knowledge and Desire. Madison: University of Wisconsin Press. Nuckolls, Charles W. (1998) Culture: A Problem that Cannot be Solved. Madison: University of Wisconsin Press. Sapir, Edward (1956) Culture, Language, and Personality: selected essays. Edited by D. G. Mandelbaum. Berkeley, CA: University of California Press. Schwartz, Theodore, Geoffrey M. White, and Catherine A. Lutz, eds. (1992) New Directions in Psychological Anthropology. Cambridge, UK: Cambridge University Press. Shore, Bradd (1995) Culture in Mind: cognition, culture, and the problem of meaning. New York: Oxford University Press. Shweder, Richard A. and Robert A. LeVine, eds. (1984). Culture Theory: Essays on mind, self, and emotion. Cambridge, UK: Cambridge University Press. Strauss, Claudia and Naomi Quinn (1997). A Cognitive Theory of Cultural Meaning. Cambridge, UK: Cambridge University Press. Selected ethnographic works in psychological anthropology Benedict, Ruth (1946) The Chrysanthemum and the Sword: Patterns of Japanese Culture. Boston: Houghton Mifflin Company. Boddy, Janice. Wombs and alien spirits: Women, men, and the Zar cult in northern Sudan. Univ of Wisconsin Press, 1989. Briggs, Jean (1970) Never in Anger: Portrait of an Eskimo family. Cambridge, Massachusetts: Harvard University Press. Crapanzano, Vincent. The Hamadsha: A Study in Moroccan Ethnopsychiatry. University of California Pr, 1973. Crapanzano, Vincent. Tuhami: portrait of a Moroccan. University of Chicago Press, 1985. DuBois, Cora Alice (1960) The people of Alor; a social-psychological study of an East Indian island. With analyses by Abram Kardiner and Emil Oberholzer. New York: Harper. Herdt, Gilbert (1981) Guardians of the Flutes. Chicago: University of Chicago Press. Levy, Robert I. (1973) Tahitians: mind and experience in the Society Islands. Chicago: University of Chicago Press. Scheper-Hughes, Nancy (1979) Saints, Scholars, and Schizophrenics: mental illness in rural Ireland. Berkeley, CA: University of California Press. Swartz, Marc J. (1991) The Way the World Is: cultural processes and social relations among the Swahili of Mombasa. Berkeley: University of California Press. Selected works in psychiatric anthropology Beneduce, Roberto (2007) Etnopsichiatria. Sofferenza mentale e alterità fra Storia, dominio e cultura, Roma: Carocci. Jenkins, Janis H. and Robert J. Barrett (2004) Schizophrenia, Culture, and Subjectivity: The Edge of Experience. New York: Cambridge University Press. Jenkins, Janis H. (2011) Pharmaceutical Self: The Global Shaping of Experience in an Age of Psychopharmacology. Santa Fe, NM: School of Advanced Research. 
Lézé, Samuel (2014) "Anthropology of mental illness", in : Andrew Scull (ed.), Cultural Sociology of Mental Illness : an A-to-Z Guide , Sage, 2014, pp. 31–32 Kardiner, Abram, with the collaboration of Ralph Linton, Cora Du Bois and James West (pseud.) (1945) The psychological frontiers of society. New York: Columbia University Press. Kleinman, Arthur (1980) Patients and healers in the context of culture: an exploration of the borderland between anthropology, medicine, and psychiatry. Berkeley, CA: University of California Press. Kleinman, Arthur (1986) Social origins of distress and disease: depression, neurasthenia, and pain in modern China. New Haven, CT: Yale University Press. Kleinman, Arthur, & Good, Byron, eds. (1985) Culture and Depression: studies in the anthropology and cross-cultural psychology of affect and disorder. Berkeley / Los Angeles: University of California Press. Luhrmann, Tanya M. (2000) Of two minds: The growing disorder in American psychiatry. New York, NY, US: Alfred A. Knopf, Inc. O'Nell, Theresa D. (1996) Disciplined Hearts: History, identity, and depression in an American Indian community. Berkeley, CA: University of California Press. External links Anthropology and Mental Health Special Interest Group (AMHIG),Society of Medical Anthropology, AAA Society for Psychological Anthropology ENPA - European Network for Psychological Anthropology Ethos – journal of the Society for Psychological Anthropology Psychological and Psychiatric Anthropology Resources The Foundation for Psychocultural Research Psychological Anthropology – essay at Indiana University Georges Devereux: Introduction on Ethnopsychiatry Psychological Anthropology - Indiana University Anthropology
Gordon's functional health patterns
Gordon's functional health patterns is a method devised by Marjory Gordon to be used by nurses in the nursing process to provide a more comprehensive nursing assessment of the patient. The following areas are assessed through questions asked by the nurse and through medical examination, to provide an overview of the individual's health status and of the health practices used to reach the current level of health or wellness. Health perception-health management Nutritional-metabolic Elimination: excretion patterns and problems need to be evaluated (constipation, incontinence, diarrhea) Activity-exercise: whether one is able to do daily activities normally without any problem, including self-care activities Sleep-rest: whether the patient has hypersomnia, insomnia, or normal sleeping patterns Cognitive-perceptual: assessment of neurological function is done to check the person's ability to comprehend information Self-perception/self-concept Role-relationship: this pattern should only be used if it is appropriate for the patient's age and specific situation Sexuality-reproductive Coping-stress tolerance Value-belief References Further reading Marjory Gordon. Manual of Nursing Diagnosis, Eleventh Edition. Nursing theory
Perceptual psychology
Perceptual psychology is a subfield of cognitive psychology that concerns the conscious and unconscious innate aspects of the human cognitive system: perception. A pioneer of the field was James J. Gibson. One major study was that of affordances, i.e. the perceived utility of objects in, or features of, one's surroundings. According to Gibson, such features or objects were perceived as affordances and not as separate or distinct objects in themselves. This view was central to several other fields, such as software user interface design and usability engineering and environmentalism in psychology, and ultimately to political economy, where the perceptual view was used to explain the omission of key inputs or consequences of economic transactions, i.e. resources and wastes. Gerard Egan and Robert Bolton explored areas of interpersonal interaction based on the premise that people act in accordance with their perception of a given situation. While behaviour is observable, a person's thoughts and feelings are masked. This gives rise to the idea that the most common problems between people stem from the assumption that we can guess what the other person is feeling and thinking. They also offered methods, within this scope, for effective communication, including reflective listening, assertion skills, and conflict resolution. Perceptual psychology is often used in therapy to help a patient improve their problem-solving skills. Nativism vs. empiricism Nativist and empiricist approaches to perceptual psychology have been researched and debated to determine which provides the basis for the development of perception. Nativists believe humans are born with all the perceptual abilities needed; nativism is the favoured theory of perception. Empiricists believe that humans are not born with perceptual abilities, but instead must learn them. See also Binding problem Psychophysics Physiological psychology Sociophysics Vision science References Cognitive biases Cognitive psychology
Social work
Social work is an academic discipline and practice-based profession concerned with meeting the basic needs of individuals, families, groups, communities, and society as a whole to enhance their individual and collective well-being. Social work practice draws from liberal arts and STEM areas such as psychology, sociology, health, political science, community development, law, and economics to engage with systems and policies, conduct assessments, develop interventions, and enhance social functioning and responsibility. The ultimate goals of social work include the improvement of people's lives, alleviation of biopsychosocial concerns, empowerment of individuals and communities, and the achievement of social justice. Social work practice is often divided into three levels. Micro-work involves working directly with individuals and families, such as providing individual counseling/therapy or assisting a family in accessing services. Mezzo-work involves working with groups and communities, such as conducting group therapy or providing services for community agencies. Macro-work involves fostering change on a larger scale through advocacy, social policy, research development, non-profit and public service administration, or working with government agencies. Starting in the 1960s, a few universities began social work management programmes, to prepare students for the management of social and human service organizations, in addition to classical social work education. The social work profession developed in the 19th century, with some of its roots in voluntary philanthropy and in grassroots organizing. However, responses to social needs had existed long before then, primarily from public almshouses, private charities and religious organizations. The effects of the Industrial Revolution and of the Great Depression of the 1930s placed pressure on social work to become a more defined discipline as social workers responded to the child welfare concerns related to widespread poverty and reliance on child labor in industrial settings. Definition Social work is a broad profession that intersects with several disciplines. Social work organizations offer the following definitions: Social work is a practice-based profession and an academic discipline that promotes social change and development, social cohesion, and the empowerment and liberation of people. Principles of social justice, human rights, collective responsibility and respect for diversities are central to social work. Underpinned by theories of social work, social sciences, humanities, and indigenous knowledge, social work engages people and structures to address life challenges and enhance well-being. —International Federation of Social Workers Social work is a profession concerned with helping individuals, families, groups and communities to enhance their individual and collective well-being. It aims to help people develop their skills and their ability to use their resources and those of the community to resolve problems. Social work is concerned with individual and personal problems but also with broader social issues such as poverty, unemployment, and domestic violence. 
— Canadian Association of Social Workers Social work practice consists of the professional application of social principles and techniques to one or more of the following ends: helping people obtain tangible services; counseling and psychotherapy with individuals, families, and groups; helping communities or groups provide or improve social and health services; and participating in legislative processes. The practice of social work requires knowledge of human development and behavior; of social, economic, and cultural institutions; and of the interaction of all these factors. —[US] National Association of Social Workers Social workers work with individuals and families to help improve outcomes in their lives. This may be helping to protect vulnerable people from harm or abuse or supporting people to live independently. Social workers support people, act as advocates and direct people to the services they may require. Social workers often work in multi-disciplinary teams alongside health and education professionals. —British Association of Social Workers History The practice and profession of social work has a relatively modern and scientific origin, and is generally considered to have developed out of three strands. The first was individual casework, a strategy pioneered by the Charity Organization Society in the mid-19th century, which was founded by Helen Bosanquet and Octavia Hill in London, England. Most historians identify COS as the pioneering organization of the social theory that led to the emergence of social work as a professional occupation. COS had its main focus on individual casework. The second was social administration, which included various forms of poverty relief – 'relief of paupers'. Statewide poverty relief could be said to have its roots in the English Poor Laws of the 17th century but was first systematized through the efforts of the Charity Organization Society. The third consisted of social action – rather than engaging in the resolution of immediate individual requirements, the emphasis was placed on political action working through the community and the group to improve their social conditions and thereby alleviate poverty. This approach was developed originally by the Settlement House Movement. This was accompanied by a less easily defined movement: the development of institutions to deal with the entire range of social problems. All had their most rapid growth during the nineteenth century, and laid the foundation for modern social work, both in theory and in practice. Professional social work originated in 19th century England, and had its roots in the social and economic upheaval wrought by the Industrial Revolution, in particular the societal struggle to deal with the resultant mass urban-based poverty and its related problems. Because poverty was the main focus of early social work, it was intricately linked with the idea of charity work. Other important historical figures who shaped the growth of the social work profession are Jane Addams, who founded the Hull House in Chicago and won the Nobel Peace Prize in 1931; Mary Ellen Richmond, who wrote Social Diagnosis, one of the first social work books to incorporate law, medicine, psychiatry, psychology, and history; and William Beveridge, who created the social welfare state, framing the debate on social work within the context of social welfare provision.
United States During the 1840s, Dorothea Lynde Dix, a retired Boston teacher who is considered the founder of the Mental Health Movement, began a crusade that would change the way people with mental disorders were viewed and treated. Dix was not a social worker; the profession was not established until after she died in 1887. However, her life and work were embraced by early psychiatric social workers (mental health social workers/clinical social workers), and she is considered one of the pioneers of psychiatric social work along with Elizabeth Horton, who in 1907 was the first social worker to work in a psychiatric setting, as an aftercare agent in the New York hospital systems providing post-discharge supportive services. The early twentieth century marked a period of progressive change in attitudes towards mental illness. The increased demand for psychiatric services following the First World War led to significant developments. In 1918, Smith College School for Social Work was established, and under the guidance of Mary C. Jarrett at Boston Psychopathic Hospital, students from Smith College were trained in psychiatric social work. Jarrett first gave social workers the "Psychiatric Social Worker" designation. A book titled "The Kingdom of Evils," released in 1922 and authored by a hospital administrator and the head of the social service department at Boston Psychopathic Hospital, described the roles of psychiatric social workers in the hospital. These roles encompassed casework, managerial duties, social research, and public education. After World War II, a series of mental hygiene clinics were established. The Community Mental Health Centers Act was passed in 1963. This policy encouraged the deinstitutionalisation of people with mental illness. Later, by the 1980s, the mental health consumer movement emerged. A consumer was defined as a person who has received or is currently receiving services for a psychiatric condition. People with mental disorders and their families became advocates for better care. Building public understanding and awareness through consumer advocacy helped bring mental illness and its treatment into mainstream medicine and social services. The 2000s saw the managed care movement, which aimed at a health care delivery system that eliminates unnecessary and inappropriate care in order to reduce costs, and the recovery movement, which acknowledges in principle that many people with serious mental illness spontaneously recover and others recover and improve with proper treatment. Social workers made an impact during the 2003 invasion of Iraq and the War in Afghanistan (2001–2021), working out of NATO hospitals in Afghanistan and out of bases in Iraq. They made visits to provide counseling services at forward operating bases. Twenty-two percent of the clients were diagnosed with posttraumatic stress disorder, 17 percent with depression, and 7 percent with alcohol use disorder. In 2009, there was a high level of suicides among active-duty soldiers: 160 confirmed or suspected Army suicides. In 2008, the Marine Corps had a record 52 suicides. The stress of long and repeated deployments to war zones, the dangerous and confusing nature of both wars, wavering public support for the wars, and reduced troop morale all contributed to escalating mental health issues. Military and civilian social workers played a critical role in the veterans' health care system.
Mental health services form a loose network ranging from highly structured inpatient psychiatric units to informal support groups, in which psychiatric social workers engage in diverse approaches across multiple settings alongside other paraprofessional workers. Canada A role for psychiatric social workers was established early in Canada's history of service delivery in the field of population health. Native North Americans understood mental trouble as an indication of an individual who had lost their equilibrium with the sense of place and belonging in general, and with the rest of the group in particular. In native healing beliefs, health and mental health were inseparable, so similar combinations of natural and spiritual remedies were often employed to relieve both mental and physical illness. These communities and families greatly valued holistic approaches for preventive health care. Indigenous peoples in Canada have faced cultural oppression and social marginalization through the actions of European colonizers and their institutions since the earliest periods of contact. Culture contact brought with it many forms of depredation. Economic, political, and religious institutions of the European settlers all contributed to the displacement and oppression of indigenous people. The first officially recorded treatment practices were in 1714, when Quebec opened wards for the mentally ill. In the 1830s social services were active through charity organizations and church parishes (Social Gospel Movement). Asylums for the insane were opened in 1835 in Saint John, New Brunswick. In 1841, care for the mentally ill in Toronto became institutionally based. Canada became a self-governing dominion in 1867, retaining its ties to the British crown. During this period, the age of industrial capitalism began, and it led to social and economic dislocation in many forms. By 1887 asylums were converted to hospitals, and nurses and attendants were employed for the care of the mentally ill. Social work training began at the University of Toronto in 1914. Before that, social workers acquired their training through trial and error methods on the job and by participating in apprenticeship plans offered by charity organization societies. These plans included related study, practical experience, and supervision. In 1918 Dr. Clarence Hincks and Clifford Beers founded the Canadian National Committee for Mental Hygiene, which later became the Canadian Mental Health Association. In the 1930s Hincks promoted prevention and the treatment of sufferers of mental illness before they were incapacitated (early intervention). World War II profoundly affected attitudes towards mental health. The medical examinations of recruits revealed that thousands of apparently healthy adults suffered mental difficulties. This knowledge changed public attitudes towards mental health, and stimulated research into preventive measures and methods of treatment. In 1951 Mental Health Week was introduced across Canada. After the first half of the twentieth century, with a period of deinstitutionalisation beginning in the late 1960s, psychiatric social work shifted to the current emphasis on community-based care, looking beyond the medical model's focus on individual diagnosis to identify and address social inequities and structural issues. In the 1980s the Mental Health Act was amended to give consumers the right to choose treatment alternatives.
Later the focus shifted to workforce mental health issues and environmental root causes. In Ontario, the regulator, the Ontario College of Social Workers and Social Service Workers (OCSWSSW), regulates two professions: registered social workers (RSW) and registered social service workers (RSSW). Each province has a similar regulatory body. The Canadian Association of Social Workers (CASW) is the national professional body for social workers. Prior to provincial-level politicization, registrants of this professional body were able to engage in inter-provincial practice as registered social workers. India The earliest citations of mental disorders in India are from the Vedic era (2000 BC – AD 600). Charaka Samhita, an ayurvedic textbook believed to be from 400 to 200 BC, describes various factors of mental stability. It also has instructions regarding how to set up a care delivery system. In the same era, Siddha was a medical system in south India. The great sage Agastya was one of the 18 siddhas contributing to a system of medicine. This system has included the Agastiyar Kirigai Nool, a compendium of psychiatric disorders and their recommended treatments. The Atharva Veda, too, contains descriptions of, and remedies for, mental health afflictions. In the Mughal period, the Unani system of medicine was introduced by an Indian physician, Unhammad, in 1222. The existing form of psychotherapy was known then as ilaj-i-nafsani in Unani medicine. The 18th century was a very unstable period in Indian history, which contributed to psychological and social chaos in the Indian subcontinent. In 1745, lunatic asylums were developed in Bombay (Mumbai), followed by Calcutta (Kolkata) in 1784, and Madras (Chennai) in 1794. The need to establish hospitals became more acute, first to treat and manage Englishmen and Indian 'sepoys' (military men) employed by the British East India Company. The First Lunacy Act (also called Act No. 36) that came into effect in 1858 was later modified by a committee appointed in Bengal in 1888. Later, the Indian Lunacy Act, 1912 was brought under this legislation. A rehabilitation programme was initiated between the 1870s and 1890s for persons with mental illness at the Mysore Lunatic Asylum, and an occupational therapy department was established during this period in almost every lunatic asylum. The programme in the asylum was called 'work therapy'. In this programme, persons with mental illness were involved in the field of agriculture for all activities. This programme is considered the seed of psychosocial rehabilitation in India. Berkeley-Hill, superintendent of the European Hospital (now known as the Central Institute of Psychiatry (CIP), established in 1918), was deeply concerned about the improvement of mental hospitals in those days. The sustained efforts of Berkeley-Hill helped to raise the standard of treatment and care, and he also persuaded the government to change the term 'asylum' to 'hospital' in 1920. Techniques similar to the current token economy were first started in 1920, known as the 'habit formation chart', at the CIP, Ranchi. In 1937, the first post of psychiatric social worker was created in the child guidance clinic run by the Dhorabji Tata School of Social Work (established in 1936). It is considered the first documented evidence of social work practice in the Indian mental health field.
After Independence in 1947, general hospital psychiatry units (GHPUs) were established to improve conditions in existing hospitals, while at the same time encouraging outpatient care through these units. In Amritsar, Dr. Vidyasagar instituted the active involvement of families in the care of persons with mental illness. This was advanced practice, ahead of its time regarding treatment and care. This methodology had a significant impact on social work practice in the mental health field, especially in reducing stigmatisation. In 1948 Gauri Rani Banerjee, trained in the United States, started a master's course in medical and psychiatric social work at the Dhorabji Tata School of Social Work (now TISS). Later, the first trained psychiatric social worker was appointed in 1949 at the adult psychiatry unit of Yerwada Mental Hospital, Pune. In various parts of the country, in mental health service settings, social workers were employed—in 1956 at a mental hospital in Amritsar, in 1958 at a child guidance clinic of the college of nursing, and in Delhi in 1960 at the All India Institute of Medical Sciences and in 1962 at the Ram Manohar Lohia Hospital. In 1960, the Madras Mental Hospital (now Institute of Mental Health) employed social workers to bridge the gap between doctors and patients. In 1961 the social work post was created at NIMHANS. In these settings they took care of the psychosocial aspect of treatment. This system enabled social service practices to have a stronger long-term impact on mental health care. In 1966, on the recommendation of the Mental Health Advisory Committee, Ministry of Health, Government of India, NIMHANS commenced a Department of Psychiatric Social Work, and a two-year Postgraduate Diploma in Psychiatric Social Work was introduced in 1968. In 1978, the nomenclature of the course was changed to MPhil in Psychiatric Social Work. Subsequently, a PhD programme was introduced. Following the recommendations of the Mudaliar Committee in 1962, a Diploma in Psychiatric Social Work was started in 1970 at the European Mental Hospital at Ranchi (now CIP). The program was upgraded and other higher training courses were added subsequently. A new initiative to integrate mental health with general health services started in 1975 in India. The Ministry of Health, Government of India formulated the National Mental Health Programme (NMHP) and launched it in 1982. The programme was reviewed in 1995 and, based on that review, the District Mental Health Program (DMHP) was launched in 1996, which sought to integrate mental health care with public health care. This model has been implemented in all the states, and currently there are 125 DMHP sites in India. The National Human Rights Commission (NHRC), in 1998 and 2008, carried out systematic, intensive and critical examinations of mental hospitals in India. This resulted in recognition by the NHRC of the human rights of persons with mental illness. Following the NHRC's report, funds were provided as part of the NMHP for upgrading the facilities of mental hospitals. The study revealed that there were more positive changes in the decade leading up to the joint report of the NHRC and NIMHANS in 2008 than in the 50 years up to 1998. In 2016 the Mental Health Care Bill was passed, which legally entitles access to treatment with insurance coverage, safeguards the dignity of the afflicted person, improves legal and healthcare access, and allows for free medications.
In December 2016, the Disabilities Act 1995 was repealed and replaced by the Rights of Persons with Disabilities Act (RPWD), 2016, derived from the 2014 Bill, which ensures benefits for a wider population with disabilities. Before becoming an Act, the Bill was pushed by stakeholders for amendments, mainly against alarming clauses in the "Equality and Non discrimination" section that diminish the power of the Act and allow establishments to overlook or discriminate against persons with disabilities, and against the general lack of directives required to ensure the proper implementation of the Act. Mental health in India is in its developing stages, and there are not enough professionals to support the demand. According to the Indian Psychiatric Society, there are only around 9000 psychiatrists in the country as of January 2019. Going by this figure, India has 0.75 psychiatrists per 100,000 population, while the desirable number is at least 3 psychiatrists per 100,000. While the number of psychiatrists has increased since 2010, it is still far from a healthy ratio. The lack of any universally accepted single licensing authority, compared with foreign countries, puts social workers in general at risk; but general bodies and councils automatically accept a university-qualified social worker as a professional licensed to practice or as a qualified clinician. The lack of a centralized council in tie-up with schools of social work also diminishes the promotion of social workers as mental health professionals. Even so, the service of social workers, alongside other allied professionals, has given a facelift to the mental health sector in the country. Iran The State Welfare Organization was previously part of the health and social security ministry. Theoretical models and practices Social work is an interdisciplinary profession, meaning it draws from a number of areas, such as (but not limited to) psychology, sociology, politics, criminology, economics, ecology, education, health, law, philosophy, anthropology, and counseling, including psychotherapy. Field work is a distinctive attribute of social work pedagogy, equipping the trainee to understand the theories and models within the field of work. Professional practitioners working across multicultural settings have their roots in these social work immersion engagements, dating from the early 19th century in Western countries.
As an example, here are some of the models and theories used within social work practice:
Empathy
Social case work
Social group work
Community organization
Behavioral
School social worker
Leadership and management
Crisis intervention
Suicide prevention
Mental health
Addiction
Cognitive-behavioral
Critical
Social insurance
Ecological
Equity theory
Financial social work
Macro social work
Motivational interviewing
Medical social work
Medical terminology
Person-centered therapy
Psychoanalytic
Psychodynamic
Existential
Humanistic
Social work management
Sociotherapy
Brief psychotherapy or solution-focused approach
Recovery approach
Reflexivity
Social exchange
Welfare economics
Anti-oppressive practice
Psychosocial rehabilitation
Cognitive behavioral therapy
Dialectical behavior therapy
Systems theory
Policy Analysis
Strength-based practice
Task-centered
Family therapy
Advocacy
Prevention science
Project management
Program evaluation and performance measurement
Systems thinking
Community development and intervention
Positive psychology
Social actions
Animal-assisted therapy
Profession
In a 1915 lecture, "Is Social Work a Profession?", delivered at the National Conference on Charities and Corrections, the American educator Abraham Flexner examined whether social work had the characteristics of a profession. Social work does not follow a single model, such as the health model followed by medical professions like nursing and medicine; it is an integrated profession. Its likeness to the medical professions lies in the requirement of continued study for professional development, so that knowledge and skills remain evidence-based according to practice standards. A social work professional's services are aimed at providing beneficial services to individuals, dyads, families, groups, organizations, and communities to achieve optimum psychosocial functioning.
Its eight core functions, present in its methods of practice, are described by Popple and Leighninger as:
Engagement — the social worker must first engage the client in early meetings to promote a collaborative relationship
Assessment — data gathered must be specifically aimed at guiding and directing a plan of action to help the client
Planning — negotiate and formulate an action plan
Implementation — promote resource acquisition and enhance role performance
Monitoring/Evaluation — ongoing documentation for assessing the extent to which the client is following through on short-term goal attainment
Supportive Counseling — affirming, challenging, encouraging, informing, and exploring options
Graduated Disengagement — seeking to replace the social worker with a naturally occurring resource
Administration — planning and managing social work programs, providing operations management support, and administering case management services
There are six broad ethical principles in the National Association of Social Workers' (NASW) Code of Ethics that inform social work practice. They are both prescriptive and proscriptive and are based on six core values:
Service — help people in need and provide pro bono services
Social Justice — engage in social change activities for and with people to promote social justice and challenge social injustice
Dignity and worth of the person — treat people with care and respect, be sensitive to cultural and ethnic diversity, and promote individuals' socially responsible self-determination
Importance of human relationships — maintain positive client relationships because they play a vital role in driving change, and engage with people as partners, empowering them through the helping process
Integrity — engage clients with honesty and responsibility to build trust; social workers are responsible not only for their own professional ethics and integrity but also for those of the service organization
Competence — practice and build expertise as a social worker, and continually seek to enhance and contribute professional knowledge and skills
The International Federation of Social Workers also outlines essential principles for guiding social workers towards high professional standards. These include recognizing the inherent dignity of all people, upholding human rights, striving for social justice, supporting self-determination, encouraging participation, respecting privacy and confidentiality, treating individuals holistically, using technology and social media responsibly, and maintaining professional integrity. A historic and defining feature of social work is the profession's focus on individual well-being in a social context and the well-being of society. Social workers promote social justice and social change with and on behalf of clients. A "client" can be an individual, family, group, organization, or community. In the broadening scope of the modern social worker's role, some practitioners have in recent years traveled to war-torn countries to provide psychosocial assistance to families and survivors. Newer areas of social work practice involve management science: the growth of "social work administration" (sometimes also referred to as "social work management"), which transforms social policies into services and directs the activities of an organization toward the achievement of goals, is a related field.
Helping clients access benefits such as unemployment insurance and disability benefits, assisting individuals and families in building savings and acquiring assets to improve their long-term financial security, managing large operations, and similar tasks require social workers to have financial management skills so that they can help clients and organizations become financially self-sufficient. Financial social work also helps low-income and low-to-middle-income clients, people who are either unbanked (do not have a bank account) or underbanked (have a bank account but tend to rely on high-cost non-bank providers for their financial transactions), through better mediation with financial institutions and the development of money-management skills. A prominent area in which social workers operate is behavioral social work, applying principles of learning and social learning to conduct behavioral analysis and behavior management. Empiricism and effectiveness serve as means to ensure the dignity of clients, and a focus on the present is what distinguishes behavioral social work from other types of social work practice. In multicultural cases, the behavior of members from different cultures matters, and an ecobehavioral perspective is taken because of these external influences. The interpersonal skills that a social worker brings to the job distinguish them from behavioral therapists. Another area on which social workers are focusing is risk management. Risk in social work is understood along the lines of Knight's 1921 distinction: "If you don't even know for sure what will happen, but you know the odds, that is risk, and if you don't even know the odds, that is uncertainty." Risk management in social work means minimizing risks while increasing potential benefits for clients by analyzing the risks and benefits involved in the duty of care or in decisions. Occupational social work is a field in which trained professionals assist management with workers' welfare and psychosocial wellness and help make management policies and protocols humanistic and anti-oppressive. In the United States, according to the Substance Abuse and Mental Health Services Administration (SAMHSA), a branch of the U.S. Department of Health and Human Services, professional social workers are the largest group of mental health services providers. There are more clinically trained social workers (over 200,000) than psychiatrists, psychologists, and psychiatric nurses combined. Federal law and the National Institutes of Health recognize social work as one of five core mental health professions. Examples of fields a social worker may be employed in are poverty relief, life skills education, community organizing, community organization, community development, rural development, forensics and corrections, legislation, industrial relations, project management, child protection, elder protection, women's rights, human rights, systems optimization, finance, addictions rehabilitation, child development, cross-cultural mediation, occupational safety and health, disaster management, mental health, psychosocial therapy, disabilities, and more.
Roles and functions
Social workers play many roles in mental health settings, including those of case manager, advocate, administrator, and therapist. The major functions of a psychiatric social worker are promotion and prevention, treatment, and rehabilitation.
Social workers may also practice:
Counseling and psychotherapy
Case management and support services
Crisis intervention
Psychoeducation
Psychiatric rehabilitation and recovery
Care coordination and monitoring
Program management/administration
Program, policy and resource development
Research and evaluation
Psychiatric social workers conduct psychosocial assessments of patients and work to enhance patient and family communication with members of the medical team, and to ensure inter-professional cordiality within the team, so that patients receive the best possible care and are active partners in their care planning. Depending on the requirement, social workers are often involved in illness education, counseling and psychotherapy. In all areas, they are pivotal to the aftercare process, facilitating a careful transition back to family and community.
Mental health of social workers
Several studies have reported that social workers have an increased risk of common mental disorders, long-term sickness absence due to mental illness, and antidepressant use. A study in Sweden found that social workers have an increased risk of receiving a diagnosis of depression or of anxiety and stress-related disorders in comparison with other workers. The risk for social workers is high even when compared with other, similar human-service professions, and social workers in psychiatric care or in assistance analysis are the most vulnerable. There are multiple explanations for this increased risk. Individual components include secondary traumatic stress, compassion fatigue, and the selection of vulnerable employees into the profession. At the organizational level, high job strain, organizational culture, and work overload are important factors. There is also a gender difference: when compared with their same-gender counterparts in other professions, men in social work have a higher relative risk than women. Male social workers, compared with men in other professions, have a 70% increased risk of being diagnosed with depression or anxiety disorders; female social workers have an increased risk of 20% compared with women in other professions. This might be due to the baseline prevalence of common mental disorders, which is high among women and lower among men in the general population. Another potential explanation is that men in gender-balanced workplaces tend to seek help from healthcare providers more often than men in male-dominated industries.
Qualifications and license
The education of social workers begins with a bachelor's degree (BA, BSc, BSSW, BSW, etc.) or diploma in social work or a Bachelor of Social Services. Some countries offer postgraduate degrees in social work, such as a master's degree (MSW, MSSW, MSS, MSSA, MA, MSc, MRes, MPhil) or doctoral studies (PhD and DSW (Doctor of Social Work)). Several countries and jurisdictions require registration or licensure to work as a social worker, and there are mandated qualifications. In other places, the professional association sets academic requirements as the qualification for practicing the profession. However, certain types of workers are exempted from needing a registration license; the success of these professionals depends on recognition by the employers that provide social work services, who do not require the title of registered social worker as a prerequisite for providing social work and related services.
North America
In the United States, social work undergraduate and master's programs are accredited by the Council on Social Work Education (CSWE). A CSWE-accredited degree is required to become a state-licensed social worker. CSWE also accredits online Master of Social Work programs, in both traditional and advanced-standing options. In 1898, the New York Charity Organization Society, the earliest entity of what became the Columbia University School of Social Work, began offering formal "social philanthropy" courses, marking both the beginning of social work education in the United States and the launch of professional social work. However, a CSWE-accredited program does not necessarily have to meet ASWB licensing knowledge requirements, and many do not. The Association of Social Work Boards (ASWB) is a regulatory organization that provides licensing examination services to social work regulatory boards in the United States and Canada. Because of the limited scope of its objectives, it is not a social work organization that is accountable to the broader social work community or to those certified through its exams. ASWB generates an annual profit of $6,000,000 from license examination administration and $800,000 from publishing study materials. As such, it is an organization focused on revenue maximization and, in principle, is responsible and answerable only to its board members. The objective of a social work license is to ensure the public's safety and quality of service. It is intended to ensure that social workers understand and can follow the NASW Code of Ethics in their occupational practices, to ascertain social workers' knowledge in service provision, and to protect the use of the social work title from misuse and unethical practices. However, one study found that having a social work license is not related to improved service quality for consumers: when paraprofessionals were replaced with qualified licensed social workers, there was no improvement in overall facility quality, quality of life, or the provision of social services. Trained paraprofessionals were able to perform similarly to licensed social workers, just as any trained worker would perform a job for which they are trained. Social work graduates gain this knowledge and training through the academic and financial investment of earning an accredited social work degree or completing a degree-equalization process, and through professional supervision during and after graduation. For decades, the social work community has called on ASWB for transparency regarding data on the validity and racial sensitivity of the exams. However, ASWB suppressed this information, leading many critics to argue that if the exams were free from flaws and bias, such data would have been released long ago. In 2022, ASWB released the pass-rate data, and a Change.org petition called "#StopASWB" highlighted, with academic citations, that the ASWB exams are biased, relying on feedback from white social workers. The petition also pointed out that the exams unfairly penalize social workers who practice in other languages, require privileged resources for success, and utilize oppressive standards in formatting the exams, which are inconsistent with social work values. The National Association of Social Workers (NASW) expressed opposition to the social work licensing exams conducted by the Association of Social Work Boards (ASWB).
This came after analyzing ASWB data, which revealed considerable discrepancies in pass rates for aspiring social workers of diverse racial backgrounds, older individuals, and those who speak English as a second language (ESL). Exam pass rates indicate that white test takers are more than twice as likely as BIPOC test takers to pass on their first attempt, indicating, among other issues, high construct-irrelevant variance. This finding raises questions about the reliability and credibility of the social work licensure process through ASWB exams. NASW's firm stance on the matter serves as a significant moment of reckoning regarding systemic racism in the social work profession, particularly within its regulatory system. It also highlights ASWB's silence about a licensure apparatus that perpetuates racial disparities, leading association members to experience institutional betrayal. After the release of ASWB data showing race- and age-related discrepancies in pass rates, the national accreditation body, the Council on Social Work Education (CSWE), removed ASWB licensure exam pass rates as an option for social work education programs to meet accreditation requirements. Members of various communities in social work have expressed that discussions about addressing this systemic oppression should be guided by a formal acknowledgment of wrongdoing and a spirit of reconciliation and healing. The state of Illinois passed a landmark bill, HB2365 SA1, marking a significant step in reducing its regulatory body's dependency on ASWB. With this bill, Illinois addressed the uneven power that ASWB held, and its unfettered pursuit of profit, which affected the qualification of educated social workers for practice entry. Educated social workers can now obtain licensure by completing 3,000 hours of professional supervision, eliminating the previous requirement of ASWB exam results, which often led to unemployment and related emotional, behavioral, and physical health consequences. Since the early 1990s, researchers have critiqued ASWB exams for their lack of content and criterion validity, which undermines the tests' validity altogether. A study conducted in 2023 found that there are questions in ASWB exams whose rationales are based on theories that are not evidence-based and that have significant item-validity issues. The researchers used the generative AI application ChatGPT to test ASWB rationales and found that the rationales provided by ChatGPT were of higher quality. They reported that ChatGPT exhibited an excellent ability to recognize social-work-related text patterns for scenario-based decision-making and offered high-quality rationales while taking into account safety and ethics in social work practice, even without specific training for such a task. They suggested that it may be necessary and timely to move away from oppressive assessment formats used to evaluate social workers' competence and to reconsider licensing exams with serious validity issues that disproportionately exclude individuals based on their race, age, and language. A proposed assessment format is one based on mastery learning, which would lead to competency-based licensing.
Due to the accumulated evidence of significant validity flaws in ASWB's tests, its conflict of interest, and other issues, many researchers have urged state legislators and regulators to discontinue the use of ASWB exams for licensure, or to suspend them temporarily until a novel, anti-oppressive, and validated alternative is established. In the interim, they suggest relying on traditional supervision methods to ensure the safe and ethical practice of social work. They argue that supervision not only guides licensure seekers but also allows well-equipped supervisors to assess more accurately, and in context, an individual's capability to practice safely and ethically, which is a more valid approach to assessing such competence.
Professional associations
Social workers have several professional associations that provide ethical guidance and other forms of support for their members and for social work in general. These associations may be international, continental, semi-continental, national, or regional. The main international associations are the International Federation of Social Workers (IFSW) and the International Association of Schools of Social Work (IASSW). The largest professional social work association in the United States is the National Association of Social Workers, which has instituted a code of professional conduct and a set of principles rooted in six core values: service, social justice, dignity and worth of the person, importance of human relationships, integrity, and competence. There are also organizations that represent clinical social workers, such as the American Association for Psychoanalysis in Clinical Social Work (AAPCSW), a national organization representing social workers who practice psychoanalytic social work and psychoanalysis. Several states also have Clinical Social Work Societies, which represent all social workers who conduct psychotherapy, from a variety of theoretical frameworks, with families, groups, and individuals. The Association for Community Organization and Social Administration (ACOSA) is a professional organization for social workers who practice within the community organizing, policy, and political spheres. The American Academy of Social Work and Social Welfare (AASWSW) is a national honorific society of scholars and practitioners who focus on social work and social welfare. In the UK, the professional association is the British Association of Social Workers (BASW), with just over 18,000 members (as of August 2015), and the regulatory body for social workers is Social Work England. In Australia, the professional association is the Australian Association of Social Workers (AASW), founded in 1946 and with more than 10,000 members, which ensures that social workers meet the required standards for social work practice in Australia. Accredited social workers in Australia can also provide services under the Access to Allied Psychological Services (ATAPS) program. In New Zealand, the regulatory body for social workers is the Social Workers Registration Board (Kāhui Whakamana Tauwhiro).
Trade unions representing social workers
In the United Kingdom, just over half of social workers are employed by local authorities, and many of these are represented by UNISON, the public sector employee union. Smaller numbers are members of Unite the Union and the GMB. The British Union of Social Work Employees (BUSWE) has been a section of the trade union Community since 2008.
While not a union at that stage, the British Association of Social Workers operated a professional advice and representation service from the early 1990s. Social-work-qualified staff who were also experienced in employment law and industrial relations provided the kind of representation one would expect from a trade union in grievance, disciplinary, or conduct matters, specifically in respect of professional conduct or practice. However, this service depended on the goodwill of employers to allow the representatives to be present at these meetings, as only trade unions have the legal right of representation in the workplace. By 2011 several councils had realized that they did not have to permit BASW access, and those that were challenged by the skilled professional representation of their staff were withdrawing permission. For this reason BASW once again took up trade union status by forming its arm's-length trade union section, the Social Workers Union (SWU). This gives SWU the legal right to represent its members whether or not the employer or the Trades Union Congress (TUC) recognizes it. In 2015 the TUC was still resisting SWU's application for admission to congress membership, and while most employers will not make formal statements of recognition unless the TUC changes its policy, they are all legally required to permit SWU (BASW) representation at internal disciplinary hearings and similar proceedings.
Use of information technology in social work
Information technology plays a vital role in social work: it moves the documentation involved in the work onto electronic media, making the process transparent and accessible and providing data for analytics. Observation is a tool used in social work for developing solutions. Anabel Quan-Haase, in Technology and Society, defines surveillance as "watching over" (Quan-Haase, 2016, p. 213); she explains that observing others socially and behaviorally is natural, but that it becomes more like surveillance when the purpose of the observation is to keep guard over someone (Quan-Haase, 2016, p. 213). At the surface level, the use of surveillance and surveillance technologies within the social work profession often seems an unethical invasion of privacy, but when the social work code of ethics is engaged with more deeply, the line between ethical and unethical becomes blurred. The code of ethics mentions the use of technology within social work practice in several places. The provision that seems most applicable to surveillance or artificial intelligence is article 5.02(f), "When using electronic technology to facilitate evaluation or research", which goes on to explain that clients should be informed when technology is being used within the practice (National Association of Social Workers, 2008, article 5.02).
Social workers in literature
In 2011, a critic stated that "novels about social work are rare", and as recently as 2004, another critic claimed to have difficulty finding novels featuring a main character holding a Master of Social Work degree. However, social workers have been the subject of many novels, including the novel on which the movie Precious is based and Ali Smith's There But For The (2011, Hamish Hamilton/Pantheon).
Fictional social workers in media
See also
Addiction medicine
Approved mental health professional
Clinical social work
Child welfare
Community development
Critical social work
Development studies
Disaster social work
Education in social work
Forensic social work
Gerontology
Humanistic social work
Human resource management
Human services
Integrated social work
International Social Work
Jocelyn Hyslop
Mental health professional
Recreational therapy
Right to an adequate standard of living
Social development
Social planning
Social psychology
Social research
Social Scientist
Social work with groups
Urban development
Welfare
External links
Social Work, WCIDWTM - The University of Tennessee
Social Work Evaluation and Research Resources
Interpersonal communication
Interpersonal communication is an exchange of information between two or more people. It is also an area of research that seeks to understand how humans use verbal and nonverbal cues to accomplish several personal and relational goals. Communication includes utilizing communication skills within one's surroundings, including physical and psychological spaces. In physical spaces, it is essential to attend to visual/nonverbal as well as verbal cues; in psychological spaces, self-awareness and awareness of emotions, cultures, and things that are not seen are also significant when communicating. Interpersonal communication research addresses at least six categories of inquiry: 1) how humans adjust and adapt their verbal communication and nonverbal communication during face-to-face communication; 2) how messages are produced; 3) how uncertainty influences behavior and information-management strategies; 4) deceptive communication; 5) relational dialectics; and 6) social interactions that are mediated by technology. There is considerable variety in how this area of study is conceptually and operationally defined. Researchers in interpersonal communication come from many different research paradigms and theoretical traditions, adding to the complexity of the field. Interpersonal communication is often defined as communication that takes place between people who are interdependent and have some knowledge of each other: for example, communication between a son and his father, an employer and an employee, two sisters, a teacher and a student, two lovers, two friends, and so on. Although interpersonal communication is most often between pairs of individuals, it can also be extended to include small intimate groups such as the family. Interpersonal communication can take place in face-to-face settings, as well as through platforms such as social media. The study of interpersonal communication addresses a variety of elements and uses both quantitative/social scientific methods and qualitative methods. There is growing interest in biological and physiological perspectives on interpersonal communication. Some of the concepts explored are personality, knowledge structures and social interaction, language, nonverbal signals, emotional experience and expression, supportive communication, social networks and the life of relationships, influence, conflict, computer-mediated communication, interpersonal skills, interpersonal communication in the workplace, intercultural perspectives on interpersonal communication, escalation and de-escalation of romantic or platonic relationships, family relationships, and communication across the life span. Factors such as one's self-concept and perception have an impact on how humans choose to communicate. Factors such as gender and culture also affect interpersonal communication.
History
The detailed study of interpersonal communication dates back to the 1970s and was formalized on the basis of aspects of communication that preceded it. Aspects of communication such as rhetoric, persuasion, and dialogue have become part of interpersonal communication. As writing and language styles developed, humans found ways to transfer messages, and interpersonal communication was one such way. In a world where technologies for communicating were not available, humans used pictures and carvings, which later developed into words and expressions. Interpersonal communication is now seen in a more dyadic way, with face-to-face interaction regarded as a distinct form.
The dynamics of interpersonal communication began to shift at the onset of the Industrial Revolution. The evolution of interpersonal communication is multifaceted and aligns with technological advancements, societal changes, and theories. Traditionally, interpersonal communication is grounded in face-to-face communication between people. As technology changed, interpersonal communication adapted from face-to-face interaction to include a mediated component. The tools added over the years include the telegraph, the telephone, and several media sites that facilitate communication. The impacts of media on interpersonal communication are discussed later in the article. Over the years, interpersonal communication has been aimed at forming and ending relationships. The world has become more reliant on mediated forms of communication, which in turn have become part of interpersonal communication, as they are an avenue through which many humans now choose to communicate. While this form is not traditional to interpersonal communication, it fits within the definition of interpersonal communication as an exchange between two or more people.
Foundation of interpersonal communication
Interpersonal communication process principles
Human communication is a complex process with many components, and there are principles that guide our understanding of communication.
Communication is transactional
Communication is transactional: a dynamic process created by the participants through their interaction with each other. In short, communication is an interactive process in which both parties need to participate. A useful metaphor is dancing, a process in which you and your partner are constantly adjusting to and working with each other. Two perfect dancers do not necessarily guarantee the absolute success of a dance, but the perfect cooperation of two not-so-excellent dancers can guarantee a successful dance.
Communication can be intentional and unintentional
Some communication is intentional and deliberate: for example, before you ask your boss for a promotion or a raise, you may prepare mentally and practice many times how to talk to your boss so as to avoid embarrassment. At the same time, communication can also be unintentional. For example, you are complaining about your unfortunate experience today in a corner of the school, and your friend happens to overhear your complaint. Even though you did not want others to know about your experience, the overheard complaint still delivers a message and forms communication.
Communication is irreversible
The process of interpersonal communication is irreversible: you can wish you had not said something, and you can apologise for something you said and later regret, but you cannot take it back.
Communication is unrepeatable
Unrepeatability arises from the fact that an act of communication can never be duplicated: the audience may be different, our mood at the time may be different, or our relationship may be in a different place. In-person communication can be invigorating and is often memorable when people are engaged and in the moment.
Theories
Uncertainty reduction theory
Uncertainty reduction theory, developed in 1975, comes from the socio-psychological perspective. It addresses the basic process of how we gain knowledge about other people.
According to the theory, people have difficulty with uncertainty: when they are not sure what is going to come next, they are uncertain how to prepare for the upcoming event. To help predict behavior, they are motivated to seek information about the people with whom they interact. The theory argues that strangers, upon meeting, go through specific steps and checkpoints in order to reduce uncertainty about each other and form an idea of whether they like or dislike each other. During communication, individuals make plans to accomplish their goals. At highly uncertain moments, they become more vigilant and rely more on data available in the situation. A reduction in certainty leads to a loss of confidence in the initial plan, such that the individual may make contingency plans. The theory also says that higher levels of uncertainty create distance between people and that non-verbal expressiveness tends to help reduce uncertainty. Constructs include the level of uncertainty, the nature of the relationship, and ways to reduce uncertainty. Underlying assumptions include the idea that an individual will cognitively process the existence of uncertainty and take steps to reduce it. The boundary conditions for this theory are that there must be some kind of trigger, usually based on the social situation, and an internal cognitive process. According to the theory, we reduce uncertainty in three ways:
Passive strategies: observing the person.
Active strategies: asking others about the person or looking up information.
Interactive strategies: asking questions, self-disclosure.
Uncertainty reduction theory is most applicable to the initial interaction context. Scholars have extended the uncertainty framework with theories that describe uncertainty management and motivated information management. These extended theories give a broader conceptualization of how uncertainty operates in interpersonal communication and of how uncertainty motivates individuals to seek information. The theory has also been applied to romantic relationships.
Social exchange theory
Social exchange theory falls under the symbolic interaction perspective. The theory describes, explains, and predicts when and why people reveal certain information about themselves to others. Social exchange theory draws on Thibaut and Kelley's (1959) theory of interdependence, which states that "relationships grow, develop, deteriorate, and dissolve as a consequence of an unfolding social-exchange process, which may be conceived as a bartering of rewards and costs both between the partners and between members of the partnership and others". Social exchange theory argues that the major force in interpersonal relationships is the satisfaction of both people's self-interest. According to the theory, human interaction is analogous to an economic transaction, in that an individual may seek to maximize rewards and minimize costs. Actions such as revealing information about oneself will occur when the cost-reward ratio is acceptable. As long as rewards continue to outweigh costs, a pair of individuals will become increasingly intimate by sharing more and more personal information. The constructs of this theory include disclosure, relational expectations, and perceived rewards or costs in the relationship. In the context of marriage, the rewards within the relationship include emotional security and sexual fulfillment.
Based on this theory, Levinger argued that marriages will fail when the rewards of the relationship lessen, the barriers against leaving the spouse are weak, and the alternatives outside the relationship are appealing.
Symbolic interaction
Symbolic interaction comes from the socio-cultural perspective in that it relies on the creation of shared meaning through interactions with others. This theory focuses on the ways in which people form meaning and structure in society through interactions. People are motivated to act based on the meanings they assign to people, things, and events. Symbolic interaction considers the world to be made up of social objects that are named and have socially determined meanings. When people interact over time, they come to share meaning for certain terms and actions and thus come to understand events in particular ways. There are three main concepts in this theory: society, self, and mind.
Society: Social acts (which create meaning) involve an initial gesture from one individual, a response to that gesture from another, and a result.
Self: Self-image comes from interaction with others. A person makes sense of the world and defines their "self" through social interactions that indicate the value of the self.
Mind: The ability to use significant symbols makes thinking possible. One defines objects in terms of how one might react to them.
Constructs for this theory include the creation of meaning, social norms, human interactions, and signs and symbols. An underlying assumption is that meaning and social reality are shaped by interactions with others and that some kind of shared meaning is reached. For this to be effective, there must be numerous people communicating and interacting and thus assigning meaning to situations or objects.
Relational dialectics theory
The dialectical approach to interpersonal communication revolves around the notions of contradiction, change, praxis, and totality, with influences from Hegel, Marx, and Bakhtin. The dialectical approach searches for understanding by exploring the tension of opposing arguments. Both internal and external dialectics function in interpersonal relationships, including separateness vs. connection, novelty vs. predictability, and openness vs. closedness. Relational dialectics theory deals with how meaning emerges from the interplay of competing discourses. A discourse is a system of meaning that helps us to understand the underlying sense of a particular utterance. Communication between two parties invokes multiple systems of meaning that are in tension with each other. Relational dialectics theory argues that these tensions are both inevitable and necessary. The meanings intended in our conversations may be interpreted, understood, or misunderstood. In this theory, all discourse, including internal discourse, has competing properties that relational dialectics theory aims to analyze.
The three relational dialectics
Relational dialectics theory assumes three different types of tensions in relationships: connectedness vs. separateness, certainty vs. uncertainty, and openness vs. closedness.
Connectedness vs. separateness
Most individuals naturally desire that their interpersonal relationships involve close connections. However, relational dialectics theory argues that no relationship can be enduring unless the individuals involved in it have opportunities to be alone. An excessive reliance on a specific relationship can result in the loss of individual identity.
Certainty vs. uncertainty
Individuals desire a sense of assurance and predictability in their interpersonal relationships. However, they also desire variety, spontaneity, and mystery in their relationships. Like repetitive work, relationships that become bland and monotonous are undesirable.
Openness vs. closedness
In close interpersonal relationships, individuals may feel a pressure to reveal personal information, as described in social penetration theory. This pressure may be opposed by a natural desire to retain some level of personal privacy.
Coordinated management of meaning
The coordinated management of meaning theory assumes that two individuals engaging in an interaction each construct their own interpretation and perception of what a conversation means, then negotiate a common meaning by coordinating with each other. This coordination involves the individuals establishing rules for creating and interpreting meaning. The rules that individuals can apply in any communicative situation include constitutive and regulative rules. Constitutive rules are "rules of meaning used by communicators to interpret or understand an event or message". Regulative rules are "rules of action used to determine how to respond or behave". When one individual sends a message to the other, the recipient must interpret the meaning of the interaction. Often, this can be done almost instantaneously because the interpretation rules that apply to the situation are immediate and simple. However, there are times when the interpretation of the 'rules' for an interaction is not obvious. This depends on each communicator's previous beliefs and perceptions within a given context and how they can apply these rules to the current interaction. These "rules" of meaning "are always chosen within a context", and the context of a situation can be used as a framework for interpreting specific events. Contexts that an individual can refer to when interpreting a communicative event include the relationship context, the episode context, the self-concept context, and the archetype context.
Relationship context: This context assumes that there are mutual expectations between individuals who are members of a group.
Episode context: This context refers to the specific event in which the communicative act is taking place.
Self-concept context: This context involves one's sense of self, or an individual's personal 'definition' of themselves.
Archetype context: This context is essentially one's image of what one's beliefs consist of regarding general truths within communicative exchanges.
Pearce and Cronen argue that these specific contexts exist in a hierarchical fashion. The theory assumes that the bottom level of this hierarchy consists of the communicative act. The relationship context is next in the hierarchy, then the episode context, followed by the self-concept context, and finally the archetype context.
Social penetration theory
Social penetration theory is a conceptual framework that describes the development of interpersonal relationships. The theory refers to the reciprocity of behaviors between two people who are in the process of developing a relationship. These behaviors can include verbal/nonverbal exchange, interpersonal perceptions, and interactions with the environment. The behaviors vary based on the different levels of intimacy in the relationship.
"Onion theory"
This theory is often known as the "onion theory". The analogy suggests that, like an onion, personalities have "layers".
The outside layer is what the public sees, and the core is one's private self. When a relationship begins to develop, the individuals in the relationship may undergo a process of self-disclosure, progressing more deeply into the "layers". Social penetration theory recognizes five stages: orientation, exploratory affective exchange, affective exchange, stable exchange, and de-penetration. Not all of these stages happen in every relationship.
Orientation stage: strangers exchange only impersonal information and are very cautious in their interactions.
Exploratory affective stage: communication styles become somewhat more friendly and relaxed.
Affective exchange: there is a high amount of open communication between individuals. These relationships typically consist of close friends or even romantic or platonic partners.
Stable exchange: continued open and personal types of interaction.
De-penetration: when the relationship's costs exceed its benefits, there may be a withdrawal of information, ultimately leading to the end of the relationship.
If the early stages take place too quickly, this may be negative for the progress of the relationship. Example: Jenny and Justin met for the first time at a wedding. Within minutes, Jenny starts to tell Justin about her terrible ex-boyfriend and the misery he put her through. This is information that is typically shared at stage three or four, not stage one. Justin finds this off-putting, reducing the chances of a future relationship. Social penetration theory predicts that people decide to risk self-disclosure based on the costs and rewards of sharing information, which are affected by factors such as relational outcome, relational stability, and relational satisfaction. The depth of penetration is the degree of intimacy a relationship has accomplished, measured relative to the stages above. Griffin defines depth as "the degree of disclosure in a specific area of an individual's life" and breadth as "the range of areas in an individual's life over which disclosure takes place." The theory explains the following key observations:
Peripheral items are exchanged more frequently and sooner than private information;
Self-disclosure is reciprocal, especially in the early stages of relationship development;
Penetration is rapid at the start but slows down quickly as the tightly wrapped inner layers are reached;
De-penetration is a gradual process of layer-by-layer withdrawal.
Computer-mediated social penetration
Online communication seems to follow a different set of rules. Because much online communication occurs at an anonymous level, individuals have the freedom to forgo the 'rules' of self-disclosure. In online interactions, personal information can be disclosed immediately and without the risk of excessive intimacy. For example, Facebook users post extensive personal information, pictures, information on hobbies, and messages. This may be due to the heightened level of perceived control within the context of the online communication medium.
Relational patterns of interaction theory
Paul Watzlawick's theory of communication, popularly known as the "Interactional View", interprets relational patterns of interaction in the context of five "axioms". The theory draws on the cybernetic tradition. Watzlawick, his mentor Gregory Bateson, and the members of the Mental Research Institute in Palo Alto were known as the Palo Alto Group. Their work was highly influential in laying the groundwork for family therapy and the study of relationships.
Ubiquitous communication
The theory states that a person's presence alone results in them, consciously or not, expressing things about themselves and their relationships with others (i.e., communicating). A person cannot avoid interacting, and even if they do, their avoidance may be read as a statement by others. This ubiquitous interaction leads to the establishment of "expectations" and "patterns", which are used to determine and explain relationship types.
Expectations
Individuals enter communication with others having established expectations for their own behavior as well as the behavior of those they are communicating with. During the interaction these expectations may be reinforced, or new expectations may be established that will be used in future interactions. New expectations are created by new patterns of interaction, while reinforcement results from the continuation of established patterns of interaction.
Patterns of interaction
Established patterns of interaction are created when a trend occurs in how two people interact with each other. Two patterns are of particular importance to the theory. In symmetrical relationships, the pattern of interaction is defined by two people responding to one another in the same way; this is a common pattern of interaction within power struggles. In complementary relationships, the participants respond to one another in opposing ways. An example of such a relationship would be one in which one person is argumentative while the other is quiet.
Relational control
Relational control refers to who is in control within a relationship. The pattern of behavior between partners over time, not any individual's behavior, defines the control within a relationship. Patterns of behavior involve individuals' responses to others' assertions. There are three kinds of responses:
One-down responses are submissive to, or accepting of, another's assertions.
One-up responses are in opposition to, or counter, another's assertions.
One-across responses are neutral in nature.
Complementary exchanges
A complementary exchange occurs when a partner asserts a one-up message which the other partner responds to with a one-down response. If complementary exchanges are frequent within a relationship, it is likely that the relationship itself is complementary.
Symmetrical exchanges
Symmetrical exchanges occur when one partner's assertion is countered with a reflective response: a one-up assertion is met with a one-up response, or a one-down assertion is met with a one-down response. If symmetrical exchanges are frequent within a relationship, it is likely that the relationship is also symmetrical. A small illustrative sketch of this exchange classification appears below, after the discussion of intertype relationships. Applications of relational control include the analysis of family interactions, and also the analysis of interactions such as those between teachers and students.
Theory of intertype relationships
Socionics proposes a theory of relationships between psychological types (intertype relationships) based on a modified version of C. G. Jung's theory of psychological types. Communication between types is described using the concept of information metabolism proposed by Antoni Kępiński. Socionics defines 16 types of relations, ranging from the most attractive and comfortable to the most disputed. This analysis gives insight into some features of interpersonal relations, including aspects of psychological and sexual compatibility, and ranks as one of the four most popular models of personality.
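The following is a minimal illustrative sketch in Python, not part of Watzlawick's work, showing how the two-turn exchange classification described under "Relational control" above could be expressed in code. The function name, the treatment of a one-down assertion followed by a one-up response as complementary, and the "unclassified" label for one-across or mixed pairings are all assumptions made for illustration; the source defines only the complementary and symmetrical cases.

def classify_exchange(first_response: str, second_response: str) -> str:
    # Classify a two-turn exchange from the response codes described above.
    codes = {"one-up", "one-down", "one-across"}
    if first_response not in codes or second_response not in codes:
        raise ValueError("responses must be 'one-up', 'one-down', or 'one-across'")
    # Complementary: a one-up assertion answered with a one-down response
    # (the reversed order is also treated as complementary here, an assumption).
    if {first_response, second_response} == {"one-up", "one-down"}:
        return "complementary"
    # Symmetrical: a one-up met with a one-up, or a one-down met with a one-down.
    if first_response == second_response and first_response != "one-across":
        return "symmetrical"
    # One-across or mixed pairings are left unclassified in this sketch.
    return "unclassified"

print(classify_exchange("one-up", "one-down"))    # complementary
print(classify_exchange("one-down", "one-down"))  # symmetrical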
Identity management theory
Falling under the socio-cultural tradition, identity management theory explains the establishment, development, and maintenance of identities within relationships, as well as changes to identities within relationships.
Establishing identities
People establish their identities (or faces), and those of their partners, through a process referred to as "facework". Everyone has a desired identity which they are constantly working towards establishing. This desired identity can be both threatened and supported by attempts to negotiate a relational identity (the identity one shares with one's partner). Thus, a person's desired identity is directly influenced by their relationships, and their relational identity by their desired individual identity.
Cultural influence
Identity management pays significant attention to intercultural relationships and how they affect the relational and individual identities of those involved, especially the different ways in which partners of different cultures negotiate with each other in an effort to satisfy desires for adequate autonomous identities and relational identities. Tensions within intercultural relationships can include stereotyping, or "identity freezing", and "nonsupport".
Relational stages of identity management
Identity management is an ongoing process that Imahori and Cupach define as having three relational stages. The trial stage occurs at the beginning of an intercultural relationship, when partners are beginning to explore their cultural differences. During this stage, each partner is attempting to determine what cultural identities they want in the relationship. At the trial stage, cultural differences are significant barriers to the relationship, and it is critical for partners to avoid identity freezing and nonsupport. During this stage, individuals are more willing to risk face threats to establish the balance necessary for the relationship. The enmeshment stage occurs when a relational identity emerges with established common cultural features. During this stage, the couple becomes more comfortable with their collective identity and the relationship in general. In the renegotiation stage, couples work through identity issues, drawing on their past relational history as they do so. A strong relational identity has been established by this stage, and couples have mastered dealing with cultural differences. It is at this stage that cultural differences become part of the relationship rather than a tension within it.
Communication privacy management theory
Communication privacy management theory, from the socio-cultural tradition, is concerned with how people negotiate openness and privacy in relation to communicated information. This theory focuses on how people in relationships manage boundaries which separate the public from the private.
Boundaries
An individual's private information is protected by the individual's boundaries. The permeability of these boundaries is ever-changing, allowing selective access to certain pieces of information. This sharing occurs when the individual has weighed their need to share the information against their need to protect themselves. This risk assessment is used by couples when evaluating their relationship boundaries. The disclosure of private information to a partner may result in greater intimacy, but it may also result in the discloser becoming more vulnerable.
Co-ownership of information
When someone chooses to reveal private information to another person, they are making that person a co-owner of the information. Co-ownership comes with rules, responsibilities, and rights that must be negotiated between the discloser of the information and its receiver. The rules might cover questions such as: Can the information be disclosed? When can the information be disclosed? To whom can the information be disclosed? And how much of the information can be disclosed? The negotiation of these rules can be complex, the rules can be explicit as well as implicit, and rules may also be violated.
Boundary turbulence
What Petronio refers to as "boundary turbulence" occurs when rules are not mutually understood by co-owners and when a co-owner of information deliberately violates the rules. This is not uncommon and usually results in some kind of conflict. It often results in one party becoming more apprehensive about future revelations of information to the violator.
Cognitive dissonance theory
The theory of cognitive dissonance, part of the cybernetic tradition, argues that humans are consistency seekers who attempt to reduce their dissonance, or cognitive discomfort. The theory was developed in the 1950s by Leon Festinger. It holds that when individuals encounter new information or new experiences, they categorize the information based on their preexisting attitudes, thoughts, and beliefs. If the new encounter does not fit their preexisting assumptions, dissonance is likely to occur. Individuals are then motivated to reduce the dissonance they experience, for example by avoiding situations that generate dissonance. For this reason, cognitive dissonance is considered a drive state that generates motivation to achieve consonance and reduce dissonance. An example of cognitive dissonance would be someone who holds the belief that maintaining a healthy lifestyle is important but maintains a sedentary lifestyle and eats unhealthy food; they may experience dissonance between their beliefs and their actions. If there is a significant amount of dissonance, they may be motivated to work out more or eat healthier foods. They may also be inclined to avoid situations that bring them face to face with the fact that their attitudes and beliefs are inconsistent, such as by avoiding the gym and avoiding stepping on the weighing scale. To avoid dissonance, individuals may select their experiences in several ways: selective exposure, i.e. seeking only information that is consonant with one's current beliefs, thoughts, or actions; selective attention, i.e. paying attention only to information that is consonant with one's beliefs; selective interpretation, i.e. interpreting ambiguous information in a way that seems consistent with one's beliefs; and selective retention, i.e. remembering only information that is consistent with one's beliefs.
Types of cognitive relationships
According to cognitive dissonance theory, there are three types of cognitive relationships: consonant relationships, dissonant relationships, and irrelevant relationships. Consonant relationships exist when two elements, such as beliefs and actions, are in equilibrium with each other or coincide. Dissonant relationships exist when two elements are not in equilibrium and cause dissonance. In irrelevant relationships, the two elements do not possess a meaningful relationship with one another.
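As a rough, hypothetical illustration of the three types of cognitive relationships just described (not drawn from Festinger's own formalism), two cognitive elements can be represented as topic-and-stance pairs and classified as consonant, dissonant, or irrelevant. The Cognition data structure, the numeric stance encoding, and the function name are assumptions made for this sketch.

from dataclasses import dataclass

@dataclass
class Cognition:
    topic: str   # what the belief or behavior is about
    stance: int  # +1 if the element supports the topic, -1 if it opposes it

def classify_relationship(a: Cognition, b: Cognition) -> str:
    # Irrelevant: the two elements do not bear on the same topic.
    if a.topic != b.topic:
        return "irrelevant"
    # Consonant: the elements are in equilibrium (they coincide);
    # dissonant: the elements are not in equilibrium.
    return "consonant" if a.stance == b.stance else "dissonant"

# The healthy-lifestyle example from the text: a pro-health belief
# paired with a sedentary, unhealthy pattern of behavior.
belief = Cognition("healthy lifestyle", +1)
behavior = Cognition("healthy lifestyle", -1)
print(classify_relationship(belief, behavior))  # dissonant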
Attribution theory
Attribution theory is part of the socio-psychological tradition and analyzes how individuals make inferences about observed behavior. Attribution theory assumes that we make attributions, or social judgments, as a way to clarify or predict behavior.
Steps to the attribution process
Observe the behavior or action.
Make judgments about the intention of a particular action.
Make an attribution of cause, which may be internal (i.e. the cause is related to the person) or external (i.e. the cause of the action is external circumstances).
For example, when a student fails a test, an observer may choose to attribute that action to 'internal' causes, such as insufficient study, laziness, or a poor work ethic. Alternatively, the action might be attributed to 'external' factors, such as the difficulty of the test or real-world stressors that led to distraction. Individuals also make attributions about their own behavior. The student who received a failing test score might make an internal attribution, such as "I just can't understand this material", or an external attribution, such as "this test was just too difficult."
Fundamental attribution error and actor-observer bias
Observers making attributions about the behavior of others may overemphasize internal attributions and underestimate external attributions; this is known as the fundamental attribution error. Conversely, when an individual makes an attribution about their own behavior, they may overestimate external attributions and underestimate internal attributions. This is called the actor-observer bias.
Expectancy violations theory
Expectancy violations theory is part of the socio-psychological tradition and addresses the relationship between non-verbal message production and the interpretations people hold for those non-verbal behaviors. Individuals hold certain expectations for non-verbal behavior that are based on social norms, past experience, and situational aspects of that behavior. When expectations are either met or violated, we make assumptions about the behaviors and judge them to be positive or negative.
Arousal
When a deviation from expectations occurs, there is an increased interest in the situation, also known as arousal. This may be either cognitive arousal, an increased mental awareness of expectancy deviations, or physical arousal, a bodily response to expectancy deviations.
Reward valence
When an expectation is not met, an individual may view the violation of expectations either positively or negatively, depending on their relationship to the violator and their feelings about the outcome.
Proxemics
One type of violation of expectations is the violation of the expectation of personal space. The study of proxemics focuses on the use of space to communicate. Edward T. Hall's (1914–2009) theory of personal space defined four zones that carry different messages in the U.S.:
Intimate distance (0–18 inches). This is reserved for intimate relationships with significant others, or the parent-child relationship (hugging, cuddling, kisses, etc.)
Personal distance (18–48 inches). This is appropriate for close friends and acquaintances, e.g. sitting close to a friend or family member on the couch.
Social distance (4–10 feet). This is appropriate for new acquaintances and for professional situations, such as interviews and meetings.
Public distance (10 feet or more). This is appropriate for a public setting, such as a public street or a park.
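As a purely illustrative aside (not part of Hall's own work or of the text above), the zone boundaries listed here amount to a simple lookup. The following minimal Python sketch classifies a measured conversational distance, given in inches, into one of the four zones using the figures from this section; the function name and the example values are hypothetical.

def hall_zone(distance_inches: float) -> str:
    # Classify an interpersonal distance (in inches) into one of Hall's four
    # U.S. proxemic zones, using the boundaries given in this section:
    # intimate 0-18 in, personal 18-48 in, social 4-10 ft, public 10 ft or more.
    if distance_inches < 0:
        raise ValueError("distance must be non-negative")
    if distance_inches <= 18:
        return "intimate distance"
    if distance_inches <= 48:
        return "personal distance"
    if distance_inches <= 120:  # 10 feet = 120 inches
        return "social distance"
    return "public distance"

# Example: a conversation held about 30 inches apart falls in the personal zone.
print(hall_zone(30))  # prints "personal distance"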
Pedagogical communication Pedagogical communication is a form of interpersonal communication that involves both verbal and nonverbal components. A teacher's nonverbal immediacy, clarity, and socio-communicative style has significant consequences for students' affective and cognitive learning. It has been argued that "companionship" is a useful metaphor for the role of "immediacy", the perception of physical, emotional, or psychological proximity created by positive communicative behaviors, in pedagogy. Social networks A social network is made up of a set of individuals (or organizations) and the links among them. For example, each individual may be treated as a node, and each connection due to friendship or other relationship is treated as a link. Links may be weighted by the content or frequency of interactions or the overall strength of the relationship. This treatment allows patterns or structures within the network to be identified and analyzed, and shifts the focus of interpersonal communication research from solely analyzing dyadic relationships to analyzing larger networks of connections among communicators. Instead of describing the personalities and communication qualities of an individual, individuals are described in terms of their relative location within a larger social network structure. Such structures both create and reflect a wide range of social phenomena. Hurt Interpersonal communications can lead to hurt in relationships. Categories of hurt include devaluation, relational transgressions, and hurtful communication. Devaluation A person can feel devalued at the individual and relational level. Individuals can feel devalued when someone insults their intelligence, appearance, personality, or life decisions. At the relational level, individuals can feel devalued when they believe that their partner does not perceive the relationship to be close, important, or valuable. Relational transgressions Relational transgressions occur when individuals violate implicit or explicit relational rules. For instance, if the relationship is conducted on the assumption of sexual and emotional fidelity, violating this standard represents a relational transgression. Infidelity is a form of hurt that can have particularly strong negative effects on relationships. The method by which the infidelity is discovered influences the degree of hurt: witnessing the partner's infidelity first hand is most likely to destroy the relationship, while partners who confess on their own are most likely to be forgiven. Hurtful communication Hurtful communication is communication that inflicts psychological pain. According to Vangelisti (1994), words "have the ability to hurt or harm in every bit as real a way as physical objects. A few ill-spoken words (e.g. "You're worthless", "You'll never amount to anything", "I don't love you anymore") can strongly affect individuals, interactions, and relationships." Interpersonal conflict Many interpersonal communication scholars have sought to define and understand interpersonal conflict, using varied definitions of conflict. In 2004, Barki and Hartwick consolidated several definitions across the discipline and defined conflict as "a dynamic process that occurs between interdependent parties as they experience negative emotional reactions to perceived disagreements and interference with the attainment of their goals". They note three properties generally associated with conflict situations: disagreement, negative emotion, and interference. 
In the context of an organization, there are two targets of conflicts: tasks, or interpersonal relationships. Conflicts over events, plans, behaviors, etc. are task issues, while conflict in relationships involves dispute over issues such as attitudes, values, beliefs, behaviors, or relationship status. Technology and interpersonal communication skills Technologies such as email, text messaging and social media have added a new dimension to interpersonal communication. There are increasing claims that over-reliance on online communication affects the development of interpersonal communication skills, in particular nonverbal communication. Psychologists and communication experts argue that listening to and comprehending conversations plays a significant role in developing effective interpersonal communication skills. Others Attachment theory. This theory follows the relationships that builds between a mother and child, and the impact it has on their relationships with others. It resulted from the combined work of John Bowlby and Mary Ainsworth (Ainsworth & Bowlby, 1991). Ethics in personal relations. This considers a space of mutual responsibility between two individuals, including giving and receiving in a relationship. This theory is explored by Dawn J. Lipthrott in the article "What IS Relationship? What is Ethical Partnership?" Deception in communication. This concept is based on the premise that everyone lies and considers how lying impacts relationships. James Hearn explores this theory in his article, "Interpersonal Deception Theory: Ten Lessons for Negotiators." Conflict in couples. This focuses on the impact that social media has on relationships, as well as how to communicate through conflict. This theory is explored by Amanda Lenhart and Maeve Duggan in their paper, "Couples, the Internet, and Social Media." Relevance to mass communication Interpersonal communication has been studied as a mediator for information flow from mass media to the wider population. The two-step flow of communication theory proposes that most people form their opinions under the influence of opinion leaders, who in turn are influenced by the mass media. Many studies have repeated this logic in investigating the effects of personal and mass communication, for example in election campaigns and health-related information campaigns. It is not clear whether or how social networking through sites such as Facebook changes this picture. Social networking is conducted over electronic devices with no face-to-face interaction, resulting in an inability to access the behavior of the communicator and the nonverbal signals that facilitate communication. Side effects of using these technologies for communication may not always be apparent to the individual user, and may involve both benefits and risks. Context Context refers to environmental factors that influence the outcomes of communication. These include time and place, as well as factors like family relationships, gender, culture, personal interest and the environment. Any given situation may involve many interacting contexts, including the retrospective context and the emergent context. The retrospective context is everything that comes before a particular behavior that might help understand and interpret that behavior, while the emergent context refers to relevant events that come after the behavior. 
Context can include all aspects of social channels and situational milieu, the cultural and linguistic backgrounds of the participants, and the developmental stage or maturity of the participants. Situational milieu Situational milieu can be defined as the combination of the social and physical environments in which something takes place. For example, a classroom, a military conflict, a supermarket checkout, and a hospital would be considered situational milieus. The season, weather, current physical location and environment are also milieus. To understand the meaning of what is being communicated, context must be considered. Internal and external noise can have a profound effect on interpersonal communication. External noise consists of outside influences that distract from the communication. Internal noise is described as cognitive causes of interference in a communication transaction. In the hospital setting, for example, external noise can include the sound made by medical equipment or conversations had by team members outside of patient's rooms, and internal noise could be a health care professional's thoughts about other issues that distract them from the current conversation with a client. Channels of communication also affect the effectiveness of interpersonal communication. Communication channels may be either synchronous or asynchronous. Synchronous communication takes place in real time, for example face-to-face discussions and telephone conversations. Asynchronous communications can be sent and received at different times, as with text messages and e-mails. In a hospital environment, for example, urgent situations may require the immediacy of communication through synchronous channels. Benefits of synchronous communication include immediate message delivery, and fewer chances of misunderstandings and miscommunications. A disadvantage of synchronous communication is that it can be difficult to retain, recall, and organize the information that has been given in a verbal message, especially when copious amounts of data have been communicated in a short amount of time. Asynchronous messages can serve as reminders of what has been done and what needs to be done, which can prove beneficial in a fast-paced health care setting. However, the sender does not know when the other person will receive the message. When used appropriately, synchronous and asynchronous communication channels are both efficient ways to communicate. Mistakes in hospital contexts are often a result of communication problems. Linguistic backgrounds Linguistics is the study of language, and is divided into three broad aspects: the form of language, the meaning of language, and the context or function of language. Form refers to the words and sounds of language and how the words are used to make sentences. Meaning focuses on the significance of the words and sentences that human beings have put together. Function, or context, interprets the meaning of the words and sentences being said to understand why a person is communicating. Culture and Gender Culture Culture is a human concept that encompasses the beliefs, values, attitudes, and customs of groups of people. It is important in communication because of the help it provides in transmitting complex ideas, feelings, and specific situations from one person to another. Culture influences an individual's thoughts, feelings and actions, and therefore affects communication. 
The greater the difference between the cultural backgrounds of two people, the more different their styles of communication will be. Therefore, it is important to be aware of a person's background, ideas, and beliefs, and to consider their social, economic, and political position, before attempting to decode the message accurately and respond appropriately. Five major elements related to culture affect the communication process:
Cultural history
Religion
Values (personal and cultural)
Social organization
Language
Communication between cultures may occur through verbal communication or nonverbal communication. Culture influences verbal communication in a variety of ways, particularly by imposing language barriers. Each individual has their own languages, beliefs, and values that must be considered. Factors influencing nonverbal communication include the different roles of eye contact in different cultures. Touching as a form of greeting may be perceived as impolite in some cultures, but normal in others. Acknowledging and understanding these cultural differences improves communication.
Gender
Gender is considered to be a socially and culturally constructed role assigned to an individual based on their perceived sex; it comprises the behavioral, cultural, or emotional traits typically associated with one's sex. These perceptions and assigned roles may shape the expectations placed on a person's interpersonal communication and how they choose to present themselves when communicating. How men and women communicate can stem from how they have developed within cultural and societal contexts, since men and women tend to be characterized in distinct ways. Society and culture have placed certain expectations on men and women about how they communicate. Society tends to place men in a more assertive and dominant role, and this expectation of dominance is also related to men being associated with a lack of emotion. Conversely, women are expected to be more empathic in their communication style in order to create relationships. A crucial part of interpersonal communication is being able to talk and listen. Society expects men to communicate with a goal-oriented approach, which may negatively affect their effectiveness at active listening, while women are expected to be more supportive in their interactions. These suggested traits may be stereotypes or generalizations; research has found that people both diverge from and converge with them. One study compared communication between male and female faculty members and found that male faculty were more talkative during meetings and more assertive when making their points. This finding diverges from the stereotype of women as the more talkative gender, while converging with the generalization that men are more assertive when communicating. Regardless of expectations, some people will reflect these expectations and some will reshape them to fit their social and family interactions as ideological and societal values shift.
Interpersonal communication and social media
The rise of social media has affected communication as a whole. In this age of technology, communication that is intended to feel personal can seem impersonal. Social media can significantly affect how interpersonal communication occurs. Several social media platforms aim to enhance communication by overcoming geographical barriers.
Researchers have identified both positive and negative impacts of mediated forms of interpersonal communication:
Misinterpretation: Without physical face-to-face interaction, miscommunication occurs frequently when communicating through a mediated medium. In face-to-face interpersonal communication, messages are sent both verbally and non-verbally, so discerning another person's attitude becomes more complicated when that feedback and those expressions are absent. Facial expression, a vital support for verbal communication, is replaced in mediated forms by emojis, acronyms, and similar cues. Most non-verbal aspects, such as eye contact and posture, cannot be seen through the mediated forum, so some feedback about interest level is lost. In person, eye contact ordinarily signals interest; in the mediated format, individuals may instead read the pacing of replies as a sign of interest. Reply pacing, however, does not account for the fact that life continues to happen around the other person, so a slow response can have many causes other than lack of interest, and reading it that way can lead to miscommunication.
Relationship enhancements: Humans have developed different modalities through which to communicate. Feedback is critical to letting the communicator know how to respond to a message, and it is foundational to understanding and interpreting how a message has been received. Social media does entail aspects of feedback, and in recent years these forms of feedback have been developed further, for example through quick-reply suggestions that keep conversations going without a physical presence. Through this, social media has created an avenue in which people separated by great geographical distances can still engage in interpersonal communication and continue developing relationships.
Decision making: Research has found that social media and interpersonal communication are equally likely to affect one's perceptions, and both affect decision-making. Interpersonal communication takes a more personal approach, which helps to evoke trust. Social media takes a more diverse approach to the information provided, and its sources depend on interactions. Social media provides a medium for seeing several viewpoints at the same time; having multiple perspectives helps individuals find or formulate their own sense of what is true, and it also gives individuals the opportunity to voice their opinions. Conversely, in an interpersonal setting, the ability to voice an opinion or reach a decision may be more challenging with a limited pool of information. A study of the impact of social media and interpersonal communication on people's environmental perceptions found that both could influence those perceptions equally, and that people can use social media as a form of reinforcement for interpersonal communication.
Social media acts as an avenue for interpersonal communication. Some aspects of the communication form are altered to fit the technological space and to make that space feel as personal as possible.
Developmental progress (maturity)
Communication skills develop throughout one's lifetime. The majority of language development happens during infancy and early childhood. The attributes of each level of development can be used to improve communication with individuals of those ages.
See also
Coordinated Management of Meaning
Criticism
Decision downloading
Face-to-face interaction
Friedemann Schulz von Thun
I-message
Ishin-denshin
Interpersonal relationship
Nonviolent Communication
Organizational communication
People skills
Rapport
Socionics
Compliance (psychology)
Compliance is a response—specifically, a submission—made in reaction to a request. The request may be explicit (e.g., foot-in-the-door technique) or implicit (e.g., advertising). The target may or may not recognize that they are being urged to act in a particular way. Social psychology is centered on the idea of social influence. Defined as the effect that the words, actions, or mere presence of other people (real or imagined) have on our thoughts, feelings, attitudes, or behavior; social influence is the driving force behind compliance. It is important that psychologists and ordinary people alike recognize that social influence extends beyond our behavior—to our thoughts, feelings, and beliefs—and that it takes on many forms. Persuasion and the gaining of compliance are particularly significant types of social influence since they utilize the respective effect's power to attain the submission of others. Studying compliance is significant because it is a type of social influence that affects our everyday behavior—especially social interactions. Compliance itself is a complicated concept that must be studied in depth so that its uses, implications, and both its theoretical and experimental approaches may be better understood. Personality psychology vs. social psychology In the study of personality psychology, certain personality disorders display characteristics involving the need to gain compliance or control over others: Those with antisocial personality disorder tend to display a glibness and grandiose sense of self-worth. Due to their shallow affect and lack of remorse or empathy, they are well suited to con and/or manipulate others into complying with their wishes. Those with histrionic personality disorder need to be the center of attention; and in turn, draw people in so they may use (and eventually dispose of) their relationship. Those with narcissistic personality disorder have inflated self-importance, hypersensitivity to criticism and a sense of entitlement that compels them to persuade others to comply with their requests. Social psychologists view compliance as a means of social influence used to reach goals and attain social or personal gains. Rather than concentrating on an individual's personality or characteristics (that may drive their actions), social psychology focuses on people as a whole and how thoughts, feelings and behaviors allow individuals to attain compliance and/or make them vulnerable to complying with the demands of others. Their gaining of or submission to compliance is frequently influenced by construals—i.e. an individual's interpretation of their social environment and interactions. Major theoretical approaches The study of compliance is often recognized for the overt demonstrations of dramatic experiments such as the Stanford prison experiment and the Stanley Milgram shock experiments. These experiments served as displays of the psychological phenomena of compliance. Such compliance frequently occurred in response to overt social forces and while these types of studies have provided useful insight into the nature of compliance, today's researchers are inclined to concentrate their efforts on subtle, indirect and/or unconscious social influences. Those involved in this modern social-cognitive movement are attempting to discover the ways in which subjects' implicit and explicit beliefs, opinions and goals affect information processing and decision making in settings where influential forces are present. Philosophy vs. 
social psychology Philosophers view compliance in the context of arguments. Arguments are produced when an individual gives a reason for thinking that a claim is true. In doing so, they utilize premises (claims) to support their conclusion (opinion). Regardless of utilization of fallacy forms (e.g., apple-polishing, ad hominem) to get their point across, individuals engaged in philosophical arguments are overtly and logically expressing their opinion(s). This is an explicit action in which the person on the other side of the argument recognizes that the arguer seeks to gain compliance (acceptance of their conclusion). In studying compliance, social psychologists aim to examine overt and subtle social influences experienced in various forms by all individuals. Implicit and explicit psychological processes are also studied since they shape interactions. This is because these processes explain how certain individuals can make another comply and why someone else succumbs to compliance. As a means of fulfilling needs In complying with the requests of others and/or by following their actions, we seek to maintain the goals of social influence: informative social influence normative social influence Informative social influence (goal of accuracy) People are motivated to achieve their goals in the most efficient and accurate manner possible. When faced with information, an individual needs to correctly interpret and react—particularly when faced with compliance-gaining attempts since an inaccurate behavior could result in great loss. With that being said, people attempt to gain an accurate construal of their situation so they may respond accordingly. Individuals are frequently rewarded for acting in accordance with the beliefs, suggestions and commands of authority figures and/or social norms. Among other sources, authority may be gained on the basis of societal power, setting and size. Individuals are likely to comply with an authority figure's (or group's) orders or replicate the actions deemed correct by social norms because of an assumption that the individual is unaware of some important information. The need to be accurate—and the belief that others know something they do not—often supersedes the individual's personal opinion. Normative social influence (goal of affiliation) Humans are fundamentally motivated by the need to belong—the need for social approval through the maintenance of meaningful social relationships. This need motivates people to engage in behavior that will induce the approval of their peers. People are more likely to take actions to cultivate relationships with individuals they like and/or wish to gain approval from. By complying with others' requests and abiding by norms of social exchange (i.e., the norm of reciprocity), individuals adhere to normative social influence and attain the goal of affiliation. An example of both normative and informational social influence is the Solomon Asch line experiments. As a product of variables Bibb Latané originally proposed the social impact theory that consists of three principles and provides wide-ranging rules that govern these individual processes. The general theory suggests we think of social impact as the result of social forces operating in a social structure (Latané). The theory's driving principles can make directional predictions regarding the effects of strength, immediacy, and number on compliance; however, the principles are not capable of specifying precise outcomes for future events. 
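Latané's principles are often summarized in a compact multiplicative form. The sketch below restates that standard formulation as it usually appears in accounts of social impact theory, rather than quoting this article; the symbols S, I, N, s, and t are the conventional ones and are not terms introduced above.

\[ \text{Impact} = f(S \times I \times N), \qquad \text{Impact} = s\,N^{t}, \quad t < 1 \]

Here S denotes the strength of the influence sources, I their immediacy, and N their number; s is a scaling constant, and the exponent t < 1 captures the pattern described under "Number" below: each additional source adds less impact than the one before it.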
Strength
The stronger a group, that is, the more important it is to an individual, the more likely that individual is to comply with social influence.
Immediacy
The proximity of the group makes an individual more likely to conform and comply with the group's pressures. These pressures are strongest when the group is closer to the individual and composed of people the individual cares about (e.g., friends, family) and/or authority figures.
Number
Researchers have found that compliance increases as the number of people in the group increases; however, once the group reaches 4 or 5 people, further increases add little, and each additional person has less of an influencing effect. However, adding more members to a small group (e.g., 3 to 4 people) has a greater effect than adding more members to a larger group (e.g., 53 to 54 people) (Aronson).
Similarity
Although this variable is not included in Latané's theory, Burger et al. (2004) conducted studies that examined the effect of similarity on compliance with a request. Note that the shared characteristic (e.g., birthday, first name) had to be perceived as incidental. The findings demonstrated that people were more likely to comply with the requester when they believed the feature they shared was unplanned and rare.
Displayed by the SIFT-3M model
A theoretical approach uncommon in the major psychology literature is David Straker's SIFT-3M model. It was created to discuss mental functioning in relation to psychological decisions (e.g., compliance). Straker proposes that by gaining a greater understanding of how people make sense of the world, how they think, and how they decide to act, people can develop the basic tools needed to change others' minds by gaining compliance. To induce compliance, requestors must understand the model's nine stages or levels. In using this model to understand and change the minds of others, Straker reminds requestors that they must talk to the other individual's internal map (thoughts and beliefs) and familiarize themselves with their inner systems.
Gaining techniques
The following techniques have been shown to effectively induce compliance from another party.
Foot-in-the-door
In this technique, the subject is first asked to perform a small request, a favor that typically requires minimal involvement. After this, a larger request is presented. According to the logic of "successive approximations", because the subject complied with the initial request, they are more likely to feel obligated to fulfill additional favors.
Door-in-the-face
This technique begins with an initial grand request that is expected to be turned down; it is then followed by a second, more reasonable request. This technique is decidedly more effective than foot-in-the-door, which relies instead on a gradual escalation of requests.
Low-ball
Frequently employed by car salesmen, low-balling gains compliance by offering the subject something at a lower price, only to increase the price at the last moment. The buyer is more likely to comply with this price change because they feel that a mental agreement to a contract has already occurred.
Ingratiation
This attempt to obtain compliance involves gaining someone's approval so they will be more likely to appease your demands.
Edward E. Jones discusses three forms of ingratiation:
flattery
opinion conformity
self-presentation (presenting one's own attributes in a manner that appeals to the target)
Norm of reciprocity
Because of the injunctive social norm that people return a favor when one is granted to them, compliance is more likely to occur when the requestor has previously complied with one of the subject's requests.
Estimation of compliance
Research also indicates that people tend to underestimate the likelihood that other individuals will comply with requests, known as the underestimation-of-compliance effect. That is, people tend to assume that friends, but not strangers, will comply with requests for assistance. Yet, in practice, strangers comply with requests more frequently than expected. Consequently, individuals significantly underestimate the degree to which strangers will comply with requests.
Major empirical findings
Solomon Asch line experiments
In Solomon Asch's experiment, 50 participants were placed in separate ambiguous situations to determine the extent to which they would conform. Aside from a single real participant, the 7 other group members were confederates: individuals who understood the aim of the study and had been instructed to produce pre-selected responses. In the designated room, a picture of three lines of differing lengths was displayed. Each confederate was asked questions (e.g., which line is the longest, which line matches the reference line), and in response the confederates gave largely incorrect answers.
Results
As a result, one third of the participants gave the incorrect answer when the confederates produced a unanimously incorrect answer. In accordance with the goals of social influence, participants claimed that even when they knew the unanimous answer was wrong, they felt the group knew something they did not (informational social influence). Asch noted that 74% of subjects conformed to the majority at least once. The rate of conformity was reduced when one or more confederates provided the correct answer, and when participants were allowed to write down their responses rather than state them aloud.
Significance
The results of these studies support the notion that people comply to fulfill the need to be accurate and the need to belong. Additionally, they support social impact theory, in that the experiment's ability to produce compliance was strengthened by status (the confederates were seen as informational authorities), proximity, and group size (7:1).
Stanley Milgram's experiment
Stanley Milgram's experiment set out to provide an explanation for the horrors committed against Jews trapped in German concentration camps. The compliance with authority demonstrated by people working in the concentration camps raised the question: "Are Germans actually 'evil', or is it possible to make anyone comply with the orders of an authority figure?" To test this, Stanley Milgram designed an experiment to see whether participants would harm (shock) another individual out of the need to comply with authority. Milgram developed a pseudo-shock generator with labels ranging from 15 volts ("Slight Shock") to 450 volts ("XXX"). Participants took on the role of "teacher" and were informed they would be participating in a learning and memory test. In doing so, they had to teach the "student" (a confederate in a separate room) a list of words. The "teacher" was instructed to increase the voltage by 15 volts and shock the "student" each time he answered incorrectly.
When a subject began to grow uneasy about shocking the confederate (due to the voltage level, noises, ethics, etc.), the experimenter would encourage the participant to continue, proclaiming that he would assume full responsibility for any harm done to the "student" and saying phrases such as "It is absolutely essential that you continue." To rule out sadistic tendencies, all 40 "teachers" were male and were screened for competence and intelligence before beginning the experiment.
Results
100% of participants delivered shocks of up to 300 volts ("Intense") to their assigned "student", and 65% of participants administered the maximum level (450 volts). The rate of compliance was not reduced when these alterations to the original experiment were made:
The victim claimed to have a heart condition
Subjects were told the experiment was being conducted for marketing purposes
Before the experiment began, the "student" extracted an explicit agreement from the "teacher" to stop on demand
The rate of compliance was reduced when:
Two experimenters (conducting the experiment) disagreed about whether the "teacher" should continue
Fellow "teachers" refused to continue (in experiments with multiple "teachers")
The experimenter remained in a different room from the "teacher"
The "teacher" was instructed to hold the "student's" hand on a shock plate
Significance
The results of Stanley Milgram's experiments indicate the power of both the informational and the normative aspects of social influence. Participants believed the experimenter was in control and held information that they personally did not. "Teachers" also showed a need for affiliation, since they appeared to fear deviating from the experimenter's commands. Additionally, authority figures appear to have a large impact on the actions of individuals. As previously stated, individuals seeking affiliation and approval are more likely to comply with authority figures' demands.
Stanford prison experiment
This experiment was conducted to test social influence and compliance with authority through a simulated prison situation. After answering a local newspaper ad (calling for volunteers for a study centered on the effects of prison life), 70 applicants were checked for psychological problems, medical disabilities, and histories of crime or drug abuse, and the pool was reduced to 24 American and Canadian college students from the Stanford area. The all-male participant pool was divided into two groups (guards and prisoners) by flipping a coin. The prison was constructed by boarding up both sides of a corridor in the basement of Stanford's psychology department building. "The Yard" was the only place where prisoners were allowed to walk, eat, or exercise; these activities were done blindfolded so they could not identify an exit. Prison cells were located in laboratory rooms where the doors had been removed and replaced with steel bars and cell numbers. The incarcerated individuals believed they were being kept in the "Stanford County Jail" because, before the experiment began, they did not know they would be labeled prisoners. On a random day, the prisoners were subjected to an authentic police arrest: police cars arrived, and the suspects were brought into the station, where they were booked, read their Miranda rights a second time, fingerprinted, and taken to a holding cell where they were left blindfolded. Each prisoner received chains around their ankles and a stocking cap (to simulate a shaved head).
Additionally, inmates lost their names and were subsequently referred to by their ID numbers.
Results
As the experiment progressed, participants assigned to guard positions escalated their aggression. Although the guards were instructed not to hit the prisoners, they found ways to humiliate and disrupt them through systematic searches, strip searches, spraying for lice, sexual harassment, denying them basic rights (e.g., bathroom use), and waking inmates from their sleep for head counts. The social and moral values initially held by the guards were quickly abandoned as they became immersed in their role. Because of the reality of the psychological abuse, the prisoners were released 6 days later, after exhibiting pathological behavior and nervous breakdowns.
Significance
The Stanford prison experiment is a strong example of the power that perceived authority can have over others. In this case, the authority was largely perceived; however, the consequences were real. Because of the assumed power held by the guards, even the "good" guards felt helpless to intervene. Additionally, none of the guards came late for a shift, called in sick, demanded extra pay for overtime, or requested to be discharged from the study before its conclusion. The guards complied with the alleged demands of the prison, while the prisoners complied with the perceived authority of the guards. Aside from certain instances of rebellion, the prisoners were largely compliant with the guards' orders, from strip searches to numerous nightly "bed-checks". The Experiment, a 2010 film, tells a version of this story: it focuses on 26 men who are chosen and paid to participate in an experiment, and after they are assigned the roles of guards and prisoners, the psychological study spirals out of control.
Compliance effect
Extensive research shows that people find it difficult to say "no" to a request, even when the request originates from a perfect stranger. For example, in one study, people were asked by a stranger to vandalize a purported library book. Despite obvious discomfort and the reluctance of many individuals to write the word "pickle" on one of the pages, more than 64% complied with this vandalism request, more than double the requesters' predicted compliance rate of 28%. In such interactions, people are more likely to comply when asked face-to-face than when asked indirectly or by e-mail.
Significance
This research shows that we tend to underestimate the influence we have over others, and that our appeal to others is more effective when it is made face to face. It also shows that even a suggestion made in jest may embolden someone to commit immoral acts.
Nuremberg Trials
The Nuremberg Trials were a series of tribunals held under the Charter of the International Military Tribunal (IMT), which was made up of members of the Allied Powers (Great Britain, France, the Soviet Union, and the United States) and presided over the hearings of twenty-two major Nazi criminals. In these trials, many of the defendants stated that they had simply been following directions and that failure to do so would have resulted in their punishment. By complying with the directions given by those above them in rank, they knowingly caused harm and death to those caught up in the Holocaust.
Results
In all, 199 defendants were tried at Nuremberg; 161 were convicted and 37 were sentenced to death, including 12 of those tried by the IMT.
Although many of those involved were brought to trial, some of the higher-ranking officials had fled Germany to live abroad, with some even coming to the United States. An example of this was Adolf Eichmann, who had fled and taken refuge in Argentina. He was later captured by Israel's intelligence service, tried, found guilty, and executed in 1962.
Significance
The information divulged during the Nuremberg Trials provides strong evidence of the power that a higher authority can exert over others. Many officials in the Nazi party pleaded that they had simply been following orders.
Applications
Person-to-person interactions
The use of persuasion to achieve compliance has numerous applications in interpersonal interactions. One party can utilize persuasion techniques to elicit a preferred response from other individuals. Compliance strategies exploit psychological processes in order to prompt a desired outcome; however, they do not necessarily lead to private acceptance by the targeted individual. That is, an individual may comply with a request without truly believing that the action they are being asked to complete is acceptable. Because of this, persuasion techniques are often used one-sidedly in immediate situations where one individual wishes to provoke a specific response from another. For example, car salesmen frequently use the lowball technique to manipulate customers' psychological functioning by convincing them to comply with a request: by initially quoting a car's price as lower than it actually is, the salesman recognizes that the customer is more likely to accept a higher price at a later time. Compliance strategies (e.g., lowball, foot-in-the-door, etc.) are relevant to numerous person-to-person interactions in which persuasion is involved; one individual can use such techniques to gain compliance from the other, swayed person. Other practical examples include:
A child asking for an allowance raise with the foot-in-the-door technique
A student using ingratiation (e.g., flattery) to ask for a raised grade
An individual doing someone a favor, hoping that the norm of reciprocity will influence that person to lend a hand at a later date
A lawyer using ingratiation and their perceived authority to persuade a jury
Marketing
Research has indicated that compliance techniques have become a major asset to numerous forms of advertising, including Internet shopping sites. These techniques are used to communicate essential information intended to persuade customers. Advertisements and other forms of marketing typically play on customers' susceptibility to informative and normative social influence. The people in the advertisements, and the ads themselves, serve as a type of authority: they are credible, especially with regard to the product. As a result, customers' need to be accurate drives them to comply with the ad's message and to purchase a product that an authority claims they need. Secondly, people have the need to belong. Customers often comply with ads by purchasing certain merchandise in the hope of affiliating with a particular group. Because compliance techniques play on psychological needs, they are frequently successful in selling a product; the use of fear is often less persuasive.
Workplace safety
Organizations need to create a safe and healthy work environment for their members.
Nevertheless, while organizations are primarily responsible for enforcing workplace safety protocols, employees also bear responsibility for their own safety and for the safety of those around them. Failure to follow the guidelines can harm the wellbeing of both employees and the organization. Organizations must therefore have a thorough understanding of the contextual variables that support or hinder compliance with safety guidelines. Researchers have shown that awareness of severe consequences positively affects motivation, whereas awareness of mild consequences decreases perceived severity. In addition, a survey conducted in 16 countries demonstrated that contextual variables (e.g. feeling caged) lead to lower levels of compliance behaviour (e.g. with social distancing).
Controversies
While there is some debate over the idea and power of compliance as a whole, the main controversy stemming from the subject of compliance is that people are capable of abusing persuasion techniques in order to gain advantages over other individuals. Based on the psychological processes of social influence, compliance strategies may enable someone to be more easily persuaded towards a particular belief or action (even if they do not privately accept it). As such, compliance techniques may be used to manipulate an individual without their conscious recognition. A specific issue regarding this controversy has arisen in courtroom proceedings: studies have shown that lawyers frequently implement these techniques in order to favorably influence a jury. For example, a prosecutor might use ingratiation to flatter a jury or to cast an impression of authority. In such cases, compliance strategies may unfairly affect the outcome of trials, which ought to be based on hard facts and justice, not simply persuasiveness.
See also
Conformity
Persuasion techniques
Mental health nursing
Psychiatric nursing or mental health nursing is the appointed position of a nurse that specialises in mental health, and cares for people of all ages experiencing mental illnesses or distress. These include: neurodevelopmental disorders, schizophrenia, schizoaffective disorder, mood disorders, addiction, anxiety disorders, personality disorders, eating disorders, suicidal thoughts, psychosis, paranoia, and self-harm. Mental health nurses receive specific training in psychological therapies, building a therapeutic alliance, dealing with challenging behaviour, and the administration of psychiatric medication. In most countries, after the 1990s, a psychiatric nurse would have to attain a bachelor's degree in nursing to become a Registered Nurse (RN), and specialise in mental health. Degrees vary in different countries, and are governed by country-specific regulations. In the United States one can become a RN, and a psychiatric nurse, by completing either a diploma program, an associate (ASN) degree, or a bachelor's (BSN) degree. Mental health nurses can work in a variety of services, including: Child and Adolescent Mental Health Services (CAMHS), Acute Medical Units (AMUs), Psychiatric Intensive Care Units (PICUs), and Community Mental Health Services (CMHS). History The history of psychiatry and psychiatric nursing, although disjointed, can be traced back to ancient philosophical thinkers. Marcus Tullius Cicero, in particular, was the first known person to create a questionnaire for the mentally ill using biographical information to determine the best course of psychological treatment and care. Some of the first known psychiatric care centers were constructed in the Middle East during the 8th century. The medieval Muslim physicians and their attendants relied on clinical observations for diagnosis and treatment. In 13th century medieval Europe, psychiatric hospitals were built to house the mentally ill, but there were not any nurses to care for them and treatment was rarely provided. These facilities functioned more as a housing unit for the insane. Throughout the high point of Christianity in Europe, hospitals for the mentally ill believed in using religious intervention. The insane were partnered with "soul friends" to help them reconnect with society. Their primary concern was befriending the melancholy and disturbed, forming intimate spiritual relationships. Today, these soul friends are seen as the first modern psychiatric nurses. In the colonial era of the United States, some settlers adapted community health nursing practices. Individuals with mental defects that were deemed as dangerous were incarcerated or kept in cages, maintained and paid fully by community attendants. Wealthier colonists kept their insane relatives either in their attics or cellars and hired attendants, or nurses, to care for them. In other communities, the mentally ill were sold at auctions as slave labor. Others were forced to leave town. As the population in the colonies expanded, informal care for the community failed and small institutions were established. In 1752 the first "lunatics ward" was opened at the Pennsylvania Hospital which attempted to treat the mentally ill. Attendants used the most modern treatments of the time: purging, bleeding, blistering, and shock techniques. Overall, the attendants caring for the patients believed in treating the institutionalized with respect. 
They believed if the patients were treated as reasonable people, then they would act as such; if they gave them confidence, then patients would rarely abuse it. The 1790s saw the beginnings of moral treatment being introduced for people with mental distress. The concept of a safe asylum, proposed by Philippe Pinel and William Tuke, offered protection and care at institutions for patients who had been previously abused or enslaved. In the United States, Dorothea Dix was instrumental in opening 32 state asylums to provide quality care for the ill. Dix also was in charge of the Union Army Nurses during the American Civil War, caring for both Union and Confederate soldiers. Although it was a promising movement, attendants and nurses were often accused of abusing or neglecting the residents and isolating them from their families. The formal recognition of psychiatry as a modern and legitimate profession occurred in 1808. In Europe, one of the major advocates for mental health nursing to help psychiatrists was Dr. William Ellis. He proposed giving the "keepers of the insane" better pay and training so more respectable, intelligent people would be attracted to the profession. In his 1836 publication of Treatise on Insanity, he openly stated that an established nursing practice calmed depressed patients and gave hope to the hopeless. However, psychiatric nursing was not formalized in the United States until 1882 when Linda Richards opened Boston City College. This was the first school specifically designed to train nurses in psychiatric care. The discrepancy between the founding of psychiatry and the recognition of trained nurses in the field is largely attributed to the attitudes in the 19th century which opposed training women to work in the medical field. In 1913 Johns Hopkins University was the first college of nursing in the United States to offer psychiatric nursing as part of its general curriculum. The first psychiatric nursing textbook, Nursing Mental Diseases by Harriet Bailey, was not published until 1920. It was not until 1950 when the National League for Nursing required all nursing schools to include a clinical experience in psychiatry to receive national accreditation. The first psychiatric nurses faced difficult working conditions. Overcrowding, under-staffing and poor resources required the continuance of custodial care. They were pressured by an increasing patient population that rose dramatically by the end of the 19th century. As a result, labor organizations formed to fight for better pay and fewer hours. Additionally, large asylums were founded to hold the large number of mentally ill, including the famous Kings Park Psychiatric Center in Long Island, New York. At its peak in the 1950s, the center housed more than 33,000 patients and required its own power plant. Nurses were often called "attendants" to imply a more humanitarian approach to care. During this time, attendants primarily kept the facilities clean and maintained order among the patients. They also carried out orders from the physicians. In 1963, President John F. Kennedy accelerated the trend towards deinstitutionalization with the Community Mental Health Act. In 1964, the Civil Rights Act was passed, which made it illegal for an organization to discriminate if federally funded. Despite this ruling, certain states such as Mississippi and Alabama fought these laws in court, promoting segregation within healthcare. 
Moreover, since psychiatric drugs were becoming more widely available, allowing patients to live on their own, and the asylums were too expensive, institutions began shutting down. Nursing care thus became more intimate and holistic. Expanded roles were also developed in the 1960s, allowing nurses to provide outpatient services such as counseling, psychotherapy, consultations, and the prescription of medication, along with the diagnosis and treatment of mental illnesses. The first formal standard of care was created by the psychiatric division of the American Nurses Association (ANA) in 1973. This standard outlined the responsibilities of nurses and the expected quality of care. In 1975, the government published a document called "Better Services for the Mentally Ill", which reviewed the standards of psychiatric nursing worldwide and laid out better plans for the future of mental health nursing. Health care had undergone huge expansion in the preceding decades as governments responded to the fast-increasing demand for health care services, but the expansion halted with the economic crisis of the 1970s. In 1982, the Area Health Authorities were abolished. In 1983, hospital management was restructured: general managers were introduced to take decisions, creating a clearer system of operation. The year 1983 also saw heavy staff cuts, which were keenly felt by mental health nurses. However, a new training syllabus introduced in 1982 produced more suitably knowledgeable nurses. The 2000s have seen major educational upgrades for nurses specializing in mental health, as well as various financial opportunities.
Interventions
Nursing interventions may be divided into the following categories:
Physical and biological interventions
Psychiatric medication
Psychiatric medication is a commonly used intervention, and many psychiatric mental health nurses are involved in the administration of medicines, either in oral (e.g. tablet or liquid) form or by intramuscular injection. Nurse practitioners can prescribe medication. Nurses monitor for side effects and for the response to these treatments by using assessments. Nurses also offer information on medication so that, where possible, the person in care can make an informed choice, using the best medical evidence available.
Electroconvulsive therapy
Psychiatric mental health nurses are also involved in administering electroconvulsive therapy and assist with preparation for, and recovery from, the treatment, which involves anesthesia. This treatment is used in only a tiny proportion of cases, and only after all other possible treatments have been exhausted. Nurses may also be involved in gaining consent for this procedure; however, consent arrangements vary depending on the jurisdiction in which the treatment takes place.
Physical care
Along with other nurses, psychiatric mental health nurses will intervene in areas of physical need to ensure that people have good levels of personal hygiene, nutrition, sleep, etc., as well as tending to any concomitant physical ailments. Obesity is not rare among mental health patients, because some medications have weight gain as a side effect; this can lower the patient's confidence and lead to other health issues.
To address this problem, mental health nurses are urged to encourage patients to get more exercise, which enhances their physical health and also improves their mental health by building confidence and lowering stress levels; promoting exercise has become a focus for mental health nurses because many patients do not get enough of it. Nurses may also need to help patients with alcohol or drug abuse, because mental health patients are at a higher risk for this behavior, and mental health nurses need to be able to communicate with patients about it. Alcohol and drug abuse can also put the patient at a higher risk of sexually transmitted diseases, because alcohol and drugs can lead to more sexual behavior.

Psychosocial interventions
Psychosocial interventions are increasingly delivered by nurses in mental health settings. These include psychotherapy interventions, such as cognitive behavioural therapy and family therapy, and, less commonly, other interventions such as milieu therapy or psychodynamic approaches. These interventions can be applied to a broad range of problems including psychosis, depression, and anxiety. Nurses will work with people over a period of time and use psychological methods to teach the person psychological techniques that they can then use to aid recovery and help manage any future crisis in their mental health. In practice, these interventions will often be used in conjunction with psychiatric medications. Psychosocial interventions are based on evidence-based practice, and therefore the techniques tend to follow set guidelines based upon what has been demonstrated to be effective by nursing research. There has been some criticism that evidence-based practice is focused primarily on quantitative research and should also reflect a more qualitative approach that seeks to understand the meaning of people's experience.

Spiritual interventions
The basis of this approach is to look at mental illness or distress from the perspective of a spiritual crisis. Spiritual interventions focus on developing a sense of meaning, purpose, and hope for the person in their current life experience. Spiritual interventions involve listening to the person's story and helping the person connect to God, a greater power, or a greater whole, perhaps by using meditation or prayer. This may be a religious or non-religious experience depending on the individual's own spirituality. Spiritual interventions, like psychosocial interventions, emphasize the importance of engagement; however, spiritual interventions focus more on caring and 'being with' the person during their time of crisis, rather than intervening and trying to 'fix' the problem. Spiritual interventions tend to be based on qualitative research and share some similarities with the humanistic approach to psychotherapy.

Therapeutic relationship
As with other areas of nursing practice, psychiatric mental health nursing works within nursing models, utilising nursing care plans, and seeks to care for the whole person. However, the emphasis of mental health nursing is on the development of a therapeutic alliance. In practice, this means that the nurse should seek to engage with the person in care in a positive and collaborative way that will empower the patient to draw on his or her inner resources in addition to any other treatment they may be receiving.
Therapeutic relationship aspects of psychiatric nursing
The most important duty of a psychiatric nurse is to maintain a positive therapeutic relationship with patients in a clinical setting. The fundamental elements of mental health care revolve around the interpersonal relations and interactions established between professionals and clients. Caring for people with mental illnesses demands an intensified presence and a strong desire to be supportive.

Understanding and empathy
Understanding and empathy from psychiatric nurses reinforces a positive psychological balance for patients. Conveying an understanding is important because it provides patients with a sense of importance. The expression of thoughts and feelings should be encouraged without blaming, judging, or belittling. Feeling important is significant to the lives of people who live in a structured society, which often stigmatises the mentally ill because of their disorder. Empowering patients with feelings of importance will bring them closer to the normality they had before the onset of their disorder. Even when subjected to fierce personal attacks, the psychiatric nurse must retain the desire and ability to understand the patient. The ability to quickly empathise with unfortunate situations proves essential. Involvement is also required when patients expect nursing staff to understand even when they are unable to express their needs verbally. When a psychiatric nurse gains understanding of the patient, the chances of improving overall treatment greatly increase.

Individuality
Individualised care becomes important when nurses need to get to know the patient. To gain this knowledge, the psychiatric nurse must see patients as individual people with lives beyond their mental illness. Seeing people as individuals with lives beyond their mental illness is imperative in making patients feel valued and respected. In order to accept the patient as an individual, the psychiatric nurse must not be controlled by his or her own values, ideas, and preconceptions about mental health patients. Individual needs of patients are met by bending the rules of standard interventions and assessment. Psychiatric nurses have spoken of the potential to 'bend the rules', which requires an interpretation of the unit rules and the ability to evaluate the risks associated with bending them.

Providing support
Successful therapeutic relationships between nurses and patients need to have positive support. Different methods of providing patients with support include many active responses. Minor activities, such as shopping, reading the newspaper together, or taking lunch or dinner breaks with patients, can improve the quality of support provided. Physical support may also be used and is manifested through the use of touch. Patients described feelings of connection when nurses hugged them or put a hand on their shoulder. Psychiatric nurses in Berg and Hallberg's study described an element of a working relationship as comforting through holding a patient's hand. Patients with depression described relief when the nurse embraced them. Physical touch is intended to comfort and console patients who are willing to embrace these sensations and share mutual feelings with nurses.

Being there and being available
In order to make patients feel more comfortable, patient care providers make themselves more approachable, and therefore more readily open to multiple levels of personal connection.
Such personal connections have the ability to uplift patients' spirits and secure confidentiality. Making good use of the time spent with the patient proves beneficial. When nurses are available for a proper amount of time, patients open up and disclose personal stories, which enables nurses to understand the meaning behind each story. As a result, nurses make every effort to attain an unbiased point of view. A combination of being there and being available allows empathic connections to quell any negative feelings within patients.

Being genuine
The act of being genuine must come from within and be expressed by nurses without reluctance. Genuineness requires the nurse to be natural or authentic in their interactions with the patient. In his article about pivotal moments in therapeutic relationships, Welch found that nurses must act in accordance with their values and beliefs. Along with the previous concept, O'Brien concluded that being consistent and reliable in both punctuality and character makes for genuineness. Schafer and Peternelj-Taylor believe that a nurse's 'genuineness' is determined through the level of consistency displayed between their verbal and non-verbal behaviour. Similarly, Scanlon found that genuineness was expressed by fulfilling intended tasks. Self-disclosure proves to be key to being open and honest. It involves the nurse sharing life experiences and is essential to the development of the therapeutic relationship, because as the relationship grows patients become reluctant to give any more information if they feel the relationship is too one-sided. Multiple authors found that genuine emotion, such as tearfulness, together with blunt feedback and straight talk, facilitated the therapeutic relationship in the pursuit of being open and honest. The friendship within a therapeutic relationship differs from a social friendship because the therapeutic relationship is asymmetrical in nature. The basic concept of genuineness is centered on being true to one's word. Patients will not trust nurses who fail to comply with what they say or promise.

Promoting equality
For a successful therapeutic relationship to form, a beneficial interdependence between the nurse and patient must be established. A derogatory view of the patient's role in the clinical setting undermines the therapeutic alliance. While patients need nurses to support their recovery, psychiatric nurses need patients to develop skills and experience. Psychiatric nurses present themselves as team members or facilitators of the relationship, rather than as its leaders. By empowering the patient with a sense of control and involvement, nurses encourage the patient's independence. Sole control of situations should not rest with the nurse. Equal interactions are established when nurses talk to patients one-on-one. Participating in activities that do not make one person dominant over the other, such as talking about a mutual interest or getting lunch together, strengthens the equality shared between professionals and patients. This can also create the "illusion of choice": giving the patient options, even if they are limited or confined within a structure.

Demonstrating respect
To develop a quality therapeutic relationship, nurses need to make patients feel respected and important. Accepting patients' faults and problems is vital to conveying respect, helping the patient see themselves as worthy and worthwhile.
Demonstrating clear boundaries
Boundaries are essential for protecting both the patient and the nurse, and for maintaining a functional therapeutic relationship. Limit setting helps to shield the patient from embarrassing behaviour and instills in the patient feelings of safety and containment. Limit setting also protects the nurse from "burnout", preserving personal stability and thus promoting a quality relationship.

Demonstrating self-awareness
Psychiatric nurses recognise personal vulnerability in order to develop professionally. Humanistic insight, basic human values, and self-knowledge improve the depth of self-understanding. Different personalities affect the way psychiatric nurses respond to their patients. The more self-aware nurses are, the more knowledge they have about how to approach interactions with patients. The interpersonal skills needed to form relationships with patients are acquired through learning about oneself. Clinical supervision has been found to provide the opportunity for nurses to reflect on patient relationships, to improve clinical skills, and to help repair difficult relationships. The reflections articulated by nurses through clinical supervision help foster self-awareness.

Pediatric mental health nursing
Nurses are vital to the evaluation and treatment of children with mental illness. Pediatric mental health nursing is the treatment and nursing of mental illness in pediatric patients. Family nurse practitioners (FNPs) are typically expected to evaluate and treat pediatric patients struggling with their mental health. One out of five children experiences a mental disorder in a given year, but only 20% receive treatment for it.

Profession status

Canada
The registered psychiatric nurse is a distinct nursing profession in all four western provinces. Such nurses carry the designation "RPN". In Eastern Canada, an Americanized system of psychiatric nursing is followed. Registered psychiatric nurses can also work in all three territories in Canada, although the registration process to work in the territories varies, as psychiatric nurses must be licensed by one of the four provinces.

Ireland
In Ireland, mental health nurses undergo a 4-year honors degree training programme. Nurses who trained under the diploma course in Ireland can complete a postgraduate course to bring their qualification from diploma to degree level.

New Zealand
Mental health nurses in New Zealand require a diploma or degree in nursing. All nurses are now trained in both general and mental health nursing as part of their three-year degree training programme. Mental health nurses are often requested to complete a graduate diploma or a postgraduate certificate in mental health if they are employed by a District Health Board. This gives additional training that is specific to working with people with mental health issues.

Sweden
In Sweden, to become a registered psychiatric nurse one must first become a registered nurse, which requires a BSc (Bachelor of Science) in Nursing (three years of full-time study, 180 higher education credits). Then, one must complete one year of graduate studies in psychiatric/mental health nursing (60 higher education credits), which also includes writing an MSc (Master of Science) thesis. The registered psychiatric nurse is an evolving profession in Sweden. However, unlike in countries such as the US, there is no psychiatric-mental health nurse practitioner role, so in Sweden the profession cannot, for example, prescribe pharmacological treatment.
United Kingdom
In the UK and Ireland the term psychiatric nurse has now largely been replaced with mental health nurse. Mental health nurses undergo a 3–4 year training programme at bachelor's degree level, or a 2-year training programme at master's degree level, in common with other nurses. However, most of their training is specific to caring for clients with mental health issues. RMNs can continue into further training as Advanced Nurse Practitioners (ANPs); this requires completion of a 9-month master's programme. The role includes prescribing medications, being on call for hospital wards, and delivering psychosocial interventions to clients.

United States
In North America, there are three levels of psychiatric nursing. The licensed vocational nurse (licensed practical nurse in some states) and the licensed psychiatric technician may dispense medication and assist with data collection regarding psychiatric and mental health clients. The registered nurse or registered psychiatric nurse has the additional scope of performing assessments and may provide other therapies such as counseling and milieu therapy. The advanced practice registered nurse (APRN) practices as either a clinical nurse specialist or a nurse practitioner after obtaining a master's degree in psychiatric-mental health nursing. Psychiatric-mental health nursing (PMHN) is a nursing specialty. The course work in a master's degree program includes specialty practice. APRNs assess, diagnose, and treat individuals or families with psychiatric problems or disorders, or the potential for such disorders, as well as performing the functions associated with the basic level. They provide a full range of primary mental health care services to individuals, families, groups, and communities, and function as psychotherapists, educators, consultants, advanced case managers, and administrators. In many states, APRNs have the authority to prescribe medications. Qualified to practice independently, psychiatric-mental health APRNs offer direct care services in a variety of settings: mental health centers, community mental health programs, homes, offices, HMOs, etc. Psychiatric nurses who earn doctoral degrees (PhD, DNSc, EdD) are often found in practice settings, teaching, doing research, or serving as administrators in hospitals, agencies, or schools of nursing.

Australia
In Australia, to become a psychiatric nurse, a bachelor's degree in nursing must be obtained in order to register as a registered nurse (RN); this degree takes three years of full-time study. A diploma in mental health or a similar qualification must then also be obtained, which is an additional year of study. An Australian psychiatric nurse's duties may include assessing patients who are mentally ill, observation, helping patients take part in activities, giving medication, observing whether the medication is working, assisting in behaviour change programs, and visiting patients at home. Australian nurses can work in public or private hospitals, institutes, correctional institutions, mental health care facilities, and patients' homes.

See also
List of counseling topics
Mental health professional
Psychiatric and mental health nurse practitioner
Tom Main - author of seminal paper on psychiatric nursing
Hildegard Peplau - psychiatric nurse theorist
Tidal Model - model developed for mental health nursing
Clinical formulation
A clinical formulation, also known as case formulation and problem formulation, is a theoretically-based explanation or conceptualisation of the information obtained from a clinical assessment. It offers a hypothesis about the cause and nature of the presenting problems and is considered an adjunct or alternative approach to the more categorical approach of psychiatric diagnosis. In clinical practice, formulations are used to communicate a hypothesis and provide framework for developing the most suitable treatment approach. It is most commonly used by clinical psychologists and is deemed to be a core component of that profession. Mental health nurses, social workers, and some psychiatrists may also use formulations. Types of formulation Different psychological schools or models utilize clinical formulations, including cognitive behavioral therapy (CBT) and related therapies: systemic therapy, psychodynamic therapy, and applied behavior analysis. The structure and content of a clinical formulation is determined by the psychological model. Most systems of formulation contain the following broad categories of information: symptoms and problems; precipitating stressors or events; predisposing life events or stressors; and an explanatory mechanism that links the preceding categories together and offers a description of the precipitants and maintaining influences of the person's problems. Behavioral case formulations used in applied behavior analysis and behavior therapy are built on a rank list of problem behaviors, from which a functional analysis is conducted, sometimes based on relational frame theory. Such functional analysis is also used in third-generation behavior therapy or clinical behavior analysis such as acceptance and commitment therapy and functional analytic psychotherapy. Functional analysis looks at setting events (ecological variables, history effects, and motivating operations), antecedents, behavior chains, the problem behavior, and the consequences, short- and long-term, for the behavior. A model of formulation that is more specific to CBT is described by Jacqueline Persons. This has seven components: problem list, core beliefs, precipitants and activating situations, origins, working hypothesis, treatment plan, and predicted obstacles to treatment. A psychodynamic formulation would consist of a summarizing statement, a description of nondynamic factors, description of core psychodynamics using a specific model (such as ego psychology, object relations or self psychology), and a prognostic assessment which identifies the potential areas of resistance in therapy. One school of psychotherapy which relies heavily on the formulation is cognitive analytic therapy (CAT). CAT is a fixed-term therapy, typically of around 16 sessions. At around session four, a formal written reformulation letter is offered to the patient which forms the basis for the rest of the treatment. This is usually followed by a diagrammatic reformulation to amplify and reinforce the letter. Many psychologists use an integrative psychotherapy approach to formulation. This is to take advantage of the benefits of resources from each model the psychologist is trained in, according to the patient's needs. 
Critical evaluation of formulations The quality of specific clinical formulations, and the quality of the general theoretical models used in those formulations, can be evaluated with criteria such as: Clarity and parsimony: Is the model understandable and internally consistent, and are key concepts discrete, specific, and non-redundant? Precision and testability: Does the model produce testable hypotheses, with operationally defined and measurable concepts? Empirical adequacy: Are the posited mechanisms within the model empirically validated? Comprehensiveness and generalizability: Is the model holistic enough to apply across a range of clinical phenomena? Utility and applied value: Does it facilitate shared meaning-making between clinician and client, and are interventions based on the model shown to be effective? Formulations can vary in temporal scope from case-based to episode-based or moment-based, and formulations may evolve during the course of treatment. Therefore, ongoing monitoring, testing, and assessment during treatment are necessary: monitoring can take the form of session-by-session progress reviews using quantitative measures, and formulations can be modified if an intervention is not as effective as hoped. History Psychologist George Kelly, who developed personal construct theory in the 1950s, noted his complaint against traditional diagnosis in his book The Psychology of Personal Constructs (1955): "Much of the reform proposed by the psychology of personal constructs is directed towards the tendency for psychologists to impose preemptive constructions upon human behaviour. Diagnosis is all too frequently an attempt to cram a whole live struggling client into a nosological category." In place of nosological categories, Kelly used the word "formulation" and mentioned two types of formulation: a first stage of structuralization, in which the clinician tentatively organizes clinical case information "in terms of dimensions rather than in terms of disease entities" while focusing on "the more important ways in which the client can change, and not merely ways in which the psychologist can distinguish him from other persons", and a second stage of construction, in which the clinician seeks a kind of negotiated integration of the clinician's organization of the case information with the client's personal meanings. Psychologists Hans Eysenck, Monte B. Shapiro, Vic Meyer, and Ira Turkat were also among the early developers of systematic individualized alternatives to diagnosis. Meyer has been credited with providing perhaps the first training course of behaviour therapy based on a case formulation model, at the Middlesex Hospital Medical School in London in 1970. Meyer's original choice of words for clinical formulation were "behavioural formulation" or "problem formulation". See also Clinical decision support system Clinical guideline Clinical pathway Common factors theory Problem structuring methods SOAP note Therapeutic assessment Treatment decision support (tools for clients) References Further reading Medical terminology Psychiatric assessment Psychotherapy
Sexology
Sexology is the scientific study of human sexuality, including human sexual interests, behaviors, and functions. The term sexology does not generally refer to the non-scientific study of sexuality, such as social criticism. Sexologists apply tools from several academic fields, such as anthropology, biology, medicine, psychology, epidemiology, sociology, and criminology. Topics of study include sexual development (puberty), sexual orientation, gender identity, sexual relationships, sexual activities, paraphilias, and atypical sexual interests. It also includes the study of sexuality across the lifespan, including child sexuality, puberty, adolescent sexuality, and sexuality among the elderly. Sexology also spans sexuality among those with mental or physical disabilities. The sexological study of sexual dysfunctions and disorders, including erectile dysfunction and anorgasmia, is also a mainstay.

History

Early
Sex manuals have existed since antiquity, such as Ovid's Ars Amatoria, the Kama Sutra of Vatsyayana, the Ananga Ranga, and The Perfumed Garden for the Soul's Recreation. An early 1830s study of 3,558 registered prostitutes in Paris (Prostitution in the City of Paris), written by Alexander Jean Baptiste Parent-Duchatelet and published in 1837, a year after he died, has been called the first work of modern sex research. In England, James Graham was an early sexologist who lectured on topics such as the process of sex and conception. The scientific study of sexual behavior in human beings began in the 19th century with Heinrich Kaan, whose book Psychopathia Sexualis (1844) Michel Foucault describes as marking "the date of birth, or in any case the date of the emergence of sexuality and sexual aberrations in the psychiatric field." The term sexology was coined for the first time in the United States by Elizabeth Osgood Goodrich Willard in 1867. Roughly simultaneously, a group of homophile activists, not yet identifying themselves as sexologists, were responding to shifts in Europe's national borders, a crisis that brought into conflict laws that were sexually liberal and laws that criminalized behaviors such as homosexual activity.

Victorian era to WWII
Despite the prevailing social attitude of sexual repression in the Victorian era, the movement towards sexual emancipation began towards the end of the nineteenth century in England and Germany. In 1886, Richard Freiherr von Krafft-Ebing published Psychopathia Sexualis. That work is considered to have established sexology as a scientific discipline. In England, the founding father of sexology was the doctor and sexologist Havelock Ellis, who challenged the sexual taboos of his era regarding masturbation and homosexuality and revolutionized the conception of sex in his time. His seminal work was the 1897 Sexual Inversion, which describes the sexual relations of homosexual males, including men with boys. Ellis wrote the first objective study of homosexuality (the term was coined by Karl-Maria Kertbeny), as he did not characterize it as a disease, immoral, or a crime. The work assumes that same-sex love transcended age taboos as well as gender taboos. Seven of his twenty-one case studies are of inter-generational relationships. He also developed other important psychological concepts, such as autoerotism and narcissism, both of which were later developed further by Sigmund Freud. Ellis, alongside the German Magnus Hirschfeld, pioneered the study of transgender phenomena, establishing it as a new category separate and distinct from homosexuality.
Aware of Hirschfeld's studies of transvestism, but disagreeing with his terminology, in 1913 Ellis proposed the term sexo-aesthetic inversion to describe the phenomenon. In 1908, the first scholarly journal of the field, the Journal of Sexology (Zeitschrift für Sexualwissenschaft), began publication and was published monthly for one year. Those issues contained articles by Freud, Alfred Adler, and Wilhelm Stekel. In 1913, the first academic association was founded: the Society for Sexology.

Freud developed a theory of sexuality based on his studies of his clients between the late 19th and early 20th centuries; its stages of development (oral, anal, phallic, latency, and genital) run from infancy to puberty and onwards. Wilhelm Reich and Otto Gross were disciples of Freud, but they broke with his theories because of their own emphasis on the role of sexuality in the revolutionary struggle for the emancipation of mankind.

Pre-Nazi Germany, under the sexually liberal Napoleonic code, organized and resisted the anti-sexual, Victorian cultural influences. The momentum from those groups led them to coordinate sex research across traditional academic disciplines, bringing Germany to the leadership of sexology. The physician Magnus Hirschfeld was an outspoken advocate for sexual minorities, founding the Scientific Humanitarian Committee, the first advocacy group for homosexual and transgender rights. Hirschfeld also set up the first Institut für Sexualwissenschaft (Institute for Sexology) in Berlin in 1919. Its library housed over 20,000 volumes, 35,000 photographs, and a large collection of art and other objects. People from around Europe visited the institute to gain a clearer understanding of their sexuality and to be treated for their sexual concerns and dysfunctions. Hirschfeld developed a system which identified numerous actual or hypothetical types of sexual intermediary between heterosexual male and female to represent the potential diversity of human sexuality. He is credited with identifying, as separate from the category of homosexuality, a group of people that today are referred to as transsexual or transgender; he referred to these people as Transvestiten (transvestites).

Germany's dominance in sexual behavior research ended with the Nazi regime. The Institute and its library were destroyed by the Nazis a little over three months after they took power, on May 8, 1933. The institute was shut down and Hirschfeld's books were burned. Other sexologists in the early gay rights movement included Ernst Burchard and Benedict Friedlaender. Ernst Gräfenberg, after whom the G-spot is named, published the initial research developing the intrauterine device (IUD).

Post WWII
After World War II, sexology experienced a renaissance, both in the United States and Europe. Large-scale studies of sexual behavior, sexual function, and sexual dysfunction gave rise to the development of sex therapy. Post-WWII sexology in the U.S. was influenced by the influx of European refugees escaping the Nazi regime and the popularity of the Kinsey studies. Until that time, American sexology consisted primarily of groups working to end prostitution and to educate youth about sexually transmitted infections. Alfred Kinsey founded the Institute for Sex Research at Indiana University at Bloomington in 1947. This is now called the Kinsey Institute for Research in Sex, Gender and Reproduction. He wrote in his 1948 book that more was scientifically known about the sexual behavior of farm animals than of humans.
Psychologist and sexologist John Money developed theories on sexual identity and gender identity in the 1950s. His work, notably on the David Reimer case has since been regarded as controversial, even while the case was key to the development of treatment protocols for intersex infants and children. Kurt Freund developed the penile plethysmograph in Czechoslovakia in the 1950s. The device was designed to provide an objective measurement of sexual arousal in males and is currently used in the assessment of pedophilia and hebephilia. This tool has since been used with sex offenders. In 1966 and 1970, Masters and Johnson released their works Human Sexual Response and Human Sexual Inadequacy, respectively. Those volumes sold well, and they were founders of what became known as the Masters & Johnson Institute in 1978. Vern Bullough was a historian of sexology during this era, as well as being a researcher in the field. The emergence of HIV/AIDS in the 1980s caused a dramatic shift in sexological research efforts towards understanding and controlling the spread of the disease. 21st century Technological advances have permitted sexological questions to be addressed with studies using behavioral genetics, neuroimaging, and large-scale Internet-based surveys. Sexology is a regulated profession in some jurisdictions. In Quebec, sexologists must be members of the Ordre professionnel des sexologues du Québec. They are one of the professions eligible to receive psychotherapy permits from the Ordre des psychologues du Québec. Notable contributors This is a list of sexologists and notable contributors to the field of sexology, by year of birth: Karl Heinrich Ulrichs (1825–1895) Karl Friedrich Otto Westphal (1833–1890) Richard Freiherr von Krafft-Ebing (1840–1902) Albert Eulenburg (1840–1917) Auguste Henri Forel (1848–1931) Sigmund Freud (1856–1939) Wilhelm Fliess (1858–1928) Havelock Ellis (1858–1939) Eugen Steinach (1861–1944) Robert Latou Dickinson (1861–1950) Albert Moll (1862–1939) Edvard Westermarck (1862–1939) Clelia Duel Mosher (1863–1940) Eugene Wilhelm (aka Numa Praetorius) (1866–1951) Magnus Hirschfeld (1868–1935) Iwan Bloch (1872–1922) Theodoor Hendrik van de Velde (1873–1937) Gaston Vorberg (1875–1947) Max Marcuse (1877–1963) Otto Gross (1877–1920) Ernst Gräfenberg (1881–1957) Bronisław Malinowski (1884–1942) Harry Benjamin (1885–1986) Hans Blüher (1888–1955) Theodor Reik (1888–1969) Alfred Kinsey (1894–1956) Wilhelm Reich (1897–1957) Mary Calderone (1904–1998) Wardell Pomeroy (1913–2001) Albert Ellis (1913–2007) Kurt Freund (1914–1996) Ernest Borneman (1915–1995) William Masters (1915–2001) Gershon Legman (1917–1999) Harold I. Lief (1917–2007) Paul H. Gebhard (1917–2015) John Money (1921–2006) Robert Stoller (1924–1991) Ira Reiss (1925–2024) Virginia Johnson (1925–2013) Preben Hertoft (1928–2017) Oswalt Kolle (1928–2010) Vern Bullough (1928–2006) Ruth Westheimer (1928–2024) John Gagnon (1931–2016) Fritz Klein (1932–2006) Milton Diamond (1934–2024) Erwin J. 
Haeberle (1936–2021) Marco Aurelio Denegri (1938–2018) Gunter Schmidt (1938–present) Rolf Gindorf (1939–2016) Volkmar Sigusch (1940–2023) Beverly Whipple (1941–present) Martin Dannecker (1942–present) Shere Hite (1943–2020) Ray Blanchard (1945–present) Pepper Schwartz (1945–present) Gilbert Herdt (1949–present) Pan Suiming (1950–present) Kenneth Zucker (1950–present) Ava Cadell (1956–present) Carol Queen (1957–present) James Cantor (1966–present) Marta Crawford (1969–present) See also Certified sex therapist Gender and sexuality studies List of academic journals in sexology List of sexology organizations Philosophy of sex Sex education Sexological testing Sexophobia Porn Studies References External links International Society for Sexual Medicine (archived) Archive for Sexology American Board of Sexology Human sexuality Sexually transmitted diseases and infections
Insight
Insight is the understanding of a specific cause and effect within a particular context. The term insight can have several related meanings:
a piece of information
the act or result of understanding the inner nature of things or of seeing intuitively (called noesis in Greek)
an introspection
the power of acute observation and deduction, discernment, and perception, called intellection
or an understanding of cause and effect based on the identification of relationships and behaviors within a model, system, context, or scenario (see artificial intelligence)

An insight that manifests itself suddenly, such as understanding how to solve a difficult problem, is sometimes called by the German word Aha-Erlebnis. The term was coined by the German psychologist and theoretical linguist Karl Bühler. It is also known as an epiphany, eureka moment, or (for crossword solvers) the penny dropping moment (PDM). Sudden sickening realisations often identify a problem rather than solving it, so Uh-oh rather than Aha moments are seen in negative insight. A further example of negative insight is chagrin, which is annoyance at the obviousness of a solution missed up until the (perhaps too late) point of insight, an example being Homer Simpson's catchphrase exclamation, "D'oh!".

Psychology
In psychology, insight occurs when a solution to a problem presents itself quickly and without warning. It is the sudden discovery of the correct solution following incorrect attempts based on trial and error. Solutions via insight have been found to be more accurate than non-insight solutions. Insight was first studied by Gestalt psychology, in the early part of the 20th century, during the search for an alternative to associationism and the associationistic view of learning. Some proposed potential mechanisms for insight include: suddenly seeing the problem in a new way, connecting the problem to another relevant problem/solution pair, releasing past experiences that are blocking the solution, or seeing the problem in a larger, coherent context.

Classic methods
Generally, methodological approaches to the study of insight in the laboratory involve presenting participants with problems and puzzles that cannot be solved in a conventional or logical manner. Problems of insight commonly fall into three types:

Breaking functional fixedness
The first type of problem forces participants to use objects in a way they are not accustomed to (thus breaking their functional fixedness). An example is the "Duncker candle problem", in which people are given matches and a box of tacks and asked to find a way to attach a candle to the wall to light the room. The solution requires the participants to empty the box of tacks, set the candle inside the box, tack the box to the wall, and light the candle with the matches.

Spatial ability
The second type of insight problem requires spatial ability to solve. An example is the "Nine-dot problem", which requires participants to draw four lines through nine dots without picking their pencil up.

Using verbal ability
The third and final type of problem requires verbal ability to solve. An example is the Remote Associates Test (RAT), in which people must think of a word that connects three seemingly unrelated words. RATs are often used in experiments because they can be solved both with and without insight.

Specific results

Versus non-insight problems
Two clusters of problems, those solvable by insight and those not requiring insight to solve, have been observed.
A person's cognitive flexibility, fluency, and vocabulary ability are predictive of performance on insight problems, but not on non-insight problems. In contrast, fluid intelligence is mildly predictive of performance on non-insight problems, but not on insight problems. More recent research suggests that, rather than being an all-or-nothing experience, the subjective feeling of insight varies, with some solutions experienced with a stronger feeling of Aha than others.

Emotion
People in a better mood are more likely to solve problems using insight. Self-reported positive affect of participants increased insight before and during the solving of a problem. People experiencing anxiety showed the opposite effect, and solved fewer problems by insight. Emotion can also be considered in terms of whether the moment of insight is a positive Aha or a negative Uh-oh. In order to have insights it is important to have access to one's emotions and sensations, as these can cause insights. To the degree that individuals have limited introspective access to these underlying causes, they have only limited control over these processes as well.

Incubation
In a study using a geometric and spatial insight problem, providing participants with breaks improved their performance compared with participants who did not receive a break. However, the length of incubation between problems did not matter. Thus, participants' performance on insight problems improved just as much with a short break (4 minutes) as it did with a long break (12 minutes).

Sleep
Research has shown sleep to help produce insight. People were initially trained on insight problems. Following training, one group was tested on the insight problems after sleeping for eight hours at night, one group was tested after staying awake all night, and one group was tested after staying awake all day. Those that slept performed twice as well on the insight problems as those who stayed awake.

In the brain
Differences in brain activation in the left and right hemispheres seem to be indicative of insight versus non-insight solutions. When RATs were presented to either the left or the right visual field, participants who solved the problem with insight were more likely to have been shown the RAT in the left visual field, indicating right-hemisphere processing. This provides evidence that the right hemisphere plays a special role in insight. fMRI and EEG scans of participants completing RATs demonstrated particular brain activity corresponding to problems solved by insight. For example, there is high EEG activity in the alpha and gamma bands about 300 milliseconds before participants indicate a solution to insight problems, but not to non-insight problems. Additionally, problems solved by insight corresponded to increased activity in the temporal lobes and mid-frontal cortex, while more activity in the posterior cortex corresponded to non-insight problems. The data suggest that something different occurs in the brain when solving insight versus non-insight problems, right before the problem is solved. This conclusion has also been supported by eye-tracking data showing an increased eye-blink duration and frequency when people solve problems via insight. This latter result, together with a tendency for solvers to look away from the problem (such as looking at a blank wall, or out the window at the sky), suggests that attention is involved differently in insight problem solving than in problem solving via analysis.
Group insight Groups typically perform better on insight problems (in the form of rebus puzzles with either helpful or unhelpful clues) than individuals. Additionally, while incubation improves insight performance for individuals, it improves insight performance for groups even more. Thus, after a 15-minute break, individual performance improved for the rebus puzzles with unhelpful clues, and group performance improved for rebus puzzles with both unhelpful and helpful clues. Individual differences Participants who ranked lower on emotionality and higher on openness to experience performed better on insight problems. Men outperformed women on insight problems, and women outperformed men on non-insight problems. Higher intelligence (higher IQ) is associated with better performance on insight problems. However, those of lower intelligence benefit more than those of higher intelligence from being provided with cues and hints for insight problems. A large-scale study in Australia suggests that insight may not be universally experienced, with almost 20% of respondents reporting that they had not experienced insight. Metacognition People are poorer at predicting their own metacognition for insight problems, than for non-insight problems. People were asked to indicate how "hot" or "cold" to a solution they felt. Generally, they were able to predict this fairly well for non-insight problems, but not for insight problems. This provides evidence for the suddenness involved during insight. Naturalistic settings Accounts of insight that have been reported in the media, such as in interviews, etc., were examined and coded. Insights that occur in the field are typically reported to be associated with a sudden "change in understanding" and with "seeing connections and contradictions" in the problem. Insight in nature differed from insight in the laboratory. For example, insight in nature was often rather gradual, not sudden, and incubation was not as important. Other studies used online questionnaires to explore insight outside of the laboratory, verifying the notion that insight often happens in situations such as in the shower, and echoing the idea that creative ideas occur in situations where divergent thought is more likely, sometimes called the Three "B"s of Creativity, in Bed, on the Bus, or in the Bath. Non-Human Animals Studies on primate cognition have provided evidence of what may be interpreted as insight in animals. In 1917, Wolfgang Köhler published his book The Mentality of Apes, having studied primates on the island of Tenerife for six years. In one of his experiments, apes were presented with an insight problem that required the use of objects in new and original ways, in order to win a prize (usually, some kind of food). He observed that the animals would continuously fail to get the food, and this process occurred for quite some time; however, rather suddenly, they would purposefully use the object in the way needed to get the food, as if the realization had occurred out of nowhere. He interpreted this behavior as something resembling insight in apes. A more recent study suggested that elephants might also experience insight, showing that a young male elephant was able to identify and move a large cube under food that was out of reach so that he could stand on it to get the reward. Theories There are a number of theories about insight; no single theory dominates interpretation. 
Dual-process theory
According to the dual-process theory, there are two systems that people use to solve problems. The first involves logical and analytical thought processes based on reason, while the second involves intuitive and automatic processes based on experience. Research has demonstrated that insight probably involves both processes; however, the second process is more influential.

Three-process theory
According to the three-process theory, intelligence plays a large role in insight. Specifically, insight involves three processes that require intelligence to apply them to problems:
selective encoding - focusing attention on ideas relevant to a solution, while ignoring features that are irrelevant
selective combination - combining the information previously deemed relevant
selective comparison - the use of past experience with problems and solutions that are applicable to the current problem and solution

Four-stage model
According to the four-stage model of insight, there are four stages to problem solving:
The person prepares to solve a problem.
The person incubates on the problem, which encompasses trial-and-error, etc.
The insight occurs, and the solution is illuminated.
The verification of the solution to the problem is experienced.
Since this model was proposed, other similar models have been explored that contain two or three similar stages.

Psychiatry
In psychology and psychiatry, insight can mean the ability to recognize one's own mental illness. Psychiatric insight is typically measured with the Beck cognitive insight scale (BCIS), named after the American psychiatrist Aaron Beck. This form of insight has multiple dimensions, such as recognizing the need for treatment and recognizing consequences of one's behavior as stemming from an illness. A person with very poor recognition or acknowledgment is referred to as having "poor insight" or "lack of insight". The most extreme form is anosognosia, the total absence of insight into one's own mental illness. Mental illnesses are associated with a variety of levels of insight. For example, people with obsessive compulsive disorder and various phobias tend to have relatively good insight that they have a problem and that their thoughts and/or actions are unreasonable, although they feel compelled to carry out the thoughts and actions regardless. Patients with schizophrenia and various psychotic conditions tend to have very poor awareness that anything is wrong with them. Psychiatric insight has also been studied in relation to cognitive behavioural therapy for people with psychosis. Some psychiatrists believe psychiatric medication may contribute to the patient's lack of insight.

Spirituality
The Pali word for "insight" is vipassanā, which has been adopted as the name of a variety of Buddhist mindfulness meditation. Research indicates that mindfulness meditation facilitates the solving of insight problems at a dosage of 20 minutes. Similar concepts in Zen Buddhism are kenshō and satori.
Psychological testing
Psychological testing refers to the administration of psychological tests. Psychological tests are administered or scored by trained evaluators. A person's responses are evaluated according to carefully prescribed guidelines. Scores are thought to reflect individual or group differences in the construct the test purports to measure. The science behind psychological testing is psychometrics.

Psychological tests
According to Anastasi and Urbina, psychological tests involve observations made on a "carefully chosen sample [emphasis authors] of an individual's behavior." A psychological test is often designed to measure unobserved constructs, also known as latent variables. Psychological tests can include a series of tasks, problems to solve, and characteristics (e.g., behaviors, symptoms) the presence of which the respondent affirms/denies to varying degrees. Psychological tests can include questionnaires and interviews. Questionnaire- and interview-based scales typically differ from psychoeducational tests, which ask for a respondent's maximum performance; questionnaire- and interview-based scales, by contrast, ask for the respondent's typical behavior. Symptom and attitude tests are more often called scales. A useful psychological test or scale must be both valid, i.e., show evidence that it measures what it is purported to measure, and reliable, i.e., show evidence of consistency across items and raters and over time, etc.

It is important that people who are equal on the measured construct (e.g., mathematics ability, depression) have an approximately equal probability of answering a test item accurately or acknowledging the presence of a symptom. An example of an item on a mathematics test that might be used in the United Kingdom but not the United States could be the following: "In a football match two players get a red card; how many players are left on the pitch?" This item requires knowledge of football (soccer) to be answered correctly, not just mathematical ability. Thus, group membership can influence the probability of correctly answering items, as encapsulated in the concept of differential item functioning. Often tests are constructed for a specific population, and the nature of that population should be taken into account when administering tests outside that population. A test should be invariant between relevant subgroups (e.g., demographic groups) within a larger population. For example, for a test to be used in the United Kingdom, the test and its items should have approximately the same meaning for British males and females. That invariance does not necessarily apply to similar groups in another population, such as males and females in the United States, or between populations, for example the populations of the UK and the US. In test construction, it is important to establish invariance at least for the subgroups of the population of interest.

Psychological assessment is similar to psychological testing but usually involves a more comprehensive assessment of the individual. According to the American Psychological Association, psychological assessment involves the collection and integration of data for the purpose of evaluating an individual's "behavior, abilities, and other characteristics." Each assessment is a process that involves integrating information from multiple sources, such as personality inventories, ability tests, symptom scales, interest inventories, and attitude scales, as well as information from personal interviews.
Collateral information can also be collected from occupational records or medical histories; information can also be obtained from parents, spouses, teachers, friends, or past therapists or physicians. One or more psychological tests are sources of information used within the process of assessment. Many psychologists conduct assessments when providing services. Psychological assessment is a complex, detailed, in-depth process. Examples of assessments include providing a diagnosis, identifying a learning disability in schoolchildren, determining if a defendant is mentally competent, and selecting job applicants. History The first large-scale tests may have been part of the imperial examination system in China. The tests, an early form of psychological testing, assessed candidates based on their proficiency in topics such as civil law and fiscal policies. Early tests of intelligence were made for entertainment rather than analysis. Modern mental testing began in France in the 19th century. It contributed to identifying individuals with intellectual disabilities for the purpose of humanely providing them with an alternative form of education. Englishman Francis Galton coined the terms psychometrics and eugenics. He developed a method for measuring intelligence based on nonverbal sensory-motor tests. The test was initially popular but was abandoned. In 1905 French psychologists Alfred Binet and Théodore Simon published the Échelle métrique de l'Intelligence (Metric Scale of Intelligence), known in English-speaking countries as the Binet–Simon test. The test focused heavily on verbal ability. Binet and Simon intended that the test be used to aid in identifying schoolchildren who were intellectually challenged, which in turn would pave the way for providing the children with professional help. The Binet-Simon test became the foundation for the later-developed Stanford–Binet Intelligence Scales. The origins of personality testing date back to the 18th and 19th centuries, when phrenology was the basis for assessing personality characteristics. Phrenology, a pseudoscience, involved assessing personality by way of skull measurement. Early pseudoscientific techniques eventually gave way to empirical methods. One of the earliest modern personality tests was the Woodworth Personal Data Sheet, a self-report inventory developed during World War I to be used by the United States Army for the purpose of screening potential soldiers for mental health problems and identifying victims of shell shock (the instrument was completed too late to be used for the purposes it was designed for). The Woodworth Inventory, however, became the forerunner of many later personality tests and scales. Principles The development of a psychological test requires careful research. Some of the elements of test development involve the following: Standardization - All procedures and steps must be conducted with consistency from one testing site/testing occasion to another. Examiner subjectivity is minimized (see objectivity next). Major standardized tests are normed on large try-out samples in order to understand what constitutes high, low, and intermediate scores. Objectivity - Scoring such that subjective judgments and biases are minimized; scores are obtained in a similar manner for every test taker (see below). 
Discrimination - Scores on a test should discriminate between members of extreme groups; for example, each subscale of the original MMPI distinguished between hospitalized patients suffering from mental illness and members of a well comparison group.
Test Norms - Part of the standardization of large-scale tests (see above). Norms help psychologists learn about individual differences. For example, a normed personality scale can help psychologists understand how some people are high in negative affectivity (NA) and others are low or intermediate in NA. With many psychoeducational tests, test norms allow educators and psychologists to obtain an age- or grade-referenced percentile rank, for example, in reading achievement.
Reliability - Refers to test or scale consistency. It is important that individuals score about the same if they take a test and an alternate form of the test, or if they take the same test twice within a short time window. Reliability also refers to response consistency from test item to test item.
Validity - Refers to evidence that demonstrates that a test or scale measures what it is purported to measure.

Sample of behavior
The term sample of behavior refers to an individual's performance on tasks that have usually been prescribed beforehand. For example, a spelling test for middle school students cannot include all the words in the vocabularies of middle schoolers because there are thousands of words in their lexicon; a middle school spelling test must include only a sample of words in their vocabulary. The samples of behavior must be reasonably representative of the behavior in question. The samples of behavior that make up a paper-and-pencil test, the most common type of psychological test, are written into the test items. Total performance on the items produces a test score. A score on a well-constructed test is believed to reflect a psychological construct such as achievement in a school subject like vocabulary or mathematics knowledge, cognitive ability, dimensions of personality such as introversion/extraversion, etc. Differences in test scores are thought to reflect individual differences in the construct the test is purported to measure.

Types
There are several broad categories of psychological tests:

Achievement tests
Achievement tests assess an individual's knowledge in a subject domain. Some academic achievement tests are designed to be administered by a trained evaluator. By contrast, group achievement tests are often administered by a teacher. A score on an achievement test is believed to reflect the individual's knowledge of a subject area. There are generally two types of achievement tests: norm-referenced and criterion-referenced tests. Most achievement tests are norm-referenced. The individual's responses are scored according to standardized protocols, and the results can be compared to the results of a norming group. Norm-referenced tests can be used to highlight individual differences, that is to say, to compare each test-taker to every other test-taker. By contrast, the purpose of criterion-referenced achievement tests is to ascertain whether the test-taker has mastered a predetermined body of knowledge rather than to compare the test-taker to everyone else who took the test. These types of tests are often a component of a mastery-based classroom. The Kaufman Test of Educational Achievement is an example of an individually administered achievement test for students.
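To make the norm-referenced scoring described above concrete, here is a minimal sketch in Python. All of the numbers (the norming mean and standard deviation, the raw score, and the mastery cut score) are hypothetical values invented for illustration, not published norms; real conversions come from a test's norming tables.

from statistics import NormalDist

def norm_referenced_scores(raw, norm_mean, norm_sd):
    """Convert a raw score to a z-score, a T-score (mean 50, SD 10),
    and a percentile rank, assuming an approximately normal norming sample."""
    z = (raw - norm_mean) / norm_sd
    t = 50 + 10 * z                       # T-score convention: 50 = mean, 10 = one SD
    percentile = NormalDist().cdf(z) * 100
    return z, t, percentile

# Hypothetical norming data for a reading achievement test.
z, t, pct = norm_referenced_scores(raw=34, norm_mean=28.0, norm_sd=6.0)
print(f"z = {z:.2f}, T = {t:.0f}, percentile rank = {pct:.0f}")    # z = 1.00, T = 60, percentile rank = 84

# A criterion-referenced interpretation, by contrast, ignores the norming group
# and simply checks mastery against a predetermined cut score.
print("Mastered" if 34 >= 30 else "Not yet mastered")              # hypothetical cut score of 30

In practice, test manuals supply the norming statistics and exact conversion tables; the arithmetic above only illustrates the logic of comparing an individual to a norming group versus comparing performance to a fixed criterion.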
Aptitude tests Psychological tests have been designed to measure abilities, both specific abilities (e.g., clerical skill, as measured by the Minnesota Clerical Test) and general ability (e.g., as assessed by traditional IQ tests such as the Stanford-Binet or the Wechsler Adult Intelligence Scale). A widely used, though brief, aptitude test in business is the Wonderlic Test. Aptitude tests have been used in assessing specific abilities or the general ability of potential new employees (the Wonderlic was once used by the NFL). Aptitude tests have also been used for career guidance. Evidence suggests that aptitude tests like IQ tests are sensitive to past learning and are not pure measures of untutored ability. The SAT, which used to be called the Scholastic Aptitude Test, had its name changed because performance on the test is sensitive to training. Attitude scales An attitude scale assesses an individual's disposition regarding an event (e.g., a Supreme Court decision), person (e.g., a governor), concept (e.g., wearing face masks during a pandemic), organization (e.g., the Boy Scouts), or object (e.g., nuclear weapons) on a unidimensional favorable-unfavorable attitude continuum. Attitude scales are used in marketing to determine individuals' preferences for brands. Historically, social psychologists have developed attitude scales to assess individuals' attitudes toward the United Nations and race relations. Typically, Likert scales are used in attitude research. Historically, the Thurstone scale was used prior to the development of the Likert scale; the Likert scale has largely supplanted it. Biographical Information Blank The Biographical Information Blank, or BIB, is a paper-and-pencil form that includes items that ask about detailed personal and work history. It is used to aid in the hiring of employees by matching the backgrounds of individuals to the requirements of the job. Clinical tests The purpose of clinical tests is to assess the presence of symptoms of psychopathology. Examples of clinical assessments include the Minnesota Multiphasic Personality Inventory (MMPI), the Millon Clinical Multiaxial Inventory-IV, the Child Behavior Checklist, the Symptom Checklist 90, and the Beck Depression Inventory. Many large-scale clinical tests are normed. For example, scores on the MMPI are rescaled such that 50 is the mean (middlemost) score on the MMPI Depression scale and 60 is a score that places the individual one standard deviation above the mean for depressive symptoms; 40 represents a symptom level that is one standard deviation below the mean. Criterion-referenced A criterion-referenced test is an achievement test in a specific knowledge domain. An individual's performance on the test is compared to a criterion. Test-takers are not compared to each other. A passing score, i.e., the criterion performance, is established by the teacher or an educational institution. Criterion-referenced tests are part and parcel of mastery-based education. Direct observation Psychological assessment can involve the observation of people as they engage in activities. This type of assessment is usually conducted with families in a laboratory or at home. Sometimes the observation can involve children in a classroom or the schoolyard. The purpose may be clinical, such as to establish a pre-intervention baseline of a child's hyperactive or aggressive classroom behaviors or to observe the nature of parent-child interaction in order to understand a relational disorder. Time sampling methods are also part of direct observational research.
The reliability of observers in direct observational research can be evaluated using Cohen's kappa (a short computational sketch appears further below, after the description of projective tests). The Parent-Child Interaction Assessment-II (PCIA) is an example of a direct observation procedure that is used with school-age children and parents. The parents and children are video recorded playing at a make-believe zoo. The Parent-Child Early Relational Assessment is used to study parents and young children and involves a feeding and a puzzle task. The MacArthur Story Stem Battery (MSSB) is used to elicit narratives from children. The Dyadic Parent-Child Interaction Coding System-II tracks the extent to which children follow the commands of parents and vice versa and is well suited to the study of children with oppositional defiant disorder and their parents. Interest inventories Psychological tests include interest inventories. These tests are used primarily for career counseling. Interest inventories include items that ask about the preferred activities and interests of people seeking career counseling. The rationale is that if the individual's activities and interests are similar to the modal pattern of activities and interests of people who are successful in a given occupation, then the chances are high that the individual would find satisfaction in that occupation. A widely used instrument is the Strong Interest Inventory, which is used in career assessment, career counseling, and educational guidance. Neuropsychological tests Neuropsychological tests are designed to assess behaviors that are linked to brain structure and function. An examiner, following strict pre-set procedures, administers the test to a single person in a quiet room largely free of distractions. An example of a widely used neuropsychological test is the Stroop test. Norm-referenced tests Items on norm-referenced tests have been tried out on a norming group, and scores on the test can be classified as high, medium, or low and the gradations in between. These tests allow for the study of individual differences. Scores on norm-referenced achievement tests are associated with percentile ranks vis-à-vis other individuals who are the test-taker's age or grade. Personality tests Personality tests assess constructs that are thought to be the constituents of personality. Examples of personality constructs include traits in the Big Five, such as introversion-extraversion and conscientiousness. Personality constructs are thought to be dimensional. Personality measures are used in research and in the selection of employees. They include self-report and observer-report scales. Examples of norm-referenced personality tests include the NEO-PI, the 16PF Questionnaire, the Occupational Personality Questionnaires, and the Five-Factor Personality Inventory. The International Personality Item Pool (IPIP) scales assess the same traits that the NEO and other personality scales assess. All IPIP scales and items are in the public domain and, therefore, are available free of charge. Projective tests Projective testing originated in the first half of the 1900s. The idea animating projective tests is that the examinee is thought to project hidden aspects of his or her personality, including unconscious content, onto the ambiguous stimuli presented in the test. Examples of projective tests include the Rorschach test, the Thematic Apperception Test, and the Draw-A-Person test. Available evidence, however, suggests that projective tests have limited validity.
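Returning to the observer-reliability point raised under direct observation above, Cohen's kappa corrects raw percent agreement for the agreement expected by chance. The sketch below is a minimal illustration with made-up behavior codes from two hypothetical observers; the function name and data are not from any published coding system.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over categorical codes."""
    assert len(rater_a) == len(rater_b) and rater_a, "raters must code the same intervals"
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Agreement expected if both raters coded independently at their base rates
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)  # undefined if expected == 1

# Hypothetical codes from two observers rating ten classroom intervals
obs_1 = ["on-task", "off-task", "on-task", "on-task", "aggressive",
         "on-task", "off-task", "on-task", "on-task", "on-task"]
obs_2 = ["on-task", "off-task", "on-task", "off-task", "aggressive",
         "on-task", "off-task", "on-task", "on-task", "on-task"]

print(round(cohens_kappa(obs_1, obs_2), 2))  # 0.8
```

Here the observers agree on 9 of 10 intervals (90%), but because most intervals are coded "on-task" by both raters, chance agreement is already 49%, so kappa comes out lower than raw agreement, at about 0.80.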
Psychological symptom scales
Beck Depression Inventory (BDI-II) - there is a fee to use the BDI.
Beck Hopelessness Scale - there is a fee to use the scale.
Bortner Type A Scale
Center for Epidemiologic Studies Depression Scale (CES-D)
Children's Depression Inventory (CDI & CDI-2)
Depression Anxiety Stress Scales (DASS)
General Health Questionnaire (GHQ)
Generalized Anxiety Disorder scale (GAD-7)
Hamilton Rating Scale for Anxiety (HAM-A) - unlike most other psychological symptom scales listed in this section, clinicians use this scale to help evaluate the mental health of people, usually under treatment, who have been diagnosed with an anxiety disorder; it is not used with general population samples.
Hamilton Rating Scale for Depression (HAM-D) - unlike most other psychological symptom scales listed in this section, clinicians use this scale to help evaluate the mental health of people, usually under treatment, who have been diagnosed with a depressive disorder; it is not used with general population samples.
Harburg Anger-In/Anger-Out Scale
Hopkins Symptom Checklist (HSCL)
Hospital Anxiety and Depression Scale (HADS)
Jenkins Activity Survey (JAS) - assesses Type A/B behavior.
Kessler Psychological Distress Scale (K6 and K10, 6- and 10-item symptom scales)
Midtown Study Screening Instrument
Multidimensional Anger Inventory (MAI)
Occupational Depression Inventory
Perceived Stress Scale
Patient Health Questionnaire–nine-item depression scale (PHQ-9)
Penn State Worry Questionnaire
Positive and Negative Affect Schedule (PANAS)
Profile of Mood States (POMS)
Psychiatric Epidemiology Research Interview (PERI)
Psychosomatic Complaints Scale
Psychotic Symptoms Subscale
PTSD Checklist for DSM-5 (PCL-5)
Rosenberg Self-Esteem Scale - although first designed for adolescents, the scale has been extensively used with adults.
UCLA Loneliness Scale
Zung Self-Rating Anxiety Scale
Zung Self-Rating Depression Scale
Public safety employment tests
People in public safety vocations (e.g., fire service, law enforcement, corrections, emergency medical services) are often required to take industrial or organizational psychological tests for initial employment and promotion. The National Firefighter Selection Inventory, the National Criminal Justice Officer Selection Inventory, and the Integrity Inventory are prominent examples of these tests.
Sources of psychological tests
Thousands of psychological tests have been developed. Some were produced by commercial testing companies that charge for their use. Others have been developed by researchers and can be found in the academic research literature. Tests to assess specific psychological constructs can be found by conducting a database search. Some databases are open access, for example, Google Scholar (although many tests found in the Google Scholar database are not free of charge). Other databases are proprietary, for example, PsycINFO, but are available through university libraries and many public libraries (e.g., the Brooklyn Public Library and the New York Public Library). There are online archives available that contain tests on various topics:
APA PsycTests - requires a subscription.
Mental Measurements Yearbook - a non-profit that provides independent reviews of thousands of distinct psychological tests.
Assessment Psychology Online - has links to dozens of tests for clinical assessment.
International Personality Item Pool (IPIP) - contains items to assess more than 100 personality traits, including the Five Factor Model.
Organization of Work: Measurement Tools for Research and Practice - a NIOSH site devoted to occupational health and safety.
Test security
Many psychological and psychoeducational tests are not available to the public. Test publishers put restrictions on who has access to the test. Psychology licensing boards also restrict access to the tests used in licensing psychologists. Test publishers hold that both copyright and professional ethics require them to protect the tests. Publishers sell tests only to people who have proved their educational and professional qualifications. Purchasers are legally bound not to give test answers or the tests themselves to members of the public unless permitted by the publisher. The International Test Commission (ITC), an international association of national psychological societies and test publishers, publishes the International Guidelines for Test Use, which prescribes measures to take to "protect the integrity" of the tests by not publicly describing test techniques and by not "coaching individuals" so that they "might unfairly influence their test performance."
External links
American Psychological Association webpage on testing and assessment
British Psychological Society Psychological Testing Centre
Guidelines of the International Test Commission
International Personality Item Pool, an alternative and free source of items available for research on personality
List of mental health tests - a web directory with links to many assessments related to mental health and substance abuse
Medical history
The medical history, case history, or anamnesis (from Greek: ἀνά, aná, "again", and μνήσις, mnesis, "memory") of a patient is a set of information that physicians collect through medical interviews. It involves the patient, and sometimes people close to them, so as to collect reliable, objective information for reaching a medical diagnosis and proposing effective medical treatments. The medically relevant complaints reported by the patient or others familiar with the patient are referred to as symptoms, in contrast with clinical signs, which are ascertained by direct examination on the part of medical personnel. Most health encounters will result in some form of history being taken. Medical histories vary in their depth and focus. For example, an ambulance paramedic would typically limit their history to important details, such as name, history of presenting complaint, allergies, etc. In contrast, a psychiatric history is frequently lengthy and in depth, as many details about the patient's life are relevant to formulating a management plan for a psychiatric illness. The information obtained in this way, together with the physical examination, enables the physician and other health professionals to form a diagnosis and treatment plan. If a diagnosis cannot be made, a provisional diagnosis may be formulated, and other possibilities (the differential diagnoses) may be added, listed in order of likelihood by convention. The treatment plan may then include further investigations to clarify the diagnosis. The method by which doctors gather information about a patient's past and present medical condition in order to make informed clinical decisions is called the history and physical (the H&P). The history requires that a clinician be skilled in asking appropriate and relevant questions that can provide them with some insight as to what the patient may be experiencing. The standardized format for the history starts with the chief concern (why is the patient in the clinic or hospital?) followed by the history of present illness (to characterize the nature of the symptom(s) or concern(s)), the past medical history, the past surgical history, the family history, the social history, their medications, their allergies, and a review of systems (where a comprehensive inquiry of symptoms potentially affecting the rest of the body is briefly performed to ensure nothing serious has been missed). After all of the important history questions have been asked, a focused physical exam (meaning one that only involves what is relevant to the chief concern) is usually done. Based on the information obtained from the H&P, lab and imaging tests are ordered and medical or surgical treatment is administered as necessary. Process A practitioner typically asks questions to obtain the following information about the patient: Identification and demographics: name, age, height, weight. The "chief complaint (CC)" – the major health problem or concern, and its time course (e.g. chest pain for past 4 hours). History of the present illness (HPI) – details about the complaints, enumerated in the CC (also often called history of presenting complaint or HPC). Past medical history (PMH) (including major illnesses, any previous surgery/operations (sometimes distinguished as past surgical history or PSH), any current ongoing illness, e.g. diabetes). Review of systems (ROS) – systematic questioning about different organ systems. Family diseases – especially those relevant to the patient's chief complaint.
Childhood diseases – this is very important in pediatrics. Social history – including living arrangements, occupation, marital status, number of children, drug use (including tobacco, alcohol, other recreational drug use), recent foreign travel, and exposure to environmental pathogens through recreational activities or pets. Regular and acute medications (including those prescribed by doctors, and others obtained over-the-counter or alternative medicine). Allergies – to medications, food, latex, and other environmental factors. Sexual history, obstetric/gynecological history, and so on, as appropriate. Conclusion and closure. History-taking may be comprehensive history taking (a fixed and extensive set of questions are asked, as practiced only by health care students such as medical students, physician assistant students, or nurse practitioner students) or iterative hypothesis testing (questions are limited and adapted to rule in or out likely diagnoses based on information already obtained, as practiced by busy clinicians). Computerized history-taking could be an integral part of clinical decision support systems. A follow-up procedure is initiated at the onset of the illness to record details of future progress and results after treatment or discharge. This is known as a catamnesis in medical terms. Review of systems Whatever system a specific condition may seem restricted to, all the other systems are usually reviewed in a comprehensive history. The review of systems often includes all the main systems in the body that may provide an opportunity to mention symptoms or concerns that the individual may have failed to mention in the history. Health care professionals may structure the review of systems as follows: Cardiovascular system (chest pain, dyspnea, ankle swelling, palpitations); these are among the most important symptoms, and a brief description may be sought for each positive symptom. Respiratory system (cough, haemoptysis, epistaxis, wheezing, pain localized to the chest that might increase with inspiration or expiration). Gastrointestinal system (change in weight, flatulence and heartburn, dysphagia, odynophagia, hematemesis, melena, hematochezia, abdominal pain, vomiting, bowel habit). Genitourinary system (frequency in urination, pain with micturition (dysuria), urine color, any urethral discharge, altered bladder control like urgency in urination or incontinence, menstruation and sexual activity). Nervous system (headache, loss of consciousness, dizziness and vertigo, speech and related functions like reading and writing skills, and memory). Cranial nerve symptoms (vision (amaurosis), diplopia, facial numbness, deafness, oropharyngeal dysphagia, limb motor or sensory symptoms, and loss of coordination). Endocrine system (weight loss, polydipsia, polyuria, increased appetite (polyphagia), and irritability). Musculoskeletal system (any bone or joint pain accompanied by joint swelling or tenderness, aggravating and relieving factors for the pain, and any positive family history for joint disease). Skin (any skin rash, recent change in cosmetics, and the use of sunscreen creams when exposed to the sun). Inhibiting factors Factors that inhibit taking a proper medical history include a physical inability of the patient to communicate with the physician, such as unconsciousness and communication disorders. In such cases, it may be necessary to record information that can be gained from other people who know the patient.
In medical terms, this is known as a heteroanamnesis, or collateral history, in contrast to a self-reported anamnesis. Medical history taking may also be impaired by various factors impeding a proper doctor-patient relationship, such as transitions to physicians who are unfamiliar to the patient. History taking of issues related to sexual or reproductive medicine may be inhibited by a reluctance of the patient to disclose intimate or uncomfortable information. Even if such an issue is on the patient's mind, they often do not start talking about it without the physician initiating the subject by a specific question about sexual or reproductive health. Some familiarity with the doctor generally makes it easier for patients to talk about intimate issues such as sexual subjects, but for some patients, a very high degree of familiarity may make the patient reluctant to reveal such intimate issues. When visiting a health provider about sexual issues, having both partners of a couple present is often necessary, and is typically a good thing, but may also prevent the disclosure of certain subjects and, according to one report, increases the stress level. Computer-assisted history taking Computer-assisted history taking or computerized history taking systems have been available since the 1960s. However, their use remains variable across healthcare delivery systems. One advantage of using computerized systems as an auxiliary or even primary source of medically related information is that patients may be less susceptible to social desirability bias. For example, patients may be more likely to report that they have engaged in unhealthy lifestyle behaviors. Another advantage of using computerized systems is that they allow easy and high-fidelity portability to a patient's electronic medical record. A further advantage is that such systems save money and paper. One disadvantage of many computerized medical history systems is that they cannot detect non-verbal communication, which may be useful for elucidating anxieties and treatment plans. Another disadvantage is that people may feel less comfortable communicating with a computer as opposed to a human. In a sexual history-taking setting in Australia using a computer-assisted self-interview, 51% of people were very comfortable with it, 35% were comfortable with it, and 14% were either uncomfortable or very uncomfortable with it. The evidence for or against computer-assisted history taking systems is sparse. As of 2011, there were no randomized controlled trials comparing computer-assisted with traditional oral-and-written family history taking for identifying patients with an elevated risk of developing type 2 diabetes mellitus. In 2021, a substudy of a large prospective cohort trial showed that a majority (70%) of patients with acute chest pain could, with computerized history taking, provide sufficient data for risk stratification with a well-established risk score (HEART score).
See also
Genogram
Medical record
Medicine
Physical examination
Psychoanalysis (Freud uses the term anamnesis to describe neurotics' recounting of their symptoms)
Educational psychologist
An educational psychologist is a psychologist whose differentiating functions may include diagnostic and psycho-educational assessment, psychological counseling in educational communities (students, teachers, parents, and academic authorities), community-type psycho-educational intervention, and mediation, coordination, and referral to other professionals, at all levels of the educational system. Many countries use this term to signify those who provide services to students, their teachers, and families, while other countries use this term to signify academic expertise in teaching Educational Psychology. Specific facts Psychology is a well-developed discipline that allows different specializations, which include clinical and health psychology, work and organizational psychology, educational psychology, etc. What differentiates an educational psychologist from other psychologists or specialists is constituted by an academic triangle whose vertexes are represented by three categories: teachers, students, and curricula. The use of the plural in these three cases carries two meanings: the traditional or official one, and a more general one derived from our information and knowledge society. The plural also indicates that nowadays we can no longer consider the average student or teacher, or a closed curriculum, but the enormous variety found in our students, teachers, and curricula. The triangle vertexes are connected by two-directional arrows, allowing four-fold typologies instead of the traditional two-way relationships (e.g., teacher-student). In this way, we can find, in different educational contexts, groups of good teachers and students (excellent teaching/learning processes and products), groups of good teachers but bad students, and groups of bad teachers and good students, the latter two producing lower levels of academic achievement. In addition, we can find groups of bad teachers and bad students (school failure). This specific work of an educational psychologist takes place in different contexts: micro-, meso- and macro-systems. Microsystems refer to family contexts, where atmosphere, hidden curriculum, and the expectations and behaviors of all family members determine, to a large extent, the educational development of each student. The term mesosystem refers to the whole variety of contexts found in educational institutions, where variables such as geographical location, institutional marketing, or the type of teachers and students can influence the academic results of students. The macrosystem has a much more general and global nature, leading us, for example, to consider the influence that different societies or countries have on final educational outcomes. One illustrative example of this level can be the analyses carried out on data gathered by the PISA reports. This approach would be the essence of educational psychology versus school psychology for many U.S. educational researchers and for Division 15 of the APA. Specific functions There are four specific functions that are the essence of educational psychology. These are evaluation, psychological counseling, communitarian interventions, and referral to other professionals. Evaluation involves collecting information, in a valid and reliable way, about the three target groups of the triangle described above (in their respective contexts): teachers, students, and curricula (not to be confused with curriculum vitae). The most noteworthy function is, without a doubt, formal (rather than informal) assessment.
Evaluation is divided into at least two main types: diagnosis (the detection of dysfunctions such as physical, sensory and intellectual impairments, dyslexia, attention-deficit/hyperactivity disorder, pervasive developmental disorders or autism spectrum disorders) and psycho-educational evaluation (the detection of curriculum difficulties, poor school atmosphere, family problems, etc.). Evaluation implies detection and, thanks to this, prevention. A second function, also very relevant, is psychological counseling. This must be directed to students, in their various dimensions (intellectual, obviously, but also social, affective, and professional); to parents, as 'paraprofessionals' who may implement programs, selected or developed by educational psychologists, to solve their child's problems; to teachers, who will be offered psycho-educational support to face the psychological difficulties that may arise when implementing and adapting curricula to the diversity shown by students; and to academic authorities, who will be helped in their decision-making regarding teaching and administrative duties (providing necessary support for students with specific educational needs, decisions about promotion to the next level, and so on). A third function is based on communitarian interventions, with three main facets: corrective, preventative, and optimizing interventions. If disruptive behavior occurs in particular moments and contexts, then a corrective intervention is required. If the aim is school violence reduction, then tertiary preventive intervention programs are needed. If an early diagnosis of learning difficulties is carried out, then the psychologist has undertaken secondary prevention. If the aim is to use psycho-educational programs to prevent future school failure, then a primary preventative intervention program is put into practice. The complement to all of these interventions is constituted by a series of optimizing activities, meant for the academic, professional, social, family, and personal improvement of all agents in an educational community, especially learners. A fourth function, or specific activity, is the referral of those with dysfunctions to other professionals, following a previous diagnostic evaluation, with the aim of coordinating future treatment implementation. This coordination will take place with parents, teachers, and other professionals, promoting collaboration among all educational agents in order to reach the fastest and best case resolution. This second triangle represents the essential components of school psychology for some European researchers and for Division 16 of the APA. Academic requirements A specific doctoral degree (a master's degree in Scotland) is now generally required for the professional preparation of educational psychologists in the UK. In this Doctorate in Educational Psychology, the main course, which prepares educational psychologists to carry out diagnostic and psycho-educational assessment, psychological counseling to educational communities, and all types of communitarian interventions (corrective, preventive, and optimizing), is essential. Trainees also complete external professional practice (where the specific coordination, evaluation, counseling, and intervention functions are put into practice) on placement in local authorities, as well as a final thesis.
Equally, there are a series of theoretical areas that, due to their relevance in teaching/learning contexts, should be included, such as: classroom diversity, drug-dependency prevention, developmental disorders, learning difficulties, new technologies applied to educational contexts, and data analysis and interpretation. In sum, taking into account all of this, perhaps educational psychologists will be able to adequately meet the demands found in different educational institutions. The following qualifications are required: an undergraduate degree in psychology (or an approved postgraduate conversion course which confers the BPS Graduate Basis for Registration) and a BPS-accredited Doctorate in Educational Psychology (3 years), or, for Scotland only, an accredited master's degree in Educational Psychology. Whilst teaching experience is relevant, it is no longer an entry requirement. At least one year's full-time experience working with children in educational, childcare, or community settings is required, and for some courses this may be two years' experience. To use the term Educational Psychologist in the UK, one will need to be registered with the Health and Care Professions Council (HCPC), which involves completing a course (Doctorate or Masters) approved by the HCPC. In the United States In the most basic sense of standards for education requirements in the United States, an educational psychologist needs a bachelor's degree, followed by a master's degree, and commonly a PhD or a PsyD in Educational Psychology. Specifically in California, an educational psychologist candidate (commonly referred to as an LEP or Licensed Educational Psychologist) must have a minimum of a master's degree in psychology or a related field in educational psychology. This degree must be coupled with a minimum of three years of experience, including two years as a credentialed school psychologist and one year of supervised professional experience in an accredited school psychology program. After completing these requirements, a candidate will then take an LEP examination to determine if the applicant will be approved. These requirements are widely accepted by the Board of Behavioral Sciences (BBS) and are considered the common standard. States may have varying standards, but the aforementioned standards are a commonality when working in a school setting. Another route that can be followed is in the research field. It involves many of the same standards without the direct link of being in a school setting. Those in a research setting are typically employed through a university and do research based on their own and others' findings. They may also teach at the university in their respective field. Handbooks, application forms, and board reviews can be found at various websites: http://apadiv15.org/wp-content/uploads/2014/01/Division15Bylaws2012.pdf http://www.bbs.ca.gov/pdf/forms/lep/lepapp.pdf http://www.caspwebcasts.org/new/index.php?option=com_content&view=article&id=325&Itemid=140 Job availability/outlook and salary The average salary of an educational psychologist varies depending on where the psychologist practices. In a school setting, the professional can expect to make around $68,000 a year; however, these professionals are commonly school psychologists who have a different background than educational psychologists. An educational psychologist in the research and development field could expect to make around $84,000 per year.
Both of these averages could be considered inflated, as another source lists the average income of an educational psychologist at around $57,000 per year. However, most estimates seem to sit in the $67,000-per-year range, which makes the lower figure look modest by comparison. The latest statistics released in 2010 by the Bureau of Labor Statistics place the median annual salary at $72,540 – showing an increase over a four-year period – compared to the median household income of the United States, which was about $51,000 at the time. Educational psychologists make approximately 40% more than the average American, making it an advantageous field of study. The job outlook in the field of educational psychology is considered good. By national estimates (US), growth in the field ranges from 11 to 15% between 2006 and 2022. In a report released in 2006, the rate of growth was listed as 15% from 2006 to 2016, and a separate report put the growth percentage at a more modest 11% from 2012 to 2022. Compared with most job-outlook growth percentages of the time, educational psychology had the highest rate within the psychology field and was also considered among the highest across all occupations at the time of the 2006 report's release.
External links
British Psychological Society
Division 15 of the American Psychological Association
Division 16 of the American Psychological Association
Journal of Educational Psychology
National Association of Principal Educational Psychologists
National Educational Psychological Service
Northern Arizona University Educational Psychology program
Standards for Educational and Psychological Testing
Emic and etic
In anthropology, folkloristics, linguistics, and the social and behavioral sciences, emic and etic refer to two kinds of field research done and viewpoints obtained. The "emic" approach is an insider's perspective, which looks at the beliefs, values, and practices of a particular culture from the perspective of the people who live within that culture. This approach aims to understand the cultural meaning and significance of a particular behavior or practice, as it is understood by the people who engage in it. The "etic" approach, on the other hand, is an outsider's perspective, which looks at a culture from the perspective of an outside observer or researcher. This approach tends to focus on the observable behaviors and practices of a culture, and aims to understand them in terms of their functional or evolutionary significance. The etic approach often involves the use of standardized measures and frameworks to compare different cultures and may involve the use of concepts and theories from other disciplines, such as psychology or sociology. The emic and etic approaches each have their own strengths and limitations, and each can be useful in understanding different aspects of culture and behavior. Some anthropologists argue that a combination of both approaches is necessary for a complete understanding of a culture, while others argue that one approach may be more appropriate depending on the specific research question being addressed. Definitions "The emic approach investigates how local people think...". How they perceive and categorize the world, their rules for behavior, what has meaning for them, and how they imagine and explain things. "The etic (scientist-oriented) approach shifts the focus from local observations, categories, explanations, and interpretations to those of the anthropologist. The etic approach realizes that members of a culture often are too involved in what they are doing... to interpret their cultures impartially. When using the etic approach, the ethnographer emphasizes what he or she considers important." Although emics and etics are sometimes regarded as inherently in conflict and one can be preferred to the exclusion of the other, the complementarity of emic and etic approaches to anthropological research has been widely recognized, especially in the areas of interest concerning the characteristics of human nature as well as the form and function of human social systems. Emic and etic approaches of understanding behavior and personality fall under the study of cultural anthropology. Cultural anthropology states that people are shaped by their cultures and their subcultures, and we must account for this in the study of personality. One way is looking at things through an emic approach. This approach "is culture specific because it focuses on a single culture and it is understood on its own terms." As explained below, the term "emic" originated from the specific linguistic term "phonemic", from phoneme, which is a language-specific way of abstracting speech sounds. An 'emic' account is a description of behavior or a belief in terms meaningful (consciously or unconsciously) to the actor; that is, an emic account comes from a person within the culture. Almost anything from within a culture can provide an emic account. 
An 'etic' account is a description of a behavior or belief by a social analyst or scientific observer (a student or scholar of anthropology or sociology, for example), in terms that can be applied across cultures; that is, an etic account attempts to be 'culturally neutral', limiting any ethnocentric, political or cultural bias or alienation by the observer. When these two approaches are combined, the "richest" view of a culture or society can be obtained. On its own, an emic approach would struggle with applying overarching values to a single culture. The etic approach is helpful in enabling researchers to see more than one aspect of one culture, and in applying observations to cultures around the world. History The terms were coined in 1954 by linguist Kenneth Pike, who argued that the tools developed for describing linguistic behaviors could be adapted to the description of any human social behavior. As Pike noted, social scientists have long debated whether their knowledge is objective or subjective. Pike's innovation was to turn away from an epistemological debate, and turn instead to a methodological solution. Emic and etic are derived from the linguistic terms phonemic and phonetic, respectively, where a phone is a distinct speech sound or gesture, regardless of whether the exact sound is critical to the meanings of words, whereas a phoneme is a speech sound in a given language that, if swapped with another phoneme, could change one word to another. The possibility of a truly objective description was discounted by Pike himself in his original work; he proposed the emic-etic dichotomy in anthropology as a way around philosophic issues about the very nature of objectivity. The terms were also championed by anthropologists Ward Goodenough and Marvin Harris with slightly different connotations from those used by Pike. Goodenough was primarily interested in understanding the culturally specific meaning of specific beliefs and practices; Harris was primarily interested in explaining human behavior. Pike, Harris, and others have argued that cultural "insiders" and "outsiders" are equally capable of producing emic and etic accounts of their culture. Some researchers use "etic" to refer to objective or outsider accounts, and "emic" to refer to subjective or insider accounts. Margaret Mead was an anthropologist who studied the patterns of adolescence in Samoa. She discovered that the difficulties and the transitions that adolescents faced are culturally influenced. The hormones that are released during puberty can be defined using an etic framework, because adolescents globally have the same hormones being secreted. However, Mead concluded that how adolescents respond to these hormones is greatly influenced by their cultural norms. Through her studies, Mead found that simple classifications about behaviors and personality could not be used because people's cultures influenced their behaviors in such a radical way. Her studies helped create an emic approach to understanding behaviors and personality. From her research, she deduced that culture has a significant impact in shaping an individual's personality. Carl Jung, a Swiss psychoanalyst, is a researcher who took an etic approach in his studies. Jung studied mythology, religion, ancient rituals, and dreams, leading him to believe that there are archetypes that can be identified and used to categorize people's behaviors.
Archetypes are universal structures of the collective unconscious that refer to the inherent way people are predisposed to perceive and process information. The main archetypes that Jung studied were the persona (how people choose to present themselves to the world), the anima and animus (the part of a person that experiences the world through viewing the opposite sex and that guides the selection of a romantic partner), and the shadow (the dark side of the personality, reflecting people's concept of evil; well-adjusted people must integrate both the good and bad parts of themselves). Jung looked at the role of the mother and deduced that all people have mothers and see their mothers in a similar way; they offer nurture and comfort. His studies also suggest that "infants have evolved to suck milk from the breast, it is also the case that all children have inborn tendencies to react in certain ways." This way of looking at the mother is an etic way of applying a concept cross-culturally and universally. Importance as regards personality Emic and etic approaches are important to understanding personality because problems can arise "when concepts, measures, and methods are carelessly transferred to other cultures in attempts to make cross-cultural generalizations about personality." It is hard to apply certain generalizations of behavior to people who are so diverse and culturally different. One example of this is the F-scale (Macleod). The F-scale, which was created by Theodor Adorno, is used to measure authoritarian personality, which can, in turn, be used to predict prejudiced behaviors. This test, when applied to Americans, accurately predicted prejudices towards black individuals. However, when a study using the F-scale was conducted in South Africa (Pettigrew and Friedman), the results did not predict prejudices towards black individuals. The study used emic approaches by conducting interviews with the locals and etic approaches by giving participants generalized personality tests.
See also
Exonym and endonym
Other explorations of the differences between reality and humans' models of it: Blind men and an elephant, Emic and etic units, Internalism and externalism, Map–territory relation
External links
Emic and Etic Standpoints for the Description of Behavior, chapter 2 in Language in Relation to a Unified Theory of the Structure of Human Behavior, vol. 2, by Kenneth Pike (published in 1954 by Summer Institute of Linguistics)
Socialization
In sociology, socialization (or socialisation; see spelling differences) is the process of internalizing the norms and ideologies of society. Socialization encompasses both learning and teaching and is thus "the means by which social and cultural continuity are attained". Socialization is strongly connected to developmental psychology. Humans need social experiences to learn their culture and to survive. Socialization essentially represents the whole process of learning throughout the life course and is a central influence on the behavior, beliefs, and actions of adults as well as of children. Socialization may lead to desirable outcomes, sometimes labeled "moral", as regards the society where it occurs. Individual views are influenced by the society's consensus and usually tend toward what that society finds acceptable or "normal". Socialization provides only a partial explanation for human beliefs and behaviors; agents are not blank slates predetermined by their environment, and scientific research provides evidence that people are shaped by both social influences and genes. Genetic studies have shown that a person's environment interacts with their genotype to influence behavioral outcomes. Socialization is, in short, the process by which individuals learn their own society's culture. History Notions of society and the state of nature have existed for centuries. In its earliest usages, socialization was simply the act of socializing or another word for socialism. Socialization as a concept originated concurrently with sociology, as sociology was defined as the treatment of "the specifically social, the process and forms of socialization, as such, in contrast to the interests and contents which find expression in socialization". In particular, socialization consisted of the formation and development of social groups, and also the development of a social state of mind in the individuals who associate. Socialization is thus both a cause and an effect of association. The term was relatively uncommon before 1940, but became popular after World War II, appearing in dictionaries and scholarly works such as the theory of Talcott Parsons. Stages of moral development Lawrence Kohlberg studied moral reasoning and developed a theory of how individuals come to reason about situations as right or wrong. The first stage is the pre-conventional stage, where people (typically children) experience the world in terms of pain and pleasure, with their moral decisions solely reflecting this experience. Second, the conventional stage (typical for adolescents and adults) is characterized by an acceptance of society's conventions concerning right and wrong, even when there are no consequences for obedience or disobedience. Finally, the post-conventional stage (more rarely achieved) occurs if a person moves beyond society's norms to consider abstract ethical principles when making moral decisions. Stages of psychosocial development Erik H. Erikson (1902–1994) explained the challenges throughout the life course. The first stage in the life course is infancy, where babies learn trust and mistrust. The second stage is toddlerhood, where children around the age of two struggle with the challenge of autonomy versus doubt. In stage three, preschool, children struggle to understand the difference between initiative and guilt. In stage four, pre-adolescence, children learn about industriousness and inferiority. In the fifth stage, adolescence, teenagers experience the challenge of gaining identity versus confusion.
The sixth stage, young adulthood, is when young people gain insight into life when dealing with the challenge of intimacy and isolation. In stage seven, or middle adulthood, people experience the challenge of trying to make a difference (versus self-absorption). In the final stage, stage eight or old age, people are still learning about the challenge of integrity and despair. This concept has been further developed by Klaus Hurrelmann and Gudrun Quenzel using the dynamic model of "developmental tasks". Behaviorism George Herbert Mead (1863–1931) developed a theory of social behaviorism to explain how social experience develops an individual's self-concept. Mead's central concept is the self: it is composed of self-awareness and self-image. Mead claimed that the self is not present at birth; rather, it is developed with social experience. Since social experience is the exchange of symbols, people tend to find meaning in every action. Seeking meaning leads us to imagine the intention of others. Understanding intention requires imagining the situation from the other's point of view. In effect, others are a mirror in which we can see ourselves. Charles Horton Cooley (1864–1929) coined the term looking glass self, which means self-image based on how we think others see us. According to Mead, the key to developing the self is learning to take the role of the other. With limited social experience, infants can only develop a sense of identity through imitation. Gradually children learn to take the roles of several others. The final stage is the generalized other, which refers to widespread cultural norms and values we use as a reference for evaluating others. Contradictory evidence to behaviorism Behaviorism claims that when infants are born they lack social experience or a self. The social pre-wiring hypothesis, on the other hand, provides evidence from scientific study that social behavior is partly inherited and can influence infants and even foetuses. Being "wired to be social" means that infants are not taught that they are social beings, but are born as prepared social beings. The social pre-wiring hypothesis, informally referred to as being "wired to be social", concerns the ontogeny of social interaction. The theory questions whether there is a propensity to socially oriented action already present before birth. Research in the theory concludes that newborns are born into the world with a unique genetic wiring to be social. Circumstantial evidence supporting the social pre-wiring hypothesis can be revealed when examining newborns' behavior. Newborns, not even hours after birth, have been found to display a preparedness for social interaction. This preparedness is expressed in ways such as their imitation of facial gestures. This observed behavior cannot be attributed to any current form of socialization or social construction. Rather, newborns most likely inherit to some extent social behavior and identity through genetics. Principal evidence of this theory is uncovered by examining twin pregnancies. The main argument is that if there are social behaviors that are inherited and developed before birth, then one should expect twin foetuses to engage in some form of social interaction before they are born. Thus, ten foetuses were analyzed over a period of time using ultrasound techniques. Kinematic analysis showed that the twin foetuses interacted with each other for longer periods and more often as the pregnancies went on.
Researchers were able to conclude that the performance of movements between the co-twins was not accidental but specifically aimed. The social pre-wiring hypothesis was proved correct, "The central advance of this study is the demonstration that 'social actions' are already performed in the second trimester of gestation. Starting from the 14th week of gestation twin foetuses plan and execute movements specifically aimed at the co-twin. These findings force us to predate the emergence of social behavior: when the context enables it, as in the case of twin foetuses, other-directed actions are not only possible but predominant over self-directed actions." Types Primary socialization Primary socialization occurs when a child learns the attitudes, values, and actions appropriate to individuals as members of a particular culture. Primary socialization for a child is very important because it sets the groundwork for all future socialization. It is mainly influenced by immediate family and friends. For example, if a child's mother expresses a discriminatory opinion about a minority or majority group, then that child may think this behavior is acceptable and could continue to have this opinion about that minority or majority group. Secondary socialization Secondary socialization refers to the process of learning what is the appropriate behavior as a member of a smaller group within the larger society. Basically, it involves the behavioral patterns reinforced by socializing agents of society. Secondary socialization takes place outside the home. It is where children and adults learn how to act in a way that is appropriate for the situations they are in. Schools require very different behavior from the home, and children must act according to new rules. New teachers have to act in a way that is different from pupils and learn the new rules from people around them. Secondary socialization is usually associated with teenagers and adults and involves smaller changes than those occurring in primary socialization. Examples of secondary socialization may include entering a new profession or relocating to a new environment or society. Anticipatory socialization Anticipatory socialization refers to the processes of socialization in which a person "rehearses" for future positions, occupations, and social relationships. For example, a couple might move in together before getting married in order to try out, or anticipate, what living together will be like. Research by Kenneth J. Levine and Cynthia A. Hoffner identifies parents as the main source of anticipatory socialization in regard to jobs and careers. Resocialization Resocialization refers to the process of discarding former behavior-patterns and reflexes while accepting new ones as part of a life transition. This can occur throughout the human life-span. Resocialization can be an intense experience, with individuals experiencing a sharp break with their past, as well as a need to learn and be exposed to radically different norms and values. One common example involves resocialization through a total institution, or "a setting in which people are isolated from the rest of society and manipulated by an administrative staff". Resocialization via total institutions involves a two step process: 1) the staff work to root out a new inmate's individual identity; and 2) the staff attempt to create for the inmate a new identity. 
Other examples include the experiences of a young person leaving home to join the military, or of a religious convert internalizing the beliefs and rituals of a new faith. Another example would be the process by which a transsexual person learns to function socially in a dramatically altered gender role. Organizational socialization Organizational socialization is the process whereby an employee learns the knowledge and skills necessary to assume his or her role in an organization. As newcomers become socialized, they learn about the organization and its history, values, jargon, culture, and procedures. Acquired knowledge about new employees' future work environment affects the way they are able to apply their skills and abilities to their jobs. How actively engaged the employees are in pursuing knowledge affects their socialization process. New employees also learn about their work group, the specific people they will work with on a daily basis, their own role in the organization, the skills needed to do their job, and both formal procedures and informal norms. Socialization functions as a control system in that newcomers learn to internalize and obey organizational values and practices. Group socialization Group socialization is the theory that an individual's peer groups, rather than parental figures, become the primary influence on personality and behavior in adulthood. According to this theory, parental behavior and the home environment have either no effect on the social development of children, or the effect varies significantly between children. Adolescents spend more time with peers than with parents. Therefore, peer groups have stronger correlations with personality development than parental figures do. For example, twin brothers with an identical genetic heritage will differ in personality because they have different groups of friends, not necessarily because their parents raised them differently. Behavioral genetics suggests that up to fifty percent of the variance in adult personality is due to genetic differences. The environment in which a child is raised accounts for only approximately ten percent of the variance in an adult's personality. As much as twenty percent of the variance is due to measurement error. This suggests that only a very small part of an adult's personality is influenced by factors which parents control (i.e. the home environment). Judith Rich Harris, who proposed the group socialization theory, grants that while siblings do not have identical experiences in the home environment (making it difficult to associate a definite figure to the variance of personality due to home environments), the variance found by current methods is so low that researchers should look elsewhere to try to account for the remaining variance. Harris also states that developing long-term personality characteristics away from the home environment would be evolutionarily beneficial because future success is more likely to depend on interactions with peers than on interactions with parents and siblings. Also, because of already existing genetic similarities with parents, developing personalities outside of childhood home environments would further diversify individuals, increasing their evolutionary success. Stages Individuals and groups change their evaluations of and commitments to each other over time. There is a predictable sequence of stages that occurs as an individual transitions through a group: investigation, socialization, maintenance, resocialization, and remembrance.
During each stage, the individual and the group evaluate each other, which leads to an increase or decrease in commitment to socialization. This socialization pushes the individual from prospective to new, full, marginal, and ex member. Stage 1: Investigation This stage is marked by a cautious search for information. The individual compares groups in order to determine which one will fulfill their needs (reconnaissance), while the group estimates the value of the potential member (recruitment). The end of this stage is marked by entry to the group, whereby the group asks the individual to join and they accept the offer. Stage 2: Socialization Now that the individual has moved from a prospective member to a new member, the recruit must accept the group's culture. At this stage, the individual accepts the group's norms, values, and perspectives (assimilation), and the group may adapt to fit the new member's needs (accommodation). The acceptance transition-point is then reached and the individual becomes a full member. However, this transition can be delayed if the individual or the group reacts negatively. For example, the individual may react cautiously or misinterpret other members' reactions in the belief that they will be treated differently as a newcomer. Stage 3: Maintenance During this stage, the individual and the group negotiate what contribution is expected of members (role negotiation). While many members remain in this stage until the end of their membership, some individuals may become dissatisfied with their role in the group or fail to meet the group's expectations (divergence). Stage 4: Resocialization If the divergence point is reached, the former full member takes on the role of a marginal member and must be resocialized. There are two possible outcomes of resocialization: the parties resolve their differences and the individual becomes a full member again (convergence), or the group and the individual part ways via expulsion or voluntary exit. Stage 5: Remembrance In this stage, former members reminisce about their memories of the group and make sense of their recent departure. If the group reaches a consensus on their reasons for departure, conclusions about the overall experience of the group become part of the group's tradition. Gender socialization Henslin contends that "an important part of socialization is the learning of culturally defined gender roles". Gender socialization refers to the learning of behavior and attitudes considered appropriate for a given sex: boys learn to be boys and girls learn to be girls. This "learning" happens by way of many different agents of socialization. The behavior that is seen to be appropriate for each gender is largely determined by societal, cultural, and economic values in a given society. Gender socialization can therefore vary considerably among societies with different values. The family is certainly important in reinforcing gender roles, but so are groups - including friends, peers, school, work, and the mass media. Social groups reinforce gender roles through "countless subtle and not so subtle ways". In peer-group activities, stereotypic gender-roles may also be rejected, renegotiated, or artfully exploited for a variety of purposes. Carol Gilligan compared the moral development of girls and boys in her theory of gender and moral development. She claimed that boys have a justice perspective - meaning that they rely on formal rules to define right and wrong. 
Girls, on the other hand, have a care-and-responsibility perspective, where personal relationships are considered when judging a situation. Gilligan also studied the effect of gender on self-esteem. She claimed that society's socialization of females is the reason why girls' self-esteem diminishes as they grow older. Girls struggle to regain their personal strength when moving through adolescence as they have fewer female teachers and most authority figures are men. As parents are present in a child's development from the beginning, their influence in a child's early socialization is very important, especially in regard to gender roles. Sociologists have identified four ways in which parents socialize gender roles in their children: Shaping gender related attributes through toys and activities, differing their interaction with children based on the sex of the child, serving as primary gender models, and communicating gender ideals and expectations. Sociologist of gender R.W. Connell contends that socialization theory is "inadequate" for explaining gender, because it presumes a largely consensual process except for a few "deviants", when really most children revolt against pressures to be conventionally gendered; because it cannot explain contradictory "scripts" that come from different socialization agents in the same society, and because it does not account for conflict between the different levels of an individual's gender (and general) identity. Racial socialization Racial socialization, or racial-ethnic socialization, has been defined as "the developmental processes by which children acquire the behaviors, perceptions, values, and attitudes of an ethnic group, and come to see themselves and others as members of the group". The existing literature conceptualizes racial socialization as having multiple dimensions. Researchers have identified five dimensions that commonly appear in the racial socialization literature: cultural socialization, preparation for bias, promotion of mistrust, egalitarianism, and other. Cultural socialization, sometimes referred to as "pride development", refers to parenting practices that teach children about their racial history or heritage. Preparation for bias refers to parenting practices focused on preparing children to be aware of, and cope with, discrimination. Promotion of mistrust refers to the parenting practices of socializing children to be wary of people from other races. Egalitarianism refers to socializing children with the belief that all people are equal and should be treated with common humanity. In the United States, white people are socialized to perceive race as a zero-sum game and a black-white binary. Oppression socialization Oppression socialization refers to the process by which "individuals develop understandings of power and political structure, particularly as these inform perceptions of identity, power, and opportunity relative to gender, racialized group membership, and sexuality". This action is a form of political socialization in its relation to power and the persistent compliance of the disadvantaged with their oppression using limited "overt coercion". Language socialization Based on comparative research in different societies, and focusing on the role of language in child development, linguistic anthropologists Elinor Ochs and Bambi Schieffelin have developed the theory of language socialization. 
They discovered that the processes of enculturation and socialization do not occur apart from the process of language acquisition, but that children acquire language and culture together in what amounts to an integrated process. Members of all societies socialize children both to and through the use of language; in acquiring competence in a language, the novice is by the same token socialized into the categories and norms of the culture, while the culture, in turn, provides the norms of the use of language. Planned socialization Planned socialization occurs when other people take actions designed to teach or train others. This type of socialization can take on many forms and can occur at any point from infancy onward. Natural socialization Natural socialization occurs when infants and youngsters explore, play and discover the social world around them. Natural socialization is easily seen when looking at the young of almost any mammalian species (and some birds). On the other hand, planned socialization is mostly a human phenomenon; all through history, people have made plans for teaching or training others. Both natural and planned socialization can have good and bad qualities: it is useful to learn the best features of both natural and planned socialization in order to incorporate them into life in a meaningful way. Political socialization Socialization shapes the economic, social, and political development of a country. The balance struck between nature and nurture also helps determine whether socialization benefits or harms a society. Political socialization is described as "the long developmental process by which an infant (even an adult) citizen learns, imbibes and ultimately internalizes the political culture (core political values, beliefs, norms and ideology) of his political system in order to make him a more informed and effective political participant." A society's political culture is inculcated in its citizens and passed down from one generation to the next as part of the political socialization process. Agents of socialization are thus the people, organizations, or institutions that shape how individuals perceive themselves, how they behave, and their broader orientations. In contemporary democracies, political parties are the main forces behind political socialization. Socialization also facilitates business, trade, and foreign investment globally, and it eases and improves the building of technology by connecting interaction across services and media. To lead a country to a higher level of development and to construct a decent, democratic society for nation-building, citizens must cultivate sound morals, ethics, and values, preserve human rights, and exercise sound judgment. Through socialization, developing nations can also transfer agricultural technology and machinery such as tractors, harvesters, and agrochemicals to enhance the agricultural sector of their economies. Positive socialization Positive socialization is the type of social learning that is based on pleasurable and exciting experiences. Individual humans tend to like the people who fill their social learning processes with positive motivation, loving care, and rewarding opportunities. Positive socialization occurs when desired behaviors are reinforced with a reward, encouraging the individual to continue exhibiting similar behaviors in the future.
Negative socialization Negative socialization occurs when socialization agents use punishment, harsh criticisms, or anger to try to "teach us a lesson"; and often we come to dislike both negative socialization and the people who impose it on us. There are many mixes of positive and negative socialization, and the more positive social learning experiences we have, the happier we tend to be—especially if we are able to learn useful information that helps us cope well with the challenges of life. A high ratio of negative to positive socialization can make a person unhappy, leading to defeated or pessimistic feelings about life. Bullying can exemplify negative socialization. Institutions In the social sciences, institutions are the structures and mechanisms of social order and cooperation governing the behavior of individuals within a given human collectivity. Institutions are identified with a social purpose and permanence, transcending individual human lives and intentions, and with the making and enforcing of rules governing cooperative human behavior. Productive processing of reality From the late 1980s, sociological and psychological theories have been connected with the term socialization. One example of this connection is the theory of Klaus Hurrelmann. In his book Social Structure and Personality Development, he develops the model of productive processing of reality. The core idea is that socialization refers to an individual's personality development. It is the result of the productive processing of interior and exterior realities. Bodily and mental qualities and traits constitute a person's inner reality; the circumstances of the social and physical environment embody the external reality. Reality processing is productive because human beings actively grapple with their lives and attempt to cope with the attendant developmental tasks. The success of such a process depends on the personal and social resources available. Incorporated within all developmental tasks is the necessity to reconcile personal individuation and social integration and so secure the "I-dentity". The process of productive processing of reality is an enduring process throughout the life course. Oversocialization The problem of order, or Hobbesian problem, questions the existence of social orders and asks if it is possible to oppose them. Émile Durkheim viewed society as an external force controlling individuals through the imposition of sanctions and codes of law. However, constraints and sanctions also arise internally as feelings of guilt or anxiety. See also References Further reading Bayley, Robert; Schecter, Sandra R. (2003). Multilingual Matters. Duff, Patricia A.; Hornberger, Nancy H. (2010). Language Socialization: Encyclopedia of Language and Education, Volume 8. Springer. Kramsch, Claire (2003). Language Acquisition and Language Socialization: Ecological Perspectives – Advances in Applied Linguistics. Continuum International Publishing Group. McQuail, Dennis (2005). McQuail's Mass Communication Theory: Fifth Edition. London: Sage. Mehan, Hugh (1991). Sociological Foundations Supporting the Study of Cultural Diversity. National Center for Research on Cultural Diversity and Second Language Learning. White, Graham (1977). Socialisation. London: Longman. Conformity Deviance (sociology) Sociological terminology Majority–minority relations
Archetype
The concept of an archetype appears in areas relating to behavior, historical psychology, philosophy and literary analysis. An archetype can be any of the following: a statement, pattern of behavior, prototype, "first" form, or a main model that other statements, patterns of behavior, and objects copy, emulate, or "merge" into. Informal synonyms frequently used for this definition include "standard example", "basic example", and the longer-form "archetypal example"; mathematical archetypes often appear as "canonical examples". the Platonic concept of pure form, believed to embody the fundamental characteristics of a thing. the Jungian psychology concept of an inherited unconscious predisposition, behavioral trait or tendency ("instinct") shared among the members of the species; like any behavioral trait, the tendency comes into being by way of patterns of thought, images, affects, or drives (pulsions) characterized by a qualitative likeness to distinct narrative constructs; unlike personality traits, many of the archetype's fundamental characteristics are shared in common with the collective and are not predominantly defined by the individual's representation of them; and the tendency to utilize archetypal representations is postulated to arise from the evolutionary drive to establish specific cues corresponding with the historical evolutionary environment to better adapt to it. Such evolutionary drives include survival and thriving in the physical environment, the relating function, and the acquisition of knowledge. The archetype is communicated graphically as archetypal "figures". a constantly-recurring symbol or motif in literature, painting, or mythology. This definition refers to the recurrence of characters or ideas sharing similar traits throughout various, seemingly unrelated cases in classic storytelling, media, etc. This usage of the term draws from both comparative anthropology and from Jungian archetypal theory. Archetypes are also very close analogies to instincts, in that, long before any consciousness develops, it is the impersonal and inherited traits of human beings that present and motivate human behavior. They also continue to influence feelings and behavior even after some degree of consciousness has developed. Etymology The word archetype, "original pattern from which copies are made," first entered into English usage in the 1540s. It derives from the Latin noun archetypum, a latinization of the Greek noun archétypon (ἀρχέτυπον), whose adjective form is archétypos (ἀρχέτυπος), meaning "first-molded", a compound of archḗ (ἀρχή), "beginning, origin", and týpos (τύπος), which can mean, among other things, "pattern", "model", or "type". It thus referred to the beginning or origin of the pattern, model, or type. Archetypes in literature Function Usage of archetypes in specific pieces of writing is a holistic approach, which can help the writing win universal acceptance. This is because readers can relate to and identify with the characters and the situation, both socially and culturally. By deploying common archetypes contextually, a writer aims to impart realism to their work. According to many literary critics, archetypes have a standard and recurring depiction in a particular human culture or the whole human race that ultimately lays the groundwork for, and can shape, the whole structure of a literary work.
Story archetypes Christopher Booker, author of The Seven Basic Plots: Why We Tell Stories, argues that the following basic archetypes underlie all stories: Overcoming the Monster Rags to Riches The Quest Voyage and Return Comedy Tragedy Rebirth These themes coincide with the characters of Jung's archetypes. Literary criticism Archetypal literary criticism argues that archetypes determine the form and function of literary works and that a text's meaning is shaped by cultural and psychological myths. Cultural archetypes are the unknowable basic forms personified or made concrete by recurring images, symbols, or patterns (which may include motifs such as the "quest" or the "heavenly ascent"; recognizable character types such as the "trickster", "saint", "martyr" or the "hero"; symbols such as the apple or the snake; and imagery), all of which have been laden with meaning prior to their inclusion in any particular work. The archetypes reveal shared roles universal among societies, such as the role of the mother in her natural relations with all members of the family. These archetypes create a shared imagery which is defined by many stereotypes that have not separated themselves from the traditional, biological, religious, and mythical framework. Platonic archetypes The origins of the archetypal hypothesis date as far back as Plato. Plato's eidos, or ideas, were pure mental forms that were said to be imprinted in the soul before it was born into the world. Some philosophers also translate the archetype as "essence" in order to avoid confusion with respect to Plato's conceptualization of Forms. While it is tempting to think of Forms as mental entities (ideas) that exist only in our minds, the philosopher insisted that they are real and independent of any mind. Eidos were collective in the sense that they embodied the fundamental characteristics of a thing rather than its specific peculiarities. In the seventeenth century, Sir Thomas Browne and Francis Bacon both employed the word archetype in their writings; Browne in The Garden of Cyrus (1658) attempted to depict archetypes in his use of symbolic proper names. Jungian archetypes The concept of psychological archetypes was advanced by the Swiss psychiatrist Carl Jung, c. 1919. Jung acknowledged that his conceptualization of the archetype was influenced by Plato's eidos, which he described as "the formulated meaning of a primordial image by which it was represented symbolically." According to Jung, the term archetype is an explanatory paraphrase of the Platonic eidos, which is also held to correspond to the word "form". He maintained that Platonic archetypes are metaphysical ideas, paradigms, or models, and that real things are held to be only copies of these model ideas. However, archetypes are not easily recognizable in Plato's works in the way in which Jung meant them. In Jung's psychological framework, archetypes are innate, libidinal, collective schemas: universal prototypes for ideas and sense impressions that may be used to interpret observations. A group of memories and interpretations associated with an archetype is a complex (e.g. a mother complex associated with the mother archetype). Jung treated the archetypes as psychological organs, analogous to physical ones in that both are morphological constructs that arose through evolution. At the same time, it has also been observed that evolution can itself be considered an archetypal construct.
While there are a variety of categorizations of archetypes, Jung's configuration is perhaps the best known and serves as the foundation for many other models; he discusses the archetypes at length in part one of Man and His Symbols. The four major archetypes to emerge from his work, which Jung originally termed primordial images, include the anima/animus, the self, the shadow, and the persona. Additionally, Jung referred to images of the wise old man, the child, the mother, and the maiden. He believed that each human mind retains these basic unconscious understandings of the human condition and the collective knowledge of our species in the construct of the collective unconscious. Neo-Jungian concepts Other authors, such as Carol Pearson and Margaret Mark, have attributed 12 different archetypes to Jung, organized in three overarching categories based on a fundamental driving force. Other authors, such as Margaret Hartwell and Joshua Chen, go further, giving each of these 12 archetype families 5 archetypes of its own. Other uses of archetypes There is also the position that the use of archetypes in different ways is possible because every archetype has multiple manifestations, with each one featuring different attributes. For instance, there is the position that the function of the archetype must be approached according to the context of biological sciences and is accomplished through the concept of the ultimate function. This pertains to the organism's response to environmental pressures in terms of biological traits. Dichter's application of archetypes Later in the twentieth century, a Viennese psychologist named Dr. Ernest Dichter took these psychological constructs and applied them to marketing. Dichter moved to New York around 1939 and sent every ad agency on Madison Avenue a letter boasting of his new discovery. He found that applying these universal themes to products promoted easier discovery and stronger loyalty for brands. See also Allegory of the Cave Archetypal pedagogy Archive for Research in Archetypal Symbolism Character (arts) Cliché Dmuta in Mandaeism Mental model Monomyth Ostensive definition Perennial philosophy Personification Prototype Role reversal Simulacrum Stereotype System archetypes Theory of Forms Type (biology) Wounded healer References External links Archetypal psychology Cultural anthropology Literary concepts Narratology Social psychology Tropes
Interpersonal psychotherapy
Interpersonal psychotherapy (IPT) is a brief, attachment-focused psychotherapy that centers on resolving interpersonal problems and symptomatic recovery. It is an empirically supported treatment (EST) that follows a highly structured and time-limited approach and is intended to be completed within 12–16 weeks. IPT is based on the principle that relationships and life events impact mood and that the reverse is also true. It was developed by Gerald Klerman and Myrna Weissman for major depression in the 1970s and has since been adapted for other mental disorders. IPT is an empirically validated intervention for depressive disorders, and is more effective when used in combination with psychiatric medications. Along with cognitive behavioral therapy (CBT), IPT is recommended in treatment guidelines as a psychosocial treatment of choice for depression. History Originally named "high contact" therapy, IPT was first developed in 1969 at Yale University as part of a study designed by Gerald Klerman, Myrna Weissman and colleagues to test the efficacy of an antidepressant with and without psychotherapy as maintenance treatment of depression. IPT has been studied in many research protocols since its development. The NIMH Treatment of Depression Collaborative Research Program (NIMH-TDCRP) demonstrated the efficacy of IPT as a maintenance treatment and delineated some contributing factors. Foundations IPT was influenced by CBT as well as psychodynamic approaches. It takes its structure from CBT in that it is time-limited and employs structured interviews and assessment tools. In general, however, IPT focuses directly on affects, or feelings, whereas CBT focuses on cognitions with strong associated affects. Unlike CBT, IPT makes no attempt to uncover distorted thoughts systematically by giving homework or other assignments, nor does it help the patient develop alternative thought patterns through prescribed practice. Rather, as evidence arises during the course of therapy, the therapist calls attention to distorted thinking in relation to significant others. The goal is to change the relationship pattern rather than associated depressive cognitions, which are acknowledged as depressive symptoms. The content of IPT was inspired by attachment theory and Harry Stack Sullivan's interpersonal psychoanalysis. Social theory also plays a lesser role, through its emphasis on the qualitative impact of social support networks on recovery. Unlike psychodynamic approaches, IPT does not include a personality theory or attempt to conceptualize or treat personality but focuses on humanistic applications of interpersonal sensitivity. Attachment theory forms the basis for understanding patients' relationship difficulties, attachment schemas, and optimal functioning when attachment needs are met. Interpersonal theory describes the ways in which patients' maladaptive metacommunication patterns (low to high affiliation and inclusion, and dominant to submissive status) lead to or evoke difficulty in their here-and-now interpersonal relationships. The aim of IPT is to help the patient improve interpersonal and intrapersonal communication skills within relationships and to develop a social support network with realistic expectations, so as to deal with the crises that precipitate distress and to weather "interpersonal storms". Clinical applications It has been demonstrated to be an effective treatment for depression and has been modified to treat other psychiatric disorders such as substance use disorders and eating disorders.
In treatment, it is incumbent upon the therapist to quickly establish a therapeutic alliance marked by positive countertransference of warmth, empathy, affective attunement, and positive regard, thereby encouraging a positive transferential relationship from which the patient is able to seek help from the therapist despite resistance. It is primarily used as a short-term therapy completed in 12–16 weeks, but it has also been used as a maintenance therapy for patients with recurrent depression. A shorter, 6-week therapy suited to primary care settings called Interpersonal counselling (IPC) has been derived from IPT. Interpersonal psychotherapy has been found to be an effective treatment for the following: Bipolar disorder Bulimia nervosa Major depressive disorder Post-partum depression Adolescents Although originally developed as an individual therapy for adults, IPT has been modified for use with adolescents and older adults. IPT for children is based on the premise that depression occurs in the context of an individual's relationships regardless of its origins in biology or genetics. More specifically, depression affects people's relationships and these relationships further affect their mood. The IPT model identifies four general areas in which a person may be having relationship difficulties: grief after the loss of a loved one; conflict in significant relationships, including a client's relationship with his or her own self; difficulties adapting to changes in relationships or life circumstances; and difficulties stemming from social isolation. The IPT therapist helps identify areas in need of skill-building to improve the client's relationships and decrease the depressive symptoms. Over time, the client learns to link changes in mood to events occurring in his/her relationships, communicate feelings and expectations for the relationships, and work out solutions to difficulties in the relationships. IPT has been adapted for the treatment of depressed adolescents (IPT-A) to address developmental issues most common to teenagers such as separation from parents, development of romantic relationships, and initial experience with the death of a relative or friend. IPT-A helps the adolescent identify and develop more adaptive methods for dealing with the interpersonal issues associated with the onset or maintenance of their depression. IPT-A is typically a 12- to 16-week treatment. Although the treatment involves primarily individual sessions with the teenager, parents are asked to participate in a few sessions to receive education about depression, to address any relationship difficulties that may be occurring between the adolescent and his/her parents, and to help support the adolescent's treatment. Elderly IPT has been used as a psychotherapy for the depressed elderly, with its emphasis on addressing interpersonally relevant problems. IPT appears especially well suited to the life changes that many people experience in their later years. References Sources Psychotherapy by type
Atlas personality
The Atlas personality, named after the story of the Titan Atlas from Greek mythology who is forced to hold up the sky, is someone obliged to take on adult responsibilities prematurely. They are as a result liable to develop a pattern of compulsive caregiving in later life. Origins and nature The Atlas personality is typically found in a person who felt obliged during childhood to take on responsibilities such as providing psychological support to parents, often in a chaotic family situation. This experience often involves parentification. The result in adult life can be a personality devoid of fun, with a feeling of carrying the weight of the world on one's shoulders. Depression and anxiety, as well as oversensitivity to others and an inability to assert one's own needs, are further identifiable characteristics. In addition, there may also be an underlying rage against the parents for not having provided love, and for exploiting the child for their own needs (Alice Miller, The Drama of Being a Child, London 1990, p. 38). While Atlas personalities may appear to function adequately as adults, they may be pervaded with a sense of emptiness and be lacking in vitality. Treatment Persons suffering from Atlas personality may benefit from psychotherapy. In such cases, a therapist talks with the patient about the patient's childhood and helps identify behavioral patterns that may have arisen from being given too many responsibilities too early in life. See also References Further reading L. J. Cozolino, The Making of a Therapist (New York 2004) Behavioural syndromes associated with physiological disturbances and physical factors Interpersonal relationships Narcissism Borderline personality disorder Atlas (mythology)
Social phenomenon
Social phenomena (singular: social phenomenon) are any behaviours, actions, or events that take place because of social influence, including contemporary as well as historical societal influences. They are often the result of multifaceted processes that add ever-increasing dimensions as they operate through individual nodes of people. Because of this, social phenomena are inherently dynamic and operate within a specific time and historical context. Social phenomena are observable, measurable data. Psychological notions may drive them, but those notions are not directly observable; only the phenomena that express them can be observed. See also Phenomenological sociology Sociological imagination Further reading References Sociological terminology Social philosophy Phenomena
Reflexivity (social theory)
In epistemology, and more specifically, the sociology of knowledge, reflexivity refers to circular relationships between cause and effect, especially as embedded in human belief structures. A reflexive relationship is multi-directional when the causes and the effects affect the reflexive agent in a layered or complex sociological relationship. The complexity of this relationship can be furthered when epistemology includes religion. Within sociology more broadly—the field of origin—reflexivity means an act of self-reference where existence engenders examination, by which the thinking action "bends back on", refers to, and affects the entity instigating the action or examination. It commonly refers to the capacity of an agent to recognise forces of socialisation and alter their place in the social structure. A low level of reflexivity would result in individuals shaped largely by their environment (or "society"). A high level of social reflexivity would be defined by individuals shaping their own norms, tastes, politics, desires, and so on. This is similar to the notion of autonomy. (See also structure and agency and social mobility.) Within economics, reflexivity refers to the self-reinforcing effect of market sentiment, whereby rising prices attract buyers whose actions drive prices higher still until the process becomes unsustainable. This is an instance of a positive feedback loop. The same process can operate in reverse leading to a catastrophic collapse in prices. Overview In social theory, reflexivity may occur when theories in a discipline should apply equally to the discipline itself; for example, in the case that the theories of knowledge construction in the field of sociology of scientific knowledge should apply equally to knowledge construction by sociology of scientific knowledge practitioners, or when the subject matter of a discipline should apply equally to the individual practitioners of that discipline (e.g., when psychological theory should explain the psychological processes of psychologists). More broadly, reflexivity is considered to occur when the observations of observers in the social system affect the very situations they are observing, or when theory being formulated is disseminated to and affects the behaviour of the individuals or systems the theory is meant to be objectively modelling. Thus, for example, an anthropologist living in an isolated village may affect the village and the behaviour of its citizens under study. The observations are not independent of the participation of the observer. Reflexivity is, therefore, a methodological issue in the social sciences analogous to the observer effect. Within that part of recent sociology of science that has been called the strong programme, reflexivity is suggested as a methodological norm or principle, meaning that a full theoretical account of the social construction of, say, scientific, religious or ethical knowledge systems, should itself be explainable by the same principles and methods as used for accounting for these other knowledge systems. 
This points to a general feature of naturalised epistemologies, that such theories of knowledge allow for specific fields of research to elucidate other fields as part of an overall self-reflective process: any particular field of research occupied with aspects of knowledge processes in general (e.g., history of science, cognitive science, sociology of science, psychology of perception, semiotics, logic, neuroscience) may reflexively study other such fields yielding to an overall improved reflection on the conditions for creating knowledge. Reflexivity includes both a subjective process of self-consciousness inquiry and the study of social behaviour with reference to theories about social relationships. History The principle of reflexivity was perhaps first enunciated by the sociologists William I. Thomas and Dorothy Swaine Thomas, in their 1928 book The child in America: "If men define situations as real, they are real in their consequences". The theory was later termed the "Thomas theorem". Sociologist Robert K. Merton (1948, 1949) built on the Thomas principle to define the notion of a self-fulfilling prophecy: that once a prediction or prophecy is made, actors may accommodate their behaviours and actions so that a statement that would have been false becomes true or, conversely, a statement that would have been true becomes false - as a consequence of the prediction or prophecy being made. The prophecy has a constitutive impact on the outcome or result, changing the outcome from what would otherwise have happened. Reflexivity was taken up as an issue in science in general by Karl Popper (1957), who in his book The poverty of historicism highlighted the influence of a prediction upon the event predicted, calling this the 'Oedipus effect' in reference to the Greek tale in which the sequence of events fulfilling the Oracle's prophecy is greatly influenced by the prophecy itself. Popper initially considered such self-fulfilling prophecy a distinguishing feature of social science, but later came to see that in the natural sciences, particularly biology and even molecular biology, something equivalent to expectation comes into play and can act to bring about that which has been expected. It was also taken up by Ernest Nagel (1961). Reflexivity presents a problem for science because if a prediction can lead to changes in the system that the prediction is made in relation to, it becomes difficult to assess scientific hypotheses by comparing the predictions they entail with the events that actually occur. The problem is even more difficult in the social sciences. Reflexivity has been taken up as the issue of "reflexive prediction" in economic science by Grunberg and Modigliani (1954) and Herbert A. Simon (1954), has been debated as a major issue in relation to the Lucas critique, and has been raised as a methodological issue in economic science arising from the issue of reflexivity in the sociology of scientific knowledge (SSK) literature. Reflexivity has emerged as both an issue and a solution in modern approaches to the problem of structure and agency, for example in the work of Anthony Giddens in his structuration theory and Pierre Bourdieu in his genetic structuralism. Giddens, for example, noted that constitutive reflexivity is possible in any social system, and that this presents a distinct methodological problem for the social sciences. 
Giddens accentuated this theme with his notion of "reflexive modernity" – the argument that, over time, society is becoming increasingly more self-aware, reflective, and hence reflexive. Bourdieu argued that the social scientist is inherently laden with biases, and only by becoming reflexively aware of those biases can the social scientists free themselves from them and aspire to the practice of an objective science. For Bourdieu, therefore, reflexivity is part of the solution, not the problem. Michel Foucault's The order of things can be said to touch on the issue of Reflexivity. Foucault examines the history of Western thought since the Renaissance and argues that each historical epoch (he identifies three and proposes a fourth) has an episteme, or "a historical a priori", that structures and organises knowledge. Foucault argues that the concept of man emerged in the early 19th century, what he calls the "Age of Man", with the philosophy of Immanuel Kant. He finishes the book by posing the problem of the age of man and our pursuit of knowledge- where "man is both knowing subject and the object of his own study"; thus, Foucault argues that the social sciences, far from being objective, produce truth in their own mutually exclusive discourses. In economics Economic philosopher George Soros, influenced by ideas put forward by his tutor, Karl Popper (1957), has been an active promoter of the relevance of reflexivity to economics, first propounding it publicly in his 1987 book The alchemy of finance. He regards his insights into market behaviour from applying the principle as a major factor in the success of his financial career. Reflexivity is inconsistent with general equilibrium theory, which stipulates that markets move towards equilibrium and that non-equilibrium fluctuations are merely random noise that will soon be corrected. In equilibrium theory, prices in the long run at equilibrium reflect the underlying economic fundamentals, which are unaffected by prices. Reflexivity asserts that prices do in fact influence the fundamentals and that these newly influenced sets of fundamentals then proceed to change expectations, thus influencing prices; the process continues in a self-reinforcing pattern. Because the pattern is self-reinforcing, markets tend towards disequilibrium. Sooner or later they reach a point where the sentiment is reversed and negative expectations become self-reinforcing in the downward direction, thereby explaining the familiar pattern of boom and bust cycles. An example Soros cites is the procyclical nature of lending, that is, the willingness of banks to ease lending standards for real estate loans when prices are rising, then raising standards when real estate prices are falling, reinforcing the boom and bust cycle. He further suggests that property price inflation is essentially a reflexive phenomenon: house prices are influenced by the sums that banks are prepared to advance for their purchase, and these sums are determined by the banks' estimation of the prices that the property would command. Soros has often claimed that his grasp of the principle of reflexivity is what has given him his "edge" and that it is the major factor contributing to his successes as a trader. For several decades there was little sign of the principle being accepted in mainstream economic circles, but there has been an increase of interest following the crash of 2008, with academic journals, economists, and investors discussing his theories. 
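The self-reinforcing price dynamic described above can be illustrated with a toy numerical sketch. This is a hypothetical positive-feedback illustration, not a model specified by Soros or any cited author; the update rules, parameter values, and variable names are invented for the example.

```python
"""Toy sketch of a reflexive boom-bust price dynamic (illustrative only)."""


def simulate(steps=100):
    price = 1.0          # market price
    fundamentals = 1.0   # perceived fundamentals, themselves influenced by price
    sentiment = 0.02     # expected growth rate held by market participants
    prev_price = price
    history = []

    for _ in range(steps):
        # Expectations drive prices: buyers act on prevailing sentiment.
        price *= 1 + sentiment

        # Reflexive feedback: prices alter the fundamentals (e.g. rising
        # collateral values make banks willing to lend more).
        fundamentals += 0.05 * (price - fundamentals)

        # Trend-following sentiment: recent gains breed optimism and losses
        # breed pessimism, while overvaluation relative to fundamentals
        # gradually drags sentiment back down.
        trend = (price - prev_price) / prev_price
        drag = 0.08 * (price / fundamentals - 1)
        sentiment = max(min(0.9 * sentiment + 0.5 * trend - drag, 0.2), -0.2)

        prev_price = price
        history.append(price)

    return history


if __name__ == "__main__":
    prices = simulate()
    peak = max(prices)
    print(f"peak price {peak:.2f} at step {prices.index(peak)}, final price {prices[-1]:.2f}")
```

In this sketch the feedback between price, fundamentals, and sentiment produces a boom followed by a bust once prices have run far ahead of fundamentals, mirroring the pattern described in the text; the clamp on sentiment simply keeps the toy model numerically sane.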
Economist and former columnist of the Financial Times, Anatole Kaletsky, argued that Soros' concept of reflexivity is useful in understanding China's economy and how the Chinese government manages it. In 2009, Soros funded the launch of the Institute for New Economic Thinking with the hope that it would develop reflexivity further. The Institute works with several types of heterodox economics, particularly the post-Keynesian branch. In sociology Margaret Archer has written extensively on laypeople's reflexivity. For her, human reflexivity is a mediating mechanism between structural properties, or the individual's social context, and action, or the individual's ultimate concerns. Reflexive activity, according to Archer, increasingly takes the place of habitual action in late modernity since routine forms prove ineffective in dealing with the complexity of modern life trajectories. While Archer emphasises the agentic aspect of reflexivity, reflexive orientations can themselves be seen as being "socially and temporally embedded". For example, Elster points out that reflexivity cannot be understood without taking into account the fact that it draws on background configurations (e.g., shared meanings, as well as past social engagement and lived experiences of the social world) to be operative. In anthropology In anthropology, reflexivity has come to have two distinct meanings, one that refers to the researcher's awareness of an analytic focus on his or her relationship to the field of study, and the other that attends to the ways that cultural practices involve consciousness and commentary on themselves. The first sense of reflexivity in anthropology is part of social science's more general self-critique in the wake of theories by Michel Foucault and others about the relationship of power and knowledge production. Reflexivity about the research process became an important part of the critique of the colonial roots and scientistic methods of anthropology in the "writing cultures" movement associated with James Clifford and George Marcus, as well as many other anthropologists. Rooted in literary criticism and philosophical analysis of the relationship among the anthropologists, the people represented in texts, and their textual representations, this approach has fundamentally changed ethical and methodological approaches in anthropology. As with the feminist and anti-colonial critiques that provide some of reflexive anthropology's inspiration, a reflexive understanding of the academic and political power of representations and an analysis of the process of "writing culture" have become a necessary part of understanding the situation of the ethnographer in the fieldwork situation. Objectification of people and cultures and analysis of them only as objects of study has been largely rejected in favor of developing more collaborative approaches that respect local people's values and goals. Nonetheless, many anthropologists have accused the "writing cultures" approach of muddying the scientific aspects of anthropology with too much introspection about fieldwork relationships, and reflexive anthropology has been heavily attacked by more positivist anthropologists. Considerable debate continues in anthropology over the role of postmodernism and reflexivity, but most anthropologists accept the value of the critical perspective, and generally only argue about the relevance of critical models that seem to lead anthropology away from its earlier core foci.
The second kind of reflexivity studied by anthropologists involves varieties of self-reference in which people and cultural practices call attention to themselves. One important origin for this approach is Roman Jakobson in his studies of deixis and the poetic function in language, but the work of Mikhail Bakhtin on carnival has also been important. Within anthropology, Gregory Bateson developed ideas about meta-messages (subtext) as part of communication, while Clifford Geertz's studies of ritual events such as the Balinese cock-fight point to their role as foci for public reflection on the social order. Studies of play and tricksters further expanded ideas about reflexive cultural practices. Reflexivity has been most intensively explored in studies of performance, public events, rituals, and linguistic forms but can be seen any time acts, things, or people are held up and commented upon or otherwise set apart for consideration. In researching cultural practices, reflexivity plays an important role, but because of its complexity and subtlety, it often goes under-investigated or involves highly specialised analyses. One use of studying reflexivity is in connection to authenticity. Cultural traditions are often imagined as perpetuated as stable ideals by uncreative actors. Innovation may or may not change tradition, but since reflexivity is intrinsic to many cultural activities, reflexivity is part of tradition and not inauthentic. The study of reflexivity shows that people have both self-awareness and creativity in culture. They can play with, comment upon, debate, modify, and objectify culture through manipulating many different features in recognised ways. This leads to the metaculture of conventions about managing and reflecting upon culture. In international relations In international relations, the question of reflexivity was first raised in the context of the so-called ‘Third Debate’ of the late 1980s. This debate marked a break with the positivist orthodoxy of the discipline. The post-positivist theoretical restructuring was seen to introduce reflexivity as a cornerstone of critical scholarship. For Mark Neufeld, reflexivity in International Relations was characterized by 1) self-awareness of underlying premises, 2) an acknowledgment of the political-normative dimension of theoretical paradigms, and 3) the affirmation that judgement about the merits of paradigms is possible despite the impossibility of neutral or apolitical knowledge production. Since the nineties, reflexivity has become an explicit concern of constructivist, poststructuralist, feminist, and other critical approaches to International Relations. In The Conduct of Inquiry in International Relations, Patrick Thaddeus Jackson identified reflexivity as one of the four main methodologies into which contemporary International Relations research can be divided, alongside neopositivism, critical realism, and analyticism. Reflexivity and the status of the social sciences Flanagan has argued that reflexivity complicates all three of the traditional roles that are typically played by a classical science: explanation, prediction and control. The fact that individuals and social collectivities are capable of self-inquiry and adaptation is a key characteristic of real-world social systems, differentiating the social sciences from the physical sciences.
Reflexivity, therefore, raises real issues regarding the extent to which the social sciences may ever be viewed as "hard" sciences analogous to classical physics, and raises questions about the nature of the social sciences. Methods for the implementation of reflexivity A new generation of scholars has gone beyond (meta-)theoretical discussion to develop concrete research practices for the implementation of reflexivity. These scholars have addressed the ‘how to’ question by turning reflexivity from an informal process into a formal research practice. While most research focuses on how scholars can become more reflexive toward their positionality and situatedness, some have sought to build reflexive methods in relation to other processes of knowledge production, such as the use of language. The latter has been advanced by the work of Professor Audrey Alejandro in a trilogy on reflexive methods. The first article of the trilogy develops what is referred to as Reflexive Discourse Analysis, a critical methodology for the implementation of reflexivity that integrates discourse theory. The second article further expands the methodological tools for practicing reflexivity by introducing a three-stage research method for problematizing linguistic categories. The final piece of the trilogy adds a further method for linguistic reflexivity, namely the Reflexive Review. This method provides four steps that aim to add a linguistic and reflexive dimension to the practice of writing a literature review. See also References Further reading Bryant, C. G. A. (2002). "George Soros's theory of reflexivity: a comparison with the theories of Giddens and Beck and a consideration of its practical value", Economy and society, 31 (1), pp. 112–131. Flanagan, O. J. (1981). "Psychology, progress, and the problem of reflexivity: a study in the epistemological foundations of psychology", Journal of the history of the behavioral sciences, 17, pp. 375–386. Gay, D. (2009). Reflexivity and development economics. London: Palgrave Macmillan Grunberg, E. and F. Modigliani (1954). "The predictability of social events", Journal of political economy, 62 (6), pp. 465–478. Merton, R. K. (1948). "The self-fulfilling prophecy", Antioch Review, 8, pp. 193–210. Merton, R. K. (1949/1957), Social theory and social structure. Rev. ed. The Free Press, Glencoe, IL. Nagel, E. (1961), The structure of science: problems in the logic of scientific explanation, Harcourt, New York. Popper, K. (1957), The poverty of historicism, Harper and Row, New York. Simon, H. (1954). "Bandwagon and underdog effects of election predictions", Public opinion quarterly, 18, pp. 245–253. Soros, G (1987) The alchemy of finance (Simon & Schuster, 1988) (paperback: Wiley, 2003; ) Soros, G (2008) The new paradigm for financial markets: the credit crisis of 2008 and what it means (PublicAffairs, 2008) Soros, G (2006) The age of fallibility: consequences of the war on terror (PublicAffairs, 2006) Soros, G The bubble of American supremacy: correcting the misuse of American power (PublicAffairs, 2003) (paperback; PublicAffairs, 2004; ) Soros, G George Soros on globalization (PublicAffairs, 2002) (paperback; PublicAffairs, 2005; ) Soros, G (2000) Open society: reforming global capitalism (PublicAffairs, 2001) Thomas, W. I. (1923), The unadjusted girl : with cases and standpoint for behavior analysis, Little, Brown, Boston, MA. Thomas, W. I. and D. S. Thomas (1928), The child in America : behavior problems and programs, Knopf, New York. Tsekeris, C. (2013). 
"Toward a chaos-friendly reflexivity", Entelequia, 16, pp. 71–89. Woolgar, S. (1988). Knowledge and reflexivity: new frontiers in the sociology of knowledge. London and Beverly Hills: Sage. Sociological terminology Sociological theories George Soros Self-reference
Education sciences
Education sciences, also known as education studies or education theory, and traditionally called pedagogy, seek to describe, understand, and prescribe education, including education policy. Subfields include comparative education, educational research, instructional theory, curriculum theory, and the psychology, philosophy, sociology, economics, and history of education. Related fields include learning theory and cognitive science. History The earliest known attempts to understand education in Europe were by classical Greek philosophers and sophists, but there is also evidence of contemporary (or even preceding) discussions among Arabic, Indian, and Chinese scholars. Philosophy of education Educational thought is not necessarily concerned with the construction of theories as much as the "reflective examination of educational issues and problems from the perspective of diverse disciplines." For example, a cultural theory of education considers how education occurs through the totality of culture, including prisons, households, and religious institutions as well as schools. Other examples are the behaviorist theory of education that comes from educational psychology and the functionalist theory of education that comes from sociology of education. Normative theories of education Normative theories of education provide the norms, goals, and standards of education. In contrast, descriptive theories of education provide descriptions, explanations or predictions of the processes of education. "Normative philosophies or theories of education may make use of the results of [philosophical thought] and of factual inquiries about human beings and the psychology of learning, but in any case they propound views about what education should be, what dispositions it should cultivate, why it ought to cultivate them, how and in whom it should do so, and what forms it should take. In a full-fledged philosophical normative theory of education, besides analysis of the sorts described, there will normally be propositions of the following kinds: 1. Basic normative premises about what is good or right; 2. Basic factual premises about humanity and the world; 3. Conclusions, based on these two kinds of premises, about the dispositions education should foster; 4. Further factual premises about such things as the psychology of learning and methods of teaching; and 5. Further conclusions about such things as the methods that education should use." Examples of the purpose of schools include: to develop reasoning about perennial questions, to master the methods of scientific inquiry, to cultivate the intellect, to create change agents, to develop spirituality, and to model a democratic society. Common educational philosophies include: educational perennialism, educational progressivism, educational essentialism, critical pedagogy, Montessori education, Waldorf education, and democratic education. Normative Curriculum theory Normative theories of curriculum aim to "describe, or set norms, for conditions surrounding many of the concepts and constructs" that define curriculum. These normative propositions differ from those above in that normative curriculum theory is not necessarily untestable. A central question asked by normative curriculum theory is: given a particular educational philosophy, what is worth knowing and why? Some examples are: a deep understanding of the Great Books, direct experiences driven by student interest, a superficial understanding of a wide range of knowledge (e.g.
Core knowledge), social and community problems and issues, knowledge and understanding specific to cultures and their achievements (e.g. African-Centered Education). Normative Feminist educational theory Scholars such as Robyn Wiegman argue that, "academic feminism is perhaps the most successful institutionalizing project of its generation, with more full-time faculty positions and new doctoral degree programs emerging each year in the field it inaugurated, Women's Studies". Feminist educational theory stems from four key tenets, supported by empirical data based on surveys of feminist educators. The first tenet of feminist educational theory is, "Creation of participatory classroom communities". Participatory classroom communities often are smaller classes built around discussion and student involvement. The second tenet is, "Validation of personal experience". Classrooms in which validation of personal experience occurs often are focused around students providing their own insights and experiences in group discussion, rather than relying exclusively on the insight of the educator. The third tenet is, "Encouragement of social understanding and activism". This tenet is generally actualized by classrooms discussing and reading about social and societal aspects that students may not be aware of, along with fostering student self-efficacy. The fourth and final tenet of feminist education is, "Development of critical thinking skills/open-mindedness". Classrooms actively engaging in this tenet encourage students to think for themselves and prompt them to move beyond their comfort zones, working outside the bounds of the traditional lecture-based classroom. Though these tenets at times overlap, they combine to provide the basis for modern feminist educational theory, and are supported by a majority of feminist educators. Feminist educational theory derives from the feminist movement, particularly that of the early 1970s, which prominent feminist bell hooks describes as, "a movement to end sexism, sexist exploitation, and oppression". Academic feminist Robyn Wiegman recalls that, "In the early seventies, feminism in the U.S. academy was less an organized entity than a set of practices: an ensemble of courses listed on bulletin boards often taught for free by faculty and community leaders". While feminism traditionally existed outside of the institutionalization of schools (particularly universities), feminist education has gradually taken hold in the last few decades and has gained a foothold in institutionalized educational bodies. "Once fledgling programs have become departments, and faculty have been hired and tenured with full-time commitments". There are supporters of feminist education as well, many of whom are educators or students. Professor Becky Ropers-Huilman recounts one of her positive experiences with feminist education from the student perspective, explaining that she "...felt very 'in charge' of [her] own learning experiences," and "...was not being graded–or degraded... [while completing] the majority of the assigned work for the class (and additional work that [she] thought would add to class discussion)," all while "...[regarding] the teacher's feedback on [her] participation as one perspective, rather than the perspective". Ropers-Huilman experienced a working feminist classroom that successfully motivated students to go above and beyond, succeeding in generating self-efficacy and caring in the classroom.
When Ropers-Huilman became a teacher herself, she embraced feminist educational theory, noting that, "[Teachers] have an obligation as the ones who are vested with an assumed power, even if that power is easily and regularly disrupted, to assess and address the effects that it is having in our classrooms". Ropers-Huilman firmly believes that educators have a duty to address feminist concepts such as the use and flow of power within the classroom, and strongly believes in the potential of feminist educational theory to create positive learning experiences for students and teachers as she has personally experienced. Ropers-Huilman also celebrates the feminist classroom's inclusivity, noting that in a feminist classroom, "in which power is used to care about, for, and with others… educational participants can shape practices aimed at creating an inclusive society that discovers and utilizes the potential of its actors". Ropers-Huilman believes that a feminist classroom carries the ability to greatly influence the society as a whole, promoting understanding, caring, and inclusivity. Ropers-Huilman actively engages in feminist education in her classes, focusing on concepts such as active learning and critical thinking while attempting to demonstrate and engage in caring behavior and atypical classroom settings, similar to many other feminist educators. Leading feminist scholar bell hooks argues for the incorporation of feminism into all aspects of society, including education, in her book Feminism is for Everybody. hooks notes that, "Everything [people] know about feminism has come into their lives thirdhand". hooks believes that education offers a counter to the, "...wrongminded notion of feminist movement which implied it was anti-male". hooks cites feminism's negative connotations as major inhibitors to the spread and adoption of feminist ideologies. However, feminist education has seen tremendous growth in adoption in the past few decades, despite the negative connotations of its parent movement. Criticism of Feminist educational theory Opposition to feminist educational theory comes from both those who oppose feminism in general and feminists who oppose feminist educational theory in particular. Critics of feminist educational theory argue against the four basic tenets of the theory, "...[contesting] both their legitimacy and their implementation". Lewis Lehrman particularly describes feminist educational ideology as, "...'therapeutic pedagogy' that substitutes an 'overriding' (and detrimental) value on participatory interaction for the expertise of the faculty" (Hoffmann). Lehrman argues that the feminist educational tenets of participatory experience and validation of personal experience hinder education by limiting and inhibiting the educator's ability to share his or her knowledge, learned through years of education and experience. Others challenge the legitimacy of feminist educational theory, arguing that it is not unique and is instead a sect of liberatory education. Even feminist educational scholars such as Frances Hoffmann and Jayne Stake are forced to concede that, "feminist pedagogy shared intellectual and political roots with the movements comprising the liberatory education agenda of the past 30 years". These liberatory attempts at the democratization of classrooms demonstrate a growth in liberatory education philosophy that some argue feminist educational theory simply piggybacks off of. The harshest critiques of feminist educational theory often come from feminists themselves. 
Feminist scholar Robyn Wiegman argues against feminist education in her article "Academic Feminism against Itself", arguing that feminist educational ideology has abandoned the intersectionality of feminism in many cases, and has also focused exclusively on present content with a singular perspective. Wiegman refers to feminist scholar James Newman's arguments, centered around the idea that, "When we fail... to challenge both students and ourselves to theorize alterity as an issue of change over time as well as of geographic distance, ethnic difference, and sexual choice, we repress... not only the 'thickness' of historical difference itself, but also... our (self) implication in a narrative of progress whose hero(in)es inhabit only the present". Newman (and Wiegman) believe that this presentist ideology imbued within modern academic feminism creates an environment breeding antifeminist ideologies, most importantly an abandonment of the study of difference, integral to feminist ideology. Wiegman believes that feminist educational theory does a great disservice to the feminist movement, while failing to instill the critical thinking and social awareness that it is intended to instill. Educational anthropology Philosophical anthropology is the philosophical study of human nature. In terms of learning, examples of descriptive theories of the learner are: a mind, soul, and spirit capable of emulating the Absolute Mind (Idealism); an orderly, sensing, and rational being capable of understanding the world of things (Realism); a rational being with a soul modeled after God and who comes to know God through reason and revelation (Neo-Thomism); an evolving and active being capable of interacting with the environment (Pragmatism); and a fundamentally free and individual being who is capable of being authentic through the making of and taking responsibility for choices (Existentialism). Philosophical concepts for the process of education include Bildung and paideia. Educational anthropology is a sub-field of anthropology and is widely associated with the pioneering work of George Spindler. As the name suggests, the focus of educational anthropology is on education, although an anthropological approach to education tends to focus on the cultural aspects of education, including informal as well as formal education. As education involves understandings of who we are, it is not surprising that the single most recognized dictum of educational anthropology is that the field is centrally concerned with cultural transmission. Cultural transmission involves the transfer of a sense of identity between generations, sometimes known as enculturation, and also the transfer of identity between cultures, sometimes known as acculturation. Accordingly, it is also not surprising that educational anthropology has become increasingly focused on ethnic identity and ethnic change. Descriptive Curriculum theory Descriptive theories of curriculum explain how curricula "benefit or harm all publics it touches". The term hidden curriculum describes that which is learned simply by being in a learning environment. For example, a student in a teacher-led classroom is learning submission. The hidden curriculum is not necessarily intentional. Instructional theory Instructional theories focus on the methods of instruction for teaching curricula. 
Theories include the methods of: autonomous learning, coyote teaching, inquiry-based instruction, lecture, maturationism, Socratic method, outcome-based education, taking children seriously, and transformative learning. Educational psychology Educational psychology is an empirical science that provides descriptive theories of how people learn. Examples of theories of education in psychology are: constructivism, behaviorism, cognitivism, and motivational theory. Cognitive science Educational neuroscience Educational neuroscience is an emerging field that brings together researchers in diverse disciplines to explore the interactions between biological processes and education. Sociology of education The sociology of education is the study of how public institutions and individual experiences affect education and its outcomes. It is most concerned with the public schooling systems of modern industrial societies, including the expansion of higher, further, adult, and continuing education. Examples of theories of education from sociology include: functionalism, conflict theory, social efficiency, and social mobility. Teaching method Learning theories Educational research Educational assessment Educational evaluation Educational aims and objectives Politics in education Education economics Comparative education Educational theorists List of educational psychologists See also Anti-schooling activism Classical education movement Cognitivism (learning theory) Andragogy Geragogy Humanistic education International education Peace education Movement in learning Co-construction, collaborative learning Scholarship of teaching and learning
Nomothetic and idiographic
Nomothetic and idiographic are terms used by Neo-Kantian philosopher Wilhelm Windelband to describe two distinct approaches to knowledge, each one corresponding to a different intellectual tendency, and each one corresponding to a different branch of academia. To say that Windelband supported that last dichotomy, however, is a misunderstanding of his own thought. For him, any branch of science and any discipline can be handled by both methods as they offer two integrating points of view. Nomothetic is based on what Kant described as a tendency to generalize, and is typical for the natural sciences. It describes the effort to derive laws that explain types or categories of objective phenomena in general. Idiographic is based on what Kant described as a tendency to specify, and is typical for the humanities. It describes the effort to understand the meaning of contingent, unique, and often cultural or subjective phenomena. Use in the social sciences The problem of whether to use nomothetic or idiographic approaches is most sharply felt in the social sciences, whose subjects are unique individuals (idiographic perspective) but who have certain general properties or behave according to general rules (nomothetic perspective). Often, nomothetic approaches are quantitative, and idiographic approaches are qualitative, although the "Personal Questionnaire" developed by Monte B. Shapiro and its further developments (e.g. Discan scale and PSYCHLOPS) are both quantitative and idiographic. Another very influential quantitative but idiographic tool is the Repertory grid when used with elicited constructs and perhaps elicited elements. Personal cognition (D.A. Booth) is idiographic, qualitative and quantitative, using the individual's own narrative of action within a situation to scale the ongoing biosocial cognitive processes in units of discrimination from norm (with M.T. Conner 1986, R.P.J. Freeman 1993 and O. Sharpe 2005). Methods of "rigorous idiography" allow probabilistic evaluation of information transfer even with fully idiographic data. In psychology, idiographic describes the study of the individual, who is seen as a unique agent with a unique life history, with properties setting them apart from other individuals (see idiographic image). A common method to study these unique characteristics is an (auto)biography, i.e. a narrative that recounts the unique sequence of events that made the person who they are. Nomothetic describes the study of classes or cohorts of individuals. Here the subject is seen as an exemplar of a population and their corresponding personality traits and behaviours. It is widely held that the terms idiographic and nomothetic were introduced to American psychology by Gordon Allport in 1937, but Hugo Münsterberg used them in his 1898 presidential address at the American Psychological Association meeting. This address was published in Psychological Review in 1899. Theodore Millon stated that when spotting and diagnosing personality disorders, clinicians first start with the nomothetic perspective and look for various general scientific laws; then, when they believe they have identified a disorder, they switch their view to the idiographic perspective to focus on the specific individual and his or her unique traits. In sociology, the nomothetic model tries to find independent variables that account for the variations in a given phenomenon (e.g. What is the relationship between timing/frequency of childbirth and education?). 
Nomothetic explanations are probabilistic and usually incomplete. The idiographic model focuses on a complete, in-depth understanding of a single case (e.g. Why do I not have any pets?). In anthropology, idiographic describes the study of a group, seen as an entity, with specific properties that set it apart from other groups. Nomothetic refers to the use of generalization rather than specific properties in the same context. See also Nomological Further reading Cone, J. D. (1986). "Idiographic, nomothetic, and related perspectives in behavioral assessment." In: R. O. Nelson & S. C. Hayes (eds.): Conceptual foundations of behavioral assessment (pp. 111–128). New York: Guilford. Thomae, H. (1999). "The nomothetic-idiographic issue: Some roots and recent trends." International Journal of Group Tensions, 28(1), 187–215.
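The quantitative face of this contrast can be made concrete with a small worked example. The sketch below is purely illustrative and is not taken from any source discussed above: the people, the weekly stress and mood scores, and the variable names are hypothetical, and it assumes Python 3.10 or later for statistics.correlation. A nomothetic analysis pools all individuals to estimate one general association, while an idiographic analysis describes each case on its own terms.

# Illustrative sketch only (not drawn from this article's sources): a nomothetic
# analysis pools many individuals to estimate a general relationship, while an
# idiographic analysis examines the pattern within a single case. All data and
# names below are hypothetical.
from statistics import correlation  # requires Python 3.10+

# Hypothetical weekly (stress, mood) observations for three individuals.
people = {
    "anna": [(1, 7), (2, 6), (3, 5), (4, 4), (5, 3)],  # mood falls as stress rises
    "ben":  [(2, 8), (3, 7), (4, 6), (5, 5), (6, 4)],  # mood falls as stress rises
    "cara": [(1, 4), (2, 5), (3, 6), (4, 7), (5, 8)],  # mood rises with stress
}

# Nomothetic view: pool everyone and estimate one general, law-like association.
stress = [s for obs in people.values() for s, _ in obs]
mood = [m for obs in people.values() for _, m in obs]
print("pooled (nomothetic) correlation:", round(correlation(stress, mood), 2))

# Idiographic view: describe each unique case separately, on its own terms.
for name, obs in people.items():
    s, m = zip(*obs)
    print(f"within-person correlation for {name}:", round(correlation(s, m), 2))

On these made-up data the pooled, nomothetic estimate is a weak negative correlation (about -0.25), yet it describes none of the three cases well: two individuals show a perfect negative association and one a perfect positive one. This is the kind of situation in which idiographic researchers insist on case-level analysis even when the data are fully quantitative.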
Homophily
Homophily is a concept in sociology describing the tendency of individuals to associate and bond with similar others, as in the proverb "birds of a feather flock together". The presence of homophily has been discovered in a vast array of network studies, which have observed homophily in some form or another and which establish that similarity is associated with connection. The categories on which homophily occurs include age, gender, class, and organizational role. The opposite of homophily is heterophily or intermingling. Individuals in homophilic relationships share common characteristics (beliefs, values, education, etc.) that make communication and relationship formation easier. Homophily between mated pairs in animals has been extensively studied in the field of evolutionary biology, where it is known as assortative mating. Homophily between mated pairs is common within natural animal mating populations. Homophily has a variety of consequences for social and economic outcomes. Types and dimensions Baseline vs. inbreeding To test the relevance of homophily, researchers have distinguished between two types: Baseline homophily: simply the amount of homophily that would be expected by chance given an existing uneven distribution of people with varying characteristics; and Inbreeding homophily: the amount of homophily over and above this expected value, typically due to personal preferences and choices. Status vs. value In their original formulation of homophily, Paul Lazarsfeld and Robert K. Merton (1954) distinguished between status homophily and value homophily; individuals with similar social status characteristics were more likely to associate with each other than by chance: Status homophily: includes both society-ascribed characteristics (e.g. race, ethnicity, sex, and age) and acquired characteristics (e.g., religion, occupation, behavior patterns, and education). Value homophily: involves association with others who have similar values, attitudes, and beliefs, regardless of differences in status characteristics. Dimensions Race and ethnicity Social networks in the United States today are strongly divided by race and ethnicity, which account for a large proportion of inbreeding homophily (though classification by these criteria can be problematic in sociology due to fuzzy boundaries and different definitions of race). Smaller groups have lower diversity simply due to the number of members. This tends to give racial and ethnic minority groups a higher baseline homophily. Race and ethnicity also correlate with educational attainment and occupation, which further increase baseline homophily. Sex and gender In terms of sex and gender, baseline homophily in networks is relatively low compared to race and ethnicity, because men and women frequently live together and form large populations that are normally equal in size. It is also common to find higher levels of gender homophily among school students. Most sex homophily is a result of inbreeding homophily. Age Most age homophily is of the baseline type. An interesting pattern of inbreeding age homophily for groups of different ages was found by Marsden (1988). It indicated a strong relationship between someone's age and the social distance to other people with regard to confiding in someone. For example, the larger the age gap, the less likely a person was to be confided in by younger people to "discuss important matters." Religion Homophily based on religion is due to both baseline and inbreeding homophily. 
Those that belong in the same religion are more likely to exhibit acts of service and aid to one another, such as loaning money, giving therapeutic counseling, and other forms of help during moments of emergency. Parents have been shown to have higher levels of religious homophily than nonparents, which supports the notion that religious institutions are sought out for the benefit of children. Education, occupation and social class Family of birth accounts for considerable baseline homophily with respect to education, occupation, and social class. In terms of education, there is a divide between those who have a college education and those who do not. Another major distinction can be seen between those with white collar occupations and those with blue collar occupations. Interests Homophily occurs within groups of people that have similar interests as well. People tend to enjoy interacting more with individuals who share similarities with them, and so they actively seek out these connections. Additionally, as more users begin to rely on the Internet to find like-minded communities for themselves, many examples of niches within social media sites have begun appearing to account for this need. This response has led to the popularity of sites like Reddit in the 2010s, advertising itself as a "home to thousands of communities... and authentic human interaction." Social media As social networks are largely divided by race, social-networking websites like Facebook also foster homophilic atmospheres. When a Facebook user 'likes' or interacts with an article or post of a certain ideology, Facebook continues to show that user posts of that similar ideology (which Facebook believes they will be drawn to). In a research article, McPherson, Smith-Lovin, and Cook (2003) write that homogeneous personal networks result in limited "social worlds in a way that has powerful implications for the information they receive, the attitudes they form, and the interactions they experience." This homophily can foster divides and echo chambers on social networking sites, where people of similar ideologies only interact with each other. Causes and effects Causes Geography: Baseline homophily often arises when the people who are located nearby also have similar characteristics. People are more likely to have contact with those who are geographically closer than those who are distant. Technology such as the telephone, e-mail, and social networks has reduced but not eliminated this effect. Family ties: These ties decay slowly, but familial ties, specifically those of domestic partners, fulfill many requisites that generate homophily. Family relationships are generally close and keep frequent contact though they may be at great geographic distances. Ideas that may get lost in other relational contexts will often instead lead to actions in this setting. Organizations: School, work, and volunteer activities provide the great majority of non-family ties. Many friendships, confiding relations, and social support ties are formed within voluntary groups. The social homogeneity of most organizations creates a strong baseline homophily in networks that are formed there. Isomorphic sources: The connections between people who occupy equivalent roles will induce homophily in the system of network ties. This is common in three domains: workplace (e.g., all heads of HR departments will tend to associate with other HR heads), family (e.g., mothers tend to associate with other mothers), and informal networks. 
Cognitive processes: People who have demographic similarity tend to own shared knowledge, and therefore they have a greater ease of communication and share cultural tastes, which can also generate homophily. Effects According to one study, perception of interpersonal similarity improves coordination and increases the expected payoff of interactions, above and beyond the effect of merely "liking others." Another study claims that homophily produces tolerance and cooperation in social spaces. However, homophilic patterns can also restrict access to information or inclusion for minorities. Nowadays, the restrictive patterns of homophily can be widely seen within social media. This selectiveness within social media networks can be traced back to the origins of Facebook and the transition of users from MySpace to Facebook in the early 2000s. One 2011 study of this shift in a network's user base found that this perception of homophily impacted many individuals' preference of one site over another. Most users chose to be more active on the site their friends were on. However, along with the complexities of belongingness, people of similar ages, economic class, and prospective futures (higher education and/or career plans) shared similar reasons for favoring one social media platform. The different features of homophily affected their outlook of each respective site. The effects of homophily on the diffusion of information and behaviors are also complex. Some studies have claimed that homophily facilitates access to information, the diffusion of innovations and behaviors, and the formation of social norms. Other studies, however, highlight mechanisms through which homophily can maintain disagreement, exacerbate polarization of opinions, lead to self-segregation between groups, and slow the formation of an overall consensus. As online users have a degree of power to form and dictate the environment, the effects of homophily persist. On Twitter, terms such as "stan Twitter", "Black Twitter", or "local Twitter" have also been created and popularized by users to separate themselves based on specific dimensions. Homophily is a cause of homogamy (marriage between people with similar characteristics). Homophily is a fertility factor; an increased fertility is seen in people with a tendency to seek acquaintance among those with common characteristics. Governmental family policies have a decreased influence on fertility rates in such populations. See also Groupthink Echo chamber (media)
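The baseline versus inbreeding distinction introduced above can be illustrated with a short worked example. The sketch below is illustrative only and is not taken from any study cited in this article: the toy friendship network, the group labels, and the particular normalization of the index are assumptions chosen for clarity, and published work uses several related but non-identical indices.

# Minimal sketch (not from any cited study): compare the observed share of
# same-group ties with the share expected under random mixing ("baseline"
# homophily) and treat the surplus as "inbreeding" homophily. The network
# and group labels below are hypothetical.
from itertools import combinations

# Hypothetical friendship network: node -> group label ("A" is the majority group).
groups = {1: "A", 2: "A", 3: "A", 4: "A", 5: "A", 6: "B", 7: "B", 8: "B"}
edges = [(1, 2), (1, 3), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7), (7, 8)]

# Observed homophily: fraction of ties joining members of the same group.
same_ties = sum(1 for u, v in edges if groups[u] == groups[v])
observed = same_ties / len(edges)

# Baseline homophily: expected same-group share if ties formed at random,
# i.e. the proportion of all possible pairs that are same-group.
pairs = list(combinations(groups, 2))
baseline = sum(1 for u, v in pairs if groups[u] == groups[v]) / len(pairs)

# Inbreeding homophily: surplus over the baseline, rescaled so that 1 means
# complete in-group closure (one common normalization, not the only one).
inbreeding = (observed - baseline) / (1 - baseline)

print(f"observed same-group share : {observed:.2f}")
print(f"baseline (chance) share   : {baseline:.2f}")
print(f"inbreeding homophily index: {inbreeding:.2f}")

Here the observed same-group share (0.88) clearly exceeds the share expected under random mixing (about 0.46), so the index is positive; a value near zero would indicate mixing at roughly chance levels, which is the signature of purely baseline homophily.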
Personality type
In psychology, personality type refers to the psychological classification of individuals. In contrast to personality traits, the existence of personality types remains extremely controversial. Types are sometimes said to involve qualitative differences between people, whereas traits might be construed as quantitative differences. According to type theories, for example, introverts and extraverts are two fundamentally different categories of people. According to trait theories, introversion and extraversion are part of a continuous dimension, with many people in the middle. Clinically effective personality typologies Effective personality typologies reveal and increase knowledge and understanding of individuals, as opposed to diminishing knowledge and understanding as occurs in the case of stereotyping. Effective typologies also allow for increased ability to predict clinically relevant information about people and to develop effective treatment strategies. There is an extensive literature on the topic of classifying the various types of human temperament and an equally extensive literature on personality traits or domains. These classification systems attempt to describe normal temperament and personality and emphasize the predominant features of different temperament and personality types; they are largely the province of the discipline of psychology. Personality disorders, on the other hand, reflect the work of psychiatry, a medical specialty, and are disease-oriented. They are classified in the Diagnostic and Statistical Manual (DSM), a product of the American Psychiatric Association. Types vs. traits The term type has not been used consistently in psychology and has become the source of some confusion. Furthermore, because personality test scores usually fall on a bell curve rather than in distinct categories, personality type theories have received considerable criticism among psychometric researchers. One study that directly compared a "type" instrument (the MBTI) to a "trait" instrument (the NEO PI) found that the trait measure was a better predictor of personality disorders. Because of these problems, personality type theories have fallen out of favor in psychology. Most researchers now believe that it is impossible to explain the diversity of human personality with a small number of discrete types. They recommend trait models instead, such as the five-factor model. Type theories An early form of personality type indicator theory was the Four Temperaments system of Galen, based on the four humours model of Hippocrates; an extended five temperaments system based on the classical theory was published in 1958. One example of personality types is Type A and Type B personality theory. According to this theory, impatient, achievement-oriented people are classified as Type A, whereas easy-going, relaxed individuals are designated as Type B. The theory originally suggested that Type A individuals were more at risk for coronary heart disease, but this claim has not been supported by empirical research. One study suggests that people with Type A personalities are more likely to develop personality disorders whereas Type B personalities are more likely to become alcoholics. Developmental psychologist Jerome Kagan is a prominent advocate of type indicator theory. He suggests that shy, withdrawn children are best viewed as having an inhibited temperament, which is qualitatively different from that of other children. 
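The psychometric objection noted above, that test scores for a trait tend to fall on a bell curve rather than into well-separated clusters, can be illustrated with a small simulation. The sketch below is a hypothetical illustration rather than the procedure of the MBTI/NEO PI comparison mentioned earlier; the sample size, the median cut point, and the nearness thresholds are arbitrary assumptions.

# Illustrative sketch only (not the analysis of any study cited above): if a
# trait such as introversion-extraversion is bell-curve distributed, splitting
# it into two "types" at a cut point places many people right beside the
# boundary rather than into two separated clusters.
import random
import statistics

random.seed(42)

# Hypothetical standardized trait scores for 10,000 respondents.
scores = [random.gauss(0.0, 1.0) for _ in range(10_000)]

cut = statistics.median(scores)                        # the dichotomizing threshold
near_cut = [s for s in scores if abs(s - cut) < 0.25]  # barely on one side or the other
far_out = [s for s in scores if abs(s - cut) > 2.0]    # clear-cut, "pure" cases

print("classified as 'introvert':", sum(s < cut for s in scores))
print("classified as 'extravert':", sum(s >= cut for s in scores))
print("within 0.25 SD of the cut:", len(near_cut))   # large: the boundary is fuzzy
print("more than 2 SD from cut  :", len(far_out))    # small: few unambiguous 'types'

With a unimodal, bell-shaped distribution, roughly a fifth of the simulated respondents sit within a quarter of a standard deviation of the cut, so for many people the assigned "type" turns on a trivially small score difference, while very few cases lie far out in either tail. A genuine typology would instead predict a bimodal distribution with a sparse middle, which is the pattern of reasoning behind the recommendation of trait models such as the five-factor model.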
As a matter of convenience, trait theorists sometimes use the term type to describe someone who scores exceptionally high or low on a particular personality trait. Hans Eysenck refers to superordinate personality factors as types, and more specific associated traits as traits. Several pop psychology theories (e.g., Men Are From Mars, Women Are From Venus, the enneagram) rely on the idea of distinctively different types of people. Nancy McWilliams distinguishes eight psychoanalytic personalities: Psychopathic (Antisocial), Narcissistic, Schizoid, Paranoid, Depressive and Manic, Masochistic (Self-Defeating), Obsessive and Compulsive, Hysterical (Histrionic), and one Dissociative psychology. Carl Jung One of the more influential ideas originated in the theoretical work of Carl Jung as published in the book Psychological Types. The original German language edition, Psychologische Typen, was first published by Rascher Verlag, Zurich, in 1921. Typologies such as Socionics, the MBTI assessment, and the Keirsey Temperament Sorter have roots in Jungian theory. Jung's interest in typology grew from his desire to reconcile the theories of Sigmund Freud and Alfred Adler, and to define how his own perspective differed from theirs. Jung wrote, "In attempting to answer this question, I came across the problem of types; for it is one's psychological type which from the outset determines and limits a person's judgment." (Jung, [1961] 1989:207) He concluded that Freud's theory was extraverted and Adler's introverted. (Jung, [1921] 1971: par. 91) Jung became convinced that acrimony between the Adlerian and Freudian camps was due to this unrecognized existence of different fundamental psychological attitudes, which led Jung "to conceive the two controversial theories of neurosis as manifestations of a type-antagonism." (Jung, 1966: par. 64) Four functions of consciousness In the book Jung categorized people into primary types of psychological function. Jung proposed the existence of two dichotomous pairs of cognitive functions: The "rational" (judging) functions: thinking and feeling The "irrational" (perceiving) functions: sensation and intuition Jung went on to suggest that these functions are expressed in either an introverted or extraverted form. According to Jung, the psyche is an apparatus for adaptation and orientation, and consists of a number of different psychic functions. Among these he distinguishes four basic functions: sensation—perception by means of immediate apprehension of the visible relationship between subject and object intuition—perception of processes in the background; e.g. unconscious drives and/or motivations of other people thinking—function of intellectual cognition; the forming of logical conclusions feeling—function of subjective estimation, value oriented thinking Thinking and feeling functions are rational, while sensation and intuition are nonrational. According to Jung, rationality consists of figurative thoughts, feelings or actions with reason — a point of view based on a set of criteria and standards. Nonrationality is not based in reason. Jung notes that elementary facts are also nonrational, not because they are illogical but because, as thoughts, they are not judgments. Attitudes: extraversion and introversion Analytical psychology distinguishes several psychological types or temperaments. Extravert (Jung's spelling, although some dictionaries prefer the variant extrovert) Introvert Extraversion means "outward-turning" and introversion means "inward-turning". 
These specific definitions vary somewhat from the popular usage of the words. The preferences for extraversion and introversion are often called attitudes. Each of the cognitive functions can operate in the external world of behavior, action, people, and things (extraverted attitude) or the internal world of ideas and reflection (introverted attitude). People who prefer extraversion draw their energy toward objective, external data. They seek to experience and base their judgments on data from the outer world. Conversely, those who prefer introversion draw their energy toward subjective, internal data. They seek to experience and base their judgments on data from the inner world. The attitude type could be thought of as the flow of libido (psychic energy). The functions are modified by two main attitude types: extraversion and introversion. In any person, the degree of introversion or extraversion of one function can be quite different from that of another function. Four functions: sensation, intuition, thinking, feeling Jung identified two pairs of psychological functions: The two irrational (perception) functions, sensation and intuition The two rational (judgment) functions, thinking and feeling Sensation and intuition are irrational (perception) functions, meaning they gather information. They describe how information is received and experienced. Individuals who prefer sensation are more likely to trust information that is real, concrete, and actual, meaning they seek the information itself. They prefer to look for discernible details. For them, the meaning is in the data. On the other hand, those who prefer intuition tend to trust information that is envisioned or hypothetical, that can be associated with other possible information. They are more interested in hidden possibilities via the unconscious. The meaning is in how or what the information could be. Thinking and feeling are rational (judgment) functions, meaning they form judgments or make decisions. The thinking and feeling functions are both used to make rational decisions, based on the data received from their information-gathering functions (sensing or intuition). Those who prefer thinking tend to judge things from a more detached standpoint, measuring the decision by what is logical, causal, consistent, and functional. Those who prefer the feeling function tend to form judgments by evaluating the situation and deciding its worth. They measure the situation by what is pleasant or unpleasant, liked or disliked, harmonious or inharmonious, etc. As noted already, people who prefer the thinking function do not necessarily, in the everyday sense, "think better" than their feeling counterparts; the opposite preference is considered an equally rational way of coming to decisions (and, in any case, Jung's typology is a discernment of preference, not ability). Similarly, those who prefer the feeling function do not necessarily have "better" emotional reactions than their thinking counterparts. Dominant function All four functions are used at different times depending on the circumstances. However, one of the four functions is generally used more dominantly and proficiently than the other three, in a more conscious and confident way. According to Jung the dominant function is supported by two auxiliary functions. (In MBTI publications the first auxiliary is usually called the auxiliary or secondary function and the second auxiliary function is usually called the tertiary function.) 
The fourth and least conscious function is always the opposite of the dominant function. Jung called this the "inferior function" and Myers sometimes also called it the "shadow function". Jung's typological model regards psychological type as similar to left- or right-handedness: individuals are either born with, or develop, certain preferred ways of thinking and acting. These psychological differences are sorted into four opposite pairs, or dichotomies, with a resulting eight possible psychological types. People tend to find using their opposite psychological preferences more difficult, even if they can become more proficient (and therefore behaviorally flexible) with practice and development. The four functions operate in conjunction with the attitudes (extraversion and introversion). Each function is used in either an extraverted or introverted way. A person whose dominant function is extraverted intuition, for example, uses intuition very differently from someone whose dominant function is introverted intuition. The eight psychological types are as follows: Extraverted sensation Introverted sensation Extraverted intuition Introverted intuition Extraverted thinking Introverted thinking Extraverted feeling Introverted feeling Jung theorized that the dominant function characterizes consciousness, while its opposite is repressed and characterizes unconscious activity. Generally, we tend to favor our most developed dominant function, while we can broaden our personality by developing the others. Related to this, Jung noted that the unconscious often tends to reveal itself most easily through a person's least developed inferior function. The encounter with the unconscious and development of the underdeveloped functions thus tend to progress together. When the unconscious inferior functions fail to develop, imbalance results. In Psychological Types, Jung describes in detail the effects of tensions between the complexes associated with the dominant and inferior differentiating functions in highly one-sided individuals. Personality types and worrying The relationship between worry – the tendency of one's thoughts and mental images to revolve around and create negative emotions, and the experience of a frequent level of fear – and Jung's model of psychological types has been the subject of studies. In particular, correlational analysis has shown that the tendency to worry is significantly related to Jung's Introversion and Feeling dimensions. Similarly, worry has shown robust correlations with shyness and fear of social situations. The worrier's tendency to be fearful of social situations might make them appear more withdrawn. Jung's model suggests that the superordinate dimension of personality is introversion and extraversion. Introverts are likely to relate to the external world by listening, reflecting, being reserved, and having focused interests. Extraverts on the other hand, are adaptable and in tune with the external world. They prefer interacting with the outer world by talking, actively participating, being sociable, expressive, and having a variety of interests. Jung (1921) also identified two other dimensions of personality: Intuition - Sensing and Thinking - Feeling. Sensing types tend to focus on the reality of present situations, pay close attention to detail, and are concerned with practicalities. Intuitive types focus on envisioning a wide range of possibilities to a situation and favor ideas, concepts, and theories over data. 
Thinking types use objective and logical reasoning in making their decisions, are more likely to analyze stimuli in a logical and detached manner, be more emotionally stable, and score higher on intelligence. Feeling types make judgments based on subjective and personal values. In interpersonal decision-making, feeling types tend to emphasize compromise to ensure a beneficial solution for everyone. They also tend to be somewhat more neurotic than thinking types. The worrier's tendency to experience a fearful affect could be manifested in Jung's feeling type. See also General overview Personality Personality psychology Personality tests Psychological typologies Trait theory Trait leadership Three modern theories closely associated with Jung's personality types Keirsey Temperament Sorter Myers–Briggs Type Indicator Socionics Other theories 16 Personality Factors, or the Cattell personality test Attitudinal Psyche Big Five personality traits DISC assessment Enneagram of Personality Eysenck's three-factor model Eysenck Personality Questionnaire Five temperaments Four temperaments Fundamental interpersonal relations orientation Gretchen Rubin's four tendencies HEXACO model of personality structure Holland Codes Humorism Type A and Type B personality theory References Further reading Jung, C.G. ([1921] 1971). Psychological Types, Collected Works, Volume 6, Princeton, N.J.: Princeton University Press. Jung, C.G. (1966). Two Essays on Analytical Psychology, Collected Works, Volume 7, Princeton, N.J.: Princeton University Press. Jung, C.G. ([1961] 1989). Memories, Dreams, Reflections, New York, N.Y.: Vintage Books.
Developmental stage theories
In psychology, developmental stage theories are theories that divide psychological development into distinct stages which are characterized by qualitative differences in behavior. There are several different views about psychological and physical development and how they proceed throughout the life span. The two main psychological developmental theories include continuous and discontinuous development. In addition to individual differences in development, developmental psychologists generally agree that development occurs in an orderly way and in different areas simultaneously. Stage theories The development of the human mind is complex and a debated subject, and may take place in a continuous or discontinuous fashion. Continuous development, like the height of a child, is measurable and quantitative, while discontinuous development is qualitative, like hair or skin color, where those traits fall only under a few specific phenotypes. Continuous development involves gradual and ongoing changes throughout the life span, with behavior in the earlier stages of development providing the basis of skills and abilities required for the next stages. On the other hand, discontinuous development involves distinct and separate stages, with different kinds of behavior occurring in each stage. Stage theories of development rest on the assumption that development is a discontinuous process involving distinct stages which are characterized by qualitative differences in behavior. They also assume that the structure of the stage is not variable according to each individual; however, the time of each stage may vary individually. While some theories focus primarily on the healthy development of children, others propose stages that are characterized by a maturity rarely reached before old age. Ego-psychology The psychosexual stage theory created by Sigmund Freud (b.1856) consists of five distinct stages of psychosexual development that individuals will pass through for the duration of their lifespan. Four of these stages stretch from birth through puberty and the final stage continues throughout the remainder of life. Erik Erikson (b.1902) developed a psychosocial developmental theory, which was influenced by and built upon the work of Freud, and which includes four childhood and four adult stages of life that capture the essence of personality during each period of development. Each of Erikson's stages includes both positive and negative influences that can go on to be seen later in an individual's life. His theory includes the influence of biological factors on development. Jane Loevinger (b.1918) built on the work of Erikson in her description of stages of ego development. Individuation and attachment in ego-psychology Margaret Mahler's (b.1897) theory of separation-individuation in child development contains three phases regarding the child's object relations. John Bowlby's (b.1907) attachment theory proposes that developmental needs and attachment in children are connected to particular people, places, and objects throughout our lives. These connections give rise to behavior in the young child that is heavily relied on and remains influential throughout the entire lifespan. In case of maternal deprivation, this development may be disturbed. Robert Kegan (b.1946) provided a theory of the evolving self, which describes the constructive development theory of subject–object relations. 
Cognitive and moral development Cognitive development Piaget's cognitive development theory Jean Piaget's cognitive developmental theory describes four major stages from birth through puberty, the last of which starts at 12 years and has no terminating age: Sensorimotor: (birth to 2 years), Preoperations: (2 to 7 years), Concrete operations: (7 to 11 years), and Formal Operations: (from 12 years). Each stage has at least two substages, usually called early and fully. Piaget's theory is a structural stage theory, which implies that: Each stage is qualitatively different; it is a change in nature, not just quantity; Each stage lays the foundation for the next; Everyone goes through the stages in the same order. Neo-Piagetian theories Neo-Piagetian theories criticize and build on Piaget's work. Juan Pascual-Leone was the first to propose a neo-Piagetian stage theory. Since that time several neo-Piagetian theories of cognitive development have been proposed. These include the theories of Robbie Case, Graeme Halford, Andreas Demetriou and Kurt W. Fischer. Michael Commons' model of hierarchical complexity is also relevant. The description of stages in these theories is more elaborate and focuses on underlying mechanisms of information processing rather than on reasoning as such. In fact, development in information processing capacity is invoked to explain the development of reasoning. More stages are described (as many as 15 stages), with 4 being added beyond the stage of Formal operations. Most stage sequences map onto one another. Post-Piagetian stages are free of content and context and are therefore very general. Other related theories Lawrence Kohlberg (b.1927), in his stages of moral development, described how individuals develop moral reasoning. Kohlberg agreed with Piaget's theory of moral development that moral understanding is linked to cognitive development. His three levels were categorized as: preconventional, conventional, and postconventional, all of which have two sub-stages. James W. Fowler (b.1940), in his stages of faith development theory, builds on both Piaget's and Kohlberg's schemes. Learning and education Maria Montessori (b.1871) described a number of stages in her educational philosophy. Albert Bandura (b.1925), in his social learning theory, emphasizes the child's experiential learning from the environment. Spirituality and consultancy Inspired by Theosophy, Rudolf Steiner (b.1861) developed a stage theory based on seven-year life phases. Three childhood phases (conception to 21 years) are followed by three stages of development of the ego (21–42 years), concluding with three stages of spiritual development (42–63). The theory is applied in Waldorf education. Clare W. Graves (b.1914) developed an emergent cyclical levels of existence theory. It was popularized by Don Beck (b.1937) and Chris Cowan as spiral dynamics, and mainly applied in consultancy. Ken Wilber (b.1949) integrated Spiral Dynamics in his integral theory, which also includes psychological stages of development as described by Jean Piaget and Jane Loevinger, the spiritual models of Sri Aurobindo and Rudolf Steiner, and Jean Gebser's theory of mutations of consciousness in human history. Other theories Lev Vygotsky (b.1896) developed several theories, particularly the zone of proximal development. Other theories are not exactly developmental stage theories, but do incorporate a hierarchy of psychological factors and elements. Abraham Maslow (b.1908) described a hierarchy of needs. 
James Marcia (b.1937) developed a theory of identity achievement and identity status.
Affect theory
Affect theory is a theory that seeks to organize affects, sometimes used interchangeably with emotions or subjectively experienced feelings, into discrete categories and to typify their physiological, social, interpersonal, and internalized manifestations. The conversation about affect theory has been taken up in psychology, psychoanalysis, neuroscience, medicine, interpersonal communication, literary theory, critical theory, media studies, and gender studies, among other fields. Hence, affect theory is defined in different ways, depending on the discipline. Affect theory is originally attributed to the psychologist Silvan Tomkins, introduced in the first two volumes of his book Affect Imagery Consciousness (1962). Tomkins uses the concept of affect to refer to the "biological portion of emotion," defined as the "hard-wired, preprogrammed, genetically transmitted mechanisms that exist in each of us," which, when triggered, precipitate a "known pattern of biological events". However, it is also acknowledged that, in adults, the affective experience is a result of interactions between the innate mechanism and a "complex matrix of nested and interacting ideo-affective formations." Affect theory in psychology Silvan Tomkins's nine affects According to the psychologist Silvan Tomkins, there are nine primary affects. Tomkins characterized affects by low/high intensity labels and by their physiological expression: Positive: Enjoyment/Joy (reaction to success/impulse to share) – smiling, lips wide and out Interest/Excitement (reaction to new situation/impulse to attend) – eyebrows down, eyes tracking, eyes looking, closer listening Neutral: Surprise/Startle (reaction to sudden change/resets impulses) – eyebrows up, eyes blinking Negative: Anger/Rage (reaction to threat/impulse to attack) – frowning, a clenched jaw, a red face Disgust (reaction to bad taste/impulse to discard) – the lower lip raised and protruded, head forward and down Dissmell (reaction to bad smell/impulse to avoid – similar to distaste) – upper lip raised, head pulled back Distress/Anguish (reaction to loss/impulse to mourn) – crying, rhythmic sobbing, arched eyebrows, mouth lowered Fear/Terror (reaction to danger/impulse to run or hide) – a frozen stare, a pale face, coldness, sweat, erect hair Shame/Humiliation (reaction to failure/impulse to review behaviour) – eyes lowered, the head down and averted, blushing Prescriptive applications According to Tomkins, optimal mental health involves maximizing positive affects and minimizing negative affects. Affect should also be properly expressed so as to make the identification of affect possible to others. Affect theory is also used prescriptively in investigations about intimacy and intimate relationships. Kelly describes relationships as agreements to work collaboratively toward maximizing positive affect and minimizing negative affect. Like the "optimal mental health" blueprint, this blueprint requires that members of the relationship express affect to one another in order to identify progress. These blueprints can also describe natural and implicit goals. For example, Donald Nathanson uses affect to create a narrative for one of his patients: I suspect that the reason he refuses to watch movies is the sturdy fear of enmeshment in the affect depicted on the screen; the affect mutualization for which most of us frequent the movie theater is only another source of discomfort for him. ... 
His refusal to risk the range of positive and negative affect associated with sexuality robs any possible relationship of one of its best opportunities to work on the first two rules of either the Kelly or the Tomkins blueprint. Thus, his problems with intimacy may be understood in one aspect as an overly substantial empathic wall, and in another aspect as a purely internal problem with the expression and management of his own affect. Tomkins claims that "Christianity became a powerful universal religion in part because of its more general solution to the problem of anger, violence, and suffering versus love, enjoyment, and peace." Affect theory is also referenced heavily in Tomkins's script theory. Attempts to typify affects in psychology Humor is a subject of debate in affect theory. In studies of humor's physiological manifestations, humor provokes highly characteristic facial expressions. Some research has shown evidence that humor may be a response to a conflict between negative and positive affects, such as fear and enjoyment, which results in spasmodic contractions of parts of the body, mainly in the stomach and diaphragm area, as well as contractions in the upper cheek muscles. Further affects that seem to be missing from Tomkins's taxonomy include relief, resignation, and confusion, among many others. The affect joy is observed through the display of smiling. These affects can be identified through immediate facial reactions that people have to a stimulus, typically well before they could process any real response to the stimulus. The findings from a study on negative affect arousal and white noise by Stanley S. Seidner "support the existence of a negative affect arousal mechanism through observations regarding the devaluation of speakers from other Spanish ethnic origins". Critical theory Affect theory is explored in philosophy, psychoanalytic theory, gender studies, and art theory. Eve Sedgwick and Lauren Berlant have been called "affect theorists" who write from critical theory perspectives. Many other critical theorists have relied heavily on affect theory, including Elizabeth Povinelli. 
Affect theory is drawn on by Marxist autonomists including Franco Berardi, Michael Hardt and Antonio Negri, as well as by Marxist feminists such as Selma James and Silvia Federici, who consider the cognitive and material manifestations of particularized gendered, performed roles including caregiving. Critical theorist Sara Ahmed describes affect as "sticky" in her essay "Happy Objects" to explain the sustained connection between "ideas, values, and objects." In line with these theorists, many scholars identify the role of affect in shaping social values, gender ideals, and collective groups. Affect is seen as instrumental for events and symbols that produce shared identities, and is therefore central in contemporary politics. Affect is also treated as central in capitalist systems, including people's attachment to commodities and "dreams" of class mobility. In addition, the non-discursive and non-deliberative attributes of affect may produce social interactions and experiences that are non-reducible to specific endpoints, and at times may allow people to experience new modes of existence separated from their main life goals. Scholars who explored affect theory as an approach to art include Ruth Leys and Charles Altieri. In "The Turn to Affect", Leys explained how the shift to the "neuroscience of emotions" based on affect theory has the deleterious effect of equating precognitive, nonrational responses with critical and reflective insights. She maintained that there are no precognitive insights, nothing that acts as inhuman, presubjective, visceral forces and intensities shaping our thoughts and judgments. Affect theory is part of Altieri's critique of contemporary literary criticism, which he believes is obsessed with historical and socio-political critiques. For him, this focus leads to "over-readings" of meaning. Instead, he focused on affect in relation to aesthetic experience. In his conceptualization, Altieri used the term "rapture" to explain the aesthetics of affect. He also drew from cognitive and neuroscience studies to distinguish "affect" or "feeling" from "emotion". Interpersonal communication The nonverbal communication of affect is held to play a central role in intimate relationships. The Emotional Safety model of couples therapy seeks to identify the affective messages that occur within the couple's emotional relationship (the partners' feelings about themselves, each other, and their relationship); most importantly, messages regarding (a) the security of the attachment and (b) how each individual is valued. One practical application of affect theory has been its incorporation into couples therapy. Two characteristics of affects have powerful implications for intimate relationships: According to Tomkins, a central characteristic of affects is affective resonance, which refers to a person's tendency to resonate and experience the same affect in response to viewing a display of that affect by another person, sometimes thought to be "contagion". Affective resonance is considered to be the original basis for all human communication (before there were words, there was a smile and a nod). Also according to Tomkins, affects provide a sense of urgency to the less powerful drives. Thus, affects are powerful sources of motivation. In Tomkins' words, affects make good things better and bad things worse. Criticism Some scholars have taken issue with the claims and methodologies of affect theorists. 
Ruth Leys has objected to affect theory's implications for artistic and literary criticism, as well as to its appropriation in some forms of trauma theory. Aubrey Anable has also criticised affect theory for its imprecision, claiming that its "language of intensity, becoming, and in-betweenness and its emphasis on the unpresentable give it a maddening incoherence, or shade too easily into purely subjective responses to the world". Jason Josephson Storm, a professor of religious studies, argued that affect theory in the humanities has failed to distinguish itself from poststructuralism and ignores empirical evidence that affects are culturally constructed. See also Selective exposure theory Mood management theory Affect consciousness
Metaphysics
Metaphysics is the branch of philosophy that examines the basic structure of reality. It is traditionally seen as the study of mind-independent features of the world, but some modern theorists view it as an inquiry into the fundamental categories of human understanding. It is sometimes characterized as first philosophy to suggest that it is more fundamental than other forms of philosophical inquiry. Metaphysics encompasses a wide range of general and abstract topics. It investigates the nature of existence, the features all entities have in common, and their division into categories of being. An influential division is between particulars and universals. Particulars are individual unique entities, like a specific apple. Universals are general repeatable entities that characterize particulars, like the color red. Modal metaphysics examines what it means for something to be possible or necessary. Metaphysicians also explore the concepts of space, time, and change, and their connection to causality and the laws of nature. Other topics include how mind and matter are related, whether everything in the world is predetermined, and whether there is free will. Metaphysicians use various methods to conduct their inquiry. Traditionally, they rely on rational intuitions and abstract reasoning but have more recently also included empirical approaches associated with scientific theories. Due to the abstract nature of its topic, metaphysics has received criticisms questioning the reliability of its methods and the meaningfulness of its theories. Metaphysics is relevant to many fields of inquiry that often implicitly rely on metaphysical concepts and assumptions. The roots of metaphysics lie in antiquity with speculations about the nature and origin of the universe, like those found in the Upanishads in ancient India, Daoism in ancient China, and pre-Socratic philosophy in ancient Greece. During the subsequent medieval period in the West, discussions about the nature of universals were influenced by the philosophies of Plato and Aristotle. The modern period saw the emergence of various comprehensive systems of metaphysics, many of which embraced idealism. In the 20th century, a "revolt against idealism" began, metaphysics was for a time declared meaningless, and it was later revived with various criticisms of earlier theories and new approaches to metaphysical inquiry. Definition Metaphysics is the study of the most general features of reality, including existence, objects and their properties, possibility and necessity, space and time, change, causation, and the relation between matter and mind. It is one of the oldest branches of philosophy. The precise nature of metaphysics is disputed and its characterization has changed in the course of history. Some approaches see metaphysics as a unified field and give a wide-sweeping definition by understanding it as the study of "fundamental questions about the nature of reality" or as an inquiry into the essences of things. Another approach doubts that the different areas of metaphysics share a set of underlying features and provides instead a fine-grained characterization by listing all the main topics investigated by metaphysicians. Some definitions are descriptive by providing an account of what metaphysicians do while others are normative and prescribe what metaphysicians ought to do. 
Two historically influential definitions in ancient and medieval philosophy understand metaphysics as the science of the first causes and as the study of being qua being, that is, the topic of what all beings have in common and to what fundamental categories they belong. In the modern period, the scope of metaphysics expanded to include topics such as the distinction between mind and body and free will. Some philosophers follow Aristotle in describing metaphysics as "first philosophy", suggesting that it is the most basic inquiry upon which all other branches of philosophy depend in some way. Metaphysics is traditionally understood as a study of mind-independent features of reality. Starting with Immanuel Kant's critical philosophy, an alternative conception gained prominence that focuses on conceptual schemes rather than external reality. Kant distinguishes transcendent metaphysics, which aims to describe the objective features of reality beyond sense experience, from critical metaphysics, which outlines the aspects and principles underlying all human thought and experience. Philosopher P. F. Strawson further explored the role of conceptual schemes, contrasting descriptive metaphysics, which articulates conceptual schemes commonly used to understand the world, with revisionary metaphysics, which aims to produce better conceptual schemes. Metaphysics differs from the individual sciences by studying the most general and abstract aspects of reality. The individual sciences, by contrast, examine more specific and concrete features and restrict themselves to certain classes of entities, such as the focus on physical things in physics, living entities in biology, and cultures in anthropology. It is disputed to what extent this contrast is a strict dichotomy rather than a gradual continuum. Etymology The word metaphysics has its origin in the ancient Greek words metá (μετά, meaning "after", "beyond", and "among") and phusiká (φυσικά), as a short form of ta metá ta phusiká, meaning "what comes after physics". This is often interpreted to mean that metaphysics discusses topics that, due to their generality and comprehensiveness, lie beyond the realm of physics and its focus on empirical observation. Metaphysics got its name by a historical accident when Aristotle's book on this subject was published. Aristotle did not use the term metaphysics but his editor (likely Andronicus of Rhodes) may have coined it for its title to indicate that this book should be studied after Aristotle's book on physics: literally, after physics. The term entered the English language through the Latin word metaphysica. Branches The nature of metaphysics can also be characterized in relation to its main branches. An influential division from early modern philosophy distinguishes between general and special or specific metaphysics. General metaphysics, also called ontology, takes the widest perspective and studies the most fundamental aspects of being. It investigates the features that all entities share and how entities can be divided into different categories. Categories are the most general kinds, such as substance, property, relation, and fact. Ontologists research which categories there are, how they depend on one another, and how they form a system of categories that provides a comprehensive classification of all entities. Special metaphysics considers being from more narrow perspectives and is divided into subdisciplines based on the perspective they take.
Metaphysical cosmology examines changeable things and investigates how they are connected to form a world as a totality extending through space and time. Rational psychology focuses on metaphysical foundations and problems concerning the mind, such as its relation to matter and the freedom of the will. Natural theology studies the divine and its role as the first cause. The scope of special metaphysics overlaps with other philosophical disciplines, making it unclear whether a topic belongs to it or to areas like philosophy of mind and theology. Applied metaphysics is a relatively young subdiscipline. It belongs to applied philosophy and studies the applications of metaphysics, both within philosophy and other fields of inquiry. In areas like ethics and philosophy of religion, it addresses topics like the ontological foundations of moral claims and religious doctrines. Beyond philosophy, its applications include the use of ontologies in artificial intelligence, economics, and sociology to classify entities. In psychiatry and medicine, it examines the metaphysical status of diseases. Meta-metaphysics is the metatheory of metaphysics and investigates the nature and methods of metaphysics. It examines how metaphysics differs from other philosophical and scientific disciplines and assesses its relevance to them. Even though discussions of these topics have a long history in metaphysics, meta-metaphysics has only recently developed into a systematic field of inquiry. Topics Existence and categories of being Metaphysicians often regard existence or being as one of the most basic and general concepts. To exist means to form part of reality, distinguishing real entities from imaginary ones. According to the orthodox view, existence is a property of properties: if an entity exists then its properties are instantiated. A different position states that existence is a property of individuals, meaning that it is similar to other properties, such as shape or size. It is controversial whether all entities have this property. According to Alexius Meinong, there are nonexistent objects, including merely possible objects like Santa Claus and Pegasus. A related question is whether existence is the same for all entities or whether there are different modes or degrees of existence. For instance, Plato held that Platonic forms, which are perfect and immutable ideas, have a higher degree of existence than matter, which can only imperfectly reflect Platonic forms. Another key concern in metaphysics is the division of entities into distinct groups based on underlying features they share. Theories of categories provide a system of the most fundamental kinds or the highest genera of being by establishing a comprehensive inventory of everything. One of the earliest theories of categories was proposed by Aristotle, who outlined a system of 10 categories. He argued that substances (e.g. man and horse), are the most important category since all other categories like quantity (e.g. four), quality (e.g. white), and place (e.g. in Athens) are said of substances and depend on them. Kant understood categories as fundamental principles underlying human understanding and developed a system of 12 categories, divided into the four classes quantity, quality, relation, and modality. More recent theories of categories were proposed by C. S. Peirce, Edmund Husserl, Samuel Alexander, Roderick Chisholm, and E. J. Lowe. Many philosophers rely on the contrast between concrete and abstract objects. 
According to a common view, concrete objects, like rocks, trees, and human beings, exist in space and time, undergo changes, and impact each other as cause and effect, whereas abstract objects, like numbers and sets, exist outside space and time, are immutable, and do not engage in causal relations. Particulars Particulars are individual entities and include both concrete objects, like Aristotle, the Eiffel Tower, or a specific apple, and abstract objects, like the number 2 or a specific set in mathematics. Also called individuals, they are unique, non-repeatable entities and contrast with universals, like the color red, which can at the same time exist in several places and characterize several particulars. A widely held view is that particulars instantiate universals but are not themselves instantiated by something else, meaning that they exist in themselves while universals exist in something else. Substratum theory analyzes each particular as a substratum, also called bare particular, together with various properties. The substratum confers individuality to the particular while the properties express its qualitative features or what it is like. This approach is rejected by bundle theorists, who state that particulars are only bundles of properties without an underlying substratum. Some bundle theorists include in the bundle an individual essence, called haecceity, to ensure that each bundle is unique. Another proposal for concrete particulars is that they are individuated by their space-time location. Concrete particulars encountered in everyday life, like rocks, tables, and organisms, are complex entities composed of various parts. For example, a table is made up of a tabletop and legs, each of which is itself made up of countless particles. The relation between parts and wholes is studied by mereology. The problem of the many is about which groups of entities form mereological wholes, for instance, whether a dust particle on the tabletop is part of the table. According to mereological universalists, every collection of entities forms a whole, meaning that the parts of the table without the dust particle form one whole while they together with it form a second whole. Mereological moderatists hold that certain conditions must be met for a group of entities to compose a whole, for example, that the entities touch one another. Mereological nihilists reject the idea of wholes altogether, claiming that there are no tables and chairs but only particles that are arranged table-wise and chair-wise. A related mereological problem is whether there are simple entities that have no parts, as atomists claim, or not, as continuum theorists contend. Universals Universals are general entities, encompassing both properties and relations, that express what particulars are like and how they resemble one another. They are repeatable, meaning that they are not limited to a unique existent but can be instantiated by different particulars at the same time. For example, the particulars Nelson Mandela and Mahatma Gandhi instantiate the universal humanity, similar to how a strawberry and a ruby instantiate the universal red. A topic discussed since ancient philosophy, the problem of universals consists in the challenge of characterizing the ontological status of universals. Realists argue that universals are real, mind-independent entities that exist in addition to particulars. 
According to Platonic realists, universals exist independently of particulars, which implies that the universal red would continue to exist even if there were no red things. A more moderate form of realism, inspired by Aristotle, states that universals depend on particulars, meaning that they are only real if they are instantiated. Nominalists reject the idea that universals exist in either form. For them, the world is composed exclusively of particulars. Conceptualists offer an intermediate position, stating that universals exist, but only as concepts in the mind used to order experience by classifying entities. Natural and social kinds are often understood as special types of universals. Entities belonging to the same natural kind share certain fundamental features characteristic of the structure of the natural world. In this regard, natural kinds are not an artificially constructed classification but are discovered, usually by the natural sciences, and include kinds like electrons, , and tigers. Scientific realists and anti-realists disagree about whether natural kinds exist. Social kinds, like money and baseball, are studied by social metaphysics and characterized as useful social constructions that, while not purely fictional, do not reflect the fundamental structure of mind-independent reality. Possibility and necessity The concepts of possibility and necessity convey what can or must be the case, expressed in statements like "it is possible to find a cure for cancer" and "it is necessary that two plus two equals four". They belong to modal metaphysics, which investigates the metaphysical principles underlying them, in particular, why some modal statements are true while others are false. Some metaphysicians hold that modality is a fundamental aspect of reality, meaning that besides facts about what is the case, there are additional facts about what could or must be the case. A different view argues that modal truths are not about an independent aspect of reality but can be reduced to non-modal characteristics, for example, to facts about what properties or linguistic descriptions are compatible with each other or to fictional statements. Borrowing a term from German philosopher Gottfried Wilhelm Leibniz's theodicy, many metaphysicians use the concept of possible worlds to analyze the meaning and ontological ramifications of modal statements. A possible world is a complete and consistent way of how things could have been. For example, the dinosaurs were wiped out in the actual world but there are possible worlds in which they are still alive. According to possible world semantics, a statement is possibly true if it is true in at least one possible world, whereas it is necessarily true if it is true in all possible worlds. Modal realists argue that possible worlds exist as concrete entities in the same sense as the actual world, with the main difference being that the actual world is the world we live in while other possible worlds are inhabited by counterparts. This view is controversial and various alternatives have been suggested, for example, that possible worlds only exist as abstract objects or are similar to stories told in works of fiction. Space, time, and change Space and time are dimensions that entities occupy. Spacetime realists state that space and time are fundamental aspects of reality and exist independently of the human mind. Spacetime idealists, by contrast, hold that space and time are constructs of the human mind, created to organize and make sense of reality. 
Spacetime absolutism or substantivalism understands spacetime as a distinct object, with some metaphysicians conceptualizing it as a container that holds all other entities within it. Spacetime relationism sees spacetime not as an object but as a network of relations between objects, such as the spatial relation of being next to and the temporal relation of coming before. In the metaphysics of time, an important contrast is between the A-series and the B-series. According to the A-series theory, the flow of time is real, meaning that events are categorized into the past, present, and future. The present continually moves forward in time and events that are in the present now will eventually change their status and lie in the past. From the perspective of the B-series theory, time is static, and events are ordered by the temporal relations earlier-than and later-than without any essential difference between past, present, and future. Eternalism holds that past, present, and future are equally real, whereas presentism asserts that only entities in the present exist. Material objects persist through time and change in the process, like a tree that grows or loses leaves. The main ways of conceptualizing persistence through time are endurantism and perdurantism. According to endurantism, material objects are three-dimensional entities that are wholly present at each moment. As they change, they gain or lose properties but otherwise remain the same. Perdurantists see material objects as four-dimensional entities that extend through time and are made up of different temporal parts. At each moment, only one part of the object is present, not the object as a whole. Change means that an earlier part is qualitatively different from a later part. For example, when a banana ripens, there is an unripe part followed by a ripe part. Causality Causality is the relation between cause and effect whereby one entity produces or affects another entity. For instance, if a person bumps a glass and spills its contents then the bump is the cause and the spill is the effect. Besides the single-case causation between particulars in this example, there is also general-case causation expressed in statements such as "smoking causes cancer". The term agent causation is used when people and their actions cause something. Causation is usually interpreted deterministically, meaning that a cause always brings about its effect. This view is rejected by probabilistic theories, which claim that the cause merely increases the probability that the effect occurs. This view can explain that smoking causes cancer even though this does not happen in every single case. The regularity theory of causation, inspired by David Hume's philosophy, states that causation is nothing but a constant conjunction in which the mind apprehends that one phenomenon, like putting one's hand in a fire, is always followed by another phenomenon, like a feeling of pain. According to nomic regularity theories, regularities manifest as laws of nature studied by science. Counterfactual theories focus not on regularities but on how effects depend on their causes. They state that effects owe their existence to the cause and would not occur without them. According to primitivism, causation is a basic concept that cannot be analyzed in terms of non-causal concepts, such as regularities or dependence relations. One form of primitivism identifies causal powers inherent in entities as the underlying mechanism. 
Eliminativists reject the above theories by holding that there is no causation. Mind and free will Mind encompasses phenomena like thinking, perceiving, feeling, and desiring as well as the underlying faculties responsible for these phenomena. The mind–body problem is the challenge of clarifying the relation between physical and mental phenomena. According to Cartesian dualism, minds and bodies are distinct substances. They causally interact with each other in various ways but can, at least in principle, exist on their own. This view is rejected by monists, who argue that reality is made up of only one kind. According to idealism, everything is mental, including physical objects, which may be understood as ideas or perceptions of conscious minds. Materialists, by contrast, state that all reality is at its core material. Some deny that mind exists but the more common approach is to explain mind in terms of certain aspects of matter, such as brain states, behavioral dispositions, or functional roles. Neutral monists argue that reality is fundamentally neither material nor mental and suggest that matter and mind are both derivative phenomena. A key aspect of the mind–body problem is the hard problem of consciousness or how to explain that physical systems like brains can produce phenomenal consciousness. The status of free will as the ability of a person to choose their actions is a central aspect of the mind–body problem. Metaphysicians are interested in the relation between free will and causal determinism, the view that everything in the universe, including human behavior, is determined by preceding events and laws of nature. It is controversial whether causal determinism is true, and, if so, whether this would imply that there is no free will. According to incompatibilism, free will cannot exist in a deterministic world since there is no true choice or control if everything is determined. Hard determinists infer from this that there is no free will, whereas libertarians conclude that determinism must be false. Compatibilists offer a third perspective, arguing that determinism and free will do not exclude each other, for instance, because a person can still act in tune with their motivation and choices even if they are determined by other forces. Free will plays a key role in ethics regarding the moral responsibility people have for what they do. Others Identity is a relation that every entity has to itself as a form of sameness. It refers to numerical identity when the very same entity is involved, as in the statement "the morning star is the evening star" (both are the planet Venus). In a slightly different sense, it encompasses qualitative identity, also called exact similarity and indiscernibility, which occurs when two distinct entities are exactly alike, such as perfect identical twins. The principle of the indiscernibility of identicals is widely accepted and holds that numerically identical entities exactly resemble one another. The converse principle, known as identity of indiscernibles or Leibniz's Law, is more controversial and states that two entities are numerically identical if they exactly resemble one another. Another distinction is between synchronic and diachronic identity. Synchronic identity relates an entity to itself at the same time, whereas diachronic identity is about the same entity at different times, as in statements like "the table I bought last year is the same as the table in my dining room now".
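The two identity principles just mentioned can be made concrete with a minimal Python sketch. It deliberately simplifies by treating each entity as a finite set of its properties; the entities, property labels, and the indiscernible helper are invented for illustration and are not drawn from the metaphysical literature.

# Toy model: an entity is represented by a frozenset of its properties.
# This is a schematic illustration of the two principles, not a claim
# about how properties or identity actually work.

def indiscernible(props_a, props_b):
    """Exact similarity: the two property sets coincide."""
    return props_a == props_b

# One entity under two descriptions (both are the planet Venus).
morning_star = frozenset({"is a planet", "is Venus", "visible at dawn", "visible at dusk"})
evening_star = frozenset({"is a planet", "is Venus", "visible at dawn", "visible at dusk"})

# Indiscernibility of identicals (widely accepted): numerically identical
# entities must share all properties, so a differing property would show
# that two things are distinct.
assert indiscernible(morning_star, evening_star)

# Identity of indiscernibles (controversial): infers numerical identity from
# exact resemblance. Perfect twins are the usual worry: two distinct entities
# whose qualitative property sets coincide in this toy representation.
twin_a = frozenset({"human", "blue eyes", "1.8 m tall"})
twin_b = frozenset({"human", "blue eyes", "1.8 m tall"})
print(indiscernible(twin_a, twin_b))  # True, yet the twins are two entities

The point of the sketch is only that the two principles run in opposite directions: one infers resemblance from identity, the other infers identity from resemblance.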
Personal identity is a related topic in metaphysics that uses the term identity in a slightly different sense and concerns questions like what personhood is or what makes someone a person. Various contemporary metaphysicians rely on the concepts of truth, truth-bearer, and truthmaker to conduct their inquiry. Truth is a property of being in accord with reality. Truth-bearers are entities that can be true or false, such as linguistic statements and mental representations. A truthmaker of a statement is the entity whose existence makes the statement true. For example, the statement "a tomato is red" is true because there exists a red tomato as its truthmaker. Based on this observation, it is possible to pursue metaphysical research by asking what the truthmakers of statements are, with different areas of metaphysics being dedicated to different types of statements. According to this view, modal metaphysics asks what makes statements about what is possible and necessary true while the metaphysics of time is interested in the truthmakers of temporal statements about the past, present, and future. Methodology Metaphysicians employ a variety of methods to develop metaphysical theories and formulate arguments for and against them. Traditionally, a priori methods have been the dominant approach. They rely on rational intuition and abstract reasoning from general principles rather than sensory experience. A posteriori approaches, by contrast, ground metaphysical theories in empirical observations and scientific theories. Some metaphysicians incorporate perspectives from fields such as physics, psychology, linguistics, and history into their inquiry. The two approaches are not mutually exclusive: it is possible to combine elements from both. The method a metaphysician chooses often depends on their understanding of the nature of metaphysics, for example, whether they see it as an inquiry into the mind-independent structure of reality, as metaphysical realists claim, or the principles underlying thought and experience, as some metaphysical anti-realists contend. A priori approaches often rely on intuitions, that is, non-inferential impressions about the correctness of specific claims or general principles. For example, arguments for the A-theory of time, which states that time flows from the past through the present and into the future, often rely on pre-theoretical intuitions associated with the sense of the passage of time. Some approaches use intuitions to establish a small set of self-evident fundamental principles, known as axioms, and employ deductive reasoning to build complex metaphysical systems by drawing conclusions from these axioms. Intuition-based approaches can be combined with thought experiments, which help evoke and clarify intuitions by linking them to imagined situations. They use counterfactual thinking to assess the possible consequences of these situations. For example, to explore the relation between matter and consciousness, some theorists compare humans to philosophical zombies, hypothetical creatures identical to humans but without conscious experience. A related method relies on commonly accepted beliefs instead of intuitions to formulate arguments and theories. The common-sense approach is often used to criticize metaphysical theories that deviate significantly from how the average person thinks about an issue. For example, common-sense philosophers have argued that mereological nihilism is false since it implies that commonly accepted things, like tables, do not exist.
Conceptual analysis, a method particularly prominent in analytic philosophy, aims to decompose metaphysical concepts into component parts to clarify their meaning and identify essential relations. In phenomenology, the method of eidetic variation is used to investigate essential structures underlying phenomena. This method involves imagining an object and varying its features to determine which ones are essential and cannot be changed. The transcendental method is a further approach and examines the metaphysical structure of reality by observing what entities there are and studying the conditions of possibility without which these entities could not exist. Some approaches give less importance to a priori reasoning and view metaphysics as a practice continuous with the empirical sciences that generalizes their insights while making their underlying assumptions explicit. This approach is known as naturalized metaphysics and is closely associated with the work of Willard Van Orman Quine. He relies on the idea that true sentences from the sciences and other fields have ontological commitments, that is, they imply that certain entities exist. For example, if the sentence "some electrons are bonded to protons" is true then it can be used to justify that electrons and protons exist. Quine used this insight to argue that one can learn about metaphysics by closely analyzing scientific claims to understand what kind of metaphysical picture of the world they presuppose. In addition to methods of conducting metaphysical inquiry, there are various methodological principles used to decide between competing theories by comparing their theoretical virtues. Ockham's Razor is a well-known principle that gives preference to simple theories, in particular, those that assume that few entities exist. Other principles consider explanatory power, theoretical usefulness, and proximity to established beliefs. Criticism Despite its status as one of the main branches of philosophy, metaphysics has received numerous criticisms questioning its legitimacy as a field of inquiry. One criticism argues that metaphysical inquiry is impossible because humans lack the cognitive capacities needed to access the ultimate nature of reality. This line of thought leads to skepticism about the possibility of metaphysical knowledge. Empiricists often follow this idea, like Hume, who argued that there is no good source of metaphysical knowledge since metaphysics lies outside the field of empirical knowledge and relies on dubious intuitions about the realm beyond sensory experience. A related argument favoring the unreliability of metaphysical theorizing points to the deep and lasting disagreements about metaphysical issues, suggesting a lack of overall progress. Another criticism holds that the problem lies not with human cognitive abilities but with metaphysical statements themselves, which some claim are neither true nor false but meaningless. According to logical positivists, for instance, the meaning of a statement is given by the procedure used to verify it, usually through the observations that would confirm it. Based on this controversial assumption, they argue that metaphysical statements are meaningless since they make no testable predictions about experience. A slightly weaker position allows metaphysical statements to have meaning while holding that metaphysical disagreements are merely verbal disputes about different ways to describe the world. 
According to this view, the disagreement in the metaphysics of composition about whether there are tables or only particles arranged table-wise is a trivial debate about linguistic preferences without any substantive consequences for the nature of reality. The position that metaphysical disputes have no meaning or no significant point is called metaphysical or ontological deflationism. This view is opposed by so-called serious metaphysicians, who contend that metaphysical disputes are about substantial features of the underlying structure of reality. A closely related debate between ontological realists and anti-realists concerns the question of whether there are any objective facts that determine which metaphysical theories are true. A different criticism, formulated by pragmatists, sees the fault of metaphysics not in its cognitive ambitions or the meaninglessness of its statements, but in its practical irrelevance and lack of usefulness. Martin Heidegger criticized traditional metaphysics, saying that it fails to distinguish between individual entities and being as their ontological ground. His attempt to reveal the underlying assumptions and limitations in the history of metaphysics to "overcome metaphysics" influenced Jacques Derrida's method of deconstruction. Derrida employed this approach to criticize metaphysical texts for relying on opposing terms, like presence and absence, which he thought were inherently unstable and contradictory. There is no consensus about the validity of these criticisms and whether they affect metaphysics as a whole or only certain issues or approaches in it. For example, it could be the case that certain metaphysical disputes are merely verbal while others are substantive. Relation to other disciplines Metaphysics is related to many fields of inquiry by investigating their basic concepts and relation to the fundamental structure of reality. For example, the natural sciences rely on concepts such as law of nature, causation, necessity, and spacetime to formulate their theories and predict or explain the outcomes of experiments. While scientists primarily focus on applying these concepts to specific situations, metaphysics examines their general nature and how they depend on each other. For instance, physicists formulate laws of nature, like laws of gravitation and thermodynamics, to describe how physical systems behave under various conditions. Metaphysicians, by contrast, examine what all laws of nature have in common, asking whether they merely describe contingent regularities or express necessary relations. New scientific discoveries have also influenced existing and inspired new metaphysical theories. Einstein's theory of relativity, for instance, prompted various metaphysicians to conceive space and time as a unified dimension rather than as independent dimensions. Empirically focused metaphysicians often rely on scientific theories to ground their theories about the nature of reality in empirical observations. Similar issues arise in the social sciences where metaphysicians investigate their basic concepts and analyze their metaphysical implications. This includes questions like whether social facts emerge from non-social facts, whether social groups and institutions have mind-independent existence, and how they persist through time. 
Metaphysical assumptions and topics in psychology and psychiatry include questions about the relation between body and mind, whether the nature of the human mind is historically fixed, and what the metaphysical status of diseases is. Metaphysics is similar to both physical cosmology and theology in its exploration of the first causes and the universe as a whole. Key differences are that metaphysics relies on rational inquiry while physical cosmology gives more weight to empirical observations and theology incorporates divine revelation and other faith-based doctrines. Historically, cosmology and theology were considered subfields of metaphysics. Metaphysics in the form of ontology plays a central role in computer science to classify objects and formally represent information about them. Unlike metaphysicians, computer scientists are usually not interested in providing a single all-encompassing characterization of reality as a whole. Instead, they employ many different ontologies, each one concerned only with a limited domain of entities. For instance, an organization may use an ontology with categories such as person, company, address, and name to represent information about clients and employees. Ontologies provide standards or conceptualizations for encoding and storing information in a structured way, enabling computational processes to use and transform their information for a variety of purposes. Some knowledge bases integrate information from various domains, which brings with it the challenge of handling data that was formulated using diverse ontologies. They address this by providing an upper ontology that defines concepts at a higher level of abstraction, applicable to all domains. Influential upper ontologies include Suggested Upper Merged Ontology and Basic Formal Ontology. Logic as the study of correct reasoning is often used by metaphysicians as a tool to engage in their inquiry and express insights through precise logical formulas. Another relation between the two fields concerns the metaphysical assumptions associated with logical systems. Many logical systems like first-order logic rely on existential quantifiers to express existential statements. For instance, in the logical formula ∃x Horse(x), the existential quantifier ∃x is applied to the predicate Horse to express that there are horses. Following Quine, various metaphysicians assume that existential quantifiers carry ontological commitments, meaning that existential statements imply that the entities over which one quantifies are part of reality. History The history of metaphysics examines how the inquiry into the basic structure of reality has evolved in the course of history. Metaphysics originated in the ancient period from speculations about the nature and origin of the cosmos. In ancient India, starting in the 7th century BCE, the Upanishads were written as religious and philosophical texts that examine how ultimate reality constitutes the ground of all being. They further explore the nature of the self and how it can reach liberation by understanding ultimate reality. This period also saw the emergence of Buddhism in the 6th century BCE, which denies the existence of an independent self and understands the world as a cyclic process. At about the same time in ancient China, the school of Daoism was formed and explored the natural order of the universe, known as Dao, and how it is characterized by the interplay of yin and yang as two correlated forces.
In ancient Greece, metaphysics emerged in the 6th century BCE with the pre-Socratic philosophers, who gave rational explanations of the cosmos as a whole by examining the first principles from which everything arises. Building on their work, Plato (427–347 BCE) formulated his theory of forms, which states that eternal forms or ideas possess the highest kind of reality while the material world is only an imperfect reflection of them. Aristotle (384–322 BCE) accepted Plato's idea that there are universal forms but held that they cannot exist on their own but depend on matter. He also proposed a system of categories and developed a comprehensive framework of the natural world through his theory of the four causes. Starting in the 4th century BCE, Hellenistic philosophy explored the rational order underlying the cosmos and the idea that it is made up of indivisible atoms. Neoplatonism emerged towards the end of the ancient period in the 3rd century CE and introduced the idea of "the One" as the transcendent and ineffable source of all creation. Meanwhile, in Indian Buddhism, the Madhyamaka school developed the idea that all phenomena are inherently empty without a permanent essence. The consciousness-only doctrine of the Yogācāra school stated that experienced objects are mere transformations of consciousness and do not reflect external reality. The Hindu school of Samkhya philosophy introduced a metaphysical dualism with pure consciousness and matter as its fundamental categories. In China, the school of Xuanxue explored metaphysical problems such as the contrast between being and non-being. Medieval Western philosophy was profoundly shaped by ancient Greek philosophy. Boethius (477–524 CE) sought to reconcile Plato's and Aristotle's theories of universals, proposing that universals can exist both in matter and mind. His theory inspired the development of nominalism and conceptualism, as in the thought of Peter Abelard (1079–1142 CE). Thomas Aquinas (1224–1274 CE) understood metaphysics as the discipline investigating different meanings of being, such as the contrast between substance and accident, and principles applying to all beings, such as the principle of identity. William of Ockham (1285–1347 CE) proposed Ockham's razor, a methodological principles to choose between competing metaphysical theories. Arabic–Persian philosophy flourished from the early 9th century CE to the late 12th century CE, integrating ancient Greek philosophies to interpret and clarify the teachings of the Quran. Avicenna (980–1037 CE) developed a comprehensive philosophical system that examined the contrast between existence and essence and distinguished between contingent and necessary existence. Medieval India saw the emergence of the monist school of Advaita Vedanta in the 8th century CE, which holds that everything is one and that the idea of many entities existing independently is an illusion. In China, Neo-Confucianism arose in the 9th century CE and explored the concept of li as the rational principle that is the ground of being and reflects the order of the universe. In the early modern period, René Descartes (1596–1650) developed a substance dualism according to which body and mind exist as independent entities that causally interact. This idea was rejected by Baruch Spinoza (1632–1677), who formulated a monist philosophy suggesting that there is only one substance with both physical and mental attributes that develop side-by-side without interacting. 
Gottfried Wilhelm Leibniz (1646–1716) introduced the concept of possible worlds and articulated a metaphysical system known as monadology, which views the universe as a collection of simple substances synchronized without causal interaction. Christian Wolff (1679–1754), conceptualized the scope of metaphysics by distinguishing between general and special metaphysics. According to the idealism of George Berkeley (1685–1753), everything is mental, including material objects, which are ideas perceived by the mind. David Hume (1711–1776) made various contributions to metaphysics, including the regularity theory of causation and the idea that there are no necessary connections between distinct entities. His empiricist outlook led him to criticize metaphysical theories that seek ultimate principles inaccessible to sensory experience. This skeptical outlook was embraced by Immanuel Kant (1724–1804), who tried to reconceptualize metaphysics as an inquiry into the basic principles and categories of thought and understanding rather than seeing it as an attempt to comprehend mind-independent reality. Many developments in the later modern period were shaped by Kant's philosophy. German idealists adopted his idealistic outlook in their attempt to find a unifying principle as the foundation of all reality. Georg Wilhelm Friedrich Hegel (1770–1831) developed a comprehensive system of philosophy that examines how absolute spirit manifests itself. He inspired the British idealism of Francis Herbert Bradley (1846–1924), who interpreted absolute spirit as the all-inclusive totality of being. Arthur Schopenhauer (1788–1860) was a strong critic of German idealism and articulated a different metaphysical vision, positing a blind and irrational will as the underlying principle of reality. Pragmatists like C. S. Peirce (1839–1914) and John Dewey (1859–1952) conceived metaphysics as an observational science of the most general features of reality and experience. At the turn of the 20th century in analytic philosophy, philosophers such as Bertrand Russell (1872–1970) and G. E. Moore (1873–1958) led a "revolt against idealism". Logical atomists, like Russell and the early Ludwig Wittgenstein (1889–1951), conceived the world as a multitude of atomic facts, which later inspired metaphysicians such as D. M. Armstrong (1926–2014). Alfred North Whitehead (1861–1947) developed process metaphysics as an attempt to provide a holistic description of both the objective and the subjective realms. Rudolf Carnap (1891–1970) and other logical positivists formulated a wide-ranging criticism of metaphysical statements, arguing that they are meaningless because there is no way to verify them. Other criticisms of traditional metaphysics identified misunderstandings of ordinary language as the source of many traditional metaphysical problems or challenged complex metaphysical deductions by appealing to common sense. The decline of logical positivism led to a revival of metaphysical theorizing. Willard Van Orman Quine (1908–2000) tried to naturalize metaphysics by connecting it to the empirical sciences. His student David Lewis (1941–2001) employed the concept of possible worlds to formulate his modal realism. Saul Kripke (1940–2022) helped revive discussions of identity and essentialism, distinguishing necessity as a metaphysical notion from the epistemic notion of a priori. 
In continental philosophy, Edmund Husserl (1859–1938) engaged in ontology through a phenomenological description of experience, while his student Martin Heidegger (1889–1976) developed fundamental ontology to clarify the meaning of being. Heidegger's philosophy inspired general criticisms of metaphysics by postmodern thinkers like Jacques Derrida (1930–2004). Gilles Deleuze's (1925–1995) approach to metaphysics challenged traditionally influential concepts like substance, essence, and identity by reconceptualizing the field through alternative notions such as multiplicity, event, and difference.
Biometrics
Biometrics are body measurements and calculations related to human characteristics and features. Biometric authentication (or realistic authentication) is used in computer science as a form of identification and access control. It is also used to identify individuals in groups that are under surveillance. Biometric identifiers are the distinctive, measurable characteristics used to label and describe individuals. Biometric identifiers are often categorized as physiological characteristics which are related to the shape of the body. Examples include, but are not limited to fingerprint, palm veins, face recognition, DNA, palm print, hand geometry, iris recognition, retina, odor/scent, voice, shape of ears and gait. Behavioral characteristics are related to the pattern of behavior of a person, including but not limited to mouse movement, typing rhythm, gait, signature, voice, and behavioral profiling. Some researchers have coined the term behaviometrics (behavioral biometrics) to describe the latter class of biometrics. More traditional means of access control include token-based identification systems, such as a driver's license or passport, and knowledge-based identification systems, such as a password or personal identification number. Since biometric identifiers are unique to individuals, they are more reliable in verifying identity than token and knowledge-based methods; however, the collection of biometric identifiers raises privacy concerns. Biometric functionality Many different aspects of human physiology, chemistry or behavior can be used for biometric authentication. The selection of a particular biometric for use in a specific application involves a weighting of several factors. Jain et al. (1999) identified seven such factors to be used when assessing the suitability of any trait for use in biometric authentication. Biometric authentication is based upon biometric recognition which is an advanced method of recognising biological and behavioural characteristics of an Individual. Universality means that every person using a system should possess the trait. Uniqueness means the trait should be sufficiently different for individuals in the relevant population such that they can be distinguished from one another. Permanence relates to the manner in which a trait varies over time. More specifically, a trait with good permanence will be reasonably invariant over time with respect to the specific matching algorithm. Measurability (collectability) relates to the ease of acquisition or measurement of the trait. In addition, acquired data should be in a form that permits subsequent processing and extraction of the relevant feature sets. Performance relates to the accuracy, speed, and robustness of technology used (see performance section for more details). Acceptability relates to how well individuals in the relevant population accept the technology such that they are willing to have their biometric trait captured and assessed. Circumvention relates to the ease with which a trait might be imitated using an artifact or substitute. Proper biometric use is very application dependent. Certain biometrics will be better than others based on the required levels of convenience and security. No single biometric will meet all the requirements of every possible application. The block diagram illustrates the two basic modes of a biometric system. 
First, in verification (or authentication) mode the system performs a one-to-one comparison of a captured biometric with a specific template stored in a biometric database in order to verify the individual is the person they claim to be. Three steps are involved in the verification of a person. In the first step, reference models for all the users are generated and stored in the model database. In the second step, some samples are matched with reference models to generate the genuine and impostor scores and calculate the threshold. The third step is the testing step. This process may use a smart card, username, or ID number (e.g. PIN) to indicate which template should be used for comparison. Positive recognition is a common use of the verification mode, "where the aim is to prevent multiple people from using the same identity". Second, in identification mode the system performs a one-to-many comparison against a biometric database in an attempt to establish the identity of an unknown individual. The system will succeed in identifying the individual if the comparison of the biometric sample to a template in the database falls within a previously set threshold. Identification mode can be used either for positive recognition (so that the user does not have to provide any information about the template to be used) or for negative recognition of the person "where the system establishes whether the person is who she (implicitly or explicitly) denies to be". The latter function can only be achieved through biometrics since other methods of personal recognition, such as passwords, PINs, or keys, are ineffective. The first time an individual uses a biometric system is called enrollment. During enrollment, biometric information from an individual is captured and stored. In subsequent uses, biometric information is detected and compared with the information stored at the time of enrollment. Note that it is crucial that storage and retrieval of such systems themselves be secure if the biometric system is to be robust. The first block (sensor) is the interface between the real world and the system; it has to acquire all the necessary data. Most of the times it is an image acquisition system, but it can change according to the characteristics desired. The second block performs all the necessary pre-processing: it has to remove artifacts from the sensor, to enhance the input (e.g. removing background noise), to use some kind of normalization, etc. In the third block, necessary features are extracted. This step is an important step as the correct features need to be extracted in an optimal way. A vector of numbers or an image with particular properties is used to create a template. A template is a synthesis of the relevant characteristics extracted from the source. Elements of the biometric measurement that are not used in the comparison algorithm are discarded in the template to reduce the file size and to protect the identity of the enrollee. However, depending on the scope of the biometric system, original biometric image sources may be retained, such as the PIV-cards used in the Federal Information Processing Standard Personal Identity Verification (PIV) of Federal Employees and Contractors (FIPS 201). During the enrollment phase, the template is simply stored somewhere (on a card or within a database or both). During the matching phase, the obtained template is passed to a matcher that compares it with other existing templates, estimating the distance between them using any algorithm (e.g. 
Hamming distance). The matching program will analyze the template with the input. This will then be output for a specified use or purpose (e.g. entrance in a restricted area), though there is a concern that the use of biometric data may be subject to mission creep. Selection of biometrics for any practical application depends on the characteristic measurements and user requirements. In selecting a particular biometric, factors to consider include performance, social acceptability, ease of circumvention and/or spoofing, robustness, population coverage, size of equipment needed, and identity theft deterrence. The selection of a biometric is based on user requirements and considers sensor and device availability, computational time and reliability, cost, sensor size, and power consumption. Multimodal biometric system Multimodal biometric systems use multiple sensors or biometrics to overcome the limitations of unimodal biometric systems. For instance, iris recognition systems can be compromised by aging irises, and electronic fingerprint recognition can be worsened by worn-out or cut fingerprints. While unimodal biometric systems are limited by the integrity of their identifier, it is unlikely that several unimodal systems will suffer from identical limitations. Multimodal biometric systems can obtain sets of information from the same marker (i.e., multiple images of an iris, or scans of the same finger) or information from different biometrics (requiring fingerprint scans and, using voice recognition, a spoken passcode). Multimodal biometric systems can fuse these unimodal systems sequentially, simultaneously, a combination thereof, or in series, which refer to sequential, parallel, hierarchical and serial integration modes, respectively. Fusion of the biometric information can occur at different stages of a recognition system. In case of feature level fusion, the data itself or the features extracted from multiple biometrics are fused. Matching-score level fusion consolidates the scores generated by multiple classifiers pertaining to different modalities. Finally, in case of decision level fusion the final results of multiple classifiers are combined via techniques such as majority voting. Feature level fusion is believed to be more effective than the other levels of fusion because the feature set contains richer information about the input biometric data than the matching score or the output decision of a classifier. Therefore, fusion at the feature level is expected to provide better recognition results. Furthermore, the evolving biometric market trends underscore the importance of technological integration, showcasing a shift towards combining multiple biometric modalities for enhanced security and identity verification, aligning with the advancements in multimodal biometric systems. Spoof attacks consist of submitting fake biometric traits to biometric systems and are a major threat that can curtail their security. Multimodal biometric systems are commonly believed to be intrinsically more robust to spoof attacks, but recent studies have shown that they can be evaded by spoofing even a single biometric trait.
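As a rough illustration of the matching and score-fusion steps described above, the following Python sketch derives a similarity score for one modality from the Hamming distance between binary templates, combines it with a second (hypothetical) modality score by a weighted sum at the matching-score level, and applies a decision threshold. All templates, scores, weights, and the threshold are invented for illustration; real systems use far richer features and carefully calibrated score normalization.

# Minimal sketch of matching-score level fusion for a bimodal system.
# Templates, the second modality score, weights, and THRESHOLD are illustrative only.

def hamming_similarity(bits_a, bits_b):
    """Similarity in [0, 1]: fraction of matching bits between two binary templates."""
    assert len(bits_a) == len(bits_b)
    matches = sum(1 for x, y in zip(bits_a, bits_b) if x == y)
    return matches / len(bits_a)

# Enrolled and freshly captured binary templates for modality 1 (an iris-code-like string).
enrolled_1 = "1011001110100110"
probe_1    = "1011001010100111"

score_1 = hamming_similarity(enrolled_1, probe_1)   # modality 1 match score
score_2 = 0.82                                      # hypothetical normalized score from modality 2

# Matching-score level fusion: weighted sum of the normalized per-modality scores.
weights = (0.6, 0.4)
fused = weights[0] * score_1 + weights[1] * score_2

THRESHOLD = 0.80
decision = "accept" if fused >= THRESHOLD else "reject"
print(f"score_1={score_1:.2f}, fused={fused:.2f}, decision={decision}")

Decision-level fusion would instead combine the per-modality accept/reject outcomes, for example by majority voting across three or more matchers, while feature-level fusion would merge the extracted feature sets before any matching takes place.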
One such proposal, a multimodal biometric cryptosystem involving face, fingerprint, and palm vein described by Prasanalakshmi, combines biometrics with cryptography: the palm vein acts as a cryptographic key, offering a high level of security because palm veins are unique and difficult to forge. The fingerprint component involves minutiae extraction (terminations and bifurcations) and matching techniques, with steps including image enhancement, binarization, ROI extraction, and minutiae thinning. The face component uses class-based scatter matrices to calculate features for recognition, while the palm vein serves as an unbreakable cryptographic key, ensuring that only the correct user can access the system. The cancelable biometrics concept allows biometric traits to be altered slightly to ensure privacy and avoid theft; if compromised, new variations of the biometric data can be issued. For encryption, the fingerprint template is encrypted using the palm vein key via XOR operations, and the encrypted fingerprint is hidden within the face image using steganographic techniques. During enrollment and verification, the biometric data (fingerprint, palm vein, face) are captured, encrypted, and embedded into a face image; the system then extracts the biometric data and compares it with stored values for verification. The system was tested with fingerprint databases, achieving 75% verification accuracy at an equal error rate of 25%, with a processing time of approximately 50 seconds for enrollment and 22 seconds for verification. The approach offers high security due to palm vein encryption, is effective against biometric spoofing, and its multimodal design ensures reliability if one biometric fails; it also has potential for integration with smart cards or on-card systems, enhancing security in personal identification systems. Performance The discriminating powers of all biometric technologies depend on the amount of entropy they are able to encode and use in matching. The following are used as performance metrics for biometric systems: False match rate (FMR, also called FAR = False Accept Rate): the probability that the system incorrectly matches the input pattern to a non-matching template in the database. It measures the percent of invalid inputs that are incorrectly accepted. On a similarity scale, if a person who is in reality an impostor produces a matching score higher than the threshold, they are treated as genuine. This increases the FMR, which thus also depends upon the threshold value. False non-match rate (FNMR, also called FRR = False Reject Rate): the probability that the system fails to detect a match between the input pattern and a matching template in the database. It measures the percent of valid inputs that are incorrectly rejected. Receiver operating characteristic or relative operating characteristic (ROC): The ROC plot is a visual characterization of the trade-off between the FMR and the FNMR. In general, the matching algorithm performs a decision based on a threshold that determines how close to a template the input needs to be for it to be considered a match. If the threshold is reduced, there will be fewer false non-matches but more false accepts. Conversely, a higher threshold will reduce the FMR but increase the FNMR. A common variation is the Detection error trade-off (DET), which is obtained using normal deviation scales on both axes. This more linear graph illuminates the differences for higher performances (rarer errors). Equal error rate or crossover error rate (EER or CER): the rate at which both acceptance and rejection errors are equal. The value of the EER can be easily obtained from the ROC curve. The EER is a quick way to compare the accuracy of devices with different ROC curves. In general, the device with the lowest EER is the most accurate. Failure to enroll rate (FTE or FER): the rate at which attempts to create a template from an input are unsuccessful. This is most commonly caused by low-quality inputs. Failure to capture rate (FTC): within automatic systems, the probability that the system fails to detect a biometric input when presented correctly. Template capacity: the maximum number of sets of data that can be stored in the system.
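To make the threshold trade-off between FMR and FNMR concrete, here is a small Python sketch that estimates both rates from toy lists of genuine and impostor similarity scores and picks the threshold where the two error rates are closest, approximating the equal error rate. The score values are fabricated for illustration; real evaluations use large, carefully collected score sets.

# Illustrative estimation of FMR, FNMR, and an approximate EER from toy score lists.
genuine  = [0.91, 0.88, 0.84, 0.79, 0.95, 0.87, 0.73, 0.90]   # same-person comparisons
impostor = [0.45, 0.52, 0.61, 0.76, 0.38, 0.57, 0.80, 0.49]   # different-person comparisons

def error_rates(threshold):
    # FMR: impostor scores wrongly accepted; FNMR: genuine scores wrongly rejected.
    fmr = sum(s >= threshold for s in impostor) / len(impostor)
    fnmr = sum(s < threshold for s in genuine) / len(genuine)
    return fmr, fnmr

# Sweep thresholds and keep the one where FMR and FNMR are closest (approximate EER).
best = min((abs(f - n), t, f, n)
           for t in [i / 100 for i in range(0, 101)]
           for f, n in [error_rates(t)])
_, t_eer, fmr_eer, fnmr_eer = best
print(f"threshold={t_eer:.2f}  FMR={fmr_eer:.2f}  FNMR={fnmr_eer:.2f}")

Raising the threshold in this sketch lowers FMR and raises FNMR, and vice versa, which is exactly the trade-off the ROC and DET plots describe.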
History An early cataloguing of fingerprints dates back to 1885 when Juan Vucetich started a collection of fingerprints of criminals in Argentina. Josh Ellenbogen and Nitzan Lebovic argued that biometrics originated in the identification systems of criminal activity developed by Alphonse Bertillon (1853–1914) and by Francis Galton's theory of fingerprints and physiognomy. According to Lebovic, Galton's work "led to the application of mathematical models to fingerprints, phrenology, and facial characteristics", as part of "absolute identification" and "a key to both inclusion and exclusion" of populations. Accordingly, "the biometric system is the absolute political weapon of our era" and a form of "soft control". The theoretician David Lyon showed that during the past two decades biometric systems have penetrated the civilian market, and blurred the lines between governmental forms of control and private corporate control. Kelly A. Gates identified 9/11 as the turning point for the cultural language of our present: "in the language of cultural studies, the aftermath of 9/11 was a moment of articulation, where objects or events that have no necessary connection come together and a new discourse formation is established: automated facial recognition as a homeland security technology." Adaptive biometric systems Adaptive biometric systems aim to auto-update the templates or model to the intra-class variation of the operational data. The two-fold advantages of these systems are solving the problem of limited training data and tracking the temporal variations of the input data through adaptation. Recently, adaptive biometrics have received significant attention from the research community. This research direction is expected to gain momentum because of its key advantages. First, with an adaptive biometric system, one no longer needs to collect a large number of biometric samples during the enrollment process. Second, it is no longer necessary to enroll again or retrain the system from scratch in order to cope with the changing environment. This convenience can significantly reduce the cost of maintaining a biometric system. Despite these advantages, there are several open issues involved with these systems. For example, a misclassification error (false acceptance) by the biometric system can cause the template to adapt using an impostor sample. However, continuous research efforts are directed to resolve the open issues associated with the field of adaptive biometrics. More information about adaptive biometric systems can be found in the critical review by Rattani et al. Recent advances in emerging biometrics In recent times, biometrics based on brain (electroencephalogram) and heart (electrocardiogram) signals have emerged. Another emerging approach is finger vein recognition, which uses pattern-recognition techniques based on images of human vascular patterns. The advantage of this newer technology is that it is more fraud resistant compared to conventional biometrics like fingerprints. However, such technology is generally more cumbersome and still has issues such as lower accuracy and poor reproducibility over time.
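One simple way to picture the template adaptation discussed above: when a probe matches its claimed template with a sufficiently high score, the stored template can be nudged toward the new sample, tracking gradual intra-class variation. The Python sketch below uses an invented exponential-moving-average update on toy feature vectors (the thresholds, learning rate, and similarity function are all assumptions, not taken from any published system); the final comment notes why a false acceptance is problematic, since the same rule would then adapt the template toward an impostor sample.

# Illustrative template self-update rule (not from any specific system):
# move the stored feature vector a small step toward a newly accepted sample.

UPDATE_THRESHOLD = 0.85   # only adapt on confident matches
LEARNING_RATE = 0.1       # how far the template moves toward the new sample

def similarity(a, b):
    """Toy similarity: 1 minus the mean absolute difference of normalized features."""
    return 1 - sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def maybe_update(template, sample):
    score = similarity(template, sample)
    if score >= UPDATE_THRESHOLD:
        # Exponential moving average keeps most of the old template.
        template = [(1 - LEARNING_RATE) * t + LEARNING_RATE * s
                    for t, s in zip(template, sample)]
    return template, score

stored = [0.20, 0.40, 0.60, 0.80]
new_genuine_sample = [0.22, 0.38, 0.63, 0.78]
stored, score = maybe_update(stored, new_genuine_sample)
print(score, stored)
# A falsely accepted impostor sample would be folded into the template by the
# same rule, which is the open issue mentioned in the text above.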
On the portability side of biometric products, more and more vendors are embracing significantly miniaturized biometric authentication systems (BAS), thereby driving substantial cost savings, especially for large-scale deployments.

Operator signatures

An operator signature is a biometric mode in which the manner in which a person uses a device or complex system is recorded as a verification template. One potential use for this type of biometric signature is to distinguish among remote users of telerobotic surgery systems that utilize public networks for communication.

Proposed requirement for certain public networks

John Michael (Mike) McConnell, a former vice admiral in the United States Navy, a former director of U.S. National Intelligence, and senior vice president of Booz Allen Hamilton, promoted the development of a future capability to require biometric authentication to access certain public networks in his keynote speech at the 2009 Biometric Consortium Conference.

A basic premise of the above proposal is that the person who has uniquely authenticated themselves to the computer using biometrics is in fact also the agent performing potentially malicious actions from that computer. However, if control of the computer has been subverted, for example if the computer is part of a botnet controlled by a hacker, then knowledge of the identity of the user at the terminal does not materially improve network security or aid law enforcement activities.

Animal biometrics

Rather than tags or tattoos, biometric techniques may be used to identify individual animals: zebra stripes, blood vessel patterns in rodent ears, muzzle prints, bat wing patterns, primate facial recognition and koala spots have all been tried.

Issues and concerns

Human dignity

Biometrics have also been considered instrumental to the development of state authority (to put it in Foucauldian terms, of discipline and biopower). By turning the human subject into a collection of biometric parameters, biometrics would dehumanize the person, infringe bodily integrity, and, ultimately, offend human dignity.

In a well-known case, Italian philosopher Giorgio Agamben refused to enter the United States in protest at the United States Visitor and Immigrant Status Indicator Technology (US-VISIT) program's requirement for visitors to be fingerprinted and photographed. Agamben argued that the gathering of biometric data is a form of bio-political tattooing, akin to the tattooing of Jews during the Holocaust. According to Agamben, biometrics turn the human persona into a bare body. Agamben refers to the two words used by the Ancient Greeks for indicating "life": zoe, which is the life common to animals and humans, just life; and bios, which is life in the human context, with meanings and purposes. Agamben envisages the reduction to bare bodies for the whole of humanity. For him, a new bio-political relationship between citizens and the state is turning citizens into pure biological life (zoe), depriving them of their humanity (bios); and biometrics would herald this new world.

In Dark Matters: On the Surveillance of Blackness, surveillance scholar Simone Browne formulates a critique similar to Agamben's, citing a recent study relating to biometrics R&D that found that the gender classification system being researched "is inclined to classify Africans as males and Mongoloids as females."
Consequently, Browne argues that the conception of an objective biometric technology is difficult if such systems are subjectively designed and vulnerable to errors as described in the study above. The stark expansion of biometric technologies in both the public and private sector magnifies this concern. The increasing commodification of biometrics by the private sector adds to this danger of loss of human value; indeed, corporations value biometric characteristics more than the individuals themselves do. Browne goes on to suggest that modern society should incorporate a "biometric consciousness" that "entails informed public debate around these technologies and their application, and accountability by the state and the private sector, where the ownership of and access to one's own body data and other intellectual property that is generated from one's body data must be understood as a right."

Other scholars have emphasized, however, that the globalized world is confronted with a huge mass of people with weak or absent civil identities. Most developing countries have weak and unreliable documents, and the poorer people in these countries do not have even those unreliable documents. Without certified personal identities, there is no certainty of rights and no civil liberty. One can claim one's rights, including the right to refuse to be identified, only if one is an identifiable subject with a public identity. In this sense, biometrics could play a pivotal role in supporting and promoting respect for human dignity and fundamental rights.

Privacy and discrimination

It is possible that data obtained during biometric enrollment may be used in ways for which the enrolled individual has not consented. For example, most biometric features could disclose physiological and/or pathological medical conditions (e.g., some fingerprint patterns are related to chromosomal diseases, iris patterns could reveal sex, hand vein patterns could reveal vascular diseases, most behavioral biometrics could reveal neurological diseases, etc.). Moreover, second-generation biometrics, notably behavioral and electro-physiologic biometrics (e.g., based on electrocardiography, electroencephalography, electromyography), could also be used for emotion detection.

There are three categories of privacy concerns:

Unintended functional scope: the authentication does more than authenticate, such as finding a tumor.

Unintended application scope: the authentication process correctly identifies the subject when the subject did not wish to be identified.

Covert identification: the subject is identified without seeking identification or authentication, i.e. a subject's face is identified in a crowd.

Danger to owners of secured items

When thieves cannot get access to secured property, there is a chance that they will stalk and assault the property owner to gain access. If the item is secured with a biometric device, the damage to the owner could be irreversible and potentially cost more than the secured property. For example, in 2005, Malaysian car thieves cut off a man's finger when attempting to steal his Mercedes-Benz S-Class.

Attacks at presentation

In the context of biometric systems, presentation attacks may also be called "spoofing attacks". According to the recent ISO/IEC 30107 standard, presentation attacks are defined as "presentation to the biometric capture subsystem with the goal of interfering with the operation of the biometric system". These attacks can be either impersonation or obfuscation attacks.
Impersonation attacks try to gain access by pretending to be someone else. Obfuscation attacks may, for example, try to evade face detection and face recognition systems. Several methods have been proposed to counteract presentation attacks.

Surveillance humanitarianism in times of crisis

Biometrics are employed by many aid programs in times of crisis in order to prevent fraud and ensure that resources are properly available to those in need. Humanitarian efforts are motivated by promoting the welfare of individuals in need; however, the use of biometrics as a form of surveillance humanitarianism can create conflict due to the varying interests of the groups involved in a particular situation. Disputes over the use of biometrics between aid programs and party officials stall the distribution of resources to the people who need help the most. In July 2019, the United Nations World Food Program and Houthi rebels were involved in a large dispute over the use of biometrics to ensure that resources are provided to the hundreds of thousands of civilians in Yemen whose lives are threatened. The refusal to cooperate with the interests of the United Nations World Food Program resulted in the suspension of food aid to the Yemeni population. The use of biometrics may provide aid programs with valuable information, but its potential solutions may not be best suited to chaotic times of crisis: for conflicts caused by deep-rooted political problems, the implementation of biometrics may not provide a long-term solution.

Cancelable biometrics

One advantage of passwords over biometrics is that they can be re-issued. If a token or a password is lost or stolen, it can be cancelled and replaced by a newer version. This is not naturally available in biometrics. If someone's face is compromised from a database, they cannot cancel or reissue it. If an electronic biometric identifier is stolen, it is nearly impossible to change the biometric feature itself. This renders the person's biometric feature questionable for future use in authentication, as in the case of the hacking of security-clearance-related background information from the Office of Personnel Management (OPM) in the United States.

Cancelable biometrics is a way to incorporate protection and replacement features into biometrics in order to create a more secure system. It was first proposed by Ratha et al. "Cancelable biometrics refers to the intentional and systematically repeatable distortion of biometric features in order to protect sensitive user-specific data. If a cancelable feature is compromised, the distortion characteristics are changed, and the same biometrics is mapped to a new template, which is used subsequently. Cancelable biometrics is one of the major categories for biometric template protection purpose besides biometric cryptosystem." In biometric cryptosystems, "the error-correcting coding techniques are employed to handle intraclass variations." This ensures a high level of security but has limitations, such as requiring a specific input format with only small intraclass variations.

Several methods for generating new exclusive biometrics have been proposed. The first fingerprint-based cancelable biometric system was designed and developed by Tulyakov et al. Essentially, cancelable biometrics perform a distortion of the biometric image or features before matching. The variability in the distortion parameters provides the cancelable nature of the scheme.
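To illustrate the general idea of a repeatable, revocable distortion (this is a generic keyed random-projection sketch, not the specific constructions of Ratha et al. or Tulyakov et al.), the following Python example derives a protected template from a feature vector and a user-specific key; re-issuing the key yields a new, unrelated template. The feature vector, key names and threshold are hypothetical.

```python
import hashlib
import numpy as np

def cancelable_template(features, user_key, out_dim=32):
    """Distort a biometric feature vector with a key-derived random projection.

    The same (features, user_key) pair always yields the same template, so
    matching is done in the transformed (protected) domain; revoking the key
    and issuing a new one produces an unrelated template.
    """
    seed = int.from_bytes(hashlib.sha256(user_key.encode()).digest()[:4], "big")
    rng = np.random.default_rng(seed)
    projection = rng.standard_normal((out_dim, len(features)))
    return projection @ np.asarray(features, dtype=float)

def match(template_a, template_b, threshold=0.95):
    """Cosine similarity between two protected templates."""
    cos = np.dot(template_a, template_b) / (
        np.linalg.norm(template_a) * np.linalg.norm(template_b) + 1e-9)
    return cos >= threshold, cos

# Hypothetical enrollment and verification with the same key.
features = np.random.default_rng(2).standard_normal(128)   # stand-in feature vector
enrolled = cancelable_template(features, user_key="key-v1")
probe = cancelable_template(features + 0.05, user_key="key-v1")  # slightly noisy capture
print(match(enrolled, probe))        # matches: same key, nearly same features

# If the stored template is compromised, issue a new key: old and new are unrelated.
reissued = cancelable_template(features, user_key="key-v2")
print(match(enrolled, reissued))     # does not match the old template
```

The key plays the role of the "distortion parameters" described above: changing it cancels the old template without requiring a new biometric trait.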
Some of the proposed techniques operate using their own recognition engines, such as Teoh et al. and Savvides et al., whereas other methods, such as Dabbah et al., take advantage of the advances of well-established biometric research for their recognition front-end. Although this increases the restrictions on the protection system, it makes the cancelable templates more accessible for available biometric technologies.

Proposed soft biometrics

Soft biometrics are less strict biometric recognition practices. The traits used are physical, behavioral or adhered human characteristics derived from the way human beings normally distinguish their peers (e.g. height, gender, hair color). They are used to complement the identity information provided by the primary biometric identifiers. Although soft biometric characteristics lack the distinctiveness and permanence to recognize an individual uniquely and reliably, and can be easily faked, they provide some evidence about the user's identity that could be beneficial. In other words, despite the fact that they are unable to individualize a subject, they are effective in distinguishing between people. Combinations of personal attributes like gender, race, eye color, height and other visible identification marks can be used to improve the performance of traditional biometric systems (a simple score-fusion sketch is given below). Most soft biometrics can be easily collected and are actually collected during enrollment.

Two main ethical issues are raised by soft biometrics. First, some soft biometric traits are strongly culturally based: for example, using skin color to determine ethnicity risks supporting racist approaches; biometric sex recognition at best recognizes gender from tertiary sexual characteristics and is unable to determine genetic or chromosomal sex; and soft biometrics for age recognition are often deeply influenced by ageist stereotypes. Second, soft biometrics have strong potential for categorizing and profiling people, thus risking support for processes of stigmatization and exclusion.

Data protection of biometric data in international law

Many countries, including the United States, are planning to share biometric data with other nations. In testimony before the US House Appropriations Committee, Subcommittee on Homeland Security, on "biometric identification" in 2009, Kathleen Kraninger and Robert A. Mocny commented on international cooperation and collaboration with respect to biometric data. According to a 2009 article by S. Magnuson in the National Defense Magazine entitled "Defense Department Under Pressure to Share Biometric Data", the United States has bilateral agreements with other nations aimed at sharing biometric data.

Likelihood of full governmental disclosure

Certain members of the civilian community are worried about how biometric data is used, but full disclosure may not be forthcoming. In particular, the unclassified report of the United States' Defense Science Board Task Force on Defense Biometrics states that it is wise to protect, and sometimes even to disguise, the true and total extent of national capabilities in areas related directly to the conduct of security-related activities. This also potentially applies to biometrics. It goes on to say that this is a classic feature of intelligence and military operations. In short, the goal is to preserve the security of 'sources and methods'.
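Returning to the soft biometrics discussion above, one simple way ancillary attributes can complement a primary matcher is weighted score-level fusion. The sketch below is illustrative only; the weight, scores and attribute checks are hypothetical and would have to be learned or calibrated in a real system.

```python
def fused_score(primary_score, soft_matches, soft_weight=0.2):
    """Combine a primary biometric similarity score with soft-biometric evidence.

    primary_score : similarity in [0, 1] from e.g. a fingerprint matcher
    soft_matches  : dict mapping soft traits to True/False agreement with the
                    claimed identity (e.g. {"gender": True, "eye_color": False})
    soft_weight   : total weight carried by the soft traits
    """
    if not soft_matches:
        return primary_score
    soft_score = sum(soft_matches.values()) / len(soft_matches)
    return (1 - soft_weight) * primary_score + soft_weight * soft_score

# Hypothetical example: a borderline fingerprint score nudged by soft traits.
print(fused_score(0.62, {"gender": True, "height_band": True, "eye_color": True}))
print(fused_score(0.62, {"gender": False, "height_band": False, "eye_color": False}))
```

The soft traits cannot identify anyone on their own, but, as the two calls show, they can push a borderline primary score toward acceptance or rejection.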
Countries applying biometrics

Countries using biometrics include Australia, Brazil, Bulgaria, Canada, Cyprus, Greece, China, Gambia, Germany, India, Iraq, Ireland, Israel, Italy, Malaysia, Netherlands, New Zealand, Nigeria, Norway, Pakistan, Poland, South Africa, Saudi Arabia, Tanzania, Turkey, Ukraine, United Arab Emirates, United Kingdom, United States and Venezuela. Among low- to middle-income countries, roughly 1.2 billion people have already received identification through a biometric identification program.

There are also numerous countries applying biometrics for voter registration and similar electoral purposes. According to the International IDEA's ICTs in Elections Database, some of the countries using Biometric Voter Registration (BVR) as of 2017 are Armenia, Angola, Bangladesh, Bhutan, Bolivia, Brazil, Burkina Faso, Cambodia, Cameroon, Chad, Colombia, Comoros, Congo (Democratic Republic of), Costa Rica, Ivory Coast, Dominican Republic, Fiji, Gambia, Ghana, Guatemala, India, Iraq, Kenya, Lesotho, Liberia, Malawi, Mali, Mauritania, Mexico, Morocco, Mozambique, Namibia, Nepal, Nicaragua, Nigeria, Panama, Peru, Philippines, Senegal, Sierra Leone, Solomon Islands, Somaliland, Swaziland, Tanzania, Uganda, Uruguay, Venezuela, Yemen, Zambia, and Zimbabwe.

India's national ID program

India's national ID program, called Aadhaar, is the largest biometric database in the world. It is a biometrics-based digital identity assigned for a person's lifetime, verifiable online instantly in the public domain, at any time, from anywhere, in a paperless way. It is designed to enable government agencies to deliver retail public services securely, based on biometric data (fingerprint, iris scan and face photo) along with demographic data (name, age, gender, address, parent/spouse name, mobile phone number) of a person. The data is transmitted in encrypted form over the internet for authentication, aiming to free it from the limitations of physical presence of a person at a given place.

About 550 million residents had been enrolled and assigned 480 million Aadhaar national identification numbers as of 7 November 2013. It aims to cover the entire population of 1.2 billion within a few years. However, it is being challenged by critics over privacy concerns and the possible transformation of the state into a surveillance state, or into a banana republic. The project was also met with mistrust regarding the safety of the social protection infrastructures. To address these fears, India's Supreme Court put a new ruling into action stating that privacy would from then on be seen as a fundamental right; this ruling was established on 24 August 2017.

Malaysia's MyKad national ID program

The current identity card, known as MyKad, was introduced by the National Registration Department of Malaysia on 5 September 2001, with Malaysia becoming the first country in the world to use an identification card that incorporates both photo identification and fingerprint biometric data on a built-in computer chip embedded in a piece of plastic. Besides the main purpose of the card as a validation tool and proof of citizenship other than the birth certificate, MyKad also serves as a valid driver's license, an ATM card, an electronic purse, and a public key, among other applications, as part of the Malaysian Government Multipurpose Card (GMPC) initiative, if the bearer chooses to activate the functions.
See also

Access control
AFIS
AssureSign
BioAPI
Biometrics in schools
European Association for Biometrics
Fingerprint recognition
Fuzzy extractor
Gait analysis
Government database
Handwritten biometric recognition
Identity Cards Act 2006
International Identity Federation
Keystroke dynamics
Multiple Biometric Grand Challenge
Private biometrics
Retinal scan
Signature recognition
Smart city
Speaker recognition
Vein matching
Voice analysis

Further reading

Biometrics Glossary – glossary of biometric terms based on information derived from the National Science and Technology Council (NSTC) Subcommittee on Biometrics. Published by Fulcrum Biometrics, LLC, July 2013.
Biometrics Institute – Explanatory Dictionary of Biometrics. A glossary of biometrics terms, offering detailed definitions to supplement existing resources. Published May 2023.
Delac, K., Grgic, M. (2004). A Survey of Biometric Recognition Methods.
"Fingerprints Pay For School Lunch" (2001). Retrieved 2008-03-02.
"Germany to phase-in biometric passports from November 2005" (2005). E-Government News. Retrieved 2006-06-11.
Oezcan, V. (2003). "Germany Weighs Biometric Registration Options for Visa Applicants", Humboldt University Berlin. Retrieved 2006-06-11.
Hottelet, U. (2007). "Hidden champion – Biometrics between boom and big brother", German Times, January 2007.
Dunstone, T. and Yager, N. (2008). Biometric System and Data Analysis. 1st ed. New York: Springer.