FAQ – General
Children touch and manipulate everything in their environment. They learn best by doing, which requires movement and spontaneous investigation. In a way, the human mind is handmade. That is, through movement and touch, the child explores, manipulates, and learns about the physical world.
Montessori children are free to move about, and may work alone or with others. They choose an activity and may work at their own pace. As long as they do not disturb anyone or damage anything, and as long as they put things back where they belong when finished, students have the privilege and responsibility of choosing work for themselves.
Especially at the preschool level, materials are designed to capture a child’s attention. Children are intrigued and investigate each item in terms of size, shape, color, texture, weight, smell, sound, etc. They begin to learn to pay attention and more closely observe small details in the things around them. Gradually they hone their appreciation and understanding of their environment. This is key in helping children discover how to learn.
Freedom is essential as children begin to explore. The goal of a Montessori teacher is to have his/her students fall in love with the process of focusing their complete attention on something and mastering its challenge with enthusiasm. Work dictated by adults rarely results in such enthusiasm and interest, so the key is to create an absolutely intriguing environment filled to the brim with opportunities for learning that the children are free to select for themselves.
Therefore the Montessori classroom is a very deliberately prepared environment that serves as a learning laboratory in which children are allowed to explore, discover, and select their own work. The independence children gain empowers them socially and emotionally and is intrinsically involved with helping them become comfortable and confident in their own abilities. They develop the confidence to ask questions, puzzle out the answer, and learn for themselves without needing to be “spoon-fed” by a teacher or adult.
While Montessori may look unstructured to some people, it is actually quite structured at every level. The idea is to provide freedom of choice within a clear structured environment. Just because the Montessori program is highly individualized does not mean that students can do whatever they want.
Montessori teaches all of the “basics,” along with giving students the opportunity to investigate and learn subjects that are of particular interest. They are given the responsibility and freedom to make their own choices. For preschool students, external structure is limited to clear-cut ground rules and correct procedures that provide the guidelines and structure needed for three- and four-year-olds. By the third year, or kindergarten year, teachers introduce a daily or weekly “contract” or similar system to allow students to keep track of what they have accomplished and what they have yet to complete. So while they may have some measure of freedom, they must choose within very clear expectations. As they demonstrate their ability to follow through, they are gradually given more responsibility to manage their own time to complete expected assignments.
Learning how to manage one’s time at an early age is an important life skill and one that takes time to practice and hone.
The mixed-age classroom actually improves a Montessori teacher’s ability to individualize learning for each child. Because a Montessori teacher has the benefit of keeping a student in his/her classroom for several years and is not faced with an entire classroom of new students every year, Montessori teachers are able to truly get to know each student as an individual and develop a very good sense of each child’s learning style and temperament. They get to know their students’ strengths and weaknesses, interests, and personalities extremely well. They also are already familiar with each child’s parents and family members. Montessori teachers closely monitor their students’ progress and take note of particular interests. They frequently adapt lessons and/or introduce activities relating to topics they know are of keen interest to a particular student or to specific groups of students in the class.
Many families also choose to request for younger siblings the same Montessori teacher their older children had, to capitalize on the strong relationship already in place between teacher and family.
Montessori teachers focus on the child as a person, rather than on a daily lesson plan as is the focus in most traditional classrooms. Montessori teachers lead children to ask questions, think for themselves, explore, investigate, and discover. Their ultimate objective is to help their students to learn independently and retain the curiosity, creativity, and intelligence with which they were born. Montessori teachers don’t simply present lessons; they are facilitators, mentors, coaches, and guides.
Montessori teachers typically do not spend much time teaching lessons to the whole class at once; instead, the focus is to prepare and maintain the physical, intellectual, and social/emotional environment within which the children will work. A key aspect of this is the selection of intriguing and developmentally appropriate learning activities to meet the needs and interests of each child in the class. Montessori teachers usually present lessons individually or to small groups of children at one time, limiting lessons to brief and very clear presentations. The goal is to give students just enough to capture their attention and spark their interest so they are motivated to come back on their own to work the particular material they have been shown.
Parents are sometimes concerned that by having younger children in the same class as older ones, either the younger or older students may be shortchanged. They fear that the younger children will demand all of the teacher’s time and attention, or that the teacher will focus more on kindergarten curriculum for the five-year-olds and the three- and four-year-olds will not get the emotional support and stimulation that they need. It is understandable for parents to be concerned; however, Montessori schools throughout the world consistently find that a mixed-age classroom actually enhances development at every level.
The Montessori environment is designed to address the developmental characteristics normal to children in each stage.
Montessori classes are set up to encompass a two- or three-year age span. This allows younger students the inspiration of older children, who in turn benefit from serving as role models. Each child learns at her own pace and will be ready for any given lesson in her own time, not on the teacher’s schedule of lessons. In a mixed-age class, children can always find peers who are working at their current level.
Children ideally and typically stay in the same class for three years. With two-thirds of the class normally coming back each year, the classroom culture remains quite stable.
Because a child remains in one classroom for two or three years he/she develops a strong sense of community with classmates and teachers. The age range also allows especially gifted children the stimulation of intellectual peers, without requiring that they skip a grade or feel emotionally out of place.
Montessori students are given quite a bit of leeway to pursue topics that interest them; however, this freedom is not absolute. There are expectations for what a student should know and be able to manage by a certain age.
Montessori teachers know these standards and provide the structure and support necessary to ensure that students live up to expectations. If it appears that a child needs time and support until he or she is developmentally ready to progress in a particular area, Montessori teachers provide that support and/or help the parents identify resources to help their child acquire such support.
It is important to realize, however, that a young child observing other students engaged in a work rather than engaging directly is not necessarily a bad thing. Sometimes younger students need to observe others first to gain the confidence to make their own selection. Montessori teachers are keenly aware of every child in the classroom and gently guide reluctant students to activities they think will spark their interest, allowing them time to get used to the idea. When a child is not unduly pressured, the spark of curiosity inevitably kicks in, and the child who was reluctant at first is soon fully engaged.
Dr. Montessori identified four “planes of development” with each having its own developmental characteristics and developmental challenges. The early childhood Montessori environment (age 3-6) is crafted to work with the “absorbent mind,” “sensitive periods,” and the tendencies of children at this stage of their development.
During these early years, learning comes spontaneously and without effort. Children learn a variety of concepts in a hands-on way, such that when they move into the elementary grades they have a clear, concrete sense of many abstract concepts. The Montessori approach inspires children to become self-motivated and self-disciplined, and to retain the sense of curiosity that so many children lose along the way in traditional teacher-led classrooms. Montessori students tend to show care and respect toward their environment and one another and are able to work at their own pace and ability. Students who have had the benefit of a three-year Montessori experience tend to embrace a joy of learning that prepares them for further challenges.
While students can join a Montessori program at any age, we find that students get the most out of their Montessori experience if they join around age 3 and stay at least through the kindergarten year. Children entering at age four or five typically adapt into the classroom very well but may not have enough opportunity to work through all of the three-year curriculum and therefore may not have had enough time to develop the same skills, work habits, or values as students who have had the benefit of a three-year cycle.
Students who are 2-1/2 to 3 years old or are 3 years old but not ready for a preschool program may enroll two, three or five half days per week in our Prep Program. The goal of the Prep Program is to help younger students learn social and emotional skills to prepare themselves to join a Montessori preschool/kindergarten class.
Students enrolled in the Prep Program may enroll in the Montessori preschool program the following school year or may be ready to join a preschool program during the course of the school year (if a space is available and only after a detailed assessment of a student’s readiness for a successful transition) as determined by the Prep Program Teacher and School Director in conjunction with the child’s parents.
Two- and three-day programs are often appealing to parents who do not need full-time care; however, we, like most other Montessori schools, find that four- and five-day programs create the consistency that is so important for a Montessori age 3-6 classroom. We therefore offer five half days (morning or afternoon), five full days, or four-afternoon Montessori programs for students in our Montessori 3-6 classrooms.
The primary goal of a Montessori environment is to create a culture of consistency, order, independence and empowerment. Attending only two or three days per week makes such a classroom culture much more difficult to achieve, and much more difficult for a child who attends only off and on to embrace and benefit from. In addition, if only two or three days per week were offered, a Montessori teacher would be required to track and work with many more total students and families. By having students attend more consistently, the bonds between teacher and child and between teacher and family are stronger, and the Montessori teacher can concentrate on a more reasonable total number of pupils each school year and focus on each student’s needs more effectively.
However, as a way to allow younger students to get ready for a more consistent routine, we have a Prep Program, which is intended for new students ages 2-1/2 to 3 years old. It is offered as a three or five-morning per week program. The goal of the Prep Program is to help younger students learn social and emotional skills to prepare themselves to join a Montessori age 3-6 class. Students enrolled in the Prep Program may enroll in the Montessori preschool program the following school year or may be ready to join a preschool program during the course of the school year (if a space is available and only after a detailed assessment of a student’s readiness for a successful transition) as determined by the Prep Program Teacher and School Director in conjunction with the child’s parents.
To provide additional flexibility once a student is age 3, we offer our Enrichment program as a supplement for a preschool or kindergarten student who is already enrolled in a Montessori class. Students can attend Enrichment classes one, two, three, four or five days per week as an add-on to their Montessori half-day schedule. We also offer before and after school programs to help parents create a schedule that works best for their family needs.
We find that students do best when their schedule is as consistent as possible and will work with you to try to find the optimal schedule for your child.
STEAM Enrichment Classes (3-6 years; available only as an add-on half day class if enrolled in half day Montessori 3-6 for the other half of the day)
Before and After School Care (available before morning classes 7:30AM – 8:45AM and after afternoon classes have ended 3:30-6:15PM)
Clubroom – Available on non-school days only for students enrolled in Sammamish Montessori School. Clubroom is available on conference days, in-service days, school breaks and public/bank holidays such as Veterans Day. This program is an option for students enrolled in SMS and is paid separately based on the actual amount of time a student attends this program; it is not rolled into the tuition payment so that families not using the service are not paying for it.
We plan plenty of fun activities such as arts and crafts, projects, music and dancing, games, computer time, outdoor recess and sports, and sometimes cooking. Please help us plan well by reserving your child’s space in advance. That way we can determine what types of activities would work best for the group and make sure we have plenty of staff members in place and ready to supervise and work with the children.
The school is closed and we do not offer our Clubroom program on Labor Day, Thanksgiving Eve and Day, Christmas Eve and Day, New Year’s Eve and Day, Dr. Martin Luther King Jr. Day, Presidents’ Day, Independence Day and Memorial Day.
Montessori classes are available as five-day-per-week morning only or five-day-per-week full day options. A limited number of four-day afternoon spaces for preschool-aged children may be available if those spaces have not already been taken by students attending five days per week.
Our STEAM program is offered as an integrated 5 full day STEAM/Montessori combination class. Children attend about half their time in Montessori and the other half in STEAM. The STEAM model supports and reinforces the learning that children acquire in their Montessori classroom and gives children more time to explore topics at their own pace. Children benefit socially in this environment where they can participate in collaborative group activities and projects. The STEAM program provides activities that strengthen a child’s fine and large motor skills, support concentration and cognitive skills and ignite the imagination.
Our Prep Program is for new students who are starting at age 2-1/2 to 3 years old. It is offered as a four- or five-morning-per-week program. The goal of the Prep Program is to help younger students learn social and emotional skills to prepare themselves to join a Montessori Preschool/Kindergarten class. Students enrolled in the Prep Program may enroll in the Montessori Preschool program the following school year or may be ready to join the preschool program during the course of the school year (if a space is available and only after a detailed assessment of a student’s readiness for a successful transition) as determined by the Prep Program Teacher and School Director in conjunction with the child’s parents.
Yes. If more than one child from the same immediate family attends at the same time, there is a 5% tuition reduction for each sibling.
Registration fees and tuition deposits are nonrefundable. Enrollment is a commitment for the entire school year and the commitment you make when you enroll your child, in turn, allows the school to make commitments to teachers and fulfill the many financial obligations the school must take on to provide your child’s space in the school.
Once your child has been enrolled, we honor our commitment to you by ensuring your child’s space for the school year. This often means turning away other students who would have enrolled in your child’s space had it been available. For this reason, please make sure you have made your decision to attend Sammamish Montessori School prior to submitting an enrollment contract for your child. Please notify the school in writing as soon as possible (before August 1 of the upcoming school year) if you plan to withdraw your child from the upcoming school year to release yourself from future tuition obligations (September onwards).
If you withdraw during the school year, you must provide written notice at least one month in advance of the first day of the month of your withdrawal (for instance, if leaving March 12, you would need to provide notice by February 1). If you are unable to provide one-month written notice of withdrawal, you must pay one tuition installment in lieu of notice.
Some parents facing job transfers or new job opportunities requiring a move out of our area have been successful in obtaining some consideration from the employer dictating the move, sometimes as part of a relocation package.
The primary Montessori curriculum is a 3-year cycle. The third year, or kindergarten year, is when all the learning that has taken place in the previous two years reaches fruition and a child’s knowledge begins to fall into place. Your child will be challenged to reach his/her potential by his/her Montessori teacher, who knows your child incredibly well and so can provide precisely what is needed next. Children build upon what they have learned, experience rapid academic and social growth, and their skill levels increase dramatically when they are given the opportunity to consolidate their knowledge within the Montessori classroom. Third-year students are ready to explode into more complex learning and discovery, and they delve into a wealth of new and interesting materials. They are guided to take on more and more complex work, begin to learn time management skills and have an increased set of expectations and privileges in the classroom. These older children also reinforce their academic skills by helping another child, a well-documented way to consolidate knowledge.
Preschool children in a Montessori classroom look forward to being one of the “big kids” in the classroom. If he/she is put into a school where the kindergartners are looked down upon as being in the “baby class,” his/her cycle of maturing is interrupted. It is especially unfortunate for a child who is a younger sibling at home to miss this opportunity to shine. This year of leadership gives a child immeasurable self-esteem and intellectual confidence.
A 3% discount is provided for school year tuition paid in full by September 10 if paid in cash or by check.
To register, complete the enrollment form and attach either a check or your credit card number with authorization to charge the registration fee ($190) plus your tuition deposit of 10% of annual tuition. Forms must be filled out completely, signed, and accompanied by payment in order to be eligible for registration processing. You may choose to mail in your registration and payment so that it is received by the deadline, or deliver it by hand.
All registrations received by the deadline will be collected, and on the following day applications will be processed in priority order. Priority is given to full-day students and then to half-day students. Applications received after the deadline will be processed in the order they are received.
Tuition is a school year program fee that may be paid in a lump sum or divided into ten (monthly) installments for your convenience. It is calculated based on the total number of actual school days in the school year and does not include holidays, vacations, in-service, and conference days. When you enroll your child for the school year, you are making a commitment for the entire school year from the first day of school in September through the last day of school in June.
To secure a space for the school year 10% of the school year tuition must be paid upon registration as a nonrefundable deposit, along with the registration fee. If parents select the monthly payment plan, tuition payments for the balance of the school year are outlined in the table below. Please note that monthly payments each represent 1/10 of the total school year tuition and that while some months have more school days and others fewer, the amount paid each month is always the same so that it is easy for parents to remember and for the school to administer.
| Monthly Payment | Amount Due | When Due |
|---|---|---|
| Nonrefundable deposit | 10% of school year tuition | upon registration |
| September | 10% of school year tuition | September 1st |
| October | 10% of school year tuition | October 1st |
| November | 10% of school year tuition | November 1st |
| December | 10% of school year tuition | December 1st |
| January | 10% of school year tuition | January 1st |
| February | 10% of school year tuition | February 1st |
| March | 10% of school year tuition | March 1st |
| April | 10% of school year tuition | April 1st |
| May | 10% of school year tuition | May 1st |
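For illustration only, here is a minimal sketch in Python of how the schedule above works out; the annual tuition figure used is a hypothetical assumption, not an actual school rate, and this is not the school's billing system.

```python
# Hypothetical example only: the annual tuition figure below is illustrative.
annual_tuition = 10_000.00

# Nonrefundable deposit due upon registration: 10% of school-year tuition.
deposit = 0.10 * annual_tuition

# Nine further installments (September through May), each 1/10 of tuition,
# for ten equal payments in total regardless of how many school days fall
# in any given month.
months = ["September", "October", "November", "December", "January",
          "February", "March", "April", "May"]

print(f"Deposit (upon registration): ${deposit:,.2f}")
for month in months:
    print(f"{month} 1st: ${0.10 * annual_tuition:,.2f}")
```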
Except for those schools that are associated with a particular religious community, Montessori schools do not teach religion.
At our school we do not participate in or promote any kind of religious instruction. We learn about holidays, such as Christmas, Hanukkah, Diwali, Ramadan, Eid, and Chinese New Year, or other festivals, but all on the basis of broadening cultural knowledge and understanding. We welcome parent involvement in bringing first-hand knowledge and understanding of these celebrations into our classrooms; however, we do ask that parents tailor any presentations and discussions to focus on cultural rather than religious aspects. Our goal is to give children a taste of the experience of each celebration or festival by sharing the special foods, songs, dances, games, and age-appropriate stories.
Montessori education fundamentally aims to inspire a child’s heart. So while Montessori does not teach religion, we do embrace the great moral and spiritual themes, such as love, kindness, joy, and confidence in the fundamental goodness of life. We encourage the child to begin the journey toward being fully alive and fully human. Everything is intended to nurture within the child a sense of joy and appreciation of life.
Art, music, dance, and creativity are integrated in the curriculum and children are given many opportunities to tap into their own creativity. While each piece of Montessori equipment has a specific purpose which children are shown how to use, once students have mastered a particular concept, they may be free to explore beyond the original lesson. For instance, once preschool/kindergarten students have gained a solid understanding of size with the sensorial materials, such as smallest to biggest, narrowest to widest, they can use the materials to create their own three-dimensional designs. Creative writing is encouraged once children have mastered basic writing concepts using the moveable alphabet for younger students or pencil and paper.
Imagination plays a central role, as children explore how the natural world works, visualize other cultures and ancient civilizations, and search for creative solutions to real-life problems. Children routinely make up their own games and stories during recess. Our playground playhouses and forts are an especially fun place for children to build their own imaginary worlds.
Our Enrichment program also provides ample opportunity for students to be creative. The curriculum spans art, crafts, music, dance, storytelling, acting, puppetry, cooking and other creative endeavors. Enrichment science allows students many hands-on opportunities to smell, touch, taste, manipulate and test things for themselves, thereby honing problem-solving and critical thinking skills.
Based on the observation that competition is an ineffective tool to motivate children to learn and to work hard, Montessori schools do not set students up to compete with one another as is done in many traditional school settings (competing for grades, class rankings, grading on a curve, special awards, etc.).
In a Montessori school, the emphasis is on collaboration rather than competition. Students discover their own innate abilities and develop a strong sense of independence, self-confidence, and self-discipline. In an atmosphere in which children learn at their own pace and compete only against themselves, they learn that making a mistake and learning from one’s mistakes is normal rather than something to be fearful of. Students learn that mistakes are a natural part of the learning process. Our hope is to give students the self-confidence and courage to try things beyond their comfort zones.
While competition is not formal or teacher-created, Montessori children compete with each other every day, both in class and on the playground. Dr. Montessori was herself an extraordinary student and a very high achiever and was never opposed to competition as an idea. She simply recognized that using competition to create an artificial motivation to get students to achieve was ineffective.
Montessori schools allow competition to evolve naturally among children, without adult interference unless the children begin to show poor sportsmanship. The key is the child’s voluntary decision to compete rather than having it imposed on him by the school.
When evaluating students we are more interested in following their individual progress and keeping track of their capabilities than comparing them with their peers. So that a child’s progress may be followed throughout their three-year primary cycle, the same evaluation format is used from preschool right through kindergarten. For that reason, parents should keep in mind that in many areas children cannot be expected to have reached proficiency until their kindergarten year. Teachers keep daily records of everything your child does at school and can give you information about any aspect of your child’s work should you require more details.
For elementary students a different comprehensive elementary focused report format is utilized to track progress throughout the elementary cycle. So that a child’s progress may be followed throughout a three-year elementary cycle, the same comprehensive evaluation format is used each elementary year.
There are no tests or quizzes for preschool or kindergarten students. Montessori teachers carefully observe their students at work to identify areas they have mastered and areas where they need additional practice or perhaps another lesson.
While Montessori students tend to score very well on standardized tests, Montessori educators as a whole are deeply concerned that many standardized tests are inaccurate, misleading, and stressful for children. Good teachers, who work with the same children for three years and carefully observe their work, know far more about their progress than any paper-and-pencil test can reveal.
The ultimate problem with standardized tests is that they have often been misunderstood, misinterpreted, and poorly used to pressure teachers and students to perform at higher standards. Although standardized tests may not offer a terribly accurate measure of a child’s basic skills and knowledge, in most countries test-taking skills are just another Practical Life lesson that children need to master.
Yes, in general, children who are highly gifted will find Montessori to be both intellectually challenging and flexible enough to respond to them as unique individuals. Students are able to socialize with a peer group that meets their social and emotional needs while given the opportunity to move on to more challenging lessons individually.
Every child is unique and has areas of special talents, a unique learning style, and some areas that may be considered special challenges. Montessori is fundamentally designed to allow for differences. It allows students to learn at their own pace and is quite flexible in adapting for different learning styles. So in many cases, children with mild physical handicaps or mild learning disabilities may do very well in a Montessori classroom setting. On the other hand, some children do much better in a smaller, more structured classroom with much more one-on-one instruction.
Each situation has to be evaluated individually to ensure that the program can successfully meet a given child’s needs and learning style.
The Montessori approach evolved over many years as the result of Dr. Montessori’s work with different populations and age groups. One of the earliest groups with which she worked was a population of children who had been placed in a residential-care setting because of severe developmental delays.
The Method is used today with a wide range of children, but it is most commonly found in educational programs designed for the typical range of students found in most classrooms.
Sammamish Montessori School serves children ages 3 years and older, starting in preschool and continuing through kindergarten. New students ages 2-1/2 to 3 who are not yet potty trained and ready for a preschool program may begin in our Prep Program until they are ready to transition into a Montessori Preschool/Kindergarten class.
Our summer program includes options for elementary age children up to 8 years old.
Sammamish Montessori School abides by the same age requirements as Washington state public school districts.
Kindergarten: must be 5 years old on or before August 31 of the school year
First grade: must be 6 years old on or before August 31 of the school year
and so on.
All-class naps are not included as a regular component of any classroom (our Prep Program for 2-1/2 to 3 year olds is morning only). However, on an individual basis, a child who is in need of a nap may rest on a mat in the office in the care of the office staff, or in the quiet area that is often available in the classroom. As our program offers morning and afternoon sessions, you may opt to select a schedule that meets your child’s developmental needs. Keep in mind that most children at this age begin to outgrow naps and do better with a longer consolidated nighttime sleep (versus a shorter night sleep and a nap). Make sure you are consistent with bedtimes and wake times, even on weekends, and try to modify a little at a time to make bedtimes earlier and nighttime sleep longer.
Consistent routines and getting enough sleep can make a tremendous difference to your child’s day and enable them to be ready to learn new social and academic skills at school. This is true at any age, but particularly true for younger children, especially if adapting to a new routine.
The recommended amounts of sleep per 24 hours are 11-13 hours for 3 – 5 year olds and 10-11 hours for 5 – 12 year olds. Sleep is most restorative when it is consolidated. If, when your child goes to bed, he/she falls asleep easily, wakes up easily and is not tired during the day, then he/she is probably getting enough sleep.
The best way to tell if your child is getting enough sleep is to see how he/she acts during the day. Take a moment to notice if:
- Your child falls asleep in the car almost every time you drive;
- You have to wake your child up most mornings;
- Your child seems overtired, cranky, irritable, aggressive, over emotional, hyperactive or has trouble thinking during the day;
- On some nights, your child is tired much earlier than his/her usual bedtime.
If your child falls into this pattern, then he/she might not be getting enough sleep.
Sleep deprived children may have more trouble than usual controlling their emotions. The part of the brain that helps to control our response to our feelings and actions is greatly affected by sleep deprivation. A child who does not get enough sleep may have behavior or attention problems, be more likely to hurt him/herself and just not be doing as well as expected.
A recent study published in the Archives of Pediatric and Adolescent Medicine, conducted at the University of Washington, also suggests that children who do not get enough nighttime sleep are at increased risk of becoming overweight or obese. Researchers further noted that napping was not an effective substitute for nighttime sleep in terms of obesity prevention, citing that “sleeping at night is deeper and therefore more restorative than sleeping during the day.” (For more information read In Young Kids, Lack of Sleep Linked to Obesity Later.)
Children must be potty trained to attend our Montessori Preschool/Kindergarten classes. To be considered potty trained children must demonstrate that they are able, on their own initiative, to go to the bathroom with little or no adult prompting or assistance. Our aim is to help children become independent in all aspects of their development, including managing their own basic needs. We of course are able and prepared to assist with an occasional accident and/or help a child get their clothing refastened. We also regularly remind children to remember to go the bathroom. However, chronic potty accidents detract from our ability to provide academic lessons to all of the children.
For new students ages 2-1/2 to 3 years old who are not yet potty trained and ready for preschool, we offer a Prep Program, which is designed to prepare younger children so that they can eventually transition into a Montessori Preschool/Kindergarten class. The transition process will take place when the child has reached age 3, can demonstrate social and emotional readiness, and is adequately toilet trained. The child’s teacher and the director will determine when to move a child from the Prep Program to a 3-6 Montessori preschool/kindergarten classroom. The transition is also only possible if there is space available at that time.
The emphasis in the Prep Program will be on socialization and independence. Our prep students will be exposed to a wide variety of practical life and sensorial activities, play, art, stories, singing, movement and music. Children in the Prep Program do not need to be fully potty trained, as potty training will be one of the skills taught.
Extended care is available to all students enrolled in preschool, kindergarten or elementary (during summer only if enrolled in summer Discovery camp). Students may arrive before or after their class time and will be supervised in our Clubroom. Children brought to school or picked up outside of regular class time must be signed in/out in the Clubroom. A parent or other parent-authorized pickup/drop-off designee must accompany children to and from the Clubroom.
Yes, our native-speaking Spanish teacher visits each classroom one morning and one afternoon each week.
The school is open from 7 a.m. to 6:15 p.m. Monday through Friday.
What times are classes held?
Daily Class Times
Early Birds Clubroom
7:30 a.m. to 9 a.m. (Students are escorted to their morning classrooms starting at 8:45 a.m.)
Morning session (applies to Prep, Morning Preschool and Morning Kindergarten)
9 a.m. to 11:30 a.m. (Drop off begins 8:45 a.m. and pick-up ends 11:45 a.m.)
Afternoon session (applies to Afternoon Preschool and Afternoon Kindergarten)
12:45 p.m. to 3:15 p.m. (Drop off begins 8:45 a.m. and pick-up ends 3:30 p.m.)
Full Day (applies to Full Day Preschool, Half-Day Montessori + Half-Day Enrichment, Full Day Kindergarten and Elementary)
9 a.m. to 3:15 p.m. (Drop off begins 8:45 a.m. and pick-up ends 3:30 p.m.)
After School Club
3:15 p.m. to 6:15 p.m. (Students remaining at the end of class transition time are escorted from their classrooms to After School Club promptly at 3:30 p.m.)
By being allowed freedom in a controlled environment with consistent boundaries and guidelines, the child who is able to feel secure learns to love and care for other people and develops confidence and control over his/her own behavior. Our teachers intervene when a child’s behavior is disruptive or upsetting to others. The situation is handled with deep respect and sensitivity. Montessori believed that good behavior is part of the inner discipline that we strive to help each child achieve for him/herself, rather than a dependence upon rules imposed by others. “Punishment” takes the form of fair and logical consequences, which are fully discussed with the child. Montessori believed that children are by nature loving and caring, and we strive to help them develop the vital social and emotional skills needed to participate in any community.
Montessori children tend to be socially comfortable. Because they have been encouraged to problem-solve and think independently, Montessori children are typically happy, confident, and resourceful and settle quickly and easily into new schools once they have assimilated the different expectations and ground rules.
By the end of kindergarten, Montessori children are normally curious, self-confident learners who look forward to going to school. They are typically engaged, enthusiastic learners who honestly want to learn and who ask excellent questions.
By age six most have spent three or four years in a school where they were treated with honesty and respect with clear expectations and ground rules. Within that framework, their opinions and questions were taken quite seriously.
There is nothing inherent in Montessori that causes children to have a hard time if they are transferred to traditional schools. Some may be bored or not understand why everyone in the class has to do the same thing at the same time. However, most adapt to their new setting fairly quickly, make new friends, and succeed within the definition of success understood in their new school.
Metz
Area: 41.94 km2
Founded: 5th century BC
Population: 122,838 (2008)
Mayor: Dominique Gros (PS)
Colleges and universities: Paul Verlaine University – Metz, Georgia Tech Lorraine
Points of interest: Basilica of Saint-Pierre-aux-Nonnains, Metz Cathedral, Museums of Metz, Opera-Theatre de Metz Metropole, Arsenal
Metz ([mɛs]; [mɛt͡s]) is a city in northeast France located at the confluence of the Moselle and the Seille rivers. Metz is the prefecture of the Moselle department and the seat of the parliament of the Grand Est region. Located near the tripoint along the junction of France, Germany, and Luxembourg, the city forms a central place of the European Greater Region and the SaarLorLux euroregion.
- Notable people
- Local law
- City administrative divisions
- Cityscape and environmental policy
- Urban ecology
- Military architecture
- Museums and exhibition halls
- Entertainment and performing arts
- Metz in the arts
- The Graoully dragon as symbol of the city
- Celebrations and events
- High schools
- University of Lorraine
- Graduate schools
- Local transport
- Religious heritage
- Civil heritage
- Administrative heritage
- Military heritage
- International relations
Metz has a rich 3,000-year history, having variously been a Celtic oppidum, an important Gallo-Roman city, the Merovingian capital of the Austrasia kingdom, the birthplace of the Carolingian dynasty, a cradle of the Gregorian chant, and one of the oldest republics in Europe. The city has been steeped in Romance culture, but has been strongly influenced by Germanic culture due to its location and history.
Because of its historical, cultural, and architectural background, Metz has been submitted to France's UNESCO World Heritage Tentative List. The city features noteworthy buildings such as the Gothic Saint-Stephen Cathedral, with the largest expanse of stained-glass windows in the world; the Basilica of Saint-Pierre-aux-Nonnains, the oldest church in France; the Imperial Station Palace, displaying the apartment of the German Kaiser; and the Opera House, the oldest working opera house in France. Metz is home to some world-class venues including the Arsenal Concert Hall and the Centre Pompidou-Metz museum.
A pioneer of urban ecology, Metz gained its nickname of The Green City (French: La Ville Verte), as it has extensive open grounds and public gardens. The historic city centre is one of the largest commercial pedestrian areas in France.
A historic garrison town, Metz is the economic heart of the Lorraine region, specialising in information technology and automotive industries. Metz is home to the University of Lorraine and a centre for applied research and development in the materials sector, notably in metallurgy and metallography, the heritage of the Lorraine region's past in the iron and steel industry.
In ancient times, the town was known as "city of Mediomatrici", being inhabited by the tribe of the same name. After its integration into the Roman Empire, the city was called Divodurum Mediomatricum, meaning Holy Village or Holy Fortress of the Mediomatrici, then it was known as Mediomatrix. During the 5th century AD, the name evolved to "Mettis", which gave rise to Metz.
Metz has a recorded history dating back over 3,000 years. Before the conquest of Gaul by Julius Caesar in 52 BC, it was the oppidum of the Celtic Mediomatrici tribe. Integrated into the Roman Empire, Metz quickly became one of the principal towns of Gaul, with a population of 40,000, until the barbarian depredations and its transfer to the Franks about the end of the 5th century. Between the 6th and 8th centuries, the city was the residence of the Merovingian kings of Austrasia. After the Treaty of Verdun in 843, Metz became the capital of the Kingdom of Lotharingia and was ultimately integrated into the Holy Roman Empire, being granted semi-independent status. During the 12th century, Metz rose to the status of a republic, and the Republic of Metz ruled until the 15th century.
With the signing of the Treaty of Chambord in 1552, Metz passed into the hands of the Kings of France. Under French rule, Metz was selected as capital of the Three Bishoprics and became a strategic fortified town. With the creation of the departments by the Estates-General of 1789, Metz was chosen as capital of the Department of Moselle. After the defeat of France in the Franco-Prussian War, and according to the Treaty of Frankfurt of 1871, the city was annexed into the German Empire, being part of the Imperial Territory of Alsace-Lorraine and serving as capital of the German Department of Lorraine.
Metz remained German until the end of World War I, when it reverted to France. However, after the Battle of France during the Second World War, the city was annexed once more by the German Third Reich. In 1944, the attack on the city by the U.S. Third Army freed Metz from German rule, and the city reverted once more to France after World War II.
During the 1950s, Metz was chosen to be the capital of the newly created Lorraine region. With the creation of the European Community and the later European Union, the city has become central to the Greater Region and the SaarLorLux Euroregion.
Metz is located on the banks of the Moselle and the Seille rivers, 43 km (26.7 mi) from the Schengen tripoint where the borders of France, Germany, and Luxembourg meet. The city was built in a place where many branches of the Moselle river create several islands, which are encompassed within the urban planning.
The terrain of Metz forms part of the Paris Basin and presents a plateau relief cut by river valleys presenting cuestas in the north-south direction. Metz and its surrounding countryside are included in the Lorraine Regional Natural Park, an area of forest and cropland covering a total of 205,000 ha (506,566.0 acres).
The climate of Lorraine is semi-continental. The summers are humid and hot, sometimes stormy, and the warmest month of the year is August, when temperatures average approximately 26 °C (78.8 °F). The winters are cold and snowy, with temperatures dropping to an average low of −0.5 °C (31.1 °F) in January. Lows can be much colder through the night and early morning, and the snowy period extends from November to February.
The length of the day varies significantly over the course of the year. The shortest day is 21 December with 7:30 hours of sunlight; the longest day is 20 June with 16:30 hours of sunlight. The median cloud cover is 93% and does not vary substantially over the course of the year.
The inhabitants of Metz are called Messin(e)s. Statistics on the ethnic and religious makeup of the population of Metz are haphazard, as the French Republic prohibits making distinctions between citizens regarding race, beliefs, and political and philosophic opinions in the process of census taking.
The French national census of 2012 estimated the population of Metz to be 119,551, while the population of the Metz urban agglomeration was about 389,851. Through history, Metz's population has been impacted by the vicissitudes of the wars and annexations involving the city, which have prevented continuous population growth. More recently, the city has suffered from the restructuring of the military and the metallurgy industry.
Notable people
Several well-known figures have been linked to the city of Metz throughout its history. Renowned Messins include poet Paul Verlaine, Pierre Gunther, composer Ambroise Thomas, and mathematician Jean-Victor Poncelet; numerous well-known German figures were also born in Metz, notably during the annexation periods. Moreover, the city has been the residence of people such as writer François Rabelais, Cardinal Mazarin, political thinker Alexis de Tocqueville, French patriot and American Revolutionary War hero Marquis Gilbert du Motier de La Fayette, and Luxembourg-born German-French statesman Robert Schuman.
Local law
The Local Law (French: droit local) applied in Metz is a legal system that operates in parallel with French law. Created in 1919, it preserves the French laws applied in France before 1870 and maintained by the Germans during the annexation of Alsace-Lorraine, but repealed in the rest of France after 1871. It also maintains German laws enacted by the German Empire between 1871 and 1918, specific provisions adopted by the local authorities, and French laws that have been enacted after 1919 to be applicable only in Alsace-Lorraine. This specific local legislation encompasses different areas including religion, social work and finance.
The most striking of the legal differences between France and Alsace-Lorraine is the absence in Alsace-Lorraine of strict secularism, even though a constitutional right of freedom of religion is guaranteed by the French government. Alsace-Lorraine is still governed by a pre-1905 law established by the Concordat of 1801, which provides for the public subsidy of the Roman Catholic, Lutheran, and Calvinist churches and the Jewish religion.
Like every commune of the present French Republic, Metz is managed by a mayor (French: maire) and a municipal council (French: conseil municipal), democratically elected by two-round proportional voting for six years. The mayor is assisted by 54 municipal councillors, and the municipal council meets on the last Thursday of every month. Since 2008, the mayor of Metz has been socialist Dominique Gros.
The city belongs to the Metz Metropole union of cities, which includes the 40 cities of the Metz urban agglomeration. Metz is the prefecture of the Moselle department, with the prefecture itself housed in the former Intendant Palace. In addition, Metz is the seat of the parliament of the Grand Est region, hosted in the former Saint-Clement Abbey.
City administrative divisions
The city of Metz is divided into 14 administrative divisions.
Cityscape and environmental policy
Metz contains a blend of architectural layers, bearing witness to centuries of history at the crossroads of different cultures, and features a number of architectural landmarks. The city possesses one of the largest Urban Conservation Areas in France, and more than 100 of the city's buildings are classified on the Monument Historique list. Because of its historical and cultural background, Metz is designated a French Town of Art and History and has been submitted to France's UNESCO World Heritage Tentative List.
The city is famous for its yellow limestone architecture, a result of the extensive use of Jaumont stone. The historic district has kept part of the Gallo-Roman city with Divodurum's Cardo Maximus, then called Via Scarponensis (today the Trinitaires, Taison, and Serpenoise streets), and the Decumanus Maximus (today En Fournirue and d'Estrées streets). At the Cardo and Decumanus intersection was situated the Roman forum, today the Saint-Jacques Square.
From its Gallo-Roman past, the city preserves vestiges of the thermae (in the basement of the Golden Courtyard museum), parts of the aqueduct, and the Basilica of Saint-Pierre-aux-Nonnains.
Saint Louis' square with its vaulted arcades and a Knights Templar chapel remains a major symbol of the city's High Medieval heritage. The Gothic Saint-Stephen Cathedral, several churches and Hôtels, and two remarkable municipal granaries reflect the Late Middle Ages. Examples of Renaissance architecture can be seen in Hôtels from the 16th century, such as the House of Heads (French: Maison des Têtes).
The city hall and the buildings surrounding the town square are by French architect Jacques-François Blondel, who was awarded the task of redesigning and modernizing the centre of Metz by the Royal Academy of Architecture in 1755, in the context of the Enlightenment. Neoclassical buildings from the 18th century, such as the Opera House, the Intendant Palace (the present-day prefecture), and the Royal Governor's Palace (the present-day courthouse) built by Charles-Louis Clérisseau, are also found in the city.
The Imperial District was built during the first annexation of Metz by the German Empire. In order to "germanise" the city, Emperor Wilhelm II decided to create a new district shaped by a distinctive blend of Germanic architecture, including Renaissance, neo-Romanesque and neo-Classical, mixed with elements of Art Nouveau, Art Deco, Alsatian and mock-Bavarian styles. Instead of Jaumont stone, commonly used everywhere else in the city, stones typical of the Rhineland, such as pink and grey sandstone, granite and basalt, were used. The district features noteworthy buildings including the rail station and the Central Post Office by German architect Jürgen Kröger.
Modern architecture can also be seen in the town with works of French architects Roger-Henri Expert (Sainte-Thérèse-de-l'Enfant-Jésus church, 1934), Georges-Henri Pingusson (Fire Station, 1960), and Jean Dubuisson (subdivisions, 1960s). The refurbishment of the former Ney Arsenal as a Concert Hall in 1989 and the erection of the Metz Arena in 2002, by Spanish architect Ricardo Bofill and French architect Paul Chemetov respectively, represent the Postmodern movement.
The Centre Pompidou-Metz museum in the Amphitheatre District represents a strong architectural initiative to mark the entrance of Metz into the 21st century. Designed by Japanese architect Shigeru Ban, the building is remarkable for the complex, innovative carpentry of its roof, and integrates concepts of sustainable architecture. The project encompasses the architecture of two recipients of the Pritzker Architecture Prize, Shigeru Ban (2014) and French architect Christian de Portzamparc (1994). The Amphitheatre District was also conceived by French architects Nicolas Michelin, Jean-Paul Viguier, and Jean-Michel Wilmotte and designer Philippe Starck. The urban project is expected to be completed by 2023. Further, a contemporary music venue designed by contextualist French architect Rudy Ricciotti stands in the Borny District.
Urban ecology
Under the leadership of such people as botanist Jean-Marie Pelt, Metz pioneered a policy of urban ecology during the early 1970s. Because of the failure of post-war urban planning and housing estate development in Europe during the 1960s, mostly based on the concepts of CIAM, Jean-Marie Pelt, then municipal councillor of Metz, initiated a new approach to the urban environment.
Based initially on the ideas of the Chicago School, Pelt's theories pleaded for better integration of humans into their environment and developed a concept centered on the relationship between "stone and water". His policy was realized in Metz by the establishment of extensive open areas surrounding the Moselle and the Seille rivers and the development of large pedestrian areas. As a result, Metz has over 37 m2 (400 sq ft) of open areas per inhabitant in the form of numerous public gardens in the city.
The principles of urban ecology are still applied in Metz with the implementation of a local Agenda 21 action plan. The municipal ecological policy encompasses the sustainable refurbishment of ancient buildings, the erection of sustainable districts and buildings, green public transport, and the creation of public gardens by means of landscape architecture.
Additionally, the city has developed its own combined heat and power station, using waste wood biomass from the surrounding forests as a renewable energy source. With a thermal efficiency above 80%, the 45MW boiler of the plant provides electricity and heat for 44,000 dwellings. The Metz power station is the first local producer and distributor of energy in France.
Military architecture
As a historic garrison town, Metz has been heavily influenced by military architecture throughout its history. From ancient history to the present, the city has been successively fortified and modified to accommodate the troops stationed there. Defensive walls from classical antiquity to the 20th century are still visible today, incorporated into the design of public gardens along the Moselle and Seille rivers. A medieval bridge castle from the 13th century, named Germans' Gate (French: Porte des Allemands), today converted into a convention and exhibition centre, has become one of the landmarks of the city. Remains of the citadel from the 16th century and fortifications built by Louis de Cormontaigne are still visible today. Important barracks, mostly from the 18th and 19th centuries, are spread around the city: some, which are of architectural interest, have been converted to civilian use, such as the Arsenal Concert Hall by Spanish architect Ricardo Bofill.
The extensive fortifications of Metz, which ring the city, include early examples of Séré de Rivières system forts. Other forts were incorporated into the Maginot Line. A hiking trail on the Saint-Quentin plateau passes through a former military training zone and ends at the now abandoned military forts, providing a vantage point from which to survey the city.
Although the steel industry has historically dominated Moselle's economy, Metz's efforts at economic diversification have created a base in the sectors of commerce, tourism, information technology and the automotive industry. The city is the economic heart of the Lorraine region, and around 73,000 people work daily within the urban agglomeration. The transport facilities found in the conurbation, including the international high-speed railway, motorway, inland waterway connections and the local bus rapid transit system, have made the city a transport hub in the heart of the European Union. Metz is home to the biggest cereal-handling river port in France, with over 4,000,000 tons per year.
Metz is home to the Moselle Chamber of Commerce. International companies such as PSA Peugeot Citroën, ArcelorMittal, SFR, and TDF have established plants and centres in the Metz conurbation. Metz is also the regional headquarters of the Caisse d'Epargne and Banque Populaire banking groups.
Metz is an important commercial centre of northern France with France's biggest retailer federation, consisting of around 2,000 retailers. Important retail companies are found in the city, such as the Galeries Lafayette, the Printemps department store and the Fnac entertainment retail chain. The historic city centre displays one of the largest commercial pedestrian areas in France and a mall, the Saint-Jacques centre. In addition there are several multiplex movie theatres and malls found in the urban agglomeration.
In recent years, Metz municipality have promoted an ambitious policy of tourism development, including urban revitalization and refurbishment of buildings and public squares. This policy has been spurred by the creation of the Centre Pompidou-Metz in 2010. Since its inauguration, the institution has become the most popular cultural venue in France outside Paris, with 550,000 visitors per year. Meanwhile, Saint-Stephen Cathedral is the most visited building in the city, accommodating 652,000 visitors per year.
Museums and exhibition halls
In addition, Metz features other museums and exhibition venues, such as:
Entertainment and performing arts
Metz has several venues for the performing arts. The Opera House of Metz, the oldest working opera house in France, features plays, dance, and lyric poetry. The Arsenal Concert Hall, dedicated to art music, is widely renowned for its excellent acoustics. The Trinitarians Club is a multi-media arts complex housed in the vaulted cellar and chapel of an ancient convent, the city's prime venue for jazz music. The Music Box (French: Boite à Musique), familiarly known as BAM, is the concert venue dedicated to rock and electronic music. The Braun Hall and the Koltès Theater feature plays, and the city has two movie theaters specializing in Auteur cinema. The Saint-Jacques Square, surrounded by busy bars and pubs whose open-air tables fill the centre of the square.
Since 2014, the former bus garage has been converted to accommodate over thirty artists in residence, in a space where they can create and rehearse artworks and even build set decorations. The artistic complex, called Metz Network of All Cultures (French: Toutes les Cultures en Réseau à Metz) and familiarly known as TCRM-Blida, encompasses a large hall of 3,000 m2 (32,000 sq ft) while theater and dance companies benefit from a studio of 800 m2 (8,600 sq ft) with backstages.
Metz in the arts
Metz was an important cultural centre during the Carolingian Renaissance. For instance, Gregorian chant was created in Metz during the 8th century as a fusion of Gallican and ancient Roman repertory. Then called Messin Chant, it remains the oldest form of music still in use in Western Europe. The bishops of Metz, notably Saint-Chrodegang promoted its use for the Roman liturgy in Gallic lands under the favorable influence of the Carolingian monarchs. Messin chant made two major contributions to the body of chant: it fitted the chant into the ancient Greek octoechos system, and invented an innovative musical notation, using neumes to show the shape of a remembered melody. Metz was also an important centre of illumination of Carolingian manuscripts, producing such monuments of Carolingian book illumination as the Drogo Sacramentary.
The Metz School (French: École de Metz) was an art movement in Metz and the region between 1834 and 1870, centred on Charles-Laurent Maréchal. The term was originally proposed in 1845 by the poet Charles Baudelaire, who appreciated the works of the artists. They were influenced by Eugène Delacroix and inspired by the medieval heritage of Metz and its romantic surroundings. The Franco-Prussian War and the annexation of the territory by the Germans resulted in the dismantling of the movement. The main figures of the Metz School were Charles-Laurent Maréchal, Auguste Migette, Auguste Hussenot, Louis-Théodore Devilly, Christopher Fratin, and Charles Pêtre. Their works include paintings, engravings, drawings, stained-glass windows, and sculptures.
A festival named "passages" takes place in May. Numerous shows are presented to it.
The Graoully dragon as symbol of the city
The Graoully is depicted as a fearsome dragon, vanquished by the sacred powers of Saint Clement of Metz, the first Bishop of the city. The Graoully quickly became a symbol of Metz and can be seen in numerous insignia of the city, from the 10th century on. Writers from Metz tend to present the legend as an allegory of Christianity's victory over paganism, represented by the harmful dragon.
Local specialties include the quiche, the potée, the Lorrain pâté, and also suckling pig. Different recipes, such as jam, tart, charcuterie and fruit brandy, are made from the Mirabelle and Damson plums. Also, Metz is the cradle of some pastries like the Metz cheese pie and the Metz Balls (French: boulet de Metz), a ganache-stuffed biscuit coated with marzipan, caramel, and dark chocolate. Local beverages include Moselle wine and Amos beer.
The Covered Market of Metz is one of the oldest, most grandiose in France and is home to traditional local food producers and retailers. Originally built as the bishop's palace, the French Revolution broke out before the Bishop of Metz could move in and the citizens decided to turn it into a food market. The adjacent Chamber's Square (French: Place de la Chambre) is surrounded by numerous local food restaurants.
Celebrations and events
Many events are celebrated in Metz throughout the year. The city of Metz dedicates two weeks to the Mirabelle plum during the popular Mirabelle Festival held in August. During the festival, in addition to open markets selling fresh plums, mirabelle tarts, and mirabelle liquor, there are live music, fireworks, parties, art exhibits, a parade with floral floats, a competition, the crowning of the Mirabelle Queen and a gala of celebration.
A literature festival is held in June. The Montgolfiades hot air balloon festival is organized in September. The second most popular Christmas Market in France is held in November and December. Finally, a Saint Nicholas parade honors the patron saint of the Lorraine region in December.
Metz is home to the Football Club of Metz (FC Metz), a football association club in Ligue 1, the first division of French football (as of 2016–2017 season). FC Metz has won three times the Ligue 2 (1935, 2007, and 2014), twice the Coupe de France (in 1984 and 1988) and the French League Cup (in 1986 and 1996), and was French championship runner-up in 1998. FC Metz has also gained recognition in France and Europe for its successful youth academy, winning the Gambardella Cup 3 times in 1981, 2001, and 2010. The Saint-Symphorien stadium has been the home of FC Metz since the creation of the club.
Metz Handball is a Team Handball club. Metz Handball has won the French Women's First League championship 20 times, the French Women's League Cup eight times and the Women's France Cup seven times. The Metz Arena has been the home of Metz Handball since 2002.
Since 2003, Metz has been home to the Moselle Open, an ATP World Tour 250 tournament played on indoor hard courts, which usually takes place in September.
Metz has numerous high schools, including the Fabert High School and the Lycée of Communication. Some of these institutions offer higher education programs such as classes préparatoires (undergraduate school) or BTS (technician certificate).
University of Lorraine
Metz is also home to the University of Lorraine (often abbreviated in UdL). The university is divided into two university centers, one in Metz (material sciences, technology, and management) and one in Nancy (biological sciences, health care, administration, and management). The University of Lorraine, which ranks in 2016 among the top 15 of French universities and among top 300 of the world universities according to the 2016 Academic Ranking of World Universities, has a student body of over 55,000 and offers 101 accredited research centers organized in 9 research areas and 8 doctoral colleges.
At the end of the 1990s, the city expanded and the Metz Science Park was created in the southern area. Along with this expansion, several graduate schools took the opportunity to establish campuses in the park. At first, facilities were grouped around the lake Symphony, like Supélec in 1985 and Georgia Tech Lorraine in 1990. In 1996, the engineering school Arts et Métiers ParisTech (ENSAM) built a research and learning center next to the golf course. This opened the way to the development of a new area, where the Franco-German university (ISFATES) and the ENIM moved in 2010. These graduate schools often cooperate with the University of Lorraine. For instance, the university and ENSAM share research teams, laboratories, equipments, and doctoral programs.
Public transport includes a bus rapid transit system, called Mettis. Mettis vehicles are high-capacity hybrid bi-articulated buses built by Van Hool, and stop at designated elevated tubes, complete with disability access. Mettis has its own planned and integrated transportation system, which includes two dedicated lines that spread out into the Metz conurbation. Mettis lanes A and B serve the city's major facilities (e.g. city centre, university campus, and hospitals), and a transport hub is located next to the railway station.
Metz Railway Station is connected to the French high speed train (TGV) network, which provides a direct rail service to Paris and Luxembourg. The time from Paris (Gare de l'Est) to Metz is 82 minutes. Additionally, Metz is served by the Lorraine TGV railway station, located at Louvigny, 25 km (16 mi) to the south of Metz, for high speed trains going to Nantes, Rennes, Lille and Bordeaux (without stopping in Paris). Also, Metz is one of the main stations of the regional express trains system, Métrolor.
Metz is located at the intersection of two major road axes: the Eastern Motorway, itself a part of the European route E50 connecting Paris to Prague, and the A31 Motorway, which goes north to Luxembourg and south to the Mediterranean Sea towards Nancy, Dijon, and Lyon.
The Luxembourg International Airport is the nearest international airport, connected to Metz by Métrolor train. The Lorraine TGV Station is 75 minutes by train from France international Paris-Charles de Gaulle Airport. Finally, Metz-Nancy-Lorraine Airport is located in Goin, 16.5 km (10.25 mi) southeast of Metz..
Metz is located at the confluence of the Moselle and the Seille rivers, both navigable waterways. The marina connects Metz to the cities of the Moselle valley (i.e. Trier, Schengen, and Koblenz) via the Moselle river.
Metz is a member of the QuattroPole(FR)(DE) union of cities, along with Luxembourg, Saarbrücken, and Trier (neighbouring countries: Luxembourg, France, and Germany). Metz has a central place in the Greater Region and of the economic SaarLorLux Euroregion. Metz is also twin town with: | <urn:uuid:2bab5962-2e08-4d2e-97aa-8424de8670ed> | CC-MAIN-2022-33 | https://alchetron.com/Metz | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571758.42/warc/CC-MAIN-20220812200804-20220812230804-00696.warc.gz | en | 0.945897 | 6,662 | 2.671875 | 3 |
SALT - Parashat Tzav 5781 / 2021
In memory of Lieutenant Daniel Yaakov Mandel HY"D,
killed in battle in Nablus on Yud Gimel Nissan, 18 years ago.
Yehi zikhro barukh.
Parashat Tzav begins with the mitzva of terumat ha-deshen – the removal of ashes from the altar each morning. The daily series of rituals in the Beit Ha-mikdash began with a kohen ascending the ramp to the altar, collecting some ashes, and placing them on the ground alongside the altar.
The Gemara in Masekhet Yoma (24a) addresses the question of how much ashes the kohen was required to collect and remove from the altar each morning. The Torah (6:3) formulates this command with the word “ve-heirim” (“he shall lift,” or “he shall remove”), a term which resembles the Torah’s formulation in Sefer Bamidbar (18:26,28) in presenting the command of terumat ma’aser, the tithe taken by the Leviyim from the portions of produce they receive (“va-hareimotem,” “tarimu”). Accordingly, the Gemara considers, perhaps the kohen was required to remove each day one-tenth of the ashes on the altar. However, the Gemara notes that the term “va-hareimota” is used also later in Sefer Bamidbar (31:28) in reference to the 1/500th of the spoils taken by the nation’s soldiers from Midyan as a donation to the Mishkan. Perhaps, then, the kohen was required to take only 1/500th of the ashes each day. The Gemara concludes that neither figure is correct, as in truth, the kohen is required to take a fistful of ashes. The word “ve-heirim” is used later in Parashat Tzav (6:8) in reference to the kohen’s removing a fistful from the mincha (grain offering) and placing it on the altar. Accordingly, the command of “ve-heirim” in the context of the terumat ha-deshen, too, requires removing a fistful of ashes.
Rashi, commenting on the Gemara, clarifies that this does not mean that the kohen actually takes ashes with his hand from the altar. After all, as the fire on the altar consistently burned, the ashes were always very hot, thus making it impractical for the kohen to take ashes with his hand. Rather, the Gemara speaks here of the minimum required quantity of ashes, which were taken with a special pan designated for this purpose. Rashi writes that if the kohen wished, he could remove even more than a fistful of ashes.
Interestingly, both in Rashi’s Torah commentary here in Parashat Tzav (6:3), and in his commentary to Masekhet Yoma (12b), Rashi writes that the kohen would take a “melo machta” – the amount that would fill the pan used for the terumat ha-deshen. The Maharal of Prague, in his Gur Aryeh (in Parashat Tzav), explains that this is the maximum amount allowed. Rashi does not mean that the kohen must, or optimally should, remove a “melo machta” from the altar, but rather mentions this amount as the maximum quantity that would be removed each day.
Elsewhere, Rashi appears to point to a practical halakhic difference between the first kometz (volume of a handful) removed by the kohen, and the additional ashes. The Gemara in Masekhet Temura (34a) speaks of the prohibition to derive benefit from the ashes removed from the altar as part of the terumat ha-deshen ritual (such as using them as fertilizer). Rashi, curiously, comments that this refers to the kometz of ashes removed by the kohen. Although Rashi, as we have seen, maintains that a kohen could remove as much as a “melo machta,” here he specifies the minimum required amount of a handful. The Vilna Gaon, in his notes to Masekhet Temura, suggests that Rashi perhaps maintained that only a handful’s worth of the ashes removed from the altar becomes forbidden for benefit, whereas the additional ashes do not. As the mitzva requires removing only a handful, the excess ashes are not included in the prohibition against deriving benefit from the terumat ha-deshen ashes.
Rav Eliyahu Sosevsky, in his Lefanai Tamid commentary to Masekhet Tamid (p. 22), suggests further explaining Rashi’s view in light of Rashi’s remarks elsewhere in his writings. The Gemara in Masekhet Yoma (21a) teaches that certain forms of refuse in the Beit Ha-mikdash were miraculously absorbed by the ground. Specifically, the Gemara mentions the portions of bird offerings which are removed from the bird and not placed on the altar (see Vayikra 1:16), the ashes which collect on the incense altar, and the refuse from the lamps of the menorah. Rashi, both in his Torah commentary (Vayikra 1:16) and in his commentary to the Gemara (Pesachim 26, Me’ila 11b), maintains that the ashes of the teruma ha-deshen were likewise included in this miracle, and were supernaturally absorbed by the ground of the Temple courtyard. (Rabbeinu Tam, cited by Tosafot in Me’ila (11b) and Zevachim (64a), disagreed.) Conceivably, Rav Sosevsky writes, Rashi maintained that only a handful’s worth of ashes would be absorbed in the ground, and then the remaining ashes were permitted for use. As a practical matter, the kohen had no possibility of removing only a handful ashes, and so he needed to remove more, but the subsequent miraculous absorption of the ashes revealed which ashes fulfilled the mitzva, and which were the excess. Therefore, once a handful’s worth of ashes was absorbed into the ground, the remaining ashes were permissible, as they were shown not to have been the ashes through which the mitzva of terumat ha-deshen was fulfilled, and thus they are not included in the prohibition against using the terumat ha-deshen ashes.
In concluding its discussion in Parashat Tzav of the mincha (grain offering), the Torah discusses the distribution of the offering among the kohanim (7:9-10). The Torah appears to distinguish in this regard between different types of mincha sacrifices, awarding some types exclusively to the kohen who tended to the sacrifice, whereas requiring others to be distributed among all kohanim. Rashi (7:9), however, citing Torat Kohanim, clarifies that in truth, all mincha offerings – and, in fact, all sacrifices – are distributed among the kohanim of the beit av – the shift serving in the Temple that week. All sacrificial food which the Torah grants to the kohanim is divided among the kohanim of that week’s beit av.
The Torah presents this law by stating that the food is given to the kohanim “ish ke-achiv” – “each like his fellow” (7:10), indicating that all members of the beit av receive an equal share. However, the Gemara in Masekhet Pesachim (3b) relates a story which appears to indicate otherwise. The Gemara tells of three kohanim in the Beit Ha-mikdash who were discussing the sizes of the portions they had received. (Rashi explains that they were speaking of their portions of the lechem ha-panim, the special bread which sat on the table in the Temple all week and was then eaten by the kohanim on Shabbat.) One kohen lamented that he received just a portion the size of a bean; a second shared that his portion was the size of an olive; and a third complained that his was the size of a lizard’s tail. (The Gemara relates that the third kohen’s mention of a lizard’s tail was deemed inappropriately crass, prompting the officials to research his pedigree, and they found that, indeed, this kohen was actually not qualified to serve in the Beit Ha-mikdash.) This account certainly seems to suggest that different kohanim received different sized portions of the sacrificial food – in direct contradistinction to the implication of the verse in Parashat Tzav, which instructs that the sacrifices are distributed among the kohanim “ish ke-achiv.”
The Chatam Sofer, in his commentary to Masekhet Pesachim, explains that in truth, all these kohanim were given portions of the same sizes, but they perceived them differently. The first kohen, the Chatam Sofer explains, lamented the small size of his portion, which he regarded as insignificant as a small bean, but the second retorted that in his view, this small portion received by each kohen was as considerable as an olive. In Halakha, the consumption of the volume of an olive is considered a significant act of eating, and thus the second kohen’s response meant that he felt privileged and blessed to receive his small portion of hallowed food. The third kohen replied that to the contrary, the priesthood, in his view, is like a lizard’s tail after it is severed, which convulses, appearing alive, when in truth it is lifeless. The priesthood, in the eyes of this kohen, has an aura of stature and distinction, but is, in truth, worthless – as evidenced by the small portions of food kohanim receive.
The Chatam Sofer’s reading of this story demonstrates how different people can observe the same reality but perceive it in three drastically different, and even opposite, ways. One person sees the blessings presented by current circumstances, whereas others complain about the situation, focusing their attention on what is missing and what could be better. People who receive an “equal portion,” experiencing the same reality, react very differently. We must strive to see all we are given in life as a “ke-zayit,” as a precious blessing to appreciate and be thankful for, rather than complain about the larger portion which we desire but are as yet denied.
The second half of Parashat Tzav describes the seven-day miluim period, during which Aharon and his sons were formally consecrated as kohanim. God commanded Moshe to assemble the entire nation to the area by the entrance to the newly-constructed Mishkan to witness the events (8:3). Rashi, citing the Midrash (Vayikra Rabba 10:9), comments, “This is one of the places where the small contained the many.” The Midrash here points to the miracle that was needed for the entire nation to come together in the small area in front of the Mishkan, which was too small to contain them.
The Chatam Sofer suggests an explanation for the significance of this unusual miracle. He writes that God sought to teach Benei Yisrael the importance of the quality of “histapkut” – feeling content with even a small amount which one receives. As God was now bringing His presence to reside among the people, He wanted them to experience the “miracle” of “histapkut,” to recognize that we can, contrary to what we might at first think, manage with whatever small portion we are given. As Benei Yisrael crowded together by the entrance of the Mishkan, it appeared as though there would not be enough space for them all – but in the end, there was. Similarly, we often feel that we are unable to survive with anything less than the comfortable lifestyle we desire. The miracle at the entrance of the Mishkan shows us that we can, in fact, manage with even a small “space,” even with few possessions and without comforts and luxuries.
This lesson was conveyed now, when the divine presence took residence among Benei Yisrael, perhaps to teach that in order to “make room” for God, for sanctity, we need to be prepared to compromise, to some degree, our standards of material comfort. If we are unable to accept limits on our physical “space,” on luxury and enjoyment, then we will always be too preoccupied with expanding our “space” to devote time and attention to the Shekhina. Imbuing our lives with sanctity requires that we develop the quality of “histapkut,” and accustom ourselves to feeling satisfied and content even with modest material standards.
The Shulchan Arukh (O.C. 444:1) rules that when Erev Pesach falls on Shabbat, bedikat chametz (the search for chametz) is performed on Thursday night, the night of the 13th of Nissan. Normally, of course, we perform the search the night before Erev Pesach – the night of the 14th of Nissan. If, however, this night is Shabbat, when the search cannot be performed, then we conduct the search the previous night, the night of the 13th of Nissan.
At first glance, the search conducted on the night of the 13th can be perceived in two different ways. On the one hand, we might explain that Chazal formally instituted only one date for bedikat chametz – the night of the 14th – but when this is not possible, we have no choice but to search the home earlier. According to this perspective, the situation of Erev Pesach which falls on Shabbat resembles the case of a person who leaves home for a trip before the night of the 14th of Pesach (“ha-mefaresh ve-yotzei be-shayara”), who performs bedikat chametz the night before he departs (Shulchan Arukh, O.C. 436:1). The formal obligation of bedikat chametz cannot be performed in such a year, but we must nevertheless search to ensure the absence of chametz, just as in the case of one who, in a regular year, will be away from home on the night of the 14th. Alternatively, we might explain that the initial enactment of bedikat chametz took into account that Erev Pesach on rare occasions falls on Shabbat, and Chazal instituted from the outset that in such a year, the search should be held on the night of the 13th. According to this understanding, we fulfill the formal obligation of bedikat chametz in such a year on the night of the 13th no less than we do in a regular year, when we search on the night of the 14th, because Chazal formally designated the night of the 13th as the time for bedikat chametz in such a year.
One practical difference between these two outlooks concerns the case of one who performed bedikat chametz earlier than the night of the 13th in such a year. The Chafetz Chaim, in Mishna Berura (433:1) and Sha’ar Ha-tziyun (433:5), cites different opinions as to whether – on a regular year – one who performed a proper search before the night of the 14th, and ensured not to bring any chametz into the home thereafter, must repeat the search on the night of the 14th. Most poskim maintain that if the person searched thoroughly as Halakha requires, then he does not need to search the home again on the night of the 14th. Some poskim, however, disagree, and require one to search his home on the night of the 14th even if he had checked properly on a previous night and did not bring any chametz into the home since then. The Bach and (his son-in-law) the Taz explain that since Chazal instituted a requirement to search on the night of the 14th, one must search on this night even if he had searched earlier. (The Levush also requires searching on the night of the 14th in such a case, but for a different reason – because people cannot be assumed to avoid bringing chametz into the home earlier than the night of the 14th.) The question arises as to whether the Bach and Taz would apply this ruling also in a year when Erev Pesach falls on Shabbat. According to the first perspective presented above, in such a year, we in any event cannot fulfill the formal obligation of bedikat chametz, which was instituted to take place on the night of the 14th. As such, one who prefers performing the search earlier than the night of the 13th of Nissan may do so, and then will not be required to search on the night of the 13th. According to the second perspective, however, the night of the 13th in such a year is no different than the night of the 14th in a regular year. Chazal from the outset instituted bedikat chametz on the night of the 14th in a regular year, and on the night of the 13th when Erev Pesach falls on Shabbat. Therefore, according to the Bach and Taz, one would be required to search in such a year on the night of the 13th even if he had searched earlier.
The answer to this question may perhaps be found in the ruling cited by the Mishna Berura (470:6) from earlier poskim (Maharil, Magen Avraham, and others) that even when bedikat chametz is performed on the night of the 13th, one should not eat that night before performing the search. Just as in a regular year, when the search is performed on the night of 14th, one should not eat once night falls until he completes the search, in a year when Erev Pesach falls on Shabbat, too, one should refrain from eating on the night of the 13th until he performs bedikat chametz. This would appear to prove the second perspective presented above, that the search on the night of the 13th in such a year fulfills the formal bedikat chametz obligation just as one does on the night of the 14th in a normal year. After all, if in such a year we cannot fulfill that formal obligation of bedikat chametz, and we search out of necessity earlier like in the case of one who leaves for a trip before the night of the 14th, there would seem to be no reason to refrain from eating before the search. This prohibition applies only when we have a formal obligation to fulfill that night – just as, for example, one should not eat at night during Chanukah before lighting the candles. The fact that Halakha requires refraining from eating before searching for chametz on the night of the 13th in a year when Erev Pesach falls on Shabbat would seem to suggest that in such a year, the formal obligation of bedikat chametz is transferred to the 13th. In such a year, we do not search earlier because we cannot search when Chazal required, but rather search on the night on which Chazal required searching in a year when the 14th of Nissan falls on Shabbat.
(Taken from Rav Raphael Binyamin Cohen’s article in Umka De-parsha, Shabbat Parashat Vayikra, 5771)
Yesterday, we addressed the well-known halakha (Shulchan Arukh, O.C. 444:1) that when Erev Pesach falls on Shabbat, such that bedikat chametz cannot be performed on the night of the 14th of Nissan as usual, it is performed the previous night, on the 13th of Nissan. As we saw, this halakha could, in theory, be perceived in two different ways. One possibility is to compare this situation to one of “ha-mefaresh ve-yotzei be-shayara” – one who leaves on a trip before the night of the 14th of Nissan, and will not be home on that night, and thus performs bedikat chametz the night before he departs (Shulchan Arukh, O.C. 436:1). According to this perspective, Chazal instituted only the 14th of Nissan as the time for bedikat chametz, and in a year when this night is Shabbat, we by necessity search earlier, but this search does not fulfill the formal requirement of bedikat chametz. Alternatively, however, we might understand that Chazal from the outset instituted bedikat chametz to be performed on the night of the 14th in a regular year, and on the night of the 13th in a year when the 14th is Shabbat. According to this understanding, searching on the night of the 13th in such a year fulfills the formal obligation of bedikat chametz no less than searching on the night of the 14th in a regular year.
Seemingly, we may prove the second perspective from the fact that a berakha is recited over bedikat chametz even in a year when Erev Pesach falls on Shabbat, and the search thus takes place on the night of the 13th. In the case of “ha-mefaresh ve-yotzei be-shayara,” although some opinions (as cited in Biur Halakha, 436) require the individual to recite a berakha when he searches the night before his trip, the Rama (436:1) ruled that no berakha is recited in such a case. The Shulchan Arukh Ha-Rav explains that the traveler does not recite a berakha because he does not perform the search at the time when Chazal instituted. Accordingly, we might deduce from the fact that a berakha is recited when searching the night of the 13th when the 14th is Shabbat, that the search in such a case indeed fulfills the actual obligation of bedikat chametz, which from the outset was scheduled for the night of the 13th if the 14th falls on Shabbat. (We should emphasize that the Shulchan Arukh Ha-Rav himself rules that one recites a berakha over bedikat chametz when searching on the night of the 13th in a year when the 14th falls on Shabbat.)
It should be noted, however, that the Vilna Gaon (cited by the Mishna Berura, 436:4) offers a different reason for why (according to the Rama) the berakha is not recited in the case of one who leaves home before the night of the 14th in a regular year. The berakha recited over bedikat chametz is “al biur chametz” – making reference to the obligation to eliminate one’s chametz on the 14th of Nissan. The search for chametz is performed for the purpose of fulfilling the requirement to eliminate one’s chametz, and so the berakha is formulated in this manner, in reference to the mitzva of bi’ur – eliminating the chametz. In the case of “ha-mefaresh ve-yotzei be-shayara,” the Vilna Gaon explained, the person intends not to eliminate the chametz, but rather to remove it from the home, because he will not have the opportunity to do so later, when required. Conceivably, he might still eat or sell this chametz, as plenty of time remains before the 14th of Nissan, when eating and owning chametz becoming forbidden. Therefore, this search cannot be said to be performed as part of the bi’ur process, and thus the berakha “al bi’ur chametz” is not recited. This explanation is not applicable to the situation when Erev Pesach falls on Shabbat, because the search on the night of the 13th indeed serves the purpose of bi’ur chametz, as we eliminate the chametz the following day, on Friday (saving some chametz for Shabbat). According to the Gaon’s understanding, then, the fact that we recite the berakha over bedikat chametz in such a case does not prove that the search on the night of the 13th differs from the case of “ha-mefaresh ve-yotzei be-shayara” and fulfills the formal bedikat chametz obligation.
(Taken from Rav Raphael Binyamin Cohen’s article in Umka De-parsha, Shabbat Parashat Vayikra, 5771)
The Gemara in Masekhet Pesachim (115b) establishes that if one swallows matza whole at the seder on Pesach, without first chewing it, he has fulfilled his obligation to eat matza on this night. However, if one swallowed marror at the seder without chewing it, then he has not fulfilled his obligation to eat marror. The answer, as Rashi and the Rashbam explain, is that marror – a vegetable with a bitter taste – is meant to commemorate the “bitterness” of our ancestors’ enslavement in Egypt. Therefore, one who does not chew the marror, and thus has not experienced its taste, does not fulfill his obligation. Likewise, the Gemara earlier instructs that although one must dip the marror in the sweet charoset before eating it, one must not keep it in the charoset for too long, as it may then lose its taste, and, in the Gemara’s words, “ba’inan ta’am marror ve-leika” – “we require the taste of marror, and it is not there.” Both laws are codified in the Shulchan Arukh (O.C. 475:1,3).
The implication of these rulings is that one does not fulfill the mitzva of marror if he does not experience its bitter taste. This is indicated also by the hymn recited by some congregations on the Shabbat before Pesach, written by Rav Yosef Tuv Elem (one of the Tosafists), which says, “miba’i lei le-kaskusei tuva” – one must chew the marror very well. The Or Zarua (Pesach, 256) explains that one must chew the marror in order to experience its bitter taste.
Accordingly, the Chazon Ish (O.C. 124) ruled that although it is accepted to fulfill the mitzva of marror with lettuce, which does not have an especially bitter taste, nevertheless, one must not use the soft pieces of lettuce which have no bitter taste at all.
Rav Menashe Klein, in his Mishneh Halakhot (6:92, 7:68), disagrees. In his view, since lettuce generally has a somewhat bitter taste, the marror obligation may be fulfilled even with pieces of lettuce that have no bitter taste. He notes that the Gemara disallows dipping the marror in charoset for an extended period not because the marror will lose its bitter taste, but rather that it will lose “ta’am marror” – the taste of marror. Meaning, Halakha requires experiencing the taste of an herb that generally tastes bitter, and the taste is lost if the vegetable is excessively sweetened by the charoset. Rav Klein thus maintains that one may fulfill the mitzva of marror by eating lettuce leaves that have no bitter taste.
This question perhaps becomes relevant for patients stricken with the coronavirus who are unable to taste their food. It would seem that according to both opinions, such a patient cannot fulfill the mitzva of marror, because he cannot taste the vegetable. Although, Rav Asher Weiss (Minchat Asher – Corona, pp. 270-272) raises the possibility of distinguishing between this case and that of one who eats a vegetable without any bitter taste, or who swallows the vegetable without chewing it. In the case of a coronavirus patient, he eats a vegetable that qualifies for the mitzva, and in a manner that should normally allow him to experience the vegetable’s taste, but as a practical matter, due to his condition, he cannot taste the flavor. One could perhaps argue that since the patient eats a suitable vegetable in the proper manner, he fulfills the mitzva despite being unable to taste the marror. Nevertheless, as this line of reasoning is far from conclusive, Rav Weiss rules that a coronavirus patient who is unable to taste food should eat marror without reciting the beracha of “al akhilat marror,” in order to satisfy both possibilities.
The Tur (O.C. 487) observes the custom observed in some communities to recite the full hallel, with the introductory and concluding berakhot, in the synagogue on the first night of Pesach (and, in the Diaspora, on the second night), after arvit. The source of this practice, as cited by the Tur, is Masekhet Sofrim (20:9), which lists the first night of Pesach among the occasions when the full hallel is recited. The reason for this custom, the Tur explains, is that we will not have to recite a berakha over the hallel recitation at the seder that night. It seems that in principle, the hallel recitation at the seder requires a berakha, but in practice, a berakha is not recited, and so the custom evolved to recite hallel in the synagogue with a berakha to satisfy the requirement to recite a berakha. The likely explanation is that at the seder we divide the hallel into two sections – reciting the beginning of the hallel text at the end of maggid, before the meal, and reciting the rest of hallel after the meal. The meal would constitute an interruption in between the introductory berakha and the rest of hallel, and so we cannot recite the introductory berakha at the seder. According to the Tur, this is the reason why it became customary to recite hallel in the synagogue.
Others, however, explain this practice differently. The Tosefta (Pesachim 10:8), cited by Tosafot (Berakhot 14a), indicates that it became customary in some places to recite hallel in the synagogue for the benefit of those who were unable to recite it themselves. In communities where there were people who did not know the hallel text by heart and did not have access to printed texts, hallel was read for them in the synagogue, so they could fulfill their requirement by listening to its recitation. Conceivably, this is the origin of the practice observed in some communities to recite hallel in the synagogue on the night of the seder.
Yet another view is cited by the Ran (Pesachim 26b in the Rif) in the name of the Rashba, who maintained that “ikar takanat keri’ato be-veit ha-kenesset haya, ve-lo ba-bayit” – the primary hallel obligation on this night is to recite hallel after arvit in the synagogue. According to the Rashba, the synagogue recitation fulfills the formal hallel obligation of this night, which is why a berakha is recited over this recitation. This is in contradistinction to the conventional understanding – and the Tur’s explanation – that the primary obligation is to recite hallel at the seder, and the recitation in the synagogue is merely a custom that developed later.
The Shulchan Arukh (487:4) records this custom, whereas the Rama observed that Ashkenazic communities in his time did not recite hallel in the synagogue on the night of the seder. The Vilna Gaon (Bi’ur Ha-Gra) suggests that these two views reflect the different opinions as to the origin of this practice. The Rama perhaps maintained that this custom developed for the sake of those who were unable to recite it themselves, and so nowadays, when people can recite it themselves, there is no need to recite hallel in the synagogue. The Shulchan Aruch, by contrast, followed the position that Chazal established a formal requirement to recite hallel in the synagogue on this night, and so this practice must be observed even in our times.
Interestingly, the Vilna Gaon in a separate context seems to point to a different perspective on the halllel recitation in the synagogue on the first night of Pesach. Commenting on the practice to light Chanukah candles in the synagogue each night (Bi’ur Ha-Gra to O.C. 671:7), the Vilna Gaon writes that this custom resembles the custom to recite hallel in the synagogue on the night of the seder. In both instances, the Gaon explains, the custom developed to perform publicly, in the synagogue, a mitzva which is performed in one’s home, for the purpose of pirsumei nisa – to make a public celebration of the miracle. According to the Vilna Gaon, then, the custom to recite hallel in the synagogue on the first night of Pesach serves to publicize the miracle of the Exodus. (See Minchat Asher – Moadim, vol. 3, chapter 7.)
Conceivably, these different approaches would affect the question of whether one who, for whatever reason, cannot attend the synagogue on this night nevertheless recites hallel after arvit, if this is his normal custom when praying in the synagogue. If this hallel recitation continues the ancient practice of reciting hallel in the synagogue for the benefit of those who could not recite it on their own, then it would seem that this custom requires reciting hallel only in the synagogue. And, certainly, according to the Vilna Gaon, who understood that the purpose of this custom is to make a public celebration, it applies only when praying publicly. According to the Tur, however, hallel is recited after arvit to avoid having to recite the berakha over hallel at the seder, and this is relevant even if one prays privately.
Regardless, Rav Asher Weiss (Minchat Asher – Corona, pp. 262-265) ruled that in those communities which follow the practice of reciting hallel in the synagogue on the first night of Pesach, even one who prays privately recites hallel after arvit. Although this halakha should depend on the reason for the synagogue halllel recitation, as discussed, Rav Weiss explains that since this practice is based on Sephardic custom, its parameters are determined based on that original custom. And a number of Sephardic poskim, including the Chida (Birkei Yosef, 487:8; Sheyarei Berakha, 487:3; Moreh Be-etzba, 207) and the Kaf Ha-chayim (487:39-42), mention that even those who pray privately recite hallel after arvit. Therefore, this is the policy that should be followed by those who normally recite hallel in the synagogue on the first night of Pesach but now find themselves praying privately.
THE FIRST DECADE OF SALT ARCHIVES CAN BE FOUND AT:
MORE RECENT INSTALLMENTS OF SALT DIVREI TORAH CAN BE FOUND AT: | <urn:uuid:2dc2ad32-4cc7-43c5-bacb-24baf3225c5e> | CC-MAIN-2022-33 | https://torah.etzion.org.il/en/salt-parashat-tzav-5781-2021 | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571692.3/warc/CC-MAIN-20220812105810-20220812135810-00096.warc.gz | en | 0.95281 | 7,723 | 2.796875 | 3 |
Environmental law - or "environmental and natural resources law" - is a collective term describing the network of treaties, statutes, regulations, and common and customary laws addressing the effects of human activity on the natural environment.
The broad category of "environmental law" may be broken down into a number of more specific regulatory subjects. While there is no single agreed-upon taxonomy, the core environmental law regimes address environmental pollution. A related but distinct set of regulatory regimes, now strongly influenced by environmental legal principles, focus on the management of specific natural resources, such as forests, minerals, or fisheries. Other areas, such as environmental impact assessment, may not fit neatly into either category, but are nonetheless important components of environmental law.
Environmental assessment (EA) is the term used for the assessment of the environmental consequences (positive and negative) of a plan, policy, program, or concrete project prior to the decision to move forward with the proposed action. In this context, the term "environmental impact assessment" (EIA) is usually used when applied to concrete projects by individuals or companies, and the term "strategic environmental assessment" (SEA) applies to policies, plans and programmes most often proposed by organs of state (Fischer, 2016). Environmental assessments may be governed by rules of administrative procedure regarding public participation and documentation of decision making, and may be subject to judicial review.
Air quality laws govern the emission of air pollutants into the atmosphere. A specialized subset of air quality laws regulate the quality of air inside buildings. Air quality laws are often designed specifically to protect human health by limiting or eliminating airborne pollutant concentrations. Other initiatives are designed to address broader ecological problems, such as limitations on chemicals that affect the ozone layer, and emissions trading programs to address acid rain or climate change. Regulatory efforts include identifying and categorizing air pollutants, setting limits on acceptable emissions levels, and dictating necessary or appropriate mitigation technologies.
Water quality laws govern the release of pollutants into water resources, including surface water, ground water, and stored drinking water. Some water quality laws, such as drinking water regulations, may be designed solely with reference to human health. Many others, including restrictions on the alteration of the chemical, physical, radiological, and biological characteristics of water resources, may also reflect efforts to protect aquatic ecosystems more broadly. Regulatory efforts may include identifying and categorizing water pollutants, dictating acceptable pollutant concentrations in water resources, and limiting pollutant discharges from effluent sources. Regulatory areas include sewage treatment and disposal, industrial and agricultural waste water management, and control of surface runoff from construction sites and urban environments.
Waste management laws govern the transport, treatment, storage, and disposal of all manner of waste, including municipal solid waste, hazardous waste, and nuclear waste, among many other types. Waste laws are generally designed to minimize or eliminate the uncontrolled dispersal of waste materials into the environment in a manner that may cause ecological or biological harm, and include laws designed to reduce the generation of waste and promote or mandate waste recycling. Regulatory efforts include identifying and categorizing waste types and mandating transport, treatment, storage, and disposal practices.
Environmental cleanup laws govern the removal of pollution or contaminants from environmental media such as soil, sediment, surface water, or ground water. Unlike pollution control laws, cleanup laws are designed to respond after-the-fact to environmental contamination, and consequently must often define not only the necessary response actions, but also the parties who may be responsible for undertaking (or paying for) such actions. Regulatory requirements may include rules for emergency response, liability allocation, site assessment, remedial investigation, feasibility studies, remedial action, post-remedial monitoring, and site reuse.
Chemical safety laws govern the use of chemicals in human activities, particularly man-made chemicals in modern industrial applications. As contrasted with media-oriented environmental laws (e.g., air or water quality laws), chemical control laws seek to manage the (potential) pollutants themselves. Regulatory efforts include banning specific chemical constituents in consumer products (e.g., Bisphenol A in plastic bottles), and regulating pesticides.
Water resources laws govern the ownership and use of water resources, including surface water and ground water. Regulatory areas may include water conservation, use restrictions, and ownership regimes.
Mineral resource laws cover several basic topics, including the ownership of mineral resources and who may work them. Mining is also affected by various regulations regarding the health and safety of miners, as well as the environmental impact of mining.
Forestry laws govern activities in designated forest lands, most commonly with respect to forest management and timber harvesting. Ancillary laws may regulate forest land acquisition and prescribed burn practices. Forest management laws generally adopt management policies, such as multiple use and sustained yield, by which public forest resources are to be managed. Governmental agencies are generally responsible for planning and implementing forestry laws on public forest lands, and may be involved in forest inventory, planning, and conservation, and oversight of timber sales. Broader initiatives may seek to slow or reverse deforestation.
Wildlife and plants
Wildlife laws govern the potential impact of human activity on wild animals, whether directly on individuals or populations, or indirectly via habitat degradation. Similar laws may operate to protect plant species. Such laws may be enacted entirely to protect biodiversity, or as a means for protecting species deemed important for other reasons. Regulatory efforts may include the creation of special conservation statuses, prohibitions on killing, harming, or disturbing protected species, efforts to induce and support species recovery, establishment of wildlife refuges to support conservation, and prohibitions on trafficking in species or animal parts to combat poaching.
Fish and game
Fish and game laws regulate the right to pursue and take or kill certain kinds of fish and wild animals (game). Such laws may restrict the days on which fish or game may be harvested, the number of animals caught per person, the species harvested, or the weapons or fishing gear used. They may seek to balance the competing demands of preservation and harvest and to manage both the environment and populations of fish and game. Game laws can provide a legal structure for collecting license fees and other money used to fund conservation efforts, as well as for obtaining harvest information used in wildlife management practice.
Environmental law has developed in response to emerging awareness of and concern over issues impacting the entire world. While laws have developed piecemeal and for a variety of reasons, some effort has gone into identifying key concepts and guiding principles common to environmental law as a whole. The principles discussed below are not an exhaustive list and are not universally recognized or accepted. Nonetheless, they represent important principles for the understanding of environmental law around the world.
Sustainable development
Defined by the United Nations Environment Programme as "development that meets the needs of the present without compromising the ability of future generations to meet their own needs," sustainable development may be considered together with the concepts of "integration" (development cannot be considered in isolation from sustainability) and "interdependence" (social and economic development, and environmental protection, are interdependent). Laws mandating environmental impact assessment and requiring or encouraging development to minimize environmental impacts may be assessed against this principle.
The modern concept of sustainable development was a topic of discussion at the 1972 United Nations Conference on the Human Environment (Stockholm Conference), and the driving force behind the 1983 World Commission on Environment and Development (WCED, or Brundtland Commission). In 1992, the first UN Earth Summit resulted in the Rio Declaration, Principle 3 of which reads: "The right to development must be fulfilled so as to equitably meet developmental and environmental needs of present and future generations." Sustainable development has been a core concept of international environmental discussion ever since, including at the World Summit on Sustainable Development (Earth Summit 2002), and the United Nations Conference on Sustainable Development (Earth Summit 2012, or Rio+20).
Equity
Defined by UNEP to include intergenerational equity - "the right of future generations to enjoy a fair level of the common patrimony" - and intragenerational equity - "the right of all people within the current generation to fair access to the current generation's entitlement to the Earth's natural resources" - environmental equity considers the present generation under an obligation to account for long-term impacts of activities, and to act to sustain the global environment and resource base for future generations. Pollution control and resource management laws may be assessed against this principle.
Transboundary responsibility
Defined in the international law context as an obligation to protect one's own environment and to prevent damage to neighboring environments, transboundary responsibility is regarded by UNEP, at the international level, as a potential limitation on the rights of the sovereign state. Laws that act to limit externalities imposed upon human health and the environment may be assessed against this principle.
Public participation and transparency
Identified as essential conditions for "accountable governments . . ., industrial concerns," and organizations generally, public participation and transparency are presented by UNEP as requiring "effective protection of the human right to hold and express opinions and to seek, receive and impart ideas," "a right of access to appropriate, comprehensible and timely information held by governments and industrial concerns on economic and social policies regarding the sustainable use of natural resources and the protection of the environment, without imposing undue financial burdens upon the applicants and with adequate protection of privacy and business confidentiality," and "effective judicial and administrative proceedings." These principles are present in environmental impact assessment, laws requiring publication and access to relevant environmental data, and administrative procedure.
Precautionary principle
The precautionary principle is one of the most commonly encountered and controversial principles of environmental law. The Rio Declaration formulated it as follows:
- In order to protect the environment, the precautionary approach shall be widely applied by States according to their capabilities. Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation.
The principle may play a role in any debate over the need for environmental regulation.
Prevention
- The concept of prevention . . . can perhaps better be considered an overarching aim that gives rise to a multitude of legal mechanisms, including prior assessment of environmental harm, licensing or authorization that set out the conditions for operation and the consequences for violation of the conditions, as well as the adoption of strategies and policies. Emission limits and other product or process standards, the use of best available techniques and similar techniques can all be seen as applications of the concept of prevention.
Polluter pays principle
The polluter pays principle stands for the idea that "the environmental costs of economic activities, including the cost of preventing potential harm, should be internalized rather than imposed upon society at large." All issues related to responsibility for the costs of environmental remediation and of compliance with pollution control regulations involve this principle.
History
Early examples of legal enactments designed to consciously preserve the environment, for its own sake or for human enjoyment, are found throughout history. In the common law, the primary protection was found in the law of nuisance, but this only allowed for private actions for damages or injunctions if there was harm to land. The law thus dealt with smells emanating from pigsties, strict liability for dumping rubbish, and damage from exploding dams. Private enforcement, however, was limited and found to be woefully inadequate to deal with major environmental threats, particularly threats to common resources. During the "Great Stink" of 1858, the sewage dumped into the River Thames began to smell so ghastly in the summer heat that Parliament had to be evacuated. Ironically, the Metropolitan Commission of Sewers Act 1848 had allowed the Metropolitan Commission of Sewers to close cesspits around the city in an attempt to "clean up", but this simply led people to pollute the river. In 19 days, Parliament passed a further Act to build the London sewerage system. London also suffered from terrible air pollution, and this culminated in the "Great Smog" of 1952, which in turn triggered its own legislative response: the Clean Air Act 1956. The basic regulatory structure was to set limits on emissions from households and businesses (particularly the burning of coal), while an inspectorate would enforce compliance.
Notwithstanding early analogues, the concept of "environmental law" as a separate and distinct body of law is a twentieth-century development. The recognition that the natural environment was fragile and in need of special legal protections, the translation of that recognition into legal structures, the development of those structures into a larger body of "environmental law," and the strong influence of environmental law on natural resource laws, did not occur until about the 1960s. At that time, numerous influences - including a growing awareness of the unity and fragility of the biosphere; increased public concern over the impact of industrial activity on natural resources and human health; the increasing strength of the regulatory state; and more broadly the advent and success of environmentalism as a political movement - coalesced to produce a huge new body of law in a relatively short period of time. While the modern history of environmental law is one of continuing controversy, by the end of the twentieth century environmental law had been established as a component of the legal landscape in all developed nations of the world, many developing ones, and the larger project of international law.
Environmental law is a continuing source of controversy. Debates over the necessity, fairness, and cost of environmental regulation are ongoing, as well as regarding the appropriateness of regulations vs. market solutions to achieve even agreed-upon ends.
Allegations of scientific uncertainty fuel the ongoing debate over greenhouse gas regulation, and are a major factor in debates over whether to ban particular pesticides. In cases where the science is well-settled, it is not unusual to find that corporations intentionally hide or distort the facts, or sow confusion.
It is very common for regulated industry to argue against environmental regulation on the basis of cost. Difficulties arise in performing cost-benefit analysis of environmental issues, because it is hard to quantify the value of environmental goods such as a healthy ecosystem, clean air, or species diversity. Many environmentalists' response to pitting economy against ecology is summed up by former Senator and Earth Day founder Gaylord Nelson: "The economy is a wholly owned subsidiary of the environment, not the other way around." Furthermore, environmental issues are seen by many as having an ethical or moral dimension that transcends financial cost. Even so, there are some efforts underway to systematically recognize environmental costs and assets, and account for them properly in economic terms.
While affected industries spark controversy in fighting regulation, there are also many environmentalists and public interest groups who believe that current regulations are inadequate, and advocate for stronger protection. Environmental law conferences - such as the annual Public Interest Environmental Law Conference in Eugene, Oregon - typically have this focus, also connecting environmental law with class, race, and other issues.
An additional debate is to what extent environmental laws are fair to all regulated parties. For instance, researchers Preston Teeter and Jorgen Sandberg highlight how smaller organizations can often incur disproportionately larger costs as a result of environmental regulations, which can ultimately create an additional barrier to entry for new firms, thus stifling competition and innovation.
Around the world
Global and regional environmental issues are increasingly the subject of international law. Debates over environmental concerns implicate core principles of international law and have been the subject of numerous international agreements and declarations.
Customary international law is an important source of international environmental law. These are the norms and rules that countries follow as a matter of custom, and they are so prevalent that they bind all states in the world. Exactly when a principle becomes customary law is not clear-cut, and many arguments are put forward by states not wishing to be bound. Examples of customary international law relevant to the environment include the duty to warn other states promptly about emergencies of an environmental nature and environmental damage to which another state or states may be exposed, and Principle 21 of the Stockholm Declaration ('good neighbourliness' or sic utere).
Numerous legally binding international agreements encompass a wide variety of issue-areas, from terrestrial, marine and atmospheric pollution through to wildlife and biodiversity protection. International environmental agreements are generally multilateral (or sometimes bilateral) treaties (a.k.a. convention, agreement, protocol, etc.). Protocols are subsidiary agreements built from a primary treaty. They exist in many areas of international law but are especially useful in the environmental field, where they may be used to regularly incorporate recent scientific knowledge. They also permit countries to reach agreement on a framework that would be contentious if every detail were to be agreed upon in advance. The most widely known protocol in international environmental law is the Kyoto Protocol, which followed from the United Nations Framework Convention on Climate Change.
While the bodies that proposed, argued, agreed upon and ultimately adopted existing international agreements vary according to each agreement, certain conferences, including 1972's United Nations Conference on the Human Environment, 1983's World Commission on Environment and Development, 1992's United Nations Conference on Environment and Development and 2002's World Summit on Sustainable Development, have been particularly important. Multilateral environmental agreements sometimes create an International Organization, Institution or Body responsible for implementing the agreement. Major examples are the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES) and the International Union for Conservation of Nature (IUCN).
International environmental law also includes the opinions of international courts and tribunals. While such courts are few and have limited authority, their decisions carry much weight with legal commentators and are quite influential on the development of international environmental law. One of the biggest challenges in international decisions is determining adequate compensation for environmental damages. The courts include the International Court of Justice (ICJ), the International Tribunal for the Law of the Sea (ITLOS), the European Court of Justice, the European Court of Human Rights, and other regional treaty tribunals.
According to the International Network for Environmental Compliance and Enforcement (INECE), the major environmental issues in Africa are “drought and flooding, air pollution, deforestation, loss of biodiversity, freshwater availability, degradation of soil and vegetation, and widespread poverty.” The U.S. Environmental Protection Agency (EPA) is focused on the “growing urban and industrial pollution, water quality, electronic waste and indoor air from cookstoves.” The agency hopes to provide enough aid on pollution concerns before their impacts contaminate the African environment as well as the global environment. By doing so, it intends to “protect human health, particularly vulnerable populations such as children and the poor.” To accomplish these goals in Africa, EPA programs focus on strengthening the ability to enforce environmental laws as well as public compliance with them. Other programs work on developing stronger environmental laws, regulations, and standards.
The Asian Environmental Compliance and Enforcement Network (AECEN) is an agreement between 16 Asian countries dedicated to improving compliance with environmental laws in Asia. These countries include Cambodia, China, Indonesia, India, Maldives, Japan, Korea, Malaysia, Nepal, Philippines, Pakistan, Singapore, Sri Lanka, Thailand, Vietnam, and Lao PDR.
The European Union issues secondary legislation on environmental issues that is valid throughout the EU (so-called regulations) and many directives that must be implemented into the national legislation of the 28 member states. Examples are Regulation (EC) No. 338/97 on the implementation of CITES, and the Natura 2000 network, the centerpiece of nature and biodiversity policy, which encompasses the Birds Directive (79/409/EEC, replaced by 2009/147/EC) and the Habitats Directive (92/43/EEC). The network is made up of multiple SACs (Special Areas of Conservation, linked to the Habitats Directive) and SPAs (Special Protection Areas, linked to the Birds Directive) throughout Europe.
The forms of EU legislation are governed by Article 249 of the Treaty on the Functioning of the European Union (TFEU). Topics of common EU legislation are:
- Climate change
- Air pollution
- Water protection and management
- Waste management
- Soil protection
- Protection of nature, species and biodiversity
- Noise pollution
- Cooperation for the environment with third countries (other than EU member states)
- Civil protection
The U.S. Environmental Protection Agency is working with countries in the Middle East to improve “environmental governance, water pollution and water security, clean fuels and vehicles, public participation, and pollution prevention.”
The main environmental concerns in the Oceania region are “illegal releases of air and water pollutants, illegal logging/timber trade, illegal shipment of hazardous wastes, including e-waste and ships slated for destruction, and insufficient institutional structure/lack of enforcement capacity”. The Secretariat of the Pacific Regional Environment Programme (SPREP) is an international organization between Australia, the Cook Islands, FMS, Fiji, France, Kiribati, Marshall Islands, Nauru, New Zealand, Niue, Palau, PNG, Samoa, Solomon Islands, Tonga, Tuvalu, USA, and Vanuatu. SPREP was established in order to provide assistance in improving and protecting the environment as well as to assure sustainable development for future generations.
The Environment Protection and Biodiversity Conservation Act 1999 is the centerpiece of the Australian Government's environmental legislation. It sets up the “legal framework to protect and manage nationally and internationally important flora, fauna, ecological communities and heritage places”. It also focuses on protecting world heritage properties, national heritage properties, wetlands of international importance, nationally threatened species and ecological communities, migratory species, Commonwealth marine areas, the Great Barrier Reef Marine Park, and the environment surrounding nuclear activities. Commonwealth v Tasmania (1983), also known as the "Tasmanian Dam Case", is the most influential case in Australian environmental law.
The Brazilian government created the Ministry of Environment in 1992 in order to develop better strategies for protecting the environment, using natural resources sustainably, and enforcing public environmental policies. The Ministry of Environment has authority over policies involving the environment, water resources, preservation, and environmental programs involving the Amazon.
The Department of the Environment Act establishes the Department of the Environment in the Canadian government, as well as the position of Minister of the Environment. Their duties include “the preservation and enhancement of the quality of the natural environment, including water, air and soil quality; renewable resources, including migratory birds and other non-domestic flora and fauna; water; meteorology.” The Canadian Environmental Protection Act, the main piece of Canadian environmental legislation, was put into place on March 31, 2000. The Act focuses on “respecting pollution prevention and the protection of the environment and human health in order to contribute to sustainable development.” Other principal federal statutes include the Canadian Environmental Assessment Act and the Species at Risk Act. When provincial and federal legislation are in conflict, federal legislation takes precedence; that said, individual provinces can have their own legislation, such as Ontario's Environmental Bill of Rights and Clean Water Act.
According to the U.S. Environmental Protection Agency, "China has been working with great determination in recent years to develop, implement, and enforce a solid environmental law framework. Chinese officials face critical challenges in effectively implementing the laws, clarifying the roles of their national and provincial governments, and strengthening the operation of their legal system." Explosive economic and industrial growth in China has led to significant environmental degradation, and China is currently in the process of developing more stringent legal controls. The harmonization of Chinese society and the natural environment is billed as a rising policy priority.
With the enactment of the 2008 Constitution, Ecuador became the first country in the world to codify the Rights of Nature. The Constitution, specifically Articles 10 and 71-74, recognizes the inalienable rights of ecosystems to exist and flourish, gives people the authority to petition on the behalf of ecosystems, and requires the government to remedy violations of these rights. The rights approach is a break away from traditional environmental regulatory systems, which regard nature as property and legalize and manage degradation of the environment rather than prevent it.
The Rights of Nature articles in Ecuador's constitution are part of a reaction to a combination of political, economic, and social phenomena. Ecuador's abusive past with the oil industry, most famously the class-action litigation against Chevron, and the failure of an extraction-based economy and neoliberal reforms to bring economic prosperity to the region have resulted in the election of a New Leftist regime, led by President Rafael Correa, and sparked a demand for new approaches to development. In conjunction with this need, the principle of "Buen Vivir," or good living—focused on social, environmental and spiritual wealth versus material wealth—gained popularity among citizens and was incorporated into the new constitution.
The influence of indigenous groups, from whom the concept of "Buen Vivir" originates, in the forming of the constitutional ideals also facilitated the incorporation of the Rights of Nature as a basic tenet of their culture and conceptualization of "Buen Vivir."
The Environmental Protection Law outlines the responsibilities of the Egyptian government with regard to “preparation of draft legislation and decrees pertinent to environmental management, collection of data both nationally and internationally on the state of the environment, preparation of periodical reports and studies on the state of the environment, formulation of the national plan and its projects, preparation of environmental profiles for new and urban areas, and setting of standards to be used in planning for their development, and preparation of an annual report on the state of the environment to be presented to the President."
In India, environmental law is governed by the Environment Protection Act, 1986. This act is enforced by the Central Pollution Control Board and the numerous State Pollution Control Boards. Apart from this, individual statutes have also been enacted specifically for the protection of water, air, wildlife, etc. Such legislation includes:
- The Water (Prevention and Control of Pollution) Act, 1974
- The Water (Prevention and Control of Pollution) Cess Act, 1977
- The Forest (Conservation) Act, 1980
- The Air (Prevention and Control of Pollution) Act, 1981
- Air (Prevention and Control of Pollution) (Union Territories) Rules, 1983
- The Biological Diversity Act, 2002 and the Wild Life Protection Act, 1972.
- Batteries (Management and Handling) Rules, 2001
- Recycled Plastics, Plastics Manufacture and Usage Rules, 1999
- The National Green Tribunal established under the National Green Tribunal Act of 2010 has jurisdiction over all environmental cases dealing with a substantial environmental question and acts covered under the Water (Prevention and Control of Pollution) Act, 1974;
- Water (Prevention and Control of Pollution) Cess Rules, 1978
- Ganga Action Plan, 1986
- The Forest (Conservation) Act, 1980
- The Public Liability Insurance Act, 1991 and the Biological Diversity Act, 2002. The acts covered under Indian Wild Life Protection Act 1972 do not fall within the jurisdiction of the National Green Tribunal. Appeals can be filed in the Hon'ble Supreme Court of India.
- Basel Convention on the Control of Transboundary Movements of Hazardous Wastes and Their Disposal, 1989 and Its Protocols
- Hazardous Wastes (Management and Handling) Amendment Rules, 2003
The Basic Environment Law provides the basic structure of Japan’s environmental policies, replacing the Basic Law for Environmental Pollution Control and the Nature Conservation Law. The updated law aims to address “global environmental problems, urban pollution by everyday life, loss of accessible natural environment in urban areas and degrading environmental protection capacity in forests and farmlands.”
The three basic environmental principles that the Basic Environment Law follows are “the blessings of the environment should be enjoyed by the present generation and succeeded to the future generations, a sustainable society should be created where environmental loads by human activities are minimized, and Japan should contribute actively to global environmental conservation through international cooperation.” From these principles, the Japanese government has established policies such as “environmental consideration in policy formulation, establishment of the Basic Environment Plan which describes the directions of long-term environmental policy, environmental impact assessment for development projects, economic measures to encourage activities for reducing environmental load, improvement of social infrastructure such as sewerage system, transport facilities etc., promotion of environmental activities by corporations, citizens and NGOs, environmental education, and provision of information, promotion of science and technology."
The Ministry for the Environment and the Office of the Parliamentary Commissioner for the Environment were established by the Environment Act 1986. These bodies are responsible for advising the Minister on all areas of environmental legislation. A common theme of New Zealand’s environmental legislation is sustainably managing natural and physical resources, fisheries, and forests. The Resource Management Act 1991 is the main piece of environmental legislation that outlines the government’s strategy for managing the “environment, including air, water, soil, biodiversity, the coastal environment, noise, subdivision, and land use planning in general.”
The Ministry of Natural Resources and Environment of the Russian Federation makes regulation regarding “conservation of natural resources, including the subsoil, water bodies, forests located in designated conservation areas, fauna and their habitat, in the field of hunting, hydrometeorology and related areas, environmental monitoring and pollution control, including radiation monitoring and control, and functions of public environmental policy making and implementation and statutory regulation."
Vietnam is currently working with the U.S. Environmental Protection Agency on dioxin remediation and technical assistance in order to lower methane emissions. In March 2002, the U.S. and Vietnam signed the U.S.-Vietnam Memorandum of Understanding on Research on Human Health and the Environmental Effects of Agent Orange/Dioxin.
- Environmental racism
- Environmental racism in Europe
- Indigenous rights
- International law
- List of environmental law journals
- For example, the United Nations Environment Programme (UNEP) has identified eleven "emerging principles and concepts" in international environmental law, derived from the 1972 Stockholm Conference, the 1992 Rio Declaration, and more recent developments. UNEP, Training Manual on International Environmental Law (Chapter 3).
- UNEP Manual, ¶¶ 12-19.
- UNEP Manual, ¶¶ 20-23.
- UNEP Manual, ¶¶ 24-28.
- UNEP Manual, ¶ 58.
- Rio Declaration Principle 16; UNEP Manual ¶ 63.
- Aldred's Case (1610) 9 Co Rep 57b; (1610) 77 ER 816
- R v Stephens (1866) LR 1 QB 702
- Rylands v Fletcher UKHL 1
- See generally R. Lazarus, The Making of Environmental Law (Cambridge Press 2004); P. Gates, History of Public Land Law Development.
- See, e.g., DDT.
- The Christian Science Monitor (22 June 2010). "Merchants of Doubt". The Christian Science Monitor.
- In the United States, estimates of environmental regulation's total costs reach 2% of GDP. See Pizer & Kopp, Calculating the Costs of Environmental Regulation, 1 (2003 Resources for the Future).
- Nelson, Gaylord (November 2002). Beyond Earth Day: Fulfilling the Promise. Wisconsin Press. ISBN 0-299-18040-9.
- "Can the World Really Set Aside Half of the Planet for Wildlife?". Smithsonian.
- "Climate Coalition Vows 'Peaceful, Escalated' Actions Until 'We Break Free from Fossil Fuels'". Common Dreams.
- "A Guide to Environmental Non-Profits". Mother Jones.
- Teeter, Preston; Sandberg, Jorgen (2016). "Constraining or Enabling Green Capability Development? How Policy Uncertainty Affects Organizational Responses to Flexible Environmental Regulations". British Journal of Management. doi:10.1111/1467-8551.12188.
- Hardman Reis, T., Compensation for Environmental Damages Under International Law, Kluwer Law International, The Hague, 2011, ISBN 978-90-411-3437-0.
- "ECtHR case-law factsheet on environment" (PDF). Retrieved 2012-11-08.
- "INECE Regions- Africa". Retrieved 18 October 2012.
- "Africa International Programs". Environmental Protection Agency. Retrieved October 18, 2012.
- "AECEN". www.aecen.org. Retrieved 2015-08-27.
- "EPA Middle East". Environmental Protection Agency. Retrieved 23 October 2012.
- "INECE Regions - Asia and the Pacific". Retrieved October 18, 2012.
- "Agreement Establishing SPREP". Retrieved October 18, 2012.
- Taylor, Prue; Stroud, Lucy; Peteru, Clark (2013). Multilateral Environmental Agreement Negotiator’s Handbook: Pacific Region 2013 (PDF). Samoa / New Zealand: Secretariat of the Pacific Regional Environment Programme / New Zealand Centre for Environmental Law, University of Auckland. ISBN 978-982-04-0475-5.
- "EPBC Act". Retrieved October 18, 2012.
- Commonwealth v Tasmania (1983) 158 CLR 1 (1 July 1983)
- "Apresentação". Retrieved 23 October 2012.
- "Department of the Environment Act". Retrieved 23 October 2012.
- "Environment Canada". Retrieved 23 October 2012.
- See Canada's Legal System Overview.
- EPA, China Environmental Law Initiative.
- Vermont Law School, China Partnership for Environmental Law; C. McElwee, Environmental Law in China: Mitigating Risk and Ensuring Compliance.
- NRDC, Environmental Law in China.
- Wang, Alex (2013). "The Search for Sustainable Legitimacy: Environmental Law and Bureaucracy in China". Harvard Environmental Law Review. 37: 365.
- Rachel E. Stern, Environmental Litigation in China: A Study in Political Ambivalence (Cambridge University Press 2013)
- Community Environmental Legal Defense Fund (CELDF). 2008. http://www.celdf.org/, accessed April, 2012.
- Gudynas, Eduardo. 2011. Buen Vivir: Today's Tomorrow Development 54(4):441-447.
- Becker, Marc. 2011 Correa, Indigenous Movements, and the Writing of a New Constitution in Ecuador. Latin American Perspectives 38(1):47-62.
- "Law 4". Retrieved 23 October 2012.
- "THE ENVIRONMENT (PROTECTION) ACT, 1986". envfor.nic.in. Retrieved 2015-08-27.
- "THE INDIAN WILDLIFE (PROTECTION) ACT, 1972". envfor.nic.in. Retrieved 2015-08-27.
- Rhuks Temitope, "THE JUDICIAL RECOGNITION AND ENFORCEMENT OF THE RIGHT TO ENVIRONMENT: DIFFERING PERSPECTIVES FROM NIGERIA AND INDIA", NUJS LAW REVIEW, January 2, 2015
- Surendra Malik, Sudeep Malik. Supreme Court on Environment Law (2015 ed.). India: EBC. ISBN 9789351451914.
- "The Basic Environment Law". Retrieved 23 October 2012.
- "Ministry for the Environment". Retrieved 23 October 2012.
- "Ministry of Natural Resources and Environment of the Russian Federation". Retrieved 27 June 2015.
- "Vietnam International Programs". Environmental Protection Agency. Retrieved October 18, 2012.
- Akhatov, Aydar (1996). Ecology & International Law. Мoscow: АST-PRESS. 512 pp. ISBN 5-214-00225-4 (English) / (Russian)
- Bimal N. Patel, ed. (2015). MCQ on Environmental Law. ISBN 9789351452454
- Farber & Carlson, eds. (2013). Cases and Materials on Environmental Law, 9th. West Academic Publishing. 1008 pp. ISBN 978-0314283986.
- Faure, Michael, and Niels Philipsen, eds. (2014). Environmental Law & European Law. The Hague: Eleven International Publishing. 142 pp. ISBN 9789462360754 (English)
- Malik, Surender & Sudeep Malik, eds. (2015). Supreme Court on Environment Law. ISBN 9789351451914
- Martin, Paul & Amanda Kennedy, eds. (2015). Implementing Environmental Law. Edward Elgar Publishing
- United Nations Environment Programme
- ECOLEX (Gateway to Environmental Law)
- Environmental Law Alliance Worldwide (E-LAW)
- Centre for International Environmental Law
- Wildlife Interest Group, American Society of International Law
- EarthRights International
- Interamerican Association for Environmental Defense
- United Kingdom Environmental Law Association
- Lexadin global law database
- Upholding Environmental Laws in Asia and the Pacific
- American Bar Association Section of Environment, Energy and Resources
- U.S. Environmental Protection Agency
- Environmental Law Institute (ELI)
- "Law Journals: Submission and Ranking, 2007-2014," Washington and Lee University, Lexington, VA
- West Coast Environmental Law (non-profit law firm)
- Canadian Environmental Law Association
- Environmental Law Centre (of Alberta)
The following essay explores the dark art of Ketman (a form of play-acting for the purposes of achieving success) as it manifests in present-day Western society.
Pictured above: Roman Republican or Early Imperial Relief depicting the poet Menander with masks of New Comedy, 1st century B.C. – early 1st century A.D. (Princeton University Art Museum)
What is Ketman?
Ketman is the individual practice of social deception for the purposes of personal advancement and material gain. It’s tactical, clinical, and socially cleansed in the bureaucratic bathhouse. Ketman is a form of play-acting, except it’s play-acting for keeps in the real world. It’s slightly more sociopathic than it is schizophrenic. The underlying metaphysic (or approach to life) supposes that the bearers of Ketmanic doctrine walk in the full light, and the secrets of that light must be kept from the benighted: those with whom they often share a common ethnicity, but with whom they secretly disagree fundamentally, and consequently, among whom the players of Ketman must strive and struggle to survive without betraying publicly the truth about who they really are and what they honestly believe. More clever than the silly fellow-believers who comprise their friends and colleagues, the enlightened players of Ketman analyse the customs and the state machinery, and leverage them to their profit, in a game of Whoever dies with the most toys wins. That’s the materialist spirit of Ketman as a Persian social phenomenon of the 1850s, as I understand it from Czeslaw Milosz’s book, The Captive Mind. (It is perhaps worth noting that there is also a positive, spiritual form of Ketman that Milosz touches upon. This essay, however, concerns itself with the materialist, spiritually harmful side of the teaching.)
Milosz’s purpose in invoking the idea of Ketman is to help describe the psychic shift he witnessed in his social circles and in the press as the people succumbed to totalitarian oppression in Soviet Poland, where what Milosz calls “the New Faith” exercised a tyrannical grip on all aspects of life. So Milosz’s Ketman is a new beast, quite different from the Persian kind. Here’s how he describes it, using the subject of poetry as a figure for the greater state of play:
“Poetry as we know it, can be defined as the individual temperament refracted through social convention. The poetry of the New Faith can, on the contrary, be defined as social convention refracted through the individual temperament.”Captive Mind
Milosz’s Ketman, then, is a hollowing out of the individual; a sort of Aliens or Body Snatchers or even Exorcist-type scenario, where a parasite—or parasitic social movement or demon—displaces the individual and takes control of the body in pursuit of ends that are not in the best interests of the host.
This analogy suggests unconscious psychological displacement. Ketman, however, is a more slippery art than that. It is a game, often played consciously, that nevertheless turns unconscious as individuals learn to play-act their role in the totalitarian state. Here’s Milosz:
“Conscious acting, if one practices it long enough, develops those traits which one uses most in one’s role. . .After long acquaintance with his role, a man grows into it so closely that he can no longer differentiate his true self from the self he simulates.”Captive Mind
I picture a hand puppeteering Guy Smiley, and the Guy Smiley puppet spreading, taking over first the arm and gradually devouring the whole body like a snake until the puppeteer is entirely engulfed in a new skin and consumed by its new smiley purpose.
Some personalities are especially suited to Ketman and thrive under dictatorships. Happy to finally be having their turn on the wheel of fortune, this class of people take pride in their skill set and their success, enjoying a perverse pleasure in what we today would call gaslighting:
“To say something is white when one thinks it’s black, to smile inwardly when one is outwardly solemn, to hate when one manifests love, to know, when one pretends not to know, and thus to play one’s adversary for a fool (even as he is playing you for one)—these actions lead one to prize one’s own cunning above all else.”Captive Mind
To summarise, Milosz presents a form of Ketman that amounts to a conscious effort on the part of an individual to supplant his own personality, his own desires, his own beliefs in favour of a bureaucratic role in the machinery of fakery and total social control. Once the schizoid displacement is achieved, the original personality fades into the background, while the state persona dominates. The better one is at psychological compartmentalisation, the prouder one is likely to be of the achievement of Borg-like self-annihilation. As a consequence of this pride, a master of Ketman is often cruel to those he perceives as naive enough to have scruples and abide by notions of honesty, integrity, courage, honour, loyalty, equality, and liberty . . . because the practitioner of Ketman thrives during periods of ethical decline and values cunning above all else. The rational, humane world, on the other hand, represents an order in which he is powerless. Ethical behaviour is, therefore, anathema to a natural expert of Ketman.
“It is impossible to enumerate all the forms of Ketman,” Milosz writes. But he makes an effort to supply us with a sense of Ketman’s many guises. Indeed what renders Milosz’s exploration of the concept so useful to us is his introduction of subcategories, such as Metaphysical Ketman, National Ketman, Aesthetic Ketman, Professional Ketman, Sceptical Ketman, and Ethical Ketman, among others. By way of this taxonomy, Milosz extends the potential of the term to describe the same impulse as it manifests in various areas of human activity. Once a reader understands how Ketman insinuates itself into every aspect of life in an authoritarian regime, he is better equipped to spot the disease of play-acting in his own state and determine just how pervasive the corruption is.
Czeslaw Milosz (1911-2004) was a Polish poet and literary figure who won the Nobel Prize for Literature in 1980. He defected from Communist Poland to the West in 1951 to escape the oppressive regime that came to power after World War II. In 1953 he published The Captive Mind, a collection of essays exploring how Communism corrupted the minds of his fellows.
(Photo by Horst Tappe/Hulton Archive/Getty Images)
With that abridged provenance of the term behind us, let’s turn to Ketman in the western world today. It is most certainly among us, but it is no longer Milosz’s Ketman, no longer an intellectual game worthy of being compared to chess. Today’s Ketman has very little intellectual dimension, though we mustn’t discount the strategic expertise slithering out of the Pharmaceutical-Industrial Complex. But even giving that beast its due, today’s Ketman is an intellectual disease arising from New Atheist demystification, a cultural reincarnation of ennui, a white-shoe cynicism and contempt for “the system.” And this “system” we speak of, which at first glance may appear to be a nebulous mirage, we are each of us, in fact, intimately familiar with: it is the impersonal bureaucratic edifice that stands between us (the grassroots) and the formidable surpluses the central banks, the leading corporations and the governments are sitting on. Meanwhile, those surpluses include human beings. So everything . . . everyone . . . is disposable.
This socio-cultural atmosphere nurtures and rewards cunning, and thereby installs Ketman. The high school, the college and the university have become training grounds, inculcating students mainly with lessons on how to cunningly ply the Ketmanic tool kit. And students are incentivised with grades, and grades are too often, now in the teaching racket, mere number games involving superfluous grade distribution to water down the failure of a student to give a shit. The administration calls this “student success”; and they rate a teacher’s performance using this metric of how many scoundrels a teacher has enabled, rewarded for viciousness and set loose upon society. All of this is Ketman because all involved know they have been a party to both moral and civil corruption, often referring to it amongst themselves as “bullshit.”
Let’s take a peek at the tool kit presented to a student to encourage a turn at the game of Ketman. A student can make a disability claim and gain the privilege of extra time to write exams and complete assignments. A student might file an anonymous complaint against a recalcitrant teacher who stands firmly on principle. Often enough, the teacher in question is not in the least ethical, and is also a practitioner of Ketman; but then again, at times a teacher is firm for good reason. No matter, the student can complain of sexual harassment, of misgendering, of feeling unsafe, or accuse a teacher of spreading misinformation, or of using offensive terminology and lord knows what else. Students who find doing their homework uncomfortable, and feel they are owed a passing grade until society atones for its sins, confront teachers and intimidate them with their “feelings” regarding their use of language or teaching materials. An anonymous complaint can be levelled at a teacher, and before they know it, they’re in HR’s banana-republic kangaroo court facing a tribunal of administrators. How many times out of ten are students who pull at the levers of such socially corrective regulations in fact taking pleasure in exploiting the cracks of a broken system? Let’s call this society-wide Ketman that emerges from a culture of disposability, Systemic Ketman.
Here comes the part I find most disturbing because it coils at the heart of human error and is the portal to all manner of evil. In order to make these victim claims viable, the individual filing one must present anxiety. In other words, one is encouraged to method act for bureaucratic leverage. Keep in mind that those who would merit a high grade for intelligence, hard work and progress do not generally seek bureaucratic solutions to their homework. Those students who would make the best philosophers, researchers, investigative journalists, or honest, visionary politicians rarely resort to lying to themselves because they are too self-conscious and feel shame when they cheat. To play-act strikes them as immature and dishonest. These students are penalised by Systemic Ketman: they are penalised by having their commitment and their talents diminished in the cooked ledger book of “student success”; and because these students of merit represent the world as it ought to be, they are despised by the impostors against whom they must vie for advancement. (See imposter syndrome.) In short, what is most disturbing about this disposable society is that folks are trained to lie to themselves and resent true virtue; and once you go down that road, it is very difficult finding your way back home, emotionally and psychologically speaking.
In the same category of mendacious self-corruption, and somewhere in the zone of Educational Ketman and Expert Ketman we find an all-too-common breed of professor who believes that having paid his dues, he has been blessed with a sort of Gnostic relationship to truth that precludes his having to exercise his sitzfleisch. Consider the case of a fellow who teaches Research Methods, who balked when informed that Wikipedia and Snopes were not acceptable sources of reference for his conclusions. His response: “How dare you criticise me! I teach Research Methods. This is my area of expertise.” This practitioner of Ketman uses his certification to pose as an expert. In other words, research methods are apparently unnecessary if one is an expert®.
Money is also disposable at present, hence fraud, racketeering, endless lockdowns and government shakedowns; even war is cheap. When you run out of money, you just print more. When you can just cook up more junk, there is no accountability. I know a young man who maxed out his credit card at $20,000 and never paid it, and never had to! He was also issued a new card before a few years were up. It cost the bank too much to hunt him down for payment, and benefitted the bank far more to hook him up with credit. I urge the reader to pause here and consider a moment the implications—the surpluses the bank must be hoarding for that equation to work . . . To his father’s chagrin, he was rewarded for his Ketman. Certain corporations are deemed too big to fail—a phrase resonant with Ketman. Rewards are handed out to blackguards, the best practitioners of Ketman. Inflation is their friend, so they tell you it’s yours too.
The TD bank strikes me today as what one would expect of a McDee’s Bank, if such a thing existed. When I first put money in that institution in the late 1980s, they employed knowledgeable staff and service reps, whereas now one is generally more informed than the person serving you. With online services in play, the folks on staff are generally unqualified to advise on anything. When you peek in on a branch, note how the tellers and managers seem to be biding their time, waiting for the robots to take over. Adding insult to injury, one is obligated to rent one’s bank account like a money hotel for your cash despite the diminished use of cash and despite the diminished use of manpower and time. The deal I had with the bank when I joined up was that they worked hard to earn and maintain my trust, and I, in return, would do my banking with them. They never dreamed back then of toying with me, of waxing clever and gaslighting me by telling me they were now providing me with a “service,” that now a “bank account” was a “financial product”—so that by the magic of lawyer trickery bullshit and bureaucratic incantations, my account is transformed into a money resort . . . And presto! Which would you like, sir, a basic or all-inclusive?
Shopping Ketman is a more light-hearted form of playing Pretend in a world full of cheap, disposable junk: the practice of purchasing an item, usually from a large box-store, without any intention of retaining it, but only to hold and use for the duration of the return period stipulated in store policy. Some play this game with expensive clothes, including shoes worn for one outing. Who wants to buy a computer printer when the built-in obsolescence period on these things is so short, that by the time you get through the backup inks you bought with the machine, the cartridges are obsolete? Those who understand Ketman respond with “Fooled me once…”; and the next time they “buy” a printer, they use it for their project and return it by the best-before date for a full refund.
Of a similar species is Energy Ketman, or green® energy and appliance efficiency® to reduce electricity consumption. Your average electric dishwasher built in the 1980s actually washed your dishes. One loaded the machine with food-encrusted plates and cutlery over a couple of days, as one does, and when it was full, one added detergent from a box, turned the thing on and wow! Later, in the 90s, new efficiency® models of dishwasher came to market, and these appliances have had no business calling themselves dishwashers because they are essentially ornamental. The form of efficiency® involved is actually a devolution of energy, where the buck is passed to the stakeholder® who now must wash his dishes before placing them in the machine which provides a secondary wash for good measure—which is hardly an example of efficiency or even of usefulness. These new appliances would be better billed as “dishwashing aids,” because “dishwasher” is false advertising. Now imagine this sort of approach to energy efficiency® on a wider scale, with experts®—so narrow minded, they can see through keyholes—legislating endless policies to make tidy ledger books for the Ministry of Climate Change, while in fact passing the energy buck elsewhere down the production, pollution and waste line.
This isn’t quite a fiduciary form, but I’d be remiss not to mention Sexual Ketman, a form of social posturing and friend® manipulation: this might involve posting a sexually provocative image of oneself or of sharing® beauty shots on the Facebook or whereeverhaveyou for the purposes of exhorting “affirmation” from one’s “network” or “community”—which is mechanically compelled to provide affirmation when given the signal. Sexual Ketman includes all manner of virtue signalling. I remember a friend I had in my twenties telling me he’d pretended to be a Quebecois revolutionary for one night so he could sleep with a woman. How many men pretend they are feminists, or pretend so convincingly, they don’t recognise the pretence and are devoured by their own puppet . . . for the express purpose of getting laid? How many women do the same? Sexual partners are disposable. Sexual fantasies are disposable. And in hyper-populated cities, one’s very identity is disposable. So one’s beliefs are equally disposable.
Linguistic Ketman has recently gone commando with regard to a whole spectrum of terms. There has, throughout my life, been a continual game of Political Correctness (PC), for example, around how to politely make reference to intellectually delayed persons. On the tip of every tongue sits the word retarded, but we mustn’t use it. So many terms have come and gone to replace the original word, it is difficult to keep track. Inevitably, one dances around it hoping to propitiate the PC sensibility which flutters around every conversation like a dark spirit of surveillance. Similarly, writers feel the need to use “inclusive” language to signal their compliance with politically correct policies, using they and their when they know damn well the gender of the person indicated; fumbling with written stammerings like s/he his/hers or the tedious “he and she” which symbolise and instantiate self-censorship along with compelled speech.
Consider how no one actually cares about these pieties, except bureaucrats, and for them it’s not that they care; it’s simply their purpose. The administrative purpose lies in its power which is circumscribed by the job description and a rule to be administered. Petty rules are therefore essential to administrative practice. Proof of work to a bureaucrat is a spreadsheet showing some form of numeric growth. To make activity mensurable, administrators encourage robotic, repeatable production inputs and outcomes. And to get their way, they implement petty, arbitrary rules, or hold back resources. Qualitative life suffers under administrative regimes because quality has no bureaucratic value. Beauty has no bureaucratic value. Love has no bureaucratic value. Pleasure and happiness wind up in a ledger book of some kind, cherry picking the metrics best suited to arrive at a result most positive to the ledger-book regime. Once the metrics are determined, a system is locked in place to reproduce output results that look good according to the administrator’s book-keeping. The reality being measured, however, may have little to nothing to do with the means of measurement. In fact, too often the bureaucratic avenue causes more harm than good much the way someone trying to be helpful at a worksite might pick up a 12-foot iron bar and swing it round without realising how many heads he’s caved in: perhaps he meant to be helpful, but if so, he also caused collateral damage enough that we would have been better off without his help.
Think a moment about how the word, fact, has recently been touched by a legal sleight of hand, such that a fact is nothing other than a species of opinion: hence, Facebook fact checkers are more correctly referred to as *fact checkers® which refers to nothing more than an opinion police employed to promote bureaucratically sanctioned opinions. As a result, what is presently tagged “misinformation” by our *fact checkers® is not in fact misinformation: instead, both fact and misinformation have been transformed into virtual reality synthetics, avatars, fungible, electronic badges. Make no mistake! The cynical manipulation of the word fact is a conscious act of Ketman, and it’s the Devil’s way of shifting reality. We are not comforted to learn that George Soros has just funded a fact checking, “good news” initiative. Soros is known for his philanthropic support of the American Democratic party and leftist activism. Ergo, over the next while, we can expect an intensification of the politicisation of facts and misinformation.
It behoves us to dwell a moment on how bureaucrats recruit language into a game of definitions to imbue words with administrative purpose. The covid scam, for instance, is predicated on a strategic alteration of the definition of the term pandemic from a word denoting a deadly disease with high infection rate, to a word without connection to disease, danger, or death. Once words fall into the hands of Ketman, the bureaucrat can tell you that up is down, that a man menstruates, that a woman is a social construct, that dangerous is safe®, that safe is dangerous and that vaccines® do not confer immunity from death and disease, nor do they prevent transmission, but are nevertheless *vaccines® (see the fine print).
This is the condition that ensues when words are deemed disposable. Hence the idea that vaccinations and boosters are a short-term solution (which somehow isn’t a bad thing) and are required “to remain up to date” on a 5-month basis forever (possibly every 3 months in the near future). If batteries had to be replaced for something that often, consumers would reject the product and “buy” with no intention of ownership. Sobering to consider that with acceptance of a jab every 6 months, one is essentially buying into the idea of renting one’s health, and by extension, of leasing one’s body. Worth the effort to figure out to whom you are ceding ownership, no? It strikes me as a drug dealer’s typical approach to business. But this situation smacks more of a drug cartel that engages in human trafficking.
So, as a result of this health scam, the latest Ketman game in town is Booster Ketman. We are beyond Vaccine Ketman, here. Listen to this repartee overheard while helping some fellas move the other day:
“Did you get the boost?”
“Yeah. I qualified right away. You know how?”
“Well, I had a double of the Astra Zeneca, right? . . . and this time, I went for a Pfizer.”
“Riiight! I get it. Yeah. Pfizer. That would do it.”
Massive highway-side billboards blast in emergency red, “Boost Up!” – “Book your booster now.” Boost Up is notable for its double meaning, its implied social boost, required for you to keep up, with the vaguest whiff of a threat encoded there that if you miss your boost, you may fall behind and become an outcast. Some expert Ketman there, right out of a Marketing and PR room. It sounds a little Nike-like with its Just do it vibe. Boost Up! And the above conversation reflects the sport of it, too, a cynical gaming of the system: the reason for switching brands was to get his dose sooner than he otherwise would have. Sure there was a risk, but that’s part of the booster fun; it’s about being part of the experiment, of taking part directly in The Science™. And queue jumping is a Ketman favourite. Moreover, the above quoted conversation is the complete extent of the exchange I witnessed. Nothing regarding health preceded or succeeded that quick, matter of fact chatter. It came up suddenly and out of nowhere, like a friendly rivalry between close friends or cousins. Boosting Up® is now a game of who can outmanoeuvre the system to get boosted® first.
If health is disposable, then human life is not far behind. Indeed, human life is presently felt to be disposable. Hence the acceptance of collateral damage due to lockdowns and vaccine injuries. Hence the promotion of unrestricted abortion, including very late-term.
A further example of Linguistic Ketman that is not to be missed is the recent smear piece perpetrated on the word, Freedom, by the Canadian propaganda arm of the Trudeau dictatorship, the CBC, and meant to undermine the positive optics of the 2022 Freedom Convoy. In an article published February 14th, 2022, one can read how “The word has become common among far-right groups, experts say.” Experts® indeed. Via these luminaries, our souls are enriched by wisdoms such as, “For many, freedom is a malleable term — one that’s open to interpretation.” Another expert plies us with, “You can define it and understand it and sort of manipulate it in a way that makes sense to you and is useful to you.” Sound familiar? Ketman Classic: gaslighting. Freedom is a social construct. One imagines the Trudeau apparatchik cooking up this crack during late hour Zoom sessions, with unscrupulous think-tankers desperately trying to dig their way out of the bad optics of a government standing against freedom. The article continues to insult the average intelligence by suggesting that “freedom” is a word resonant with hate that leads directly to violence of the sort that erupted at the U.S. Capitol during the January 6th insurrection®. One recalls a time when terrorists were being rebranded as freedom fighters. Now peaceful freedom fighters are being rebranded as domestic terrorists and perpetrators of “violent freedom.”
A post by a vaccine activist (from late 2021) compared vaccine refusal to refusal to use snow tires on one’s car without it occurring to him that if any batch of snow tires caused collateral damage (including injury and death), there’d be a recall and an investigation into quality control. Likely someone would be fired and possibly jailed for negligence. What we’re witnessing here is Medical Ketman, a play-acting sphere in which the rules applied elsewhere do not apply to public health interventions: this is a form of Linguistic Ketman, a favourite tool of administrative and lawyerly prestidigitation.
A most glaring example of this phenomenon of fakery today is the preamble one hears from politicians who understand that the medical emergency is a bureaucratically engineered power grab, and a dangerous one that will lead to unrestrained segregation, dehumanisation and tyranny . . . and still they feel the Ketmanic obligation to utter their sincerely play-acted acknowledgment of the (false) severity of the disease and the (false) safety and efficacy of the vaccines. In other words, before he can take a stand against the administrative coup d’état, the supposedly well-meaning politician surrenders the battlefield by offering up the linguistic grounds of opposition; for surely, if the disease is indeed so severe and the safety and efficacy of the vaccines so certain, vaccine mandates are not opposable on any meaningful grounds other than vaguely moralistic ones of purely academic interest. The politician may understand this trouble, but nevertheless he feels that should he not lead with such preamble, he will be publicly shredded and cancelled, and his chances to ameliorate the situation will be lost. Some may perceive such a politician as a coward, and this may be so; but he is also practicing Ketman. And let us not jump to conclusions: the best players of Ketman never betray whether they are acting defensively or offensively.
The theatre of acknowledgment presents us with another example of compelled preambles. One who wishes to be known as upstanding and worthy among progressives must signal one’s allegiances using acknowledgement preambles or be deemed a denier and a hater. These words are presently being bureaucratised, and will very soon be terms bearing legalistic, administrative condemnation, entailing fines and sanctions as per a social credit system. If one fails to acknowledge that he stands on un-ceded First Nations territory, for example, or if he fails to acknowledge the immense harm being caused by anthropogenic climate change, he will be in breach of a bureaucratic policy and will be promptly excommunicated and deprived of livelihood without hope of a defence.
No one involved in this system has to believe he is participating in a good or worthy system because, as far as he can tell, he’s just doing his job, just filling the role set before him, just playing the game of Ketman. And at this stage of the game of pretend, he does not see that he has any choice. After all, he is told, This is how things work. It’s settled. The stars move. This is how. The climate changes. This is how. Shut up! Only stupid people ask questions. Speak these words at the appointed times. Meet with the robed officiate. Take your safety sacrament and take part in the body of The Science™ or face the inquisition and extended isolation.
True, Ketman has been with us here in the West a long time. Perhaps, there’s always a little Ketman going on, and no such thing as politics without it and no community without its politics. There’s always some measure of fakery and skullduggery afoot. The question then must be, to what extent? And how exactly did the dry rot creep up on us? How do we stop it? And if we succeed in halting the progress of this horrifying game of pretend, how do we prevent the nihilistic cynicism, this insubstantial, groundless ground from finding its way under our feet again in the future?
What I have presented here is a mere reintroduction to Ketman for our age and our desperate moment as we face intensifying state control, engineered fear, manufactured panic, along with cynical scapegoating and mindless segregation. As Milosz indicated, Ketman presents innumerable shapes, and this essay only scratches the surface. Doubtless, readers will discover it everywhere once they make a point of watching themselves and their friends with an eye for it. My concern is that as the totalitarian dome closes over our heads, many are mistaking their cynicism for a shield, and many are mistaking their Ketman for a superior cleverness, when in fact these are the vaseline of submission, forms of self-distancing and self-mechanising that lead to heartlessness and cruelty.
Dare we take comfort in the fact that Ketman will not prove a winning strategy for much longer because what’s at stake is our long-term health and quality of life? More people are awakening to this realisation, but still too slowly. By force of Ketman, we are not permitting ourselves to think certain thoughts, or to research facts or to speak up and say something at our place of employment. And let’s face it, not everyone knows how to conduct research. And when someone with no research skills “researches” “the facts,” he’s liable to wind up in the teeth of the *fact checkers® reading “good® information®” or alternatively on the shoals of the flat earth, wearing a foil hat. To these research tourists, branding and marketing are everything. Consequently, people are accepting a shake® in place of a milkshake because they don’t understand the lawyer trickery bullshit terms of the scam. These days, when we think we’re playing the system, it’s not true; it is we who are being played.
What does this mean in practical terms? Many of us are still using Facebook and its Meta offspring, still using Google and Youtube. We are telling ourselves that we are, after all, subverting the system from within. But that is simply not the case. So long as we continue to support the companies colluding with corrupt governments to censor us and suppress the truth, to help spread disinformation and label truth as disinformation®, to sow fear and division . . . so long as we provide their investors with incentive, through our participation, to finance these evil activities, we are collaborators and not at all the resistance.
Asa Boxer is an award-winning writer. His latest book The Narrow Cabinet: A Zombie Chronicle is available as of spring 2022, and can be found at the usual online shops. Boxer is known for his grit, sardonic humour and accessible poetry. | <urn:uuid:d72e3a5d-f732-4f0f-97af-8605e8d8ea0c> | CC-MAIN-2022-33 | https://thesecularheretic.com/political-sociopathy-the-dark-art-of-ketman/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571502.25/warc/CC-MAIN-20220811194507-20220811224507-00094.warc.gz | en | 0.954378 | 7,028 | 2.671875 | 3 |
Shetty VP, Rambhia KD, Khopkar US. 18 July 2017, posting date. Descriptive pathology of nerves in leprosy, Chapter 9.1. In Scollard DM, Gillis TP (ed), International textbook of leprosy. www.internationaltextbookofleprosy.org.
Leprosy, which is caused by Mycobacterium leprae (M. leprae) (see Chapter 5.1), is primarily a disease of peripheral nerves. It is the only mycobacterial disease affecting the peripheral nerves, which are otherwise highly resistant to bacterial invasion. The target of nerve invasion by M. leprae is the Schwann cell. Man is the natural host of this infection, but a similar illness is found in the nine-banded armadillo . Most of the disabilities and deformities associated with leprosy are the result of nerve damage.
The involvement of peripheral nerves was first recorded by Danielssen and Boeck in 1848 . In 1873, Hansen discovered the infectious agent that causes leprosy, M. leprae . In 1882, Virchow suggested that the peripheral nerve might be the primary seat of infection in leprosy . However, it was Fite who categorically stated that “to pathologists all leprosy is neural leprosy” . After studying the histology of early skin lesions, Khanolkar concluded that the cutaneous nerves are the earliest to be involved in all forms of leprosy . Dehio and Gerlach proposed that the inflammatory response ascended from the cutaneous nerves to the proximal nerve trunks , and the surgical exploration of leprous nerves by Antia et al. further strengthened the idea of disto-proximal spread of M. leprae .
Structure of a Peripheral Nerve
A peripheral nerve is composed of two types of cells: neurons and Schwann cells (Figure 1A). Peripheral nerves are derived from the embryonic ectoderm . A neuron is comprised of a cell body, dendrites, and axons (Figure 1B). The cell body contains the vital organelles necessary for the sustenance of the neuron. Dendrites are short branches from the cell body that are responsible for the transmission of information to the cell body. The axons are long, tubular extensions from the cell body that transmit electrical signals away from the cell body. Multiple axons grouped together form a single nerve fascicle; several nerve fascicles unite to form a peripheral nerve trunk.
Schwann cells spiral closely around an axon, producing myelin. The myelin forms a lipid-rich sheath that surrounds and protects the axon, increasing the velocity of transmission of electrical signals along its length. The myelin sheath is interrupted along its length by the Nodes of Ranvier, where the myelin sheath of one Schwann cell ends and another begins. An axon covered with protective myelin and individual Schwann cells constitutes a myelinated nerve. When multiple axons are surrounded and embedded in the invaginations of one Schwann cell, they comprise unmyelinated nerves . In unmyelinated nerves, which are enclosed by only a single or a few layers of Schwann cell plasma membrane, the formation of myelin is absent. It is important to note that the Schwann cells of unmyelinated nerves lack lysosomes, which has been postulated as a reason for the presence and survival of a large number of M. leprae in unmyelinated nerves .
Histology of a peripheral nerve
A peripheral nerve is composed of multiple nerve fascicles that are made up of axons. Each fascicle is surrounded by a concentric sheath of cells interspersed with collagen fibrils (Type 1 and Type 2) and elastic fibers. This sheath is termed the ‘perineurium’. The innermost layer of the perineurial cells has tight intercellular bridges that maintain the blood-nerve barrier and preserve the endoneurial environment. Multiple nerve fascicles are enclosed within a covering of connective tissue called the ‘epineurium’. The epineurium that covers the entire nerve is called the ‘epifascicular epineurium’, while the layer separating the different fascicles is called the ‘interfascicular epineurium’. The epineurium consists of Type 1 and Type 3 collagen fibrils, fibroblasts, elastic fibers, and mast cells. Vasa nervorum are the blood vessels that comprise arterioles and venules, which carry oxygenated and de-oxygenated blood to and from the nerves. The lymphatic system is present only in the epineurium; there are no lymphatics in the perineurium or the endoneurium.
Biopsy of a peripheral nerve is a valuable tool for the diagnosis of leprosy when the clinical examination and the skin biopsy report are inconclusive. A biopsy is of immense value in confirming a diagnosis in cases of pure neural leprosy.
A clinically involved nerve must be selected for the biopsy. The ideal nerve is the sural nerve, although a sensory branch of the radial cutaneous nerve can be biopsied. In more than 90% of patients, the sural nerve is largely a sensory nerve; in the remaining 10% of patients, it contains only a few motor fibers . The sural nerve is commonly involved in leprosy and is easily accessible for surgery. Following the biopsy, a formalin-fixed nerve segment is dissected in 2–4 pieces that are arranged transversely and longitudinally in a paraffin block to cut sections of 3–4 mm thickness . If other neuropathies are also likely, a laboratory specializing in peripheral nerve studies should be consulted prior to biopsy. The laboratory can provide instructions that enable additional specialized procedures, such as teased-nerve studies, to be performed.
Techniques To Examine the Nerve
In addition to routine Hematoxylin and Eosin staining and Fite (acid-fast stain) staining, special techniques may be used to visualize or demonstrate the histopathology of the leprosy-affected nerve.
Resin-embedded one micron thick (semi-thin) cross and longitudinal sections provide a comprehensive and detailed picture of the relevant structures, including axons and myelin sheaths, with better resolution and morphological accuracy than paraffin sections. Toluidine blue and methylene blue azure II stains should be used, as they provide better contrast. The brown paraphenylene-diamine stain that does not fade easily can also be used. The longitudinal semi-thin sections facilitate analysis of the nodes of Ranvier and of adjacent internodes. Longitudinal semi-thin sections may provide a good, although not perfect, alternative to teased-fiber preparations .
Transmission electron microscopy (TEM)
Transmission electron microscopy (TEM) of ultrathin sections contrast-enhanced with uranyl acetate and lead citrate help to detect fine structural changes under high resolution.
The teased-fiber technique is the most useful method for studying peripheral myelinated nerve fibers in their continuity. It is possible to assess the size of myelin segments and visualize the pathologic changes affecting the internodal regions, the paranodal regions, and the axons using this technique . Fiber teasing is performed on pre-stained, proximo-distally oriented portions of peripheral nerves. It is a useful technique for the examination of nerves in leprosy, as segmental demyelination and early stages of Wallerian or secondary axonal degeneration can be recognized in teased fibers.
Pathology of Nerves in Leprosy
The pattern of inflammatory infiltration in the nerve may be classified broadly as
- Nerves with mainly endoneurial infiltrate
- Nerves with epineurial infiltrate
- Nerves with both endo- and epineurial infiltrate
Nerves in Tuberculoid Leprosy
The histopathological hallmark of tuberculoid leprosy is the presence of epithelioid cell granulomas. It is thought that M. leprae become lodged in the Schwann cells of the nerves, usually at cooler sites, sites of trauma, or superficial sites of nerve entrapment. These bacilli in the parasitized nerves soon start multiplying, although to a limited extent. Subsequently, the bacterial antigens are exposed and recognized by the host immunity, triggering a granulomatous response (see Chapter 9.2). Microscopic examination of the affected nerve in tuberculoid leprosy reveals a well-localized, longitudinal lesion involving one or a few nerve fascicles. The adjacent areas may appear normal.
Severe changes in nerves in tuberculoid leprosy
Figures 2A–2D provide images from a case of borderline tuberculoid leprosy. As depicted in the figures, infiltrating cells comprising well-differentiated groups of epithelioid cells and Langhan’s type of giant cells, surrounded by a large number of lymphocytes, are seen both within and around the involved nerve fascicles (Figures 2B, 2C). Destruction of the nerve parenchyma, including the perineurium and the protective barrier, is apparent in the heavily infiltrated fascicles. These changes are seen in leprosy patients with good cell-mediated immune (CMI) responses (see Chapter 6.2). The nerve architecture is disturbed to the extent that some of the fascicles are totally destroyed, whereas others may be only partially involved or may appear to be almost normal (Figure 2A, fascicles A, B, and C).
There is a positive correlation between the extent of inflammatory infiltrate and the destruction of collagen matrix and neural tissue. The presence of thick and folded myelin surrounding atrophied axons and an increase in the number of small, thinly myelinated fibers, suggesting regeneration, have been demonstrated in the periphery of the granuloma .
Caseous necrosis is a peculiar feature of granulomatous responses in tuberculoid and borderline tuberculoid neural leprosy . Chandi et al. proposed the term ‘segmental necrotizing granulomatous neuritis’ (SNGN) for the histopathological finding of caseous necrosis with epithelioid cell granulomas (Figures 2A, 2D). A caseous abscess is usually observed in the major nerve trunks, but it may occasionally be seen in cutaneous nerves arising in a patch . Histologically, there is an amorphous eosinophilic caseous mass composed of degenerated nuclei surrounded by epithelioid cells, giant cells, and lymphocytes. The presence of plasma cells, an indicator of the humoral immune response, is also a conspicuous feature of such lesions (Figure 2D). Antia and Mistry have suggested that a critical proportion of antigen, antibody, and CMI could result in the formation of caseous necrosis . Mycobacterial antigens, instead of intact bacilli, are usually found in these lesions and have been detected in TT and BT nerves by Mshana et al. and Barros et al. .
Structural changes elicited through studies of semi-thin sections, TEM, and fiber-tease preparations, along with histopathology, have revealed characteristic features in nerves across the leprosy spectrum.
Tuberculoid leprosy nerves with mild to moderate changes
Microscopic examination of the affected nerve in tuberculoid leprosy reveals the presence of sub-perineural edema. A few fascicles, or part of a fascicle, may show infiltration with epithelioid cells and lymphocytes. If bacilli are present, they are seen in the Schwann cells of nerve lesions. Fiber density may be markedly reduced in the affected fascicle and may be normal in the uninvolved fascicles . Quantitatively, there is an increase in very small myelinated fibers of less than 3µ in diameter. Groups of fibers with extensive demyelination are seen in close association with aggregates of inflammatory cells, suggesting that localized primary demyelination occurs in areas of active cellular infiltration .
Studies by Shetty et al. have suggested that demyelination is the prime mode of nerve involvement. Two different mechanisms lead to demyelination:
- Primary demyelination, which occurs in areas of acute infiltration.
- Secondary demyelination, following axonal atrophy (absent inflammatory infiltrate), affecting nerves away from the inflammatory foci .
Perineurial changes in tuberculoid nerves
Infiltration of the perineurium leads to the loss of perineurial cell junctions and basal lamina. In cases of gross infiltration, identification of the perineurial cells is difficult as they assume a ‘fibroblast-like appearance.’ Multilayering of perineurial cells is not common in TT-BT nerves .
In active tuberculoid lesions, there is an increase in vascularity, obliteration of the lumen due to swollen endothelial cells bulging into the lumen, and marked reduplication of the endothelial basement membrane. Mycobacteria are seldom seen in endothelial cells or pericytes in tuberculoid leprosy nerves. Rarely, bacillemia has been reported in the tuberculoid form of leprosy .
Nerves in Lepromatous Leprosy (BL to LL)
Nerve lesions in lepromatous leprosy (LL) are characterized by uninhibited multiplication of bacteria, predominantly in the Schwann cells, as a result of a lack of efficient CMI (see Chapters 6.1; Chapter 6.2) against M. leprae. In BL-LL leprosy, there is focal, diffuse involvement of nerves. However, the general architecture of the nerves remains better preserved despite a heavy bacterial presence, in keeping with the very low toxicity of the pathogen and the symbiotic relationship it enjoys with the host. Microscopic features of BL and sub polar lepromatous leprosy (LLs) are depicted in Figures 3A–3G.
Histopathological characteristic features of nerve lesions in BL leprosy include a predominance of macrophage infiltrates, a good number of plasma cells, and scanty, mostly dispersed, lymphocytes. The further a lesion is toward the polar lepromatous end of the spectrum, the fewer the lymphocytes. A reduction in nerve fiber density and a proportionate increase in intraneural collagen may be seen.
In BL leprosy, there may be variations in the extent of bacillation and infiltration between the fascicles of an involved nerve (Figures 3A, 3B, 3C). The bacillary proliferation occurs primarily in the Schwann cells, but also in the macrophages (such as in the skin, where they are commonly found in the macrophages). The bacilli are arranged parallel to the long axis of the nerve within the Schwann cells (Figure 3G). In the early stages of a lesion, histiocytes and foci of lymphocytic infiltrates may be seen in the peri-vascular areas (Figures 3B, 3F). In chronic or long-standing lesions, they are randomly distributed and macrophages assume foamy cytoplasmic changes (Figures 3D, 3E).
The presence of plasma cells is also a characteristic feature of BL-LL lesions. Occasionally, mast cells are also seen in these nerves. While the presence of histiocytes is a sign of active and recent onset infection, foamy macrophages indicate a chronic or long-standing infection. The perineurial proliferation, or multilayering, is a common feature in lepromatous leprosy (BL-LL) and gives an ‘onion skin appearance.’ In the chronic stage, foamy macrophages are also found in between and around the perineurium (Figure 4A, 4B).
Epithelioid cells are never found in the lepromatous pole. The presence of epithelioid cells in resolved lepromatous leprosy indicates an upgrading reaction. Similarly, a nerve abscess is rarely found in lepromatous leprosy .
Nerves in lepromatous leprosy (LL)
Polar lepromatous leprosy (LL) is characterized by firm, cord-like thickening, which is the result of extensive fibrosis. Histopathologically, there is extensive nerve fiber loss with a concomitant increase in endoneurial collagen. Infiltration with foamy macrophages and an absence of lymphocytes are prominent features. The perineurium may be thick and extensively multilayered. The subperineurial area may contain a granular proteinaceous matrix and pockets of collagen. Numerous bacilli are seen in the foamy macrophages and Schwann cells are frequently packed in clusters or bundles (Figure 3G). In the absence of any infiltrating cells, it is not uncommon to find Schwann cells loaded with bacilli in a clinically cord-like (fibrotic) nerve from an untreated polar lepromatous (LLp) case.
Lepromatous leprosy with mild to moderate changes
In patients with LL, the nerves may be clinically normal or asymptomatic for a long period of time, even though they may be microscopically involved. In such cases, there is minimal or no cellular infiltrate but a very high bacterial load, especially in the Schwann cells. In addition to the mycobacterial positivity, there may be mild perivascular infiltrate involving the endoneurial and epineurial regions with minimal vascular dilatation. Studies have revealed that in early LL lesions, there is significant reduction in the density of unmyelinated nerve fibers . Although the density of myelinated nerve fibers may be normal or slightly reduced, these fibers show a reduction in the axon caliber and a change in the axon to myelin thickness ratio (g ratio), which suggests axonal atrophy . Hence, early in the evolution of such cases a nerve biopsy is an important method for arriving at the diagnosis.
Mid Borderline Leprosy Nerves (BB)
Mid borderline leprosy is an immunologically unstable type of leprosy. It has better CMI than BL, and a higher bacterial load as compared to BT, making this type more prone to acute reactional states. Nerve involvement or damage is more diffuse than in BT and TT lesions. Heavy infiltration of lymphocytes and multinucleated giant cells, if present, suggests a Type 1 reaction, whereas dense aggregates of lymphocytes in the peri-vascular areas suggest an influx of recent origin and an impending Type 1 reaction (Figure 5).
Pure Neuritic Lesions
Pure neuritic leprosy is not categorized separately in the Ridley-Jopling classification system. This type of leprosy, once exclusive to India , is now found everywhere, although in only a small percentage of patients . Matrix metalloproteinases (MMPs) and tumor necrosis factor alpha (TNF-alpha) play important and related roles in the pathogenesis of nerve injury in this type of leprosy . A histopathology of pure neural lesions reveals a spectrum ranging from tuberculoid to borderline lepromatous type , . There have been no reports of sub-polar or polar lepromatous leprosy in neuritic leprosy so far.
Early nerve lesions
There is evidence of pathological change and the presence of bacilli in the nerves of leprosy patients along the spectrum as well as in familial contacts without any clinical evidence of leprosy .
Early tuberculoid lesions
In an early tuberculoid lesion, there is a zone of marked sub-perineurial edema that leads to a net increase in the total fascicular area . This zone comprises proteinaceous granular matrix interspersed with small pockets of collagen. Occasionally, there may be a few abnormalities, such as thinly myelinated nerve fibers, atrophied axons and unusually small myelinated fibers, a loss of fibers, or the presence of unusual fiber complexes. Additionally, a few macrophages and fibroblasts may be found in the endoneurium. These changes reflect the slowly progressive nature of a leprosy infection.
Regressing nerve lesions
The spontaneous regression of early skin lesions is known to occur, especially in the tuberculoid spectrum . Post-treatment, the nerve lesions of leprosy show histological features of healing and repair that are comparable with a regenerating peripheral nerve injury . In addition, telltale evidence of non-specific chronic inflammation comprised of lymphocytes or macrophages, depending on the type of leprosy, persists for years. It is important to note that in a regressing nerve lesion, there is evidence of nerve regeneration and ongoing pathology in the form of inflammation and the presence of M. leprae (dead or alive.) The neural structures may be replaced with collagen. This fibrosis develops slowly, over several years (Figure 6). Pyknotic ill-defined cells in between the perineurial cells at the sub-perineurial zone, the perivascular zone, and the endoneurium have been described in nerve biopsies obtained ten years after treatment concluded . These features are helpful in ascertaining and arriving at a differential diagnosis.
Regressing BL-LL nerve lesions
In regressing BL-LL nerve lesions, large numbers of non-solid acid-fast bacilli may be present in the Schwann cells and foamy macrophages, with a tendency to localize around the blood vessels. The presence of perineurial septa, compartmentalization of the endoneurium (mini fascicle formation), and thickening of the perineurium, with the number of perineurial lamellae increasing to double or quadruple, are fairly common. Axonal atrophy and a large number of regenerating fibers have been found in patients receiving multi-drug therapy (MDT) .
Pathology of Nerves in Reactions
Reactions (see Chapter 2.2) in leprosy are periodic episodes of acute inflammation caused by the response of the immune system to the invasion of skin or nervous tissue by M. leprae. Based on the underlying immune mechanisms, the reactions in leprosy are classified as Type 1 or Type 2. These reactions occur frequently in leprosy, and they cause considerable, severe, and, at times, extensive nerve damage.
Type 1 Reaction
A Type 1 reaction (see Chapter 2.2), also referred to as a reversal reaction, occurs in borderline cases. According to the Coombs and Gell immunologic classification system, it is a type 4 hypersensitivity, or a delayed hypersensitivity type of reaction . A large influx of lymphocytes within and around the nerve, along with extensive edema, is the hallmark feature of nerves in a Type 1 reaction. A total loss of blood, nerve, and perineurial barriers result in a fuzzy and watery appearance to the nerve (Figure 7). Damage to nerve fibers is severe at the sites of reaction, possibly due to the release of large quantities of proteolytic enzymes.
Type 2 Reaction
A Type 2 reaction (see Chapter 2.2), also known as Erythema Nodosum Leprosum (ENL), occurs in borderline lepromatous (BL) and lepromatous leprosy (LL), where the bacillary load is high. ENL is characterized by edema, an influx of polymorphonuclear infiltrate accompanied by localized areas of breakdown of foamy cells and connective tissue. Eosinophils, mast cells, and vasculitis may also be seen. Occasionally, fibrinoid degeneration in the arteriole along with leucocytoclasia may be seen in the epineurial, perineurial, or endoneurial vasculature. A common site for fibrinoid degeneration is the area where vessels pierce the inner layer of the perineurium. Microscopic or hot abscesses may be formed when the foci of acute inflammatory infiltrate surrounding arteritic lesions coalesce. In the resolution phase, the neutrophils are replaced by lymphocytes and plasma cells, whereas the endoneurial elements are destroyed and fibrosis occurs.
Various studies of semi-quantitative analyses of bacterial load in multibacillary nerves have revealed maximum infection in the Schwann cells of non-myelinated fibers (BI = 4 to >6+ in all nerves), followed by macrophages (BI = 2 to 6+ in all nerves), perineurial cells (BI = 2 to 3+ in 80 percent of the nerves), and endothelial cells (BI = 2 to 3+ in 50 percent of the nerves) . The bacterial load in Schwann cells of myelinated fibers formed a very minor component (BI = 1 to 2+ in 20 percent of the nerves), while the presence of bacilli in the axons was rare and could be considered a pathophysiological spillover . Boddingius has postulated that the axons do not play a major role in bacterial dissemination .
Silent neuropathy (see Chapter 2.5) is defined as sensory or motor impairment in leprosy without the skin signs of a Type 1 or Type 2 reaction, without evident nerve tenderness, and without spontaneous complaints of nerve pain (burning or shooting pain), paraesthesia, or numbness. The possible mechanisms (see Chapter 9.2) by which nerves are damaged, leading to silent neuropathy, are Schwann cell pathology, nerve fibrosis, a CMI reaction, and an intraneural Type 2 reaction .
Neural involvement is the hallmark of leprosy. The histopathology of nerves in leprosy is studied relatively little as compared to its cutaneous counterpart. Nevertheless, leprosy neuropathy is one of the most studied peripheral neuropathies. It is probably the only disorder of the nervous system that has drawn the attention of anatomists, surgeons, pathologists, immunologists, clinicians, and researchers alike, with each making an immense contribution towards understanding its devastating effect.
The spectrum of immuno-pathological features that are seen and described in skin lesions are reflected in leprosy-affected nerves. The difference lies in the structure of a nerve. Nerves are equipped with different levels of protective barriers, but lack the lymphatics that guard against an invasion of infiltrating cells, thereby forming an environment that is conducive for bacterial proliferation. The study of the pathology of nerves in leprosy provides an exceptional view of the response when the immune system is ineffective, when it is in a state of balance (no reaction), and when it is in a state of imbalance (reaction).
We would like to acknowledge the expertise and efforts of Mr. Ramchandra Chelli in preparing the histopathology slides and CA Nidhi Gulati for helping with the diagrams.
- ^ Walsh GP, Storrs EE, Burchfield HP, Cottrell EH, Vidrine MF, Binford CH. 1975. Leprosy-like disease occurring naturally in armadillos. J Reticuloendothel Soc 18:347–351.
- ^ Danielssen DC, Boeck W. 1848. Traite de la spedalskhed ou Elephantiasis des Grecs (LA Cosson, trans). JB Baillière, Paris, France.
- ^ Hansen GA. 1873. Untersogelser angãende spedalskhedens årsager tiedels ud forte sammen medforstander hartwig. Norske Mag Laegevidensk 4:1. Quoted in Stanford JL, Rook GA, Convit J, Godal T, Kronvall G, Rees RJ, Walsh GP, Preliminary taxonomic studies on the leprosy bacillus. Br J Exp Pathol, 1974, 56:579–585.
- ^ Virchow R. 1882. Der kiefer aus der schipka-höhle und der kiefer von la naulette. Zeitschrift für Ethnologie, 14:277–310. Quoted in editorial notes. 1952. Lepr India 24:35–46.
- ^ Fite GL. 1943. Leprosy from histopatholgic point of view. Arch Pathol Lab Med 35:611–644.
- ^ Khanolkar VR. 1951. Studies in the histology of early lesions in leprosy, p 1–18. ICMR (Indian Council of Medical Research) Special Report (Series No. 19).
- ^ Dehio K. 1897, October. On the Lepra Anesthetica and the pathogenetic relation of its disease-appearances. In Proceedings of the International Scientific Leprosy Conference in Berlin. Biswas MG, translator. Translated and reprinted in Lepr India, 1952, 24:78–83.
- ^ Antia NH, Divekar SC, Dastur DK. 1966. The facial nerve in leprosy. 1. Clinical and operative aspects. Int J Lepr 34:103–117.
- ^ Ridley DS. 1988. The structure of a peripheral nerve, p 15–19. In Ridley DS, Pathogenesis of leprosy and related diseases. Butterworth-Heinemann, London, England.
- ^ Breathnach AS. 1977. Electron microscopy of cutaneous nerves and receptors. J Invest Dermatol 69:8–26.
- ^ Boddingius J. 1981. Mechanisms of nerve damage in leprosy, p 64–73. In Humber DP (ed), Immunobiologic aspects of leprosy, tuberculosis and leishmaniasis. Excerpta Medical, Oxford.
- ^ Amoiridis G, Schöls L, Ameridis N, Przuntek H. 1997. Motor fibers in the sural nerve of humans. Neurology 49:1725–1728.
- a, b Weis J, Brandner S, Lammens M, Sommer C, Vallat JM. 2012. Processing of nerve biopsies: a practical guide for neuropathologists. Clin Neuropathol 31:7–23.
- ^ Krinke GJ, Vidotto N, Weber E. 2000. Teased-fiber technique for peripheral myelinated nerves: methodology and interpretation. Toxicol Pathol 28(1):113–121.
- a, b Brand PW. 1959. Temperature variation and leprosy deformity. Int J Lepr 27:1–7.
- ^ Dastur DK, Ramamohan Y, Shah JS. 1972. Ultrastructure of tuberculoid nerves in leprosy. Neurol India 20(Supp 1):89–99.
- a, b Chandi SM, Chacko CJ, Fritschi EP, Job CK. 1980. Segmental necrotizing granulomatous neuritis of leprosy. Int J Lepr Other Mycobact Dis 48(1):41–47.
- a, b Antia NH, Mistry NF. 1985. Plasma cells in caseous necrosis of nerves in leprosy. Lepr Rev 56:331–335.
- ^ Mshana RN, Humber DP, Harboe M, Belehu A. 1983. Demonstration of mycobacterial antigens in nerve biopsies from leprosy patients using peroxidase-antiperoxidase immunoenzyme technique. Clin Immunol Immunopathol 29:359–368.
- ^ Barros U, Shetty VP, Antia NH. 1987. Demonstration of Mycobacterium leprae antigen in nerves of tuberculoid leprosy. Acta Neuropathol 73:387–392.
- a, b, c, d, e Antia NH, Shetty VP. 1999. Pathology of nerve damage in leprosy, p 79–137. In Antia NH, Shetty VP (eds), The peripheral nerve in leprosy and other neuropathies. Oxford University Press, Calcutta, India.
- ^ Shetty VP, Mehta LN, Antia NH, Irani PF. 1977. Teased fibre study of early nerve lesions in leprosy and in contacts, with electrophysiological correlates. J Neurol Neurosurg Psychiatry 40:708–711.
- ^ Jacobs JM, Shetty VP, Antia NH. 1987. Teased fibre studies in leprous neuropathy. J Neurol Sci 79:301–313.
- ^ Desikan KV. 1985. Bacteremia in leprosy, chapter 3, p 788–797. In Dharmendra (ed), Leprosy, vol 2. Samant & Company, Bombay, India.
- ^ Enna CD, Brand PW. 1970. Peripheral nerve abscess in leprosy. Report of three cases encountered in dimorphous and lepromatous leprosy. Lepr Rev 41:175–180.
- a, b Teles RM, Antunes SL, Jardim MR, Oliveira AL, Nery JA, Sales AM, Sampaio EP, Shubayev V, Samo EN. 2007 Expression of metalloproteinases (MMP-2, MMP-9, and TACE) and TNF-alpha in the nerves of leprosy patients. J Peripher Nerv Syst 12:195–204.
- ^ Antunes SL, Chimelli L, Jardim MR, Vital RT, Nery JA, Corte-Real S, Hacker MA, Sarno EN. 2012. Histopathological examination of nerve samples from pure neural leprosy patients: obtaining maximum information to improve diagnostic efficiency. Mem Inst Oswaldo Cruz 107:246–253.
- ^ Noordeen SK. 1972. Epidemiology of (poly) neuritic type of leprosy. Lepr India 44:90–96.
- ^ Uplekar MW, Antia NH. 1986. Clinical and histopathological observations on pure neuritic leprosy. Indian J Lepr 58:513–521.
- ^ Kaur G, Girdhar BK, Girdhar A, Malaviya GN, Mukherjee A, Sengupta U, Desikan KV. 1991. A clinical, immunological, and histological study of neuritic leprosy patients. Int J Lepr Other Mycobact Dis 59:385–391.
- ^ Antia NH. 1974. The significance of nerve involvement in leprosy. Plast Reconstr Surg 54:55–63.
- ^ Antia NH. 1980. Study of evolution of nerve damage in leprosy—a general introduction. Lepr India 52:3–4.
- ^ Nolasco JO. 1952. Histologic studies on the primary lesions of leprosy in children of leprous parents; other related studies, including one case with necropsy. J Philipp Med Assoc 28:1–19.
- ^ Morris JH, Hudson AR, Weddell G. 1972. A study of degeneration and regeneration in the divided rat sciatic nerve based on electron microscopy. IV. Changes in fascicular microtopography, perineurium and endoneurial fibroblasts. Z Zellforch Microsk Anat Histochem 124:165–203.
- ^ Shetty VP, Suchitra K, Uplekar MW, Antia NH. 1992. Persistence of Mycobacterium leprae in the peripheral nerve as compared to the skin of multidrug-treated leprosy patients. Lepr Rev 63:329–336.
- ^ Jacobs JM, Shetty VP, Antia NH. 1993. A morphological study of nerve biopsies from cases of multibacillary leprosy given multidrug therapy. Acta Neuropathol 85:533–541.
- ^ Roitt IM. 1974. Hypersensitivity, p 129. In Essential Immunology, 2nd ed. Oxford: Blackwell Scientific, Oxford.
- a, b Shetty VP, Antia NH. 1996. A semi quantitative analysis of bacterial load in different cell types in leprous nerves using transmission electron microscope. Indian J Lepr 68:105–109.
- ^ Boddingius J. 1974. The occurrence of Mycobacterium leprae within axons of peripheral nerves. Acta Neuropathol 27:257–270.
- ^ van Brakel WH, Khawas IB. 1994. Silent neuropathy in leprosy: an epidemiological description. Lepr Rev 65:350–360. | <urn:uuid:1a0dff5a-c6be-4876-a319-2f4635747ee8> | CC-MAIN-2022-33 | https://internationaltextbookofleprosy.org/chapter/descriptive-pathology-nerves-leprosy | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573744.90/warc/CC-MAIN-20220819161440-20220819191440-00497.warc.gz | en | 0.880653 | 7,973 | 3.25 | 3 |
Ask Us Anything About… Hemorrhagic Stroke
There are two types of stroke — ischemic and hemorrhagic — with hemorrhagic being less common. It happens when a blood vessel breaks and bleeds into the brain. Within minutes, brain cells begin to die.
In this interview, neurosurgeon Dr. David Wilkinson talks about the symptoms, causes and treatment of hemorrhagic stroke — and why it's critical to seek prompt medical attention when stroke is suspected.
Scott Gilbert – From Penn State Health, this is Ask Us Anything About Hemorrhagic Stroke. I’m Scott Gilbert. Well, there are two main types of stroke, one caused by a blood clot. The other caused when an artery ruptures. Our focus today is on the latter, known as the hemorrhagic stroke. Here to answer our questions about the symptoms, causes, and treatments for hemorrhagic stroke is Dr. David Wilkinson, a Neurological Surgeon at Penn State Health Milton S. Hershey Medical Center. Dr. Wilkinson, good to have you with us today. Maybe we could start by having you walk us through what happens inside the brain in a hemorrhagic stroke and kind of what makes it different than ischemic stroke.
Dr. Wilkinson – Great. Yeah, thanks, Scott, and thanks so much for having me. It’s an honor to be on. So yeah, you know, exactly like you said. So, an ischemic versus a hemorrhagic stroke: an ischemic stroke, which is the more common type, is really a blockage stroke where a blood vessel is blocked. And so the brain that gets blood from that, you know, isn’t getting the blood that it should. In a hemorrhagic stroke or a bleeding stroke, that’s when we see blood being outside the vessels, really in a space where it’s not supposed to be. And that can be within the brain itself. And we call that an intracerebral hemorrhage. Or it can be in some of the area that surrounds the brain. But, you know, overall, it’s a stroke where blood has gotten outside the vessels where it’s usually supposed to be. And then that can cause injury and irritation to the brain that’s around it.
Scott Gilbert – Yeah, so what causes more damage then, the initial rupture itself or the build-up of blood and then the pressure it exerts on nearby tissue?
Dr. Wilkinson – Yeah, also a great question. So I think it can differ for each patient. So in a lot of patients, particularly those who have rupture of an aneurysm, that can be kind of a catastrophic event where the pressure all of a sudden gets out. And that, unfortunately, can be almost instantly fatal. Sometimes though, it’s more of a — more of a leak. And so people just have a bad headache. But the blood has leaked out, and they’re still awake. They’re still able to talk. But then, hours or even days later, that blood can cause irritation and can cause, you know, dysfunction of the brain around it. So I think the short answer to the question is it depends. And in some cases, it can be that initial rupture which injures us so much. But then, in others, it’s days later the blood products that the body is trying to absorb are causing irritation and causing problems.
Scott Gilbert – And when it comes to hemorrhagic stroke versus ischemic, is one thought to be or typically more severe than the other?
Dr. Wilkinson – Yeah, that’s also a good question. So you can have — you can have a mild or a severe stroke of either type. And so, you know, there are probably more minor ischemic strokes where people have minimal symptoms. You can even have what’s called a TIA or a transient ischemic attack where the symptoms go away. And hemorrhagic strokes are usually, you know, a little bit more severe, and certainly, you know, a higher percentage of people with hemorrhagic strokes can have severe ones and go on to pass away, unfortunately.
Scott Gilbert – You’re watching Ask Us Anything About Hemorrhagic Stroke from Penn State Health. I’m Scott Gilbert alongside Dr. David Wilkinson. He’s a Neurological Surgeon at the Milton S. Hershey Medical Center. And we welcome your questions, your comments. You can add them to the comment field here below this Facebook post. We’ll pose those questions to him live, or if you’re watching this interview on playback, we’ll get you a typed-out response to your question. But we do address all questions to the best of our ability. So you know, with stroke, Dr. Wilkinson, we often hear the acronym BEFAST, each letter standing for a particular symptom or something to remember. Does BEFAST apply with a hemorrhagic stroke as well?
Dr. Wilkinson – Yeah, good question. So BEFAST does apply to a hemorrhagic stroke because patients can exhibit those same symptoms. And just to walk through, you know, for those who haven’t heard it. So BEFAST, so the B stands for balance. So if somebody, you know, has lost, you know, is having trouble balancing. The E stands for eyes. So sometimes, if their eyes are going one way or the other and they don’t seem to be moving around normally or looking straight ahead. The F stands for face. So if somebody has drooping on one side of the face, that’s a sign of stroke. The A stands for arms. So you know, if you ask them to lift their arms, sometimes they’ll be able to lift one but not the other. And then the ST go together. That’s for speech testing, so if they’re having problems speaking. So any of those can be affected by hemorrhagic stroke. The one additional thing that you need to be aware of with hemorrhagic stroke is usually, hemorrhagic stroke or bleeding strokes, people have severe headaches. And you don’t necessarily have that with the — actually, most often, you do not have that with an ischemic stroke. So patients with a bleeding stroke, especially one from a ruptured aneurysm, a lot of times they talk about having, you know, the sudden onset of the worst headache of their life, which is one way that sometimes you can differentiate it. And sometimes headache will be the only symptom actually. So sometimes they won’t have any of those BEFAST signs that we talked about. But they’ll just have a severe headache. So that’s really the, I think, the addition to BEFAST that’s important to think about for hemorrhagic stroke is, you know, people having a headache. Now, it can be difficult because, you know, headaches are quite common in the general population. But what we, you know, what we think about is — or what we really get concerned about is that headache, you know, unlike any headache you’ve ever had before. We, you know, the term worst headache of your life is a lot of time what we’re thinking about when people have a headache or when people have a hemorrhagic stroke.
Scott Gilbert – And a rather sudden onset of that headache, right? So it’s that initial bursting of the blood vessel that causes the pain, not so much the gradual blood build-up over time?
Dr. Wilkinson – I think it can be both. But you’re right. It most often is a sudden onset headache, especially with a ruptured aneurysm. But I wouldn’t, you know, if you have a severe headache, I wouldn’t discount it just because it, you know, just because it didn’t necessarily come on really suddenly. And this is hard, you know, for people who have migraines, who are headache sufferers. You know, it can be hard to differentiate sometimes.
Scott Gilbert – I would think so. This is Ask Us Anything About Hemorrhagic Stroke from Penn State Health. We’re getting some great information today from Dr. David Wilkinson. He’s a Neurological Surgeon at the Milton S. Hershey Medical Center. We welcome your questions, your comments. Just add those to the comment field, and we’ll get to those as quickly as we can here over the course of this interview. When someone is experiencing stroke symptoms, is it better for them to, you know, I would say: drive themselves to the hospital? Probably not. Be driven to the hospital by a loved one, or call 9-1-1? What’s the best course of action?
Dr. Wilkinson – Yeah, so definitely 9-1-1, you know. And that’s because, you know, that’s the fastest way to get them the care that they need. Even if you think, you know, hey, I could drive them to the hospital pretty quickly. You know, the first responders who, you know, who can come with 9-1-1, they can actually, you know, one of the — one of the things we try to do is treat this as quickly as we can and they, you know, if they have a suspected stroke victim, they’ll call ahead and so they’ll have everybody, you know, ready and waiting for them at the, you know, at the hospital. They can start mobilizing people to, you know, to give drugs or provide therapy that’s needed. Stroke is one of these things where every second counts, you know. They say time is brain. And so I would absolutely recommend calling 9-1-1, you know, if you suspect somebody is having a stroke, either a bleeding stroke or a blockage stroke or, as we say, ischemic or hemorrhagic strokes.
Scott Gilbert – And like you say, they can get the team ready, especially if they’re heading to a comprehensive stroke center like the Milton S. Hershey Medical Center, right?
Dr. Wilkinson – That’s exactly right, yup.
Scott Gilbert – What are some of the causes. People are probably wondering, well, how do I avoid having a hemorrhagic stroke? Sounds kind of scary. What are the best ways to reduce your risk factors for one?
Dr. Wilkinson – Yeah, yeah, good question. So the best way, you know, there’s things you can control and things you can’t. So the number one thing you can control, or at least have some control on, is smoking; smoking is a big risk factor. Also, you know, excessive alcohol use can be a risk factor. And then blood pressure control. A lot of the, you know, a lot of the times, people who have these have had blood pressure problems. Other risk factors that we can’t control: age, you know, these are more common as we get older. With aneurysms, women are actually more affected than men, for reasons we don’t completely understand. But the main ones, you know, things you can do something about, the main ones would be smoking and controlling your blood pressure.
Scott Gilbert – And how about arteriovenous malformation? Can you talk about what that is, what’s going on inside the vessels, and how that can lead to one?
Dr. Wilkinson – Yeah. Yeah, great question. So an arteriovenous malformation is sort of a tangle of vessels that are abnormal. And so usually, you know, blood goes from a high-pressure artery, then it goes through capillaries, and then it goes to a vein. In arteriovenous malformation, it sort of skips that capillary phase. And so you have high-pressure blood that’s in vessels that are not normal. And so they’re a little bit thinner than normal. And these we generally think of as congenital, or you’re born with them. And so often, they’re one of the most important causes of hemorrhagic stroke, particularly in young people. Children or, you know, people in their 20s or 30s when they have an intracerebral hemorrhage, it’s often due to an arteriovenous malformation.
Scott Gilbert – Now, when it comes to arteriovenous malformations, aneurysms for that matter, it sounds like they lurk silently in there. Is there any way that they can be spotted and even fixed before they lead to a stroke?
Dr. Wilkinson – Yeah, also a good question. Yeah, so you mentioned arteriovenous malformations, and then aneurysms are a little bit different. So aneurysms, we generally don’t think you’re born with them. And they develop as we age. Again, smoking and blood pressure are things that can be associated with those. And like you said, we usually don’t know they’re there until they rupture. Aneurysms are probably more common than most people think. I think, in general, probably about three percent of people in the general population have an aneurysm, but the — or have a brain aneurysm. But the majority of them, we don’t know about it, and they don’t cause a problem. Increasingly we’re seeing these found incidentally. So someone has, you know, imaging for another reason, and they see the aneurysm. And there are things we can do to treat the aneurysm before it ruptures. And each aneurysm needs to be evaluated to determine whether the risks of treatment outweigh the benefits. As I mentioned, most aneurysms will not rupture, but certainly, some may. And that can be one of the hard things is figuring out which, you know, if one is found, figuring out if it’s at risk for rupturing, if it’s, you know, if the risks of treatment are outweighed by the benefits of treatment.
Scott Gilbert – We welcome your questions for Dr. Wilkinson. Just add them to the comments here below the Facebook chat. And we’ll add those to the list and get those questions answered for you as part of this interview. Now, Dr. Wilkinson, I am — with an ischemic stroke, the treatment goal is to clear the blockage, of course. How are hemorrhagic strokes treated?
Dr. Wilkinson – Yeah. Yeah, great question. So it differs by the type of hemorrhagic stroke. So the first one I’ll talk about is subarachnoid hemorrhage. And so that’s bleeding into the area kind of surrounding — immediately surrounding the brain. Excuse me. And the most common cause except for trauma of that is of an — they’re caused by ruptured aneurysm. And so the main thing we do first to treat that is to secure the aneurysm. That’s either done through the vessel — excuse me — through the vessel. We can sometimes put coils in it. Or by doing open surgery where you open up and take off part of the skull. You can go in and put a clip across the aneurysm. And you know, whoever is treating you will recommend the best one for your particular aneurysm and for that particular patient. Intracerebral hemorrhage, which is bleeding within the brain itself, oftentimes we just need to control the blood pressure and monitor those patients. Most patients with intracerebral hemorrhage don’t need to have surgery. But there are some who do based on the size of the hemorrhage or, you know, if there’s something underlying like an arteriovenous malformation.
Scott Gilbert – You’re watching Ask Us Anything About Hemorrhagic Stroke from Penn State Health. We have a few minutes left with Dr. David Wilkinson. We welcome your questions in the comment [inaudible] here below this Facebook post. So I’m curious about how the brain recovers after a stroke because the brain does have some pretty cool and unusual powers to do so in many cases. For example, if blood flow is restored to a section of the brain that was affected, I understand it can actually resume function, right?
Dr. Wilkinson – It can, yeah, it generally has to be restored pretty quickly and, you know. So that’s why in — particularly in blockage strokes, we really try to restore blood flow as quickly as we can. The brain in hemorrhagic stroke is a little bit different. A lot of people’s dysfunction and problems that they get can be caused by pressure. And so often, there’s things we’ll do like putting in a drain to relieve that pressure. And once that pressure is relieved, the brain can function better. One of the amazing things, particularly in young patients, is that you know, patients can recover. And there can be reprogramming where, you know, a part of the brain takes over a function that used to be performed by a different part of the brain. And you know, and we call that neuroplasticity, which again, is easier when we’re young. It’s a little bit harder in, you know, when patients are older and have strokes. But there are, yeah, there are ways in which the brain can recover. And we see in patients who’ve had a ruptured aneurysm we see their recovery continue usually over the course of at least months, sometimes up to a year, they can continue to progress in their recoveries.
Scott Gilbert – Sounds good. We have a question now from Trishia. [phonetic] I hope I’m saying your name right. “Is there any type of skin rashes or anything like that,” she asks, “that may develop or anything visible on the skin from veins and arteries when someone has hard to control high blood pressure, arteriostatic blood pressure, diabetes, and such that puts them at a higher risk of stroke? If so, what are some remedies?” So talking about skin rashes kind of external evidence of such issues.
Dr. Wilkinson – Yeah, usually you know, not that I’m aware of is there any, you know, particular skin findings where you can, you know, know for sure that you have blood pressure problems. Really the best thing, you know, if you do have things like high blood pressure or hypertension or diabetes, is to have those monitored regularly, you know, by seeing your family doctor and have those things checked.
Scott Gilbert – Thank you for that question. You know, I’m curious, Dr. Wilkinson, what factors determine the long-term effects that a hemorrhagic stroke will have? Because as we know, some people recover almost fully if not completely. Some people don’t survive it.
Dr. Wilkinson – Yeah. So the best predictor of how somebody is going to do long-term is the initial — is how they present initially or how they show up to the hospital. So you know, for people with ruptured aneurysms, one of those sort of rules of thumb is that, you know, unfortunately, a third don’t make it to the hospital. And then a third are, you know, pretty disabled and go on to have, you know, impaired function and then a third, you know, make good recoveries. And you know, oftentimes return to what they were doing before, return to their work and family. But the best predictor is how severely affected somebody is when they initially present to the hospital. It’s not the only predictor. There are other things. The type of bleed and the location of the bleed can affect that also.
Scott Gilbert – Okay, real interesting stuff, very helpful to know. Dr. David Wilkinson, Neurological Surgeon at Hershey Medical Center, thanks for taking the time to talk today. I appreciate it.
Dr. Wilkinson – Great. Thank you. Thanks so much.
Scott Gilbert – And I want to thank everybody who tuned in, those who asked questions here for Ask Us Anything About Hemorrhagic Stroke from Penn State Health.
Dr. Wilkinson – Thanks, Scott.
Brief Introduction: The high-cold regions of China include the Qinghai-Tibetan Plateau and the alpine regions of Gansu, Inner Mongolia and Xinjiang, with a total area of about 2.9 million square kilometers. Because of the complexity of its topography and geomorphology, research worldwide is focusing more and more on the surface processes of the Qinghai-Tibetan Plateau and its adjacent areas. Against this background, the High-cold Region Observation and Research Network for Land Surface Processes & Environment of China (HORN) has gradually taken shape. It integrates 17 stations of the Chinese Academy of Sciences for long-term observation and research on land surface processes in China's high-cold regions, including glaciers, permafrost, lakes and alpine ecosystems. By condensing scientific problems, integrating monitoring resources, improving observation capability, and carrying out long-term continuous monitoring of surface processes and environmental change in cold regions, HORN provides platform support for integrated Earth system research. It also provides data support for revealing the laws of climate change and of water resource formation and transformation in the headwaters of major rivers, exploring changes in ecosystem structure and service functions, understanding the mechanisms of natural disasters such as snow and ice freeze-thaw hazards, and promoting sustainable regional economic and social development. A network integration center has been set up to organize research and carry out the specific implementation of network construction; it consists of an office, an observation technology service group and a data integration management group. The participating units of HORN sign construction/research contracts so as to implement contract-based management, perform all tasks in the contracts and accept examination and acceptance by the network organization. Network construction gives priority to scientific research, coordinated development, a relatively balanced allocation of infrastructure and observation instruments, and free sharing of data within the network. Under the principle of sharing and openness, the observatories of the network are open to the whole country; the network cooperates with relevant units through consultation, agreements or contracts according to specific tasks and costs, and the original observation data are gradually shared based on the principle of first the network, then the department, then society. The network also carries out planned and coordinated cooperation with foreign research institutions and universities, which improves the level of observation and expands its scope. HORN is managed by the Chinese Academy of Sciences in the allocation of funds and resources.
Number of Datasets: 107
"China's surface climate data daily value data set (V3.0)" contains 699 benchmarks and basic weather stations in China. Since January 1951, the station's air pressure, temperature, precipitation, evaporation, relative humidity, wind direction and wind speed, and sunshine hours. The number and the daily value data of the 0cm geothermal element. After the quality control of the data, the quality and integrity of each factor data from 1951 to 2010 is significantly improved compared with the similar data products released in the past. The actual rate of each factor data is generally above 99%, and the accuracy of the data is close. 100%. China Earth International Exchange Station Climate Data Daily Value Dataset (V3.0), mainly based on the ground-based meteorological data construction project archived "1951-2010 China National Ground Station data corrected monthly report data file (A0/A1/ A) The basic data set was developed. This data can provide a variety of basic drive data for other scientific research.
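The completeness figures quoted above (the "actual rate" of each element) can be re-checked for any station file with a short script. The sketch below is a minimal Python example; the file name, column names and missing-value codes are illustrative assumptions rather than the official layout of the dataset.

```python
import pandas as pd

# Hypothetical daily station file: one row per station-day, one column per element.
# Column names, file path and missing-value codes are illustrative assumptions.
ELEMENTS = ["pressure", "temperature", "precipitation", "evaporation",
            "relative_humidity", "wind_speed", "sunshine_hours", "ground_temp_0cm"]

def element_availability(path: str) -> pd.Series:
    """Return the fraction of non-missing daily values per element (the 'actual rate')."""
    df = pd.read_csv(path, parse_dates=["date"])
    # Treat assumed missing codes as NaN before counting valid records.
    df[ELEMENTS] = df[ELEMENTS].replace([32766, 32700, -9999], float("nan"))
    return df[ELEMENTS].notna().mean().sort_values(ascending=False)

if __name__ == "__main__":
    rates = element_availability("station_52866_daily_1951_2010.csv")  # hypothetical file
    print((rates * 100).round(2))
```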
This data set includes the daily averages of air temperature, pressure, relative humidity, wind speed, precipitation, global radiation, PM2.5 concentration and other meteorological elements observed at the Qomolangma Station for Atmospheric and Environmental Observation and Research from 2005 to 2016. The data are intended to serve students and researchers engaged in meteorological research on the Tibetan Plateau. Precipitation is observed with a manually read rain gauge and evaporation with a Φ20 evaporation pan; all other elements are daily averages and ten-day means derived from half-hourly observations. All data are observed and collected in strict accordance with the equipment operating specifications, and obviously erroneous records are eliminated when the products are generated.
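A rough illustration of how half-hourly records can be reduced to the daily and ten-day products mentioned above is given below; the CSV layout, column names and coverage threshold are assumptions for illustration, not the station's actual processing chain.

```python
import pandas as pd

def to_daily_and_dekad(path: str, var: str = "air_temperature"):
    """Aggregate half-hourly records to daily means and ten-day (dekad) means."""
    df = pd.read_csv(path, parse_dates=["time"]).set_index("time").sort_index()

    # Daily mean, kept only when the day has reasonable coverage (>= 40 of 48 half-hours).
    counts = df[var].resample("D").count()
    daily = df[var].resample("D").mean().where(counts >= 40)

    # Dekads: days 1-10, 11-20, and 21 to month end.
    dekad_label = pd.cut(daily.index.day, bins=[0, 10, 20, 31],
                         labels=["01-10", "11-20", "21-end"])
    dekad = daily.groupby([daily.index.year, daily.index.month, dekad_label]).mean()
    dekad.index.names = ["year", "month", "dekad"]
    return daily, dekad
```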
This data set contains the basic meteorological data (air temperature, relative humidity, wind speed, precipitation, air pressure, radiation, and soil temperature and moisture) observed from 2019 to 2020 at the observation site (86.56°E, 28.21°N, 4276 m) of the Qomolangma Station for Atmospheric and Environmental Observation and Research, Chinese Academy of Sciences. Precipitation is given as the daily cumulative value. All data are observed and collected in strict accordance with the instrument operating specifications, and obviously erroneous records are eliminated during processing. The data can be used by students and researchers engaged in meteorology, atmospheric environment or ecology. (Note: any publication using these data must state that they come from the Qomolangma Station for Atmospheric and Environmental Observation and Research, Chinese Academy of Sciences (QOMS/CAS).)
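The description implies different daily aggregation rules per element (precipitation accumulated, the other elements averaged). A minimal sketch of that convention follows, with illustrative column names that are not the official variable names.

```python
import pandas as pd

# Aggregation rules implied by the description: precipitation is accumulated,
# the other elements are averaged. Column names are illustrative assumptions.
AGG = {
    "precipitation": "sum",
    "air_temperature": "mean",
    "relative_humidity": "mean",
    "wind_speed": "mean",
    "air_pressure": "mean",
    "radiation": "mean",
}

def daily_products(df: pd.DataFrame) -> pd.DataFrame:
    """Build daily products from sub-daily records indexed by timestamp."""
    rules = {col: how for col, how in AGG.items() if col in df.columns}
    return df.resample("D").agg(rules)
```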
This data set records the meteorological data from the observation field of the Ngari Station for Desert Environment Observation and Research (33°23.42′N, 79°42.18′E, 4270 m a.s.l.) from 2019 to 2020, with a time resolution of one day. It includes the following basic parameters: air temperature (℃), relative humidity (%), wind speed (m/s), wind direction (°), air pressure (hPa), precipitation (mm), water vapor pressure (kPa), downward shortwave radiation (W/m^2), upward shortwave radiation (W/m^2), downward longwave radiation (W/m^2), upward longwave radiation (W/m^2), net radiation (W/m^2), surface albedo (%), soil temperature (℃), and soil water content (%). Sensor models: air temperature and humidity, HMP45C; precipitation, T200-B; wind speed and direction, Vaisala 05013; net radiation, Kipp & Zonen NR01; air pressure, Vaisala PTB210; soil temperature, 109 temperature probe; soil water content, CS616. Data logger: CR1000. The time resolution of the original data is 30 min. The data can be used by researchers engaged in meteorology, atmospheric environment or ecology.
(1) Data content: meteorological observations (temperature, precipitation, wind direction and speed, relative humidity, air pressure, radiation and flux) for 2020 from 19 stations of the alpine network on the Qinghai-Tibet Plateau (Southeast Tibet Station, Namuco Station, Everest Station, Muztagh Ata Station, Ali Station, Golmud Station, Tianshan Station, Qilian Mountain Station, Ruoergai Station (two sites, of the Northwest Institute and the Chengdu Institute of Biology), Yulong Snow Mountain Station, Naqu Station (several sites, of the Qinghai-Tibet Institute, the Northwest Institute and the Institute of Geography), Haibei Station, Sanjiangyuan Station, Shenzha Station, Lhasa Station and Qinghai Lake Station). (2) Data source and processing method: field observations from the 19 stations of the alpine network, in Excel format. (3) Data quality description: daily resolution at each station. (4) Data application achievements and prospects: based on the long-term observation data of the alpine-network field stations and of overseas stations in the pan-third-pole region, a series of data sets of meteorological, hydrological and ecological elements in the pan-third-pole region are being established; inversion of data products such as meteorological elements, lake water quantity and quality, aboveground vegetation biomass, and glacier and frozen-soil change is being completed through intensive observation in key areas and verification at sample plots and points; and a multi-station networked meteorological, hydrological and ecological data management platform based on Internet of Things technology is being developed to realize real-time acquisition, remote control and sharing of networked data. This data set is an update of the meteorological data of the surface environment and observation network in China's high and cold regions (2019).
Location: Kaima Village (Village 4), Luoma Town, Seni District, Naqu City, Tibet Autonomous Region. Coordinates: 92°6′19″E, 31°16′35″N. Underlying surface type: alpine meadow next to a small hamlet and the Naqu River. Data elements: upward shortwave radiation, downward shortwave radiation, upward longwave radiation, downward longwave radiation, net radiation sensor temperature, shortwave net radiation, longwave net radiation, albedo, air temperature, relative humidity, soil heat flux, soil temperature (0 cm, 10 cm, 20 cm, 30 cm, 100 cm, 150 cm, 200 cm and 250 cm), soil volumetric water content, atmospheric pressure, photosynthetically active radiation, wind speed, wind direction, solar radiation and net radiation. Data source: Naqu automatic weather station; raw, unprocessed data. Data quality description: the data are authentic, complete and accurate. Data application achievements and prospects: the data provide raw and basic meteorological data for researchers and for various scientific experiments.
This data set mainly contains air temperature data from the meteorological station set up in April 2014 by the Southeast Tibet Station of the Chinese Academy of Sciences, located in Ari Village, Ranwu Town, Basu County, Changdu City, by the lakeside of Ranwu Lake, at 96.7699°E, 29.4364°N and 3920 m. The instrument probe is an HMP155A mounted 2 m above the surface, and the underlying surface is alpine meadow. Some original data are missing; gaps were filled by correction and interpolation using the flux station also located in the area, the nearby Sidaoban meteorological station, and the Ranwu station of the Meteorological Bureau. This is a rarely shared data set for the region and can serve as background data for studies of regional climate, rivers, lakes, glaciers and ecology. When using the data, the article should acknowledge the Southeast Tibet Station of the Chinese Academy of Sciences; for higher-precision data, contact the data author.
As the "water tower" in Asia, the Qinghai Tibet Plateau provides water resources for major rivers in Asia. BC aerosol emitted from biomass and fossil fuel combustion has a strong absorption effect on radiation, which has an important impact on the energy budget and distribution of the earth system. It is an important factor of climate and environmental change. Black carbon aerosols emitted from the surrounding areas of the Qinghai Tibet Plateau can be transported to the interior of the plateau through the atmospheric circulation and settle on the snow and ice surface, which has an important impact on precipitation and glacier material balance. Black carbon meters are set up at five stations on the Qinghai Tibet Plateau, and aethalometer is used to measure the content of Atmospheric Black Carbon online. The data time resolution is day by day, which provides a data basis for assessing the impact of black carbon on the climate and environment of the Qinghai Tibet Plateau and the cross-border transmission of air pollutants. This data is an update of the previously released observation data of five stations of atmospheric black carbon content on the Qinghai Tibet Plateau (2018) and the observation data of five stations of atmospheric black carbon content on the Qinghai Tibet Plateau (2019). The information of the five sites is as follows: Namuco: 30 ° 46'N, 90 ° 59'e, 4730 m a.s.l Everest station: 28.21 ° n, 86.56 ° e, 4276 m a.s.l Southeastern Tibet: 29 ° 46'N, 94 ° 44'e, 3230 m a.s.l Ali station: 33.39 ° n, 79.70 ° e, 4270 m a.s.l Mustard: 38 ° 24'n, 75 ° 02'e, 3650 m a.s.l
This data set includes the PM2.5 mass concentration of atmospheric aerosol particles (unit: μg/m3) at the Southeast Tibet, Ali, Muztagh Ata, Everest and Namco stations. PM2.5 refers to fine aerosol particles with an aerodynamic equivalent diameter of 2.5 microns or less in the ambient air. Such particles can remain suspended in the air for long periods and have an important impact on air quality and visibility; the higher their concentration, the more serious the air pollution. The PM2.5 concentration data are output at a frequency of one record every 5 minutes, which allows analysis of aerosol mass concentration at different time scales (hourly, day and night, seasonal and interannual) and at different locations on the Qinghai-Tibet Plateau, as well as assessment of local air quality. This data set is an update of the published PM2.5 concentration data for different stations on the Qinghai-Tibet Plateau (2018 and 2019).
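As an illustration of how a 5-minute series like this might be aggregated to the time scales mentioned above, the following is a minimal pandas sketch; the file name, column name and day/night hours are hypothetical placeholders, not part of the published data set.

```python
import pandas as pd

# Hypothetical 5-minute PM2.5 series; the file and column names are placeholders.
pm = pd.read_csv("station_pm25_5min.csv", parse_dates=["time"], index_col="time")["pm25_ugm3"]

hourly = pm.resample("h").mean()    # hourly means
daily = pm.resample("D").mean()     # daily means
monthly = pm.resample("MS").mean()  # monthly means, labelled by month start

# Simple day/night split (08:00-20:00 local time taken as "day" for illustration only).
is_day = (pm.index.hour >= 8) & (pm.index.hour < 20)
daytime_daily = pm[is_day].resample("D").mean()
nighttime_daily = pm[~is_day].resample("D").mean()
```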
(1) Data content: daily water level change data of Nam Co in 2019. The coordinates of the observation point are 90.96°E, 30.77°N, 4730 m above sea level, and the underlying surface is alpine grassland. (2) Data source and processing method: measured by manually reading a water level gauge; the original observations were processed and quality-controlled by a designated person according to the observation records. (3) Data quality description: because the data are obtained by manual reading of the gauge, they are strongly affected by the harsh environment, and records are missing or discontinuous in some periods. (4) Data application prospects: the data can be applied in research fields such as lake hydrology and hydrological processes in high and cold regions.
(1) Data content: daily values of air temperature (℃), precipitation (mm), relative humidity (%), wind speed (m/s) and radiation (W/m2). (2) Data source and processing method: air temperature, relative humidity, radiation and wind speed are daily mean values, and precipitation is the daily cumulative value. Data collection location: near the forest line on the east slope of Sejila Mountain, 29°39′25.2″N, 94°42′25.62″E, 4390 m; the underlying surface is natural grassland. The data logger is a Campbell CR1000 with a 10-minute acquisition interval and digital automatic data acquisition. The temperature and relative humidity probe is an HMP155A, the wind speed sensor is an 05103, the precipitation gauge is a TE525MM, and the radiation sensor is a LI200X. (3) Data quality description: the original air temperature, relative humidity and wind speed records are 10-minute averages, and precipitation is a 10-minute cumulative value; the daily values are obtained by arithmetic averaging or summation. Because of sensor limitations, winter precipitation may have a certain error. (4) Data application achievements and prospects: this data set is an update of the existing data "Sejila Mountain meteorological data (2007-2017)" and "Basic meteorological data of the Sejila east slope forest line of the Southeast Tibet Station of the Chinese Academy of Sciences (2018)". The long time span of the data makes them useful to scientists and graduate students in atmospheric physics, ecology and atmospheric environment. The data will be updated from time to time each year.
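As a rough sketch of the aggregation described in (3) above (10-minute records reduced to daily means for most elements and daily sums for precipitation), the following pandas example may help; the file name and column names are hypothetical placeholders, not the actual layout of the data set.

```python
import pandas as pd

# Hypothetical 10-minute records; file and column names are placeholders.
df = pd.read_csv("sejila_10min.csv", parse_dates=["timestamp"], index_col="timestamp")

# Daily arithmetic means for state variables, daily sum for precipitation,
# mirroring the processing described in the data set entry.
daily = pd.DataFrame({
    "air_temp_c": df["air_temp_c"].resample("D").mean(),
    "rel_humidity_pct": df["rel_humidity_pct"].resample("D").mean(),
    "wind_speed_ms": df["wind_speed_ms"].resample("D").mean(),
    "radiation_wm2": df["radiation_wm2"].resample("D").mean(),
    "precip_mm": df["precip_mm"].resample("D").sum(),
})
daily.to_csv("sejila_daily.csv")
```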
This data set was obtained from the automatic weather station of the Nam Co Station for Multisphere Observation and Research, Chinese Academy of Sciences. The geographical coordinates are 30.77°N, 90.96°E, 4730 m above sea level, and the underlying surface is alpine grassland. The data set elements include soil temperature, soil water content and electrical conductivity, measured at three depths (0, 10 and 20 cm). The time range is February 2019 to December 2020. Data quality: the data have passed noise filtering and graphical inspection, are stored in an Excel file, and are stable and continuous over the monitoring period. The data have broad application prospects and can serve graduate students and scientists with backgrounds in climatology, physical geography and ecology.
This data set was obtained from the automatic weather station of the Nam Co Station for Multisphere Observation and Research, Chinese Academy of Sciences. The geographical coordinates are 30.77°N, 90.96°E, at an altitude of 4730 m, and the precipitation data have been corrected. The data set elements include temperature, precipitation, relative humidity, wind speed, total radiation and air pressure. The time range is from January 1, 2019 to December 29, 2020, and the data are stable and continuous over the monitoring period. Analysis of these meteorological data is important for understanding local climate change in the region. The data have wide application prospects and can serve graduate students and scientists with backgrounds in atmospheric science, hydrology, climatology, physical geography and ecology.
This meteorological data set contains the basic meteorological elements (air temperature, relative humidity, wind speed, precipitation and air pressure) observed in the observation field of the Southeast Tibet Station of the Chinese Academy of Sciences (94.738286°E, 29.76562°N, 3326 m); the underlying surface is forest grassland. The time resolution of the original data is 10 min; air temperature, relative humidity, wind speed and air pressure are daily arithmetic means, and precipitation is the daily cumulative value. The meteorological station was set up at the end of 2006, and the probes were replaced in August 2020. Note the instrument probe models before and after the update: the temperature and humidity probe changed from HMP45C to HMP155, the air pressure probe from PTB220 to PTB110, and the wind speed sensor from 034B to 0513; the rain gauge sensor is an RG13H. The data can be used by students and researchers engaged in meteorology, atmospheric environment or ecology. (Note: when using the data, the article must indicate that they come from the South-East Tibetan Plateau Station for Integrated Observation and Research of Alpine Environment, CAS.)
(1) Data content: the 2018 Qinghai-Tibet Plateau meteorological observation data set (temperature, precipitation, wind direction and speed, relative humidity, air pressure, radiation and evaporation) from 21 stations (Southeast Tibet Station, Namucuo Station, Zhufeng Station, Muztagh Ata Station, Ali Station, Naqu Station, Shuanghu Station, Geermu Station, Tianshan Station, Qilianshan Station, Ruoergai Station (Northwest Institute), Yulong Xueshan Station, Naqu Station (Hanhansuo), Haibei Station, Sanjiangyuan Station, Shenzha Station, Gonggashan Station, Ruoergai Station (Chengdu Institute of Biology), Naqu Station (Institute of Geography), Lhasa Station and Qinghai Lake Station). (2) Data source and processing method: field observations from the 21 stations, in Excel format. (3) Data quality description: daily resolution at each station. (4) Data application results and prospects: based on long-term observation data from the alpine-network stations and overseas stations in the pan-third-pole region, a series of data sets of meteorological, hydrological and ecological elements in the pan-third-pole region were established; enhanced observation and verification at sample sites and points support the inversion of data products such as meteorological elements, lake water quantity and quality, aboveground vegetation biomass, and glacier and frozen-soil change; and a multi-station networked meteorological, hydrological and ecological data management platform, based on Internet of Things technology, is being developed for real-time acquisition, remote control and sharing of networked data.
Based on the long-term observation data of the field stations in the alpine network and of overseas stations in the pan-third-pole region, a series of data sets of meteorological, hydrological and ecological elements in the pan-third-pole region are being established. Inversion of data products such as meteorological elements, lake water quantity and quality, aboveground vegetation biomass, and glacier and frozen-soil change is completed through enhanced observation and sample-site verification in key regions, and a multi-station networked meteorological, hydrological and ecological data management platform based on Internet of Things technology is being developed to achieve real-time access to, remote control of, and sharing of networked data. The 2018 hydrological data set of the surface process and environmental observation network in China's alpine region mainly collects the daily measured hydrological data (runoff, water level, water temperature, etc.) of seven stations: Qilianshan, Southeast Tibet, Zhufeng, Yulong Xueshan, Namucuo, Ali and Muztagh Ata.
The Tibetan Plateau has an average altitude of over 4000 m and is the region with the highest altitude and the largest snow cover in the middle and low latitudes of the Northern Hemisphere. Snow cover is the most important seasonally varying land surface on the Tibetan Plateau and an important component of the ecological environment. Ice and snow meltwater is an important water resource of the plateau and its downstream areas. At the same time, plateau snow, as an important land-surface forcing factor, is closely related to disastrous weather in East Asia (such as droughts and floods), the South Asian monsoon, and conditions in the middle and lower reaches of the Yangtze River. It is an important indicator for short-term climate prediction and one of the most sensitive responses to global climate change. Snow depth refers to the vertical depth from the snow surface to the ground. It is an important parameter of snow characteristics, one of the conventional meteorological observation elements, and a key parameter for estimating snow water equivalent, studying the climate effects of snow cover, computing basin water balance, simulating and monitoring snowmelt, and evaluating and grading snow disasters. In this data set, the Tibetan Plateau boundary was determined by taking natural topography as the leading factor and by comprehensive consideration of altitude and the integrity of the plateau and its mountains. The main part of the plateau lies in the Tibet Autonomous Region and Qinghai Province, with an area of 2.572 million square kilometers, accounting for 26.8% of the total land area of China. The snow depth data are monthly maximum snow depths derived, after quality detection and quality control, from daily station observations. There are 102 meteorological stations in the study area, most of which were built during the 1950s to 1970s. Data for some months or years are missing for stations existing during that period, and the complete observational records from 1961 to 2013 were adopted. The underlying observations are daily, the spatial coverage is the Tibetan Plateau, and all data were quality controlled. Accurate and detailed plateau snow depth data are of great significance for diagnosing climate change, studying the evolution of the Asian monsoon, and managing regional snowmelt water resources.
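As a small illustration of how monthly maximum snow depth can be derived from daily station records like those described above, here is a minimal pandas sketch; the file name, column name and quality-control threshold are hypothetical placeholders, not part of the actual data set.

```python
import pandas as pd

# Hypothetical daily snow-depth series for one station; names are placeholders.
sd = pd.read_csv("station_snow_depth_daily.csv",
                 parse_dates=["date"], index_col="date")["snow_depth_cm"]

# A crude range check standing in for the quality control mentioned above;
# the 0-500 cm bounds are illustrative only.
sd = sd.where((sd >= 0) & (sd <= 500))

# Monthly maximum snow depth, labelled by month start; missing days are ignored.
monthly_max = sd.resample("MS").max()
```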
The aerosol optical depth data were collected daily at the Qomolangma Station for Atmospheric and Environmental Observation and Research with an automatic sun/sky scanning radiometer (Cimel 318) over the period from January to December 2020. The data were measured at the 340, 380, 440, 500, 675, 870 and 1020 nm channels, with an uncertainty of 0.01-0.02.
(1) Data content: daily average atmospheric black carbon concentration (ng/m3) at the NASDE. (2) Instruments: Aethalometer (AE33), which collects data at one-minute resolution. Abnormal data collected during start-up or instrument faults were manually excluded before further analysis, and daily averages were generated following the National Ambient Air Quality Standard of China (GB 3095-2012). (3) From May to November 2018, a wildlife conservation station was constructed nearby, which frequently released large amounts of particles, so the BC concentrations were far higher than those collected in the same season of other years; data from this period should be used with great caution. Data were lost in some other periods because of instrument or power supply problems. (4) The instrument was located at the Ngari Station for Desert Environment Observation and Research (79.70°E, 33.39°N, 4270 m above sea level).
This data set was measured with a YSI EXO2 multi-parameter water quality sonde on the bank of the middle lake of Ranwu Lake from April to November each year from 2014 to 2020. The sampling interval is 0.25-1 s, and the recorded values are averages taken after the instrument stabilized. The sampling location is longitude 96.795296, latitude 29.459066, altitude 3925 m. The measured parameters are water temperature, conductivity, dissolved oxygen and turbidity; the specific units are indicated with the data. Where obvious outliers were removed, the corresponding records are left empty, so please take note when using the data. The data will be updated from time to time and can be used by researchers studying water chemistry, lake microorganisms, or the physical and chemical properties of lakes in the Ranwu Lake basin.
A Superlative Penguin
The Antipodes Islands are in the middle of nowhere. More precisely, they are three and a half days of vomiting southeast of my home in Dunedin, New Zealand. I am the sort of person who gets motion sickness on escalators, and as I lay strapped into my bunk on the Breaksea Girl, I asked myself over and over, “Why am I doing this?”
The answer was penguins. Of the world’s sixteen species, all have been studied in detail except one. I was after the last, the erect-crested penguin. This penguin owes its anonymity more to its location than to any lack of cuteness or scientific interest. Erect-crested penguins breed on the Antipodes and on a similarly isolated group of islands nearly 200 miles to the north, the Bounty Islands. Both are home to little more than seabirds, seals, and shipwrecks.
Erect-crested penguins are, quite simply, the most striking of penguins. Upright parallel combs of blonde feathers sit incongruously above their eyes, like Marilyn Monroe’s eyebrows on steroids, lending the penguins a feminine beauty. But what drew me most to these birds was that they were rumored to exhibit an extremely bizarre behavior. There had been only two prior attempts to study these penguins scientifically—one conducted about thirty years ago, late in the breeding season, and a more recent one lasting a mere five days, during the period of egg laying. The authors of this last study asserted, remarkably, that these penguins, which lay two eggs, deliberately eject the first egg from the nest soon after it is laid.
It was another twist to one of the stranger stories in the animal kingdom. The erect-crested penguin is one of five species known as crested penguins. All five lay two eggs but rear only one chick. Furthermore, in contrast to all other birds, they lay a second egg that is larger than the first, and it is the chick from the second egg that is most likely to survive. Biologists have long sought the answers to two questions: Why produce two eggs if only one chick can survive? Why is the second egg larger? I was, I told myself with each roll of the boat, in search of answers to such questions as much as I was after the “last” penguin.
Late during the fourth night I heard the anchor being let out, and mercifully, the wild pitching eased. I went up on deck to get my first glimpse of the place my two companions and I would be calling home for the next two months. In the morning gloom, I peered straight out at the 600-foot-high cliff aptly named Perpendicular Head. At its base, huge waves crashed relentlessly. I had never seen a place less likely to offer sanctuary, less likely to be called home.
Fortunately, not all the cliffs that surrounded the island were as high as this, but unfortunately, in the small cove that offered the only reasonable landing site, the waves were as fierce as those that pummeled Perpendicular Head. Despite the apparent proximity of the penguins, which I could make out as groups of dots at the base of the cliffs, at that instant they seemed very far away.
We had no alternative but to choose a much less desirable landing site, at Stella Bay. To call our approach a landing is really to glorify it; it was much more like a controlled crash. Wearing wet suits, Martin Renner (a former student of mine), Dave Houston (a biologist from New Zealand’s Department of Conservation), and I jumped from a dinghy into the freezing water. A wave immediately slammed us into the rocks, cutting open my knee. We grasped kelp to anchor us, so as not to be taken out in the backwash, and clambered over its slimy fronds to the jumble of boulders that constituted the beach. We then had to wade back in and retrieve repeated dinghy-loads of gear that were tossed to us between one wave and the next: packs of clothes, boxes of food, drums of fuel, generators, scientific equipment, and wood for mending a hut constructed on the island about a century earlier to shelter castaways. By the time we had finished, we were all cut and bruised. After tossing us a box of sandwiches, the captain of the Breaksea Girl waved good-bye and steamed off to the northwest, leaving the three of us—the entire human population of the Antipodes Islands—in an enveloping drizzle.
Next we had to get our gear up a 70-foot-high cliff; carry it through waist-high tussock grass, which is just about impossible to walk through without falling over every few steps; and, finally, wade through a bog. It took us two full days of backbreaking work to lug all our supplies up to the hut.
While completing this task, we were able to make our first casual observations of the penguins. Stella Bay is home to a colony of about 300 breeding pairs. At this stage, however, virtually all those present were single males, which typically arrive at the colony a week or more before the females. Why they should do so is not at all clear; it’s not as if they have a swag of boxes and generators to hoist up to their nest sites before getting down to the business of courtship. The classic explanation is that they come early to secure a site before the females get there, but other penguins manage this without the need for the males to arrive so far ahead of the females. It is said that the males use this period to fight like gladiators for nest space, with the victors presumably getting the choicest sites. But we saw little evidence of this; there was hardly any fighting at all.
It would be wrong to assume from this that male erect-crested penguins are not territorial. They will defend their chosen site if they have to, even at their own peril. The biggest fight we observed was between a male fur seal and a male penguin. The fur seal lunged at the penguin on its nest, shook it vigorously, and tossed it about fifteen feet away. With a deep wound to its chest, the penguin made the mistake of returning to attack the big hairy intruder. This time the seal reached down, grabbed the penguin, and shook it so violently that its head came away from its body. To our surprise, the seal—supposedly a fish eater—set about devouring its foe. A quick check of the colony revealed five similar carcasses. This had been no crime of passion but the premeditated act of a serial killer. We envisaged this rogue seal slowly eating his way through the entire Stella Bay colony as the breeding season progressed. Fortunately for the penguins, two days later a group of elephant seals arrived, banishing the much smaller fur seals to the nether regions of the beach, and the carnage stopped.
We chose to study the penguins breeding in Anchorage Bay, in the lee of Perpendicular Head. Our main study colony was situated atop a rock stack, and from a knoll above it we were afforded an unimpeded view of all the nests—the perfect place to make observations. But first we had to measure and mark the penguins so that they could be recognized individually. Penguins really do all look alike.
The birds proved to be remarkably unperturbed by our presence. We set up shop near the colony and began a processing operation. First I would catch a penguin, using a fishnet like those used to land trout. It was a relatively easy task to approach a penguin quietly and let the net fall gently over it, pulling the bird toward me as I did so. I had to be fairly deft with the next bit: grabbing the penguin around the ankles with my left hand and then quickly grasping the back of its neck with my right. These penguins have sturdier bills than most others. The top mandible ends in a vicious hook that is used to grasp fish but that is quite capable of ripping open your arm or any other part of your anatomy within striking distance. I would carry the penguin to Martin and Dave, who would weigh it; place a numbered stainless-steel band on the right flipper; and measure foot, flippers, bill, and crest. I then took a small blood sample from a flipper, and Martin photographed the crest. Finally, I painted both a letter and a number on its back in white enamel so the individual could be recognized from a distance. This was the key to our behavioral study, because it meant that we would never need to handle the penguin again. Each bird could be completely processed in this way in less than five minutes. In all, we marked 271 individuals before beginning to observe the colony continuously throughout the daylight hours.
The first thing we noticed was that not a lot happened. Even after all the females had arrived and most males were paired up, these guys were positively lethargic compared with other penguins I had studied. The most riveting thing to happen during the entire courtship period occurred on the shoreline in front of us.
A big bull elephant seal had been lying there sleeping when another cruised up like a submarine, inflating its huge proboscis and blowing bad breath in a deep growl. Our erstwhile beach companion raised its head and inflated its own nose. The seal in the water caterpillared up the stones, and the two giants faced each other, their bodies bent at right angles, their noses quivering like jelly-filled socks, their mouths wide open. For a while it seemed that they were going to do battle with their breath—the smell must have been lethal at such a point-blank range.
The intruder flung its head at our resident and bit the side of his body with its huge canine teeth. Our man was no slouch in this department either, and he struck back with a vicious blow to the intruder’s back, tearing two parallel, foot-long cuts in its blubber. They continued to trade blows, thumping their chests together and biting each other’s body. It seemed an evenly matched contest—until the resident received three unanswered strikes to his right side. Perceptibly, he changed. The intruder leaned more into him; he arched back further. Inch by inch, the intruder shuffled the resident out to sea. In the surf, our guy put up one last stand: bloody open mouths were held close together, and then a final lunge, a final bite, and it was all over.
Now that was competition. The mating game we were witnessing in the penguin colony was gentle and benign by comparison. I was used to observing the mating behavior of Adélie penguins in Antarctica, where the courtship period is a frenzy of fighting and fornicating. The erect-crested penguins, in contrast, just did not seem to have their hearts in it. They rarely fought, and whereas Adélie pairs copulate every three hours or so, erect-crested partners consummated their relationship only once every thirty hours. The blood samples we had taken revealed that the males had relatively low levels of testosterone, which might have explained their lack of both aggression and libido. But the females, too, were out of sorts. While female Adélie penguins will copulate within minutes of arriving at the colony and pairing up, female erect-crested penguins were likely to reject a male’s initial advances. These penguins seemed to arrive at the colony only half ready to reproduce.
We settled into a routine of observation stints to watch this protracted, if tame, courtship ritual for clues to the penguins’ behavior. After the females had been at the colony for about two weeks, the first eggs were laid. While this signaled an exciting change for us, the penguins were much more blasé: to our surprise, neither mothers nor fathers were inclined to do much about it. Some attempted to incubate halfheartedly, but many simply stood beside the egg and ignored it. Another surprise was that the vast majority of the erect-crested penguins made absolutely no attempt to construct a nest. Other penguins collect stones or grasses to line their nests (except king and emperor penguins, which incubate their single egg on their feet), but these birds were content simply to drop their eggs on bare ground or even on the tops of large rocks, often on steep slopes. Without a decent nest, any knock or bump was likely to send an egg rolling away.
In an experiment we conducted in a nearby colony, we created supernests, surrounding some rudimentary nests with large stones so that the first eggs could not roll away. But these survived no longer than those in control nests or in the main study colony. Although the first eggs remained within the vicinity of the nests, they were neglected: some rolled against the rocks, and many broke, probably after being trod on or pecked at.
We found that about four of every ten first eggs are lost before the second is even laid, but the arrival of the second egg seals the fate of the first. I suspect this is largely for mechanical reasons. The first is not much bigger than a chicken’s egg from a supermarket; the second is nearly twice that size. And while first eggs are pale green, second eggs are white. During the five or six days between the laying of the two eggs, the first one—if it survives that long—gets quite dirty as well. A female that has just laid her second egg responds more strongly to the stimulus of the large, bright white egg and will push it into her brood patch, a feather-free area of vascularized skin on her tummy. She will then attempt to draw in the small first egg, but it is like trying to sit on a football and a tennis ball at the same time—an awkward proposition exacerbated by the eggs’ aspherical shape, which makes them prone to rolling unpredictably. Females seem to find it difficult to get comfortable. They stand up repeatedly, turning around in the nest and trying to adjust the eggs.
Almost inevitably, the smaller of the eggs, being less snug against the female’s body, will be dislodged and will roll away. At least another four of every ten first eggs are lost this way on the very day the second egg is laid. And the longest we observed any first egg to survive was six days after the second appeared. But this was not deliberate rejection, as claimed by earlier researchers. Females tested at a colony about half a mile away readily tried to retrieve and incubate a first egg that had rolled away if we replaced it within a few inches of the nest. The combination of parental neglect, differences in egg size, and poor nests seems to conspire against the prospects of the first eggs.
But simply knowing how first eggs are lost does not explain why the penguins persist in laying two eggs and why the second one gets all the attention. Some have suggested that crested penguins lay two eggs because the first is an insurance policy in case the second, larger one is lost. But for erect-crested penguins, at least, this scenario seems ludicrous: more than 80 percent of these so-called insurance policies are lost before or on the day the second egg is laid, and none of the remainder last for more than a week.
Often when one tries to decipher why animals do what they do, a good place to start is with food. Penguins can be divided into two broad groups: those that feed inshore and those that feed offshore. Crested penguins are of the latter kind, swimming just about as far as their flippers can take them to find food and rush it back to the chick. The costs of finding and transporting food over such distances make it unlikely that these penguins could ever bring back enough food to feed two offspring. So why bother laying two eggs? DNA evidence suggests that the ancestors of crested penguins laid two eggs. However, given the circumstances crested penguins face, surely it would be to the females’ advantage to reduce their clutch size by simply stopping their laying after the first egg.
For whatever reason, erect-crested penguins have a long courtship period of two weeks or more. That means that males and females are ashore, and unable to feed, for an extended period of time. While penguins are quite capable of dieting for phenomenally long periods, when it comes to producing the energy and nutrients needed for egg laying, fasting females must convert their reserves of fat and protein, since they can’t use nutrients derived directly from food. The little work done on this suggests that erect-crested and other crested penguins depend more upon converting their reserves for manufacturing eggs than other penguins do.
Conversion of fats and proteins for egg formation, like most of reproduction, is a hormone- mediated process. Hormones are like chemical postcards that the brain sends around the body to tell it what to do, and it seems crested penguins arrive at the colony with comparatively few of these missives getting delivered to their reproductive system. The synthesis of hormones can be influenced by external events, such as calls made by other penguins or the physical presence of eggs. Indeed, the social stimulation derived from the calling and courting of neighbors in the colony has been shown to hasten the development of eggs in crested penguins.
Our results seemed to confirm the benefits of breeding in a crowd. The size of both first and second eggs tended to increase as the colony filled up and became more boisterous. Very early breeding pairs tended to lose their first egg immediately, suggesting that the adults were not ready to care for it properly. The brood patches of both female and male crested penguins take several days to become fully vascularized and suitable for incubation; work by my students on yellow-eyed penguins, the crested penguins’ closest living relatives, has shown that the presence of an egg stimulates the development of the brood patch.
As I sat atop the knoll, looking down at pairs of penguins hunkered down on and protecting second eggs, while all about lay the abandoned and broken shells of first eggs, it occurred to me that we were asking the wrong questions about their strange breeding behavior. The real question is not why they have two eggs but why they favor the second egg. Had the first egg been at least as likely to produce offspring as the second, it should have been a relatively simple matter to stop there and reduce the clutch to a single egg. But, of course, to take advantage of the better prospects of the second egg meant having a first one, too, no matter how superfluous that might be.
Could it be that the first is really just a primer for the birds’ reproductive system? Had natural selection tilted the balance in favor of the second egg because females were then better able to mobilize their reserves to produce it, and both males and females were better prepared to care for it? If so, then crested penguins have little choice but to lay a first egg even if it has little prospect of producing a surviving offspring. All they can do is reduce its size and the energy they invest in it. To explain why crested penguins favor the second egg, then, is also to explain why they must persist with laying two eggs and why the first is smaller.
Crested penguins’ breeding strategy has served them well enough for millions of years. But recently, in the face of environmental changes being wrought by humans all over the Southern Hemisphere, there are signals that all is not well in the crested penguin world: Major population crashes have been recorded for rockhopper penguins. Fiordland penguins are already among the world’s rarest. Snares penguins seem to be holding their own but are limited to a single breeding area, making them extremely vulnerable to a local catastrophe. Only macaroni penguins (including a subspecies, royal penguins) appear to be in reasonable health. No truly accurate census of erect-crested penguins exists. However, just over a decade ago there were estimates of 110,000 pairs breeding on the Antipodes, while a few years later the population was estimated to be only half that.
My colleagues and I did not have the time or resources to census all the penguin colonies on the Antipodes, but we did survey and count breeding pairs in representative colonies on the main island. Antipodes Island is less than five miles in length and somewhat more than a mile across at its widest point. It is, however, a desperately hard island to traverse. Its cliffs forbid coastal access, leaving the interior—a tussock-covered plateau—as the only feasible route. The going was particularly tough and the results discouraging: our sampling indicated that the size of the various breeding colonies had fallen between 8 and 41 percent since the counts made three years earlier.
Albatrosses are distant cousins of penguins. On the cliffs above the colonies, we encountered nests of the light-mantled sooty albatross; on the plateau, huge wandering albatross chicks sat like white, fluffy lighthouses. Simple economics dictates that albatrosses, too, can never rear more than one of their gargantuan babies and so lay a single egg. However, the albatrosses’ strategy has one major advantage over the penguins’ when environmental changes affect the distribution and abundance of prey: Albatrosses can fly. Penguins, even offshore-feeding penguins, are much more constrained by how far they can go from the colony and are therefore much more susceptible to local perturbations in the ecosystem.
We do not know just where erect-crested penguins go to find fish, but because a chick must be fed frequently after it hatches, parents must be limited to foraging within a radius of less than seventy-five miles from the colony. The crested penguins’ approach to chick provisioning also differs from other penguins’. Whereas in other species the parents take turns getting food for their newly hatched chicks, crested penguins strike a blow for female liberation, with the female being the sole breadwinner for the first two or three weeks, while the male stays at home to look after the chick.
During the period when the eggs are being incubated, erect-crested penguins are likely to travel hundreds of miles on feeding trips that can last upwards of two weeks. The energy demands of fasting through courtship and producing eggs are high, and in other species of penguins, either the male or the female departs immediately for the feeding grounds after egg laying. Not crested penguins. Parents remain together at the nest for up to ten days or so after laying. This baffled us as we continued to monitor the study colony, because only one parent at a time can incubate the egg—and the male, especially, having already gone without food for about a month, must have been starving. Why should he continue to hang around?
When hunger did eventually force the males to leave (after they had lost some 40 percent of their original body weight), we witnessed yet another twist to this tale: neighboring males and unemployed birds—penguins that either had not bred or had failed to breed successfully—went around attacking the females left alone on their nests. The poor females lay prone over their second eggs, flippers spread-eagled, forehead tucked down onto the ground, eyes closed, while the marauders meted out a flurry of blows with their flippers and beaks. In several cases, a female was forced to abandon her nest, and the egg was broken. Could it be that their male partners had remained with them so long after egg laying to guard them?
As our time on the Antipodes drew to a close, I was beginning to see the erect-crested penguins not so much as the last penguins but as the oddball penguins. Instead of answers, we had found mostly more questions. To unravel their story further, it seemed unavoidable that we would need to return. And as I boarded the Breaksea Girl for the journey back to New Zealand, that thought alone was enough to make me ill immediately.
– From: Natural History, 01/11: 46-55
The original European settlers came in the early 17th century from the midland and southern counties of England. They first settled in Virginia's tidewater (coastal plain). Many colonists had connections to Barbados. The earliest Africans arrived in 1619. Starting in 1680, large numbers of Africans were captured and brought to the colony as slaves. It has been estimated that 75% of white colonists arrived in bondage as indentured servants or transported convicts. Small landholders moved westward to the Piedmont, where they were joined by a new wave of English and Scottish immigrants.
In the early 1700s, French Huguenots arrived, followed by German workers imported between 1714 and 1717 to work iron furnaces in the Piedmont area. During the 1730s and 1740s, a large number of settlers of Ulster Scot and German descent moved southward from Pennsylvania down the Allegheny Ridges into the Shenandoah Valley.
Beginning in the late 18th century, Virginia lost many residents as families moved westward to new states and territories. There was very little foreign immigration to Virginia after 1800.
Ships commonly docked along riverside plantations on the Elizabeth River, James River, Potomac River, Rappahannock River, and York River.
Colonial Records
Very few passenger lists exist for immigrants entering colonial Virginia. There are quite a few sources, however, that include immigration information. Most records have been published. The place to start is P. William Filby, Passenger and Immigration Lists Index (available online at Ancestry ($)). Available library copies can be located through WorldCat. See also the Passenger and Immigration Lists Index: Supplement.
The major port in Virginia from the late eighteenth century forward was Norfolk, but many settlers arrived at Baltimore, Philadelphia, or other ports and then migrated to Virginia. In the eighteenth century, ships selling indentured servants and transported convicts often docked at ports along the Rappahannock and Potomac rivers.
It is often quite a challenge to determine whether or not a Colonial Virginian was an immigrant. Headright grants identify a certain percentage (particularly before 1720; at least three-fourths of the names of new settlers in the 1600s are found in these land contracts), but require special attention to correctly interpret. Colonial sources describing individuals as indentured or convict servants further develop a list. Military records kept about soldiers in the French and Indian War and Revolutionary War (particularly pensions) identify additional immigrants.
McCartney completed a 20-year scholarly study of all persons known to have resided in Colonial Virginia between 1607 and 1634. She published the results in 2007 to celebrate Virginia’s 400th anniversary:
- McCartney, Martha W. Virginia Immigrants and Adventurers 1607-1635: A Biographical Dictionary. Baltimore, Md.: Genealogical Publishing Co., 2007. FHL Book 975.5 D36m.
The families of early settlers who left descendants are charted in:
- Dorman, John Frederick. Adventurers of Purse and Person, Virginia, 1607-1624/5. 3 vols. Baltimore, Md.: Genealogical Publishing Co., 2004-2007. FHL Books 975.5 H2j v. 1 – v. 3.
Other studies establishing the identities of early Virginia immigrants include:
- The Biographical Dictionary of Early Virginia, 1607-1660 lists many immigrants. See Virginia Biography.
- Greer, George Cabell. Early Virginia Immigrants 1623-1666. Richmond, Va.: W.C. Hill Printing Co., 1912. Digital version at Google Books, evmedia website.
- Stanard, W.G. Some Emigrants to Virginia: Memoranda in Regard to Several Hundred Emigrants to Virginia During the Colonial Period Whose Parentage is Shown or Former Residence Indicated by Authentic Records. Richmond, Va.: The Bell Book & Stationery Company, 1911. Digital versions at Ancestry ($), FamilySearch Digital Library, Google Books, Internet Archive. Free online surname index and purchase details for 2005 reprint at Mountain Press website.
Headright grants document the importation of settlers into the colony. “Although it was possible to secure land on the headright system throughout the whole of the colonial period in Virginia, after about 1720 few of the land patents were issued on this basis.” They are kept at the Library of Virginia. They have been abstracted and digitized:
- Nugent, Nell M. et al. Cavaliers and Pioneers: Abstracts of Virginia Land Patents and Grants (1623-1782). 8 vols. Richmond, Va.: Virginia Genealogical Society, 1934-200. FHL Books 975.5 R2n v. 1-v. 8. Volume 1 (1623-1666) is available on Ancestry ($) and Internet Archive – free.
Once the patentee’s name is known it is possible to retrieve digital images of the original land office patents on the website of the Library of Virginia, see: Virginia Land Office Patents and Grants.
Main article: Virginia Land and Property
The Virginia Colonial Records Project at the Library of Virginia can help Americans trace their European immigrant origins. Scholars visited archives in the United Kingdom and elsewhere in Europe searching for references to colonial-era Virginians. Their 14,704 survey reports contain half a million names of persons and ships, which are searchable at the Library's website. They also microfilmed about two-thirds of the records they located. The 963 reels of microfilm are held at the Library of Virginia and are available for interlibrary loan. The Library's About the Virginia Colonial Records Project page provides more information. See also: Riley, Edward M. "The Virginia Colonial Records Project," National Genealogical Society Quarterly, Vol. 51, No. 2 (June 1963):81-89. FHL Book 973 B2ng v. 51.
Virginians in English archives
Waters and Withington, like the Virginia Colonial Records Project scholars, sought out references to Virginians in English archives:
- Withington, Lothrop. Virginia Gleanings in England: Abstracts of 17th and 18th-Century English Wills and Administrations Relating to Virginia and Virginians. FHL 975.5 P28w
Withington’s work, along with his successors Leo Culleton and Reginald M. Glencross, was originally published as a serial article in The Virginia Magazine of History and Biography between 1902 and 1948. Nearly the entire set (through 1922) is available online for free at JSTOR:
Records of ethnic groups, including Huguenots, Mennonites, Scots, Germans, and blacks, are listed in the Locality Search of the FamilySearch Catalog under the subject heading VIRGINIA – MINORITIES.
Nugent identifies about 5,000 of the earliest immigrants to Virginia:
- Nugent, Nell M. Early Settlers of Virginia. Baltimore: Genealogical Publishing Company 1969 (lists pre-1616 settlers)
English Immigrants
In lieu of colonial passenger lists regarding early settlers of Virginia, genealogists must rely on evidence gleaned from a variety of sources to successfully trace immigrant origins.
Scholarly articles published in The American Genealogist, the National Genealogical Society Quarterly, and The Virginia Genealogist illustrate strategies that will help Americans trace their colonial Virginia immigrant origins.
The Prerogative Court of Canterbury in London proved the wills of many residents of Virginia. For access, see Virginia Probate Records. Heraldic visitations list some members of prominent English families who crossed the Atlantic. Expert Links: English Family History and Genealogy includes a concise list of visitations available online. Online archive catalogs, such as Access to Archives, can be keyword searched for place names, such as “Virginia” to retrieve manuscripts stored in hundreds of English archives relating to persons and landholdings in this former English colony. These types of records establish links between Virginia residents and England, which can lead researchers back to their specific ancestral English towns, villages, and hamlets.
The multi-volume Calendar of State Papers Colonial, America and West Indies (1574-1739), which is available for free online (see discussion in Virginia Public Records), highlights many connections between England and Virginia.
A standard work on early Virginia immigrants, which includes some passenger lists, is now also widely available on the Internet:
- Hotten, John Camden. The Original Lists of Persons of Quality: Emigrants; Religious Exiles; Political Rebels; Serving Men Sold for a Term of Years; Apprentices; Children Stolen; Maidens Pressed; and Others Who Went from Great Britain to the American Plantations, 1600-1700, with Their Ages, the Localities Where They Formerly Lived in the Mother Country, the Names of the Ships in which They Embarked, and Other Interesting Particulars; from MSS. Preserved in the State Paper Department of Her Majesty’s Public Record Office, England. London: the author, 1874. Digital versions at Ancestry ($); Google Books and Internet Archive; 1983 reprint: FHL Book 973 W2hot 1983.
Sherwood published additional references not found in Hotten’s work:
- Sherwood, George. American Colonists in English Records. 1932.
Brandow also published an addendum to Hotten’s work:
- Brandow, James C. Omitted Chapters from Hotten’s Original Lists of Persons of Quality … and Others Who Went from Great Britain to the American Plantations, 1600-1700. Baltimore, Md.: Genealogical Publishing Co., 2001. Digital version at Ancestry ($).
Peter Wilson Coldham has published several volumes of English records that identify hundreds of thousands, among other American immigrants, those destined for Virginia. Many English indentured servants completed labor terms in Virginia. Coldham’s works are indexed in Filby’s Passenger and Immigration Lists Index, 1500s-1900s (digital version at Ancestry ($)).
- Coldham, Peter Wilson. British Emigrants in Bondage, 1614-1788. Baltimore, Md.: Genealogical Pub. Co., 2004. FHL CD-ROM no. 2150. Includes numerous Virginia immigrants. May show British hometown, emigration date, ship, destination, and text of the document abstract.
- Coldham, Peter Wilson. The Bristol Registers of Servants Sent to Foreign Plantations, 1654-1686. Baltimore, Md.: Genealogical Pub. Co., 1988. FHL Book 942.41/B2 W2c; digital versions at Ancestry ($); Chronicle Barbados (Barbados entries only); Virtual Jamestown.
- Coldham, Peter Wilson. The Complete Book of Emigrants: 1607-1776. n.p.: Brøderbund, 1996. FHL CD-ROM no. 9 pt. 350; digital version of select portions at Virtual Jamestown.
For English passenger lists, 1773 to 1776, which include emigrants destined for Virginia, see:
For London children apprenticed to Virginia colonists, see:
- Coldham, Peter Wilson. Christ’s Hospital.
- Hume, Robert. Early child immigrants to Virginia, 1618-1642 : copied from the records of Bridewell Royal Hospital. Baltimore, Md.: Magna Carta Book Company, 1986. FHL US/CAN Book 975.5 W2h
Main article: Virginia Church Records#Clergy
English officials kept records of payments made for the transportation of Anglican ministers to America, see:
Runaway advertisements for colonial indentured servants often yield immigration data. The Geography of Slavery in Virginia: Virginia Runaways, Slave Advertisements, Runaway Advertisements indexes these records (for both white indentured servants and black slaves). These records can also be found in the digitized Virginia Gazette 1736-1780, available online through the Colonial Williamsburg website.
Murphy’s research guide to tracing the English origins of Colonial Virginia indentured servants is available online: “Origins of Colonial Chesapeake Indentured Servants: American and English Sources,” National Genealogical Society Quarterly, Vol. 93, No. 1 (Mar. 2005):5-24.
Two excellent websites, containing tens of thousands of indentured servants are:
- Immigrant Servants Database 20,000+ colonial immigrants, primary focus: Chesapeake Bay colonies (Virginia and Maryland)
- Virtual Jamestown Indentured servant registers from colonial period, which identify English indentured servants shipped to America
The English port of Whitehaven, in northwest England, had extensive trade dealings with Virginia and Maryland during the colonial period. For an excellent study of this trade and the families involved, see:
- Lawrence-Dow, Elizabeth and Daniel Hay. Whitehaven to Washington. Copeland, England, 1974. FHL Book 975 H2d.
African Immigrants
Main article: African American Resources for Virginia
The Trans-Atlantic Slave Trade Database Internet site contains references to 35,000 slave voyages and identifies over 67,000 Africans aboard slave ships by name, age, gender, origin, and place of embarkation. The database documents the slave trade between Africa, Europe, Brazil, the Caribbean, and the United States.
Scottish and Irish Immigrants
Many Scottish merchants established stores in eighteenth-century Virginia through which British goods were imported.
Scots-Irish settlement was particularly concentrated in the Shenandoah Valley during the eighteenth century, in places such as Augusta County.
David Dobson has dedicated many years to establishing links between Scots and their dispersed Scottish cousins who settled throughout the world. For Virginia connections, see publications by David Dobson.
A helpful book about Scottish Highlanders in America is:
- MacLean, J.A.P. An Historical Account of the Settlements of Scotch Highlanders in America Prior to the Peace of 1783 Together with Notices of Highland Regiments and Biographical Sketches. Cleveland, Ohio: The Helman-Taylor Company, 1900. Digital version at Internet Archive.
French Immigrants
Huguenots came in 1700. Their settlement, in King William Parish, near Richmond on the James River, was known as Manakin Town. They and many of their descendants lived in Henrico, Goochland, Cumberland, and Powhatan counties.
German Immigrants
A group of Germans created a settlement called Germanna in early eighteenth-century Virginia. Several books have been published about the history and genealogy of these families, such as:
Herrmann Schuricht wrote a chapter titled “The first Germans in Virginia” in:
- Lohr, Otto et al. The First Germans in America: With a Biographical Directory of New York Germans. Bowie, Md.: Heritage Books, 1992. FHL Book 973 W2Lo.
- Schuricht, Herrmann. History of the German Element in Virginia. 2 vols. Baltimore, Md.: T. Kroh, 1898, 1900. Digital versions at Google Books: Vol. 1; Vol. 2; 1977 reprint: FHL Book 975.5 F2gs v. 1-2.
- Wust, Klaus. The Virginia Germans. Charlottesville, Va.: The University Press of Virginia, 1969.
The Germanna Foundation Library maintains a visitors’ center with a genealogical library. The foundation works to promote historic preservation as well as family history information and research.
Colonial Ships
Though they do not include names of passengers, records kept by the Board of Trade and stored at The National Archives (Kew, England) document ships’ arrivals and departures from Virginia ports between 1698 and 1774. FamilySearch microfilmed these records. They are useful for learning about the history of ships entering the colony:
For maritime court proceedings, see:
- Reese, George, ed. Proceedings of the Court of Vice-Admiralty of Virginia, 1698-1775. Richmond: Virginia State Library, 1983. FHL Book 975.5 P2p.
Ports and eastern seaboard towns were divided into customs districts. In 1770, there were six:
Accomack District · James River Lower District · James River Upper District · South Potomac District · Rappahannock District · York River District
Ships mentioned in the Virginia Gazette between 1736 and 1780 have been identified in the free online index produced by Colonial Williamsburg. The index links to scanned newspaper images.
Information about ships can also be gleaned from colonial county court order books and English State Papers Colonial, American and West Indies.
If you believe your ancestor served on the crew of an English vessel that docked in Virginia, Rediker’s book Between the Devil and the Deep Blue Sea: Merchant Seamen, Pirates, and the Anglo-American Maritime World, 1700-1750 (FHL Book 942 U3re) provides an excellent description of what your ancestor’s life at sea would have been like. Records about these people are stored in England at facilities such as the British National Archives. Their website offers research guides, such as Merchant seamen serving up to 1857: further research.
If you believe your ancestor’s ship was shipwrecked, Shomette compiled a “Chronological Index to Documented Vessel Losses in the Chesapeake Tidewater (1608-1978)” as an appendix to Shipwrecks on the Chesapeake (FHL Book 975 U3s) that can lead you to further information. Shomette also wrote a book titled Pirates on the Chesapeake: Being a True History of Pirates, Picaroons, and Raiders on Chesapeake Bay, 1610-1807 (1988) for those who believe they may have pirates in their family tree.
English Voyages
British Naval Office Shipping Lists, 1678-1825, have been digitized by British Online Archives (site requires subscription).
Lloyd’s Register of Shipping identifies ships leaving England, their masters, ports of departure, and destinations. Registers survive from as early as 1764 and are being put online at Lloyd’s Register of Ships Online – free.
Peter Wilson Coldham compiled a list of convict ships travelling between English and Virginia ports during the eighteenth century. See appendix to:
Many English ships that voyaged to Colonial Virginia are also mentioned in:
Many ships that sailed from Bristol, England to Virginia are described in: Bristol, Africa and the Eighteenth-Century Slave Trade to America 1698-1807 (4 vols.) FHL British Books 942.41/B2 B4b v. 38-39, 42, 47. All four volumes are available for free online at the Bristol Record Society website.
Historic Jamestowne – National Park Service
German Voyages
Dr. Marianne S. Wokeck created a detailed list of “German Immigrant Voyages, 1683-1775” to Colonial America. Destinations include Virginia (1730s-1750s). She published the list in an Appendix to:
- Wokeck, Marianne S. Trade in Strangers: The Beginnings of Mass Migration to North America. University Park, Pa.: Pennsylvania State University Press, 1999. FHL Book 970 W2w.
Irish Voyages
A list of Irish ships that made voyages to the English colonies in America is included in:
- Griffin, Patrick. The People With No Name: Ireland’s Ulster Scots, America’s Scots Irish, and the Creation of a British Atlantic World, 1689-1764. Princeton, N.J.: Princeton University Press, 2001.
Scottish Voyages
Dr. David Dobson has compiled a detailed list of ships voyaging between Scotland and America. Volume 4 includes information gleaned from the Virginia Gazette:
1783 to Present
The Family History Library and the National Archives have many of the post-1820 passenger lists and indexes for Baltimore, Philadelphia, and other major ports. These are listed in the FamilySearch Catalog Locality Search under [STATE], [COUNTY], [CITY] – EMIGRATION AND IMMIGRATION.
The Family History Library and the National Archives also have incomplete passenger lists for the following ports.
The above lists are included in Copies of Lists of Passengers Arriving at Miscellaneous Ports on the Atlantic and Gulf Coasts . . . (in the FamilySearch Catalog Locality Search under UNITED STATES – EMIGRATION AND IMMIGRATION; FHL 830231–FHL 830246). These lists are indexed in Supplemental Index to Passenger Lists of Vessels Arriving at Atlantic and Gulf Coast Ports . . . (in the FamilySearch Catalog Locality Search under UNITED STATES – EMIGRATION AND IMMIGRATION – INDEXES; FHL 418161–FHL 418348).
During the War of 1812, American officials reported finding a total of 333 British aliens, many of whom had families, living in Virginia. Most British immigrants at that time were settling in the capital and in towns and ports. The numbers show that immigration from Great Britain to Virginia had decreased considerably from the high levels reached during the seventeenth and eighteenth centuries:
| County, Town | British Aliens | County, Town | British Aliens |
|---|---|---|---|
| Chesterfield, Manchester | 4 | Jefferson, Charles Town | 1 |
| Stafford, Falmouth | 4 | Loudoun, Leesburg | 1 |
| Botetourt, Fincastle | 2 | Norfolk, Portsmouth | 1 |
| Elizabeth City, Hampton | 2 | Philadelphia [sic] | 1 |
| Kentucky, Lexington | 2 | Prince William, Dumfries | 1 |
Free native-born Virginians, alive in 1850, who had left the state, resettled as follows:
|State||Persons Born in Virginia||Percentage|
- Barlow, Lundie W. “Some Virginia Settlers of Georgia, 1773-1798,” The Virginia Genealogist, Vol. 2, No. 1 (Jan.-Mar. 1958):19-27. Digital version at American Ancestors ($).
What was it like to move from Virginia to Kentucky in the early 1800s? Daniel Trabue’s journal makes a fascinating read:
- Young, Chester Raymond. Westward into Kentucky, The Narrative of Daniel Trabue. Lexington, Ky.: University Press of Kentucky, 1981. FHL Book 976.9 H2td.
Dorothy Williams Potter in Passports of Southeastern Pioneers 1770-1823 (FHL Book 975 W4p) identifies some migrants from Virginia into territories that are now Alabama, Florida, Louisiana, Mississippi, and Missouri.
Robertson compiled a list of Virginians in Kansas in 1860:
- Robertson, Clara Hamlett. Kansas Territorial Settlers of 1860 Who were Born in Tennessee, Virginia, North Carolina and South Carolina: A Compilation with Historical Annotations and Editorial Comment. Baltimore, Md.: Genealogical Publishing Co., 1976. FHL 978.1 H2ro; digital version at World Vital Records ($).
British Mercantile Claims identify migrations made by many Virginians during the period 1775 to 1803. Those listed owed debts to overseas British merchants at the opening of the Revolutionary War. When the merchants came to collect their debts after the war, they found that many of these debtors had moved. Dorman published these records in The Virginia Genealogist, beginning with Volume 6. Digital version at American Ancestors ($). FHL Book 975.5 B2vg v. 6 (1962).
Dr. Koontz wrote a helpful article about life on “The Virginia Frontier, 1754-1763,” Johns Hopkins University Studies in Historical and Political Science (Baltimore: The Johns Hopkins Press, 1925). Digital version at FamilySearch Digital Library.
- ↑ David Hackett Fischer, Albion’s Seed: Four British Folkways in America (New York: Oxford University Press, 1989). FHL Book 973 H2fis.
- ↑ David L. Kent, Barbados and America (Arlington, Va.: C.M. Kent, 1980). FHL Book 972.981 X2b.
- ↑ Wesley Frank Craven, White, Red, and Black: The Seventeenth-Century Virginian (Charlottesville, Va.: University Press of Virginia, 1971).
- ↑ Donald G. Shomette, Maritime Alexandria: The Rise and Fall of an American Entrepôt (2003).
- ↑ John Crump Parker, “Old South Quay in Southampton County: Its Location, Early Ownership, and History,” The Virginia Magazine of History and Biography, Vol. 83, No. 2 (Apr. 1975):160-172. Digital version at JSTOR ($).
- ↑ Urbanna: A Port Town in Virginia 1680-1980 (1980).
- ↑ Thomas, Robert E. The Thomas Family in 300 Years of American History. Salt Lake City, UT: Filmed by the Genealogical Society of Utah, 1982. Print.
- ↑ Edmund S. Morgan, “Headrights and Head Counts: A Review Article,” The Virginia Magazine of History and Biography, Vol. 80, No. 3 (Jul. 1972):361-371. Digital version at JSTOR ($); Richard Slatten, “Interpreting Headrights in Colonial-Virginia Patents: Uses and Abuses,” National Genealogical Society Quarterly, Vol. 75 (1987):169-179. Digital version at National Genealogical Society website ($); FHL Book 973 B2ng v. 75 (1987); James W. Petty, “Seventeenth Century Virginia County Court Headright Certificates,” The Virginia Genealogist, Vol. 45, No. 1 (Jan.-Mar. 2001):3-22; Vol. 45, No. 2 (Apr.-Jun. 2001):112-122. Digital version at American Ancestors ($). FHL Book 975.5 B2vg; Noel Currer-Briggs, “Headrights and Pitfalls,” The Virginia Genealogist, Vol. 23 (Jan. 1979):45-46. Digital version at American Ancestors ($); Charles E. Drake, “Virginia Headrights: Genealogical Content and Usage,” Virginia Genealogical Society Quarterly, Vol. 20, No. 2 (Apr.-Jun. 1982):50-52. Digital version at Ancestry ($); FHL Book 975.5 B2vs.
- ↑ John Frederick Dorman, “Review of Cavaliers and Pioneers,” in The Virginia Genealogist, Vol. 24, No. 3 (Jul.-Sep. 1980):221. Digital version at American Ancestors ($). FHL Book 975.5 B2vg v. 24 (1980)
- ↑ Lothrop Withington, “Arrivals from Virginia in 1655,” The William and Mary Quarterly, Vol. 20, No. 3 (Jan. 1912):186-187; Lothrop Withington, “Arrivals from Virginia in 1656,” The William and Mary Quarterly, Vol. 21, No. 4 (Apr. 1913):258-262. Digitized by JSTOR – free.
- ↑ “Manakin Town: The French Huguenot Settlement in Virginia 1700-ca. 1750,” National Humanities Center Resource Toolbox. Becoming American: The British Atlantic Colonies, 1690-1763, http://nationalhumanitiescenter.org/pds/becomingamer/growth/text4/frenchvirginia.pdf, accessed 23 June 2012.
- ↑ Lester J. Cappon, Barbara Bartz Petchenik, and John H. Long, Atlas of Early American History: The Revolutionary Era, 1760-1790 (Princeton, N.J.: Princeton University Press, 1976), Plate 40. FHL Book 973 E7ae.
- ↑ Marcus Rediker, Between the Devil and the Deep Blue Sea: Merchant Seamen, Pirates, and the Anglo-American Maritime World, 1700-1750 (New York: Cambridge University Press, 1987). FHL Book 942 U3re.
- ↑ Donald G. Shomette, Shipwrecks on the Chesapeake: Maritime Disasters on Chesapeake Bay and Its Tributaries, 1608-1978 (Centreville, Md.: Tidewater Publishers, 1982), 242-287. FHL Book 975 U3s.
- ↑ Kenneth Scott, British Aliens in the United States During the War of 1812 (Baltimore, Md.: Genealogical Publishing Co., 1979), 320-333. FHL Book 973 W4s; digital version at Ancestry ($).
- ↑ These statistics do not account for the large number of Virginians who had resettled and died before the year 1850. See: William O. Lynch, “The Westward Flow of Southern Colonists before 1861,” The Journal of Southern History, Vol. 9, No. 3 (Aug. 1943):303-327. Digital version at JSTOR ($).
- ↑ John Frederick Dorman, “Review of Research in Georgia,” in The Virginia Genealogist, Vol. 25, No. 2 (Apr.-Jun. 1981):147. Digital version at American Ancestors ($). FHL Book 975.5 B2vg v. 25 (1981)
- ↑ “John Owen’s Journal of His Removal from Virginia to Alabama in 1818,” Publications of the Southern History Association, Vol. 1, No. 2 (Apr. 1897):89-97. Digitized by Internet Archive.
Your diet should be unique to you, and your blood type may help determine the best foods for your health. For instance, if your blood type is A positive, it could be beneficial to follow the popular A positive blood type diet.
Knowing your blood type is important for understanding how your body reacts to food, your reaction to stress, your susceptibility to disease, and more.
Your blood contains a distinct biochemical makeup. There are four main blood types—A, B, AB, or O. Each blood type plays a big role in determining your ideal diet, according to naturopathic doctor Peter D’Adamo.
D’Adamo proposed the blood type diet in his New York Times best-selling books Eat Right 4 Your Type and Live Right for Your Type. D’Adamo explains how to eat right for your specific blood type in his books; however, it is important to note that the blood type diet is lacking scientific evidence.
In this article, I have singled out the blood type A diet, which includes people with both A positive blood type and A negative blood type. I will detail how the diet for A positive blood type works, including which foods to eat and avoid. This guide to the type A blood type diet will also detail the research on the blood type diet.
A Positive Blood Type Diet: How It Works
How does the A positive blood type diet work? Basically, a person’s blood type is named after the blood type antigen, or surface marker, they possess on their red blood cells.
In other words:
- Blood type A has the A antigens on your cells
- Blood type B has the B antigens on your cells
- Blood type AB has the A and B antigens on your cells
- Blood type O has no antigens on your cells
An antigen, in simplest terms, is any substance that triggers a response from your body’s immune system. Blood type A forms when the O antigen (fucose) combines with another sugar called N-acetyl-galactosamine. Type A blood is also called the agrarian, or the cultivator, type.
The type A blood type was established from the need to fully utilize nutrients from carbohydrates. This can also be observed in the digestive structure of someone with type A blood.
There are also particular factors that make it difficult for a type A individual to digest and metabolize animal protein and fat. These include low levels of hydrochloric acid in the stomach, low levels of intestinal alkaline phosphatase, and high levels of intestinal disaccharide digestive enzymes.
D’Adamo notes that people with type A blood should consume a plant-based diet that is completely free from red meat. The blood type A diet closely resembles a vegetarian diet. Blood type A individuals should also consume foods as fresh and organic as possible.
Although the blood type A diet is not a weight loss plan, losing weight is a natural side effect of the diet. Essentially, once you remove meat from the diet, you will increase your energy and lose weight. Blood type A diet foods may also boost immunity and decrease the overall risk of disease, especially type 1 diabetes, anemia, cancer, heart disease, and liver and gallbladder disorders.
A healthy lifestyle is vital for the A positive blood type diet. Type A people naturally have high levels of the stress hormone cortisol. As a result, stress will manifest in the form of brain fog during the daytime, muscle loss, weight gain, increased blood thickening, and disturbed sleep. Stress in type A blood people can also lead to insulin resistance, hypothyroidism, and OCD (obsessive-compulsive disorder).
A Positive Blood Type Diet: Foods to Eat
Whether you have A positive blood type or A negative blood type, the diet will include a combination of vegetables, fruit, proteins, grains, legumes, nuts, seeds, spices, beverages, and fats and oils. The following is more detail on the food included within the type A blood type diet:
1. Meat proteins
Although people with type A blood do best on a vegetarian diet, they can eat certain animal products, such as poultry, fish, free-range eggs, and some dairy. Poultry includes turkey, chicken, and Cornish hens. Examples of fish and seafood include carp, cod, pickerel, red snapper, trout, monkfish, grouper, sardines, yellow or silver perch, whitefish, and salmon.
Digesting dairy is thought to be difficult for the type A blood type; however, certain dairy types may be tolerable, including goat milk, yogurt, kefir, and cheeses like ricotta, feta, and mozzarella.
Most grains are well-tolerated among type A blood individuals; however, the most beneficial grains include amaranth, buckwheat, soba noodles, quinoa, spelt, rice, oats, rye, corn, kamut, millet, barley flakes, and couscous. Each grain can be eaten once or twice per week.
Many legumes are said to be well-tolerated on the type A blood type diet. The best legumes on the type A blood diet include lentils, black-eyed peas, red soy, pinto beans, black beans, green beans, and adzuki beans.
Type A blood individuals can enjoy certain fruits, including apricots, cherries, pineapples, lemon, grapefruit, figs, prunes, plums, and most berries, especially blueberries, blackberries, cranberries, and boysenberries.
Many vegetables are well-suited to the type A blood type diet, including kale, collard greens, mustard greens, carrots, broccoli, onions, garlic, pumpkin, spinach, Swiss chard, dandelion, artichoke, chicory, horseradish, leek, romaine, okra, parsley, alfalfa, sprouts, and turnip.
Nuts and seeds are also healthy fats that benefit the blood type A diet. Beneficial nuts and seeds include pine nuts, almonds, walnuts, pumpkin seeds, flaxseeds, sunflower seeds, pecans, and macadamia nuts.
Spices and condiments thought to benefit the type A blood individual include ginger, garlic, soy sauce, miso, tamari, and blackstrap molasses.
Teas that include slippery elm, ginger, Echinacea, burdock, alfalfa, hawthorn, aloe, or green tea are welcomed on the type A blood type diet. Coffee and red wine are also acceptable.
A Positive Blood Type Diet: Foods to Avoid
Which foods should you avoid on the blood type A diet? The following is a list of the animal proteins, grains, legumes, fats and oils, vegetables, fruit, sweeteners, spices and condiments, herbs, and beverages to avoid in detail.
1. Meat proteins
The type A blood type diet recommends people avoid meats, including beef, duck, lamb, pork, veal, venison, goose, ham, bacon, quail, and pheasant. Seafood best avoided includes caviar, conch, clam, catfish, bass, scallops, lobster, shrimp, crayfish, crab, eel, octopus, herring, prawns, flounder, sole, halibut, haddock, oyster, shad, hake, mussels, frog, and turtle.
Most dairy, milks, ice creams, and whipped cream should also be avoided.
While most grains are tolerated on the type A blood diet, some that should be avoided include granola, cream of wheat, farina, grape nuts, wheat germ, seven grain, wheat bran, durum wheat, and shredded wheat. Bread products that should be avoided include English muffins, matzos, pumpernickel, wheat bran muffins, white and whole-wheat flour, breads like multi-grain and whole-wheat breads, and pastas.
Some legumes should also be avoided on the type A blood type diet, including red beans, tamarind, navy beans, lima beans, kidney beans, garbanzo beans, and copper beans.
Fats and oils best avoided on the type A blood type diet include canola oil, corn oil, coconut oil, palm oil, cottonseed oil, peanut oil, safflower oil, sesame oil, cashew butter, shortening, and hydrogenated oils.
Fruits that should be avoided on the type A blood type diet include bananas, plantains, tangerines, oranges, mandarins, honeydew, cantaloupe, rhubarb, mango, papaya, coconut, blackberries, strawberries, and juices like orange, tomato, and papaya.
Although many vegetables are beneficial on the type A blood type diet, certain ones to avoid include domestic mushrooms, shiitake mushrooms, eggplant, tomatoes, cabbages, yams, sweet potatoes, potatoes, olives, and peppers.
Nuts that should be avoided include cashews, Brazil nuts, and pistachios.
9. Spices, herbs, and condiments
Spices, condiments, and herbs on the avoid list include relish, capers, vinegar, black pepper, white pepper, cayenne pepper, plain gelatin, ketchup, pickles, Worcestershire sauce, mayonnaise, wintergreen, catnip, cornsilk, red clover, and yellow dock.
Beverages to avoid on the type A blood type diet include soda, diet soda, seltzer water, distilled liquor, black tea, beer, and club soda.
It is also recommended to avoid white sugar, corn syrup, and processed flavors, colors, and preservatives.
A Positive Blood Type Diet Chart
The following is a comprehensive chart for the blood type A diet that includes foods you should eat and avoid.
| Food Group/Type | Foods to Eat | Foods to Avoid |
|---|---|---|
| Meat/Poultry | Chicken, turkey, Cornish hens, and free-range eggs | All red meat (beef, pork, lamb, veal, ham, bacon, deer, heart, liver, goat, buffalo, and wieners), goose, duck, pheasant, quail, and partridge |
| Fish/Seafood | *Red snapper, *pickerel, *monkfish, *carp, *cod, *trout, *salmon, *whitefish, *snails, *sardines, grouper, abalone, perch, mahi-mahi, porgy, pike, sailfish, shark, sea bass, smelt, swordfish, sturgeon, tuna, squid, mackerel, yellowfish, weakfish, and albacore | Barracuda, anchovies, bluefish, beluga, bass, striped bass, tilefish, halibut, catfish, caviar, conch, clam, crab, crayfish, lobster, shrimp, mussels, oyster, hake, smoked fish, haddock, sole, flounder, eel, herring, shad, octopus, frog, turtle, and prawns |
| Dairy and Alternatives | *Tofu, *soya milk, *soya cheese, cow's cheese, goat, feta, kefir, mozzarella, ricotta, lecithin granules, soy flakes, string cheese, and frozen yogurt | Most dairy, milks, ice creams, whipped cream, and uncooked dairy |
| Grains | *Amaranth, *buckwheat/groats/kasha, corn, barley, spelt, rye, millet, kamut, oat bran, oats, rice, quinoa, rice bran, and their flours | Granola, cream of wheat, farina, grape nuts, wheat germ, seven grain, wheat bran, durum wheat, and shredded wheat |
| Breads/Cereals | Breads: gluten-free breads (like rice), rye crackers, rice cakes, and fresh breads not bagged in plastic. Cereals: puffed (millet, rice, corn, kamut, spelt), cream of rice, barley flakes, couscous, and pastas except wheat pastas | English muffins, matzos, pumpernickel, wheat bran muffins, plastic-bagged breads, white and whole-wheat flour, multi-grain and whole-wheat breads, and pastas |
| Legumes | *Adzuki, *aduke, *red soy, *lentils, *black-eyed peas, *black, *green, *pinto, broad, cannellini, jimaca, snap, string, white, pods, green peas, and mung beans | Red beans, tamarind, navy beans, lima beans, kidney beans, garbanzo beans, and copper beans |
| Fats/Oils | *Olive oil, *flax/linseed oil, cod liver oil, almond butter, tahini, sunflower butter, and peanut butter | Canola oil, corn oil, coconut oil, palm oil, cottonseed oil, peanut oil, safflower oil, sesame oil, cashew butter, shortening, and hydrogenated oils |
| Vegetables | *Artichoke, *beet leaves, *carrots, *broccoli, *dandelion, *kale, *kohlrabi, *leek, *romaine lettuce, *okra, *parsley, *parsnip, *pumpkin, *spinach, *Swiss chard, asparagus, avocado, bamboo shoots, beets, bok choy, Brussels sprouts, collard greens, cauliflower, celery, cucumber, endive, escarole, fiddlehead ferns, garlic, horseradish, mustard greens, radicchio, radishes, rutabaga, seaweed, sprouts, squashes, turnip, mushrooms (enoki, tree oyster, and Portobello), water chestnut, watercress, and zucchini | Cabbages, yams, shiitake and domestic mushrooms, potatoes, tomatoes, peppers, eggplants, sweet potatoes, and olives |
| Fruits | Blueberries, cranberries, blackberries, boysenberries, cherries, figs, plums, grapefruit, pineapples, apricots, lemon, and apples | Bananas, plantains, oranges, tangerines, mandarins, cantaloupe, honeydew, coconut, rhubarb, mango, papaya, and juices like tomato, papaya, and orange |
| Nuts/Seeds | Almonds, chestnuts, hazelnuts, litchi, hickory, macadamia, peanut, pine, pumpkin seeds, sunflower seeds, walnuts, pecans, flaxseeds, almond butter, poppy seeds, and sesame seeds | Cashews, Brazil nuts, and pistachios |
| Herbs/Spices/Condiments | Ginger, garlic, soy sauce, miso, tamari, and blackstrap molasses | Relish, capers, vinegar, black pepper, white pepper, cayenne pepper, plain gelatin, ketchup, pickles, Worcestershire sauce, mayonnaise, wintergreen, catnip, cornsilk, red clover, and yellow dock |
| Beverages | Teas that include slippery elm, ginger, echinacea, burdock, alfalfa, hawthorn, aloe, or green tea, as well as coffee and red wine | Soda, diet soda, seltzer water, distilled liquor, black tea, beer, and club soda |
| Sweeteners/Other | | White sugar, corn syrup, and processed flavors, colors, and preservatives |
* Highly beneficial
A Positive Blood Type Diet: What Do Studies Say?
One of the central theories of the blood type diet is associated with proteins called lectins. These are considered anti-nutrients, and they may have a negative impact on the gut lining. The blood type diet theory claims there are lectins in the diet that target different ABO blood types.
Eating the wrong types of lectins is thought to lead to the clumping together of red blood cells. Although some evidence suggests that raw lima beans may interact with red blood cells in blood type A individuals, overall, the majority of agglutinating lectins react with all ABO blood types. In other words, lectins are not specific to a blood type aside from a few raw legumes.
What about research on ABO blood types and diet? There is strong evidence that people with certain blood types can have a lower or higher risk for various diseases. For instance, those with blood type A are more likely to have a higher risk for microbial infections. That being said, women with type A blood are likely to have a higher rate of fertility, too.
However, no studies show that this has anything to do with your diet.
One large study of 1,455 young adults published in the journal PLoS One found that eating a type A diet with lots of fruits and vegetables was associated with better health markers. However, this effect was seen in everyone following the type A diet, not just those with type A blood.
A major systematic review published in The American Journal of Clinical Nutrition in 2013 examined data from over a thousand studies and did not find a single well-designed study looking at the benefits of the blood type diet. The researchers concluded that no evidence exists to validate the health benefits of blood type diets.
In other words, the studies even somewhat related to ABO blood type diets were all poorly designed. One study from 2009 that found a link between blood types and food allergies also contradicted the blood type diet’s recommendations.
Is the A Positive Blood Type Diet Right for You?
If you are blood type A positive, does the lack of evidence mean you shouldn’t try the blood type A positive diet? I suggest trying the blood type A diet and seeing if it works for you. As a holistic nutritionist, I believe that no one knows your body better than you do!
Although it might not be related to blood type, many people have experienced positive results on the blood type diet. As a result, if you tried the blood type diet and it worked for you, don’t let the lack of research on the diet stop you from eating that way.
For those on the type A blood type diet, here are a few other healthy tips:
- Don’t skip meals, and eat more protein at the beginning of the day and less at the end of the day. Chew food to enhance digestion, eat smaller and more frequent meals, and don’t eat when anxious.
- Establish a daily schedule. It may include going to bed no later than 11 p.m., getting at least eight hours of sleep, and not staying in bed once you wake up.
- Take at least two 20-minute breaks while working where you can meditate, walk, stretch, or perform deep-breathing exercises. Also, engage in a calming exercise like yoga around three times weekly.
“Blood Type A,” Dadamo; http://www.dadamo.com/txt/index.pl?1003, last accessed Oct. 27, 2017.
“Blood Type and Your Health,” Dadamo; http://www.dadamo.com/txt/index.pl?1001, last accessed Oct. 27, 2017.
“Blood Groups and Red Cell Antigens,” NCBI Resources; https://www.ncbi.nlm.nih.gov/books/NBK2267/, last accessed Oct. 27, 2017.
Cusack, L., et al., “Blood type diets lack supporting evidence: a systematic review,” The American Journal of Clinical Nutrition, May 22, 2013, 98(1): 99-104, doi: 10.3945/ajcn.113.058693, last accessed Oct. 27, 2017.
“Popular diet theory debunked,” University of Toronto; https://www.utoronto.ca/news/popular-diet-theory-debunked, last accessed Oct. 27, 2017.
Wang, J., et al., “ABO Genotype, ‘Blood-Type’ Diet and Cardiometabolic Risk Factors,” PLoS One, Jan. 2014; 9(1): e84749, doi: 10.1371/journal.pone.0084749, last accessed Oct. 27, 2017.
Yamamoto, F., et al., “ABO research in the modern era of genomics,” Transfusion Medicine Reviews, April 2012; 26(2): 103-118, doi: 10.1016/j.tmrv.2011.08.002, last accessed Oct. 27, 2017.
Power, L., “Biotype Diets System: Blood types and food allergies,” Journal of Nutritional & Environmental Medicine, July 2009, 16(2): 125-135, doi: 10.1080/13590840701352807, last accessed Oct. 27, 2017.
E pluribus unum (Latin for “Out of many, one”)
A satellite is a vessel, a container for one or more instruments. With the launch of NASA’s Terra satellite on December 18, 1999, five instruments began a historic journey, a journey that has now extended more than 20 years—far beyond Terra’s six-year expected design life.
While the five individual instruments aboard Terra measure and collect data about specific properties of Earth and its interrelated systems, their combined data record represents a singular achievement in observations of our planet.
But an instrument is more than just an assemblage of sensors, mirrors, and electronics. An instrument—and the data derived from the instrument—is also an assemblage of people. While it is impossible to talk with the thousands of individuals who have been, and are, responsible for Terra’s five instruments and for ensuring the quality and validity of instrument data, conversations with Terra instrument Principal Investigators (PIs) and Science Team Leaders provide a glimpse into the significance of these instruments and the data they collect, along with how these instruments work together compiling an invaluable data record of Earth. From five instruments, one 20-year climate data record; out of many, one.
Terra is the flagship mission in NASA’s Earth Observing System (EOS). The EOS was established to acquire a long-term record of Earth observations to provide a better understanding of the total Earth system and the effects of natural and human-induced changes on the environment. Conceived in the 1980s and implemented in the 1990s, NASA’s EOS comprises an integrated constellation of satellites, a science component, and a data system. EOS data are the responsibility of NASA’s Earth Observing System Data and Information System (EOSDIS) and are managed by NASA’s Earth Science Data and Information System (ESDIS) Project, both of which are part of NASA’s Earth Science Data Systems (ESDS) Program.
Terra’s mission is to explore the connections between Earth’s atmosphere, land, snow and ice, ocean, and energy balance to derive a better understanding of the planet’s climate and climate change, along with the impact of human actions on these processes. Once established in its Sun-synchronous polar orbit approximately 705 kilometers above Earth’s surface, Terra began collecting data in early 2000. Five instruments provided by NASA and international partners are aboard the spacecraft:
- Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER)
- Clouds and the Earth’s Radiant Energy System (CERES)
- Multi-angle Imaging SpectroRadiometer (MISR)
- Moderate Resolution Imaging Spectroradiometer (MODIS)
- Measurement of Pollution in the Troposphere (MOPITT)
All instruments continue to provide data that are processed into a wide range of standard data products for use in scientific research as well as near real-time (NRT) data for use in monitoring and managing on-going events such as storms, wildfires, and volcanic eruptions (a detailed description of Terra’s data processing system and strategy is available in the Earthdata article Terra: The Hardest Working Satellite in Earth Orbit).
According to figures from the ESDIS Metrics System (EMS), approximately 6.2 petabytes (PB) of Terra data were in the EOSDIS collection at the end of 2019, making up roughly 18.2 percent of the approximately 34 PB EOSDIS data collection. During the 2019 Fiscal Year (FY), which runs from October 1, 2018, to September 30, 2019, 12.65 PB of Terra data were distributed. Since 2000, the year the first Terra data were publicly available, approximately 50 PB of Terra data have been distributed to global data users. Distribution of data from Terra’s MODIS instrument remains the highest of any instrument data in the EOSDIS collection, and 10 PB of MODIS data were distributed during FY 2019. Terra instrument data are available through several discipline-specific EOSDIS Distributed Active Archive Centers (DAACs).
These numbers, however, represent only part of the Terra story. The data record compiled by Terra’s individual instruments, and how these instruments are used together, provides a better picture of the true monumental accomplishments of this mission over two decades orbiting our planet.
ASTER: Terra’s high-resolution imager
ASTER is a partnership between NASA and Japan’s Ministry of Economy, Trade and Industry (METI), and, notes ASTER U.S. Science Team Leader Michael Abrams, represents one of the longest-running—if not the longest-running—partnerships between NASA and another country’s space agency.
ASTER is the “zoom lens” of Terra, and has the highest spatial resolution of the five instruments. It also is pointable, which means it can view targets outside of its imaging swath and can be tasked to capture images of specific areas and events as well as produce global land maps. As Abrams notes, it often is used in conjunction with other instruments. “In MODIS data, you might have a pixel showing a vacant field with a little bit of vegetation; you don’t know what is causing the MODIS signal,” he says. “You go to a high-resolution instrument like ASTER to more closely examine the MODIS pixel to determine, say, that 80 percent of the area shown is fallow field or 10 percent is remnant forest and the other 10 percent might have some winter crop. ASTER gives you more information.”
ASTER comprises three infrared-sensing telescopes: a visible near infrared (VNIR), a shortwave infrared (SWIR), and a thermal infrared (TIR), all three of which are pointable. One component of Terra’s SWIR telescope failed after nine years in operation. While the overall SWIR system is still recording data and sending signals back, the data are not usable; the Japanese manufacturer wants to keep the system running to do lifetime tests on the components.
An additional feature of ASTER’s VNIR telescope is a backward-looking telescope to complement the downward looking (nadir) telescope. Combining the high-resolution data from the two VNIR telescopes allows the ASTER team to produce stereoscopic images and detailed terrain height models. These data were used to produce one of the most significant ASTER accomplishments: the ASTER Global Digital Elevation Model, or GDEM. “We have a stereo camera, so you can calculate vertical relief,” Abrams says. “In 2009 we decided we had enough scenes in our archive to cover just about all of Earth’s surface and produced a global digital topography map with 30-meter spatial resolution covering all the land surface of Earth divided into one-degree-by-one-degree tiles.”
The third version of the ASTER GDEM was released in 2019. “There are about 22,000 tiles that cover the entire Earth, and Japan distributes the same product,” says Abrams. “Between LP DAAC and Japan, we’ve distributed around 83 million tiles.”
The ASTER GDEM, along with Level 1 and Level 2 ASTER data products, are available through NASA’s Land Processes DAAC (LP DAAC). If ASTER data are not available for an area of interest, users can submit a Data Acquisition Request (DAR) to the ASTER team using the ASTER DAR Tool.
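Because the GDEM is distributed as one-degree tiles, a common first step is working out which tile covers a point of interest before downloading from LP DAAC. The Python sketch below is an illustrative helper rather than official tooling: it assumes tiles are named for their southwest corner in the ASTGTMV003_NxxWyyy pattern seen in published granule IDs, so verify the exact convention against the LP DAAC product documentation before relying on it.

```python
import math

def gdem_tile_id(lat, lon, version="ASTGTMV003"):
    """Return the 1-degree ASTER GDEM tile assumed to cover (lat, lon).

    Assumption: tiles are named for the latitude/longitude of their
    southwest corner (e.g., ASTGTMV003_N39W109 covers 39-40 N, 109-108 W).
    Check LP DAAC documentation for the authoritative naming convention.
    """
    sw_lat = math.floor(lat)   # southwest-corner latitude
    sw_lon = math.floor(lon)   # southwest-corner longitude
    ns = "N" if sw_lat >= 0 else "S"
    ew = "E" if sw_lon >= 0 else "W"
    return f"{version}_{ns}{abs(sw_lat):02d}{ew}{abs(sw_lon):03d}"

# Grand Junction, Colorado (about 39.06 N, 108.55 W) would fall in N39W109.
print(gdem_tile_id(39.06, -108.55))  # -> ASTGTMV003_N39W109
```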
Along with the ASTER GDEM, Abrams notes that ASTER data are components of several other projects. These include the Global Land Ice Monitoring from Space (GLIMS) project run by the National Snow and Ice Data Center (NSIDC) that uses ASTER DEMs to examine changes in glacier thickness and volume; the ASTER Volcano Archive, which covers about 1,500 active volcanoes and includes every ASTER scene showing these volcanoes; and the Global Emissivity Database (GED) that was created by a team at NASA’s Jet Propulsion Laboratory (JPL) from ASTER thermal observations.
As Abrams observes, the 30-year ASTER collaboration between NASA and Japan continues to work well, and a new seven-year extension of the agreement was signed in October 2019. “This has been one of the most interesting parts of my involvement with this whole mission—getting to work with people from another culture for a common goal and learning together how we can approach problems through our two cultures,” he says. “So far, we’ve been able to solve all our problems, so we must be doing something right.”
CERES: Observing Earth’s radiation budget
CERES measures reflected solar and emitted thermal infrared radiation from Earth. These data provide a better understanding of what drives Earth’s climate system and how it is changing. “The energy exchange between Earth and space is fundamental to climate,” explains Dr. Norman Loeb, the CERES PI. “It’s a record you need to have for a long, long time.”
The two CERES instruments aboard Terra continue a data record that began in 1997 with the launch of the first CERES instrument aboard the joint NASA/Japan Aerospace Exploration Agency Tropical Rainfall Measuring Mission (TRMM; operational 1997 to 2015). Six CERES instruments are currently in space. Along with the two CERES aboard Terra, two CERES instruments are aboard NASA’s Aqua satellite (operational 2002 to present) and a single CERES instrument is aboard the joint NASA/NOAA Suomi National Polar-orbiting Partnership (Suomi NPP; operational 2011 to present) and NOAA-20 (operational 2017 to present) satellites.
CERES data are available through NASA’s Atmospheric Science Data Center (ASDC), which archives and distributes EOSDIS data related to Earth’s radiation budget, clouds, aerosols, and tropospheric composition. Algorithm work is done by the CERES science team, which delivers approved algorithms to ASDC for data product generation. CERES data can be subset, visualized, and ordered using the CERES Browse and Subset ordering tool, which was co-developed by the CERES science team and ASDC.
Dr. Loeb notes that the two Terra instruments with the most synergy with CERES are MISR and MODIS. MODIS and MISR retrievals of cloud, aerosol, and surface properties provide climate researchers with unique data to probe what drives variations in Earth’s radiation budget observed by CERES over a range of time and space scales. The CERES team uses the multi-angle capability of MISR to verify some of the algorithms they developed, and MODIS data are used to infer atmospheric and surface properties. “Within a CERES footprint, the higher spatial resolution of MODIS gives context to the CERES measurement,” says Dr. Loeb. “The MODIS data are also used as input to a radiative transfer model that calculates radiative fluxes at the surface and within the atmosphere.”
The strength of the CERES data record is clearly seen when CERES data are fused with data from other instruments aboard both polar-orbiting and geostationary satellites. The CERES team used these data to create a fully-resolved global diurnal cycle of Earth’s radiation budget at the surface, multiple levels in the atmosphere, and at the top-of-atmosphere. “Doing this data fusion at the level at which we’re able to do it, fusing data from instruments aboard so many different satellites to produce a seamless climate data record, is one of the major accomplishments of the CERES team and ASDC,” Dr. Loeb says. “If you bring multiple instruments with complementary capabilities together in a self-consistent manner, you end up with a far more complete picture of Earth’s radiation budget than what is achievable with just a single instrument.”
Dr. Loeb observes that with CERES flying aboard so many Earth observing satellites with long data records like Terra (20+ years), Aqua (17+ years), and Suomi NPP (8+ years), systematic trends in the data are beginning to appear. “When you show people an intriguing yet subtle change in Earth’s radiation budget from the Terra [CERES] data, and then show that Aqua, Suomi NPP, and NOAA-20 are all seeing the same thing, it makes a very compelling case that the observed change is real and the CERES instruments on these different platforms are performing exceptionally well,” he says. “This lends a lot more confidence going forward when we won’t have the luxury of so many CERES instruments operating simultaneously. Having all of these measurements, having them all together, and having these beautiful long records has been really, really useful for science.”
MISR: Multi-angle studies of Earth’s atmosphere and surface
“There had never been an instrument like MISR flown before Terra,” says MISR PI Dr. David J. Diner. “Even now, there are no other instruments similar to MISR in orbit.”
MISR uses nine cameras to capture multi-angular images of reflected sunlight scattered by Earth’s surface, clouds, and suspended airborne particles, called aerosols. MISR provides sensitivity to aerosol abundance and type, which is important for climate studies because the particles come in different sizes, shapes, and compositions, and, depending on their properties, can counteract or enhance warming due to greenhouse gases.
When the MISR proposal was submitted to NASA’s EOS in 1988, a main objective was to use this multi-angular dimension primarily for acquiring data about the effect of aerosols and clouds on the solar radiation budget and to study the angular reflection of light from vegetation to provide information about how plants interact with their environment.
Dr. Diner notes that looking at the atmosphere from oblique angles accentuates the reflection of sunlight from aerosols relative to the surface. It also enhances the information content of these measurements. “At the start of the EOS era, we knew pretty well how to retrieve aerosol amounts over deep ocean because the surface is very dark and most of the scattered light is from the atmosphere,” he explains. “This is not the case over land. You have aerosols over deserts and urban areas that are bright. The challenge is how to separate out how much of the light is coming from the atmosphere. This has been a main thrust of research by the MISR science team and other EOS instrument investigators over the last two decades. It’s now routine to use satellites for retrieving aerosol concentrations over land.”
While Dr. Diner estimates that about two-thirds of published peer-reviewed papers using MISR data relate to atmospheric aerosols, he notes that in recent years there has been roughly a 50-50 split between using MISR aerosol data for climate and air pollution studies. In fact, linking near-surface particulate matter to air quality and human health is now one of the principal applications of MISR data.
Since launch, the science team has shown that MISR data are useful for many other applications than were anticipated in the original proposal. Besides sensitivity of MISR’s multi-angular observations to vegetation canopy structure, the data also are able to characterize ice sheet and sea ice surface roughness, which is an indicator of seasonal ice conditions. Furthermore, as the MISR instrument passes over an area, its cameras continuously collect images from nine different angles over a period of seven minutes. The result is imagery depicting clouds or aerosol plumes from different times as well as at different angles.
These temporal and stereoscopic elements provide the ability to use imagery from multiple cameras to depict the heights of aerosol plumes and cloud tops, along with their speed and direction of motion. This information is valuable for studying climate and environmental impacts. MISR-observed winds have also proven useful for improving the accuracy of weather forecasts. “What MISR has shown us is the tremendous power of multi-angle imagery,” Dr. Diner observes.
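As a simplified illustration of the stereo geometry behind these height retrievals (a sketch only; the operational MISR algorithms also solve for the along-track wind that moves features between camera views), a feature at height $h$ above the surface is displaced along-track between two cameras with view zenith angles $\theta_1$ and $\theta_2$ by a parallax $d$:

$$ d \approx h\,(\tan\theta_1 - \tan\theta_2), \qquad \text{so} \qquad h \approx \frac{d}{\tan\theta_1 - \tan\theta_2}. $$

Between the nadir camera ($\theta \approx 0^\circ$) and the most oblique 70.5° camera, for example, a plume 2 km above the surface shifts by roughly $2\ \text{km} \times \tan 70.5^\circ \approx 5.6$ km, a displacement easily resolved at MISR’s 275 m to 1.1 km pixel sizes.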
MISR is also used together with the other Terra instruments. The fusion of MISR and MODIS aerosol products with an atmospheric model has led to the generation of maps of near-surface particulate matter concentrations that have been used in numerous health studies such as the Global Burden of Disease, which estimates that more than four million premature deaths occur each year due to exposure to airborne particles.
While CERES also has a multi-angular capability, its spatial resolution is much coarser than MISR’s. This enables cross-comparison and validation of MISR and CERES data on the solar radiation budget. MISR and MOPITT have been used together to help map pollution from aerosols and carbon monoxide to track sources of air pollution. Finally, higher-resolution ASTER data have been used to improve MISR estimates of cloud fraction and to validate MISR stereoscopic results.
“The fact that we have a 20-year record only enhances the uniqueness of Terra data because now we can look at how things are changing over the long term,” says Dr. Diner. “There are a host of applications of MISR data. They may only be limited by our imaginations.”
MODIS: Creating multi-disciplinary, broad-scale images of Earth
When you open the NASA Worldview data visualization application, the default base map you see is the current daily true-color image of Earth acquired by MODIS. By its sheer breadth of applications along with its ability to image almost every place on Earth every day, MODIS is the most heavily-used sensor aboard Terra based on the volume of data distributed, and continually collects data in 36 spectral channels in 2,330 km by 10 km swaths.
As noted by Dr. Michael King, the MODIS Science Team Leader, MODIS data are being used in studies across numerous disciplines. “It’s used to look at vegetative health, changes in land cover and land use, oceans and ocean biology, sea surface temperature, and cloud studies,” he says. “It provides information about cloud properties that no previous instrument has been able to do. It also is used extensively for monitoring fires and natural hazards along with oil spills and all kinds of things.”
As a result of its multi-disciplinary use, MODIS data are archived at and distributed through multiple discipline-specific EOSDIS DAACs. After being downloaded, raw MODIS data are sent to NASA’s Goddard Space Flight Center in Greenbelt, Maryland, for processing by Science Investigator-led Processing Systems (SIPS). Processed MODIS land products are sent to the USGS Center for Earth Resources Observation and Science (EROS) in Sioux Falls, SD. NASA’s Land Processes DAAC (LP DAAC) is co-located at EROS and archives and distributes MODIS land products. MODIS atmosphere products are processed and analyzed at Goddard, stored on NASA’s MODIS Adaptive Processing System (MODAPS), and distributed through NASA’s Level-1 and Atmosphere Archive and Distribution System (LAADS DAAC).
MODIS ocean biology and ocean color data are processed at Goddard by the Ocean Color Processing Group and archived at and distributed through NASA’s Ocean Biology DAAC (OB.DAAC). Finally, MODIS snow and ice data products are sent to NASA’s National Snow and Ice Data Center (NSIDC) DAAC. “The data and data processing have evolved greatly over 20 years,” Dr. King says. “Today, it’s a very automated and efficient process.”
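For readers who want to locate MODIS granules programmatically across these archives, NASA’s Common Metadata Repository (CMR) provides a public search API. The snippet below is a minimal, illustrative sketch rather than an official workflow: it assumes the standard CMR granule-search endpoint and its Atom-style JSON response, and it uses the Terra MODIS aerosol product short name MOD04_L2 purely as an example; substitute whatever product, time range, and region you actually need.

```python
import requests

# CMR granule search endpoint (JSON response).
CMR_GRANULE_SEARCH = "https://cmr.earthdata.nasa.gov/search/granules.json"

params = {
    "short_name": "MOD04_L2",  # example Terra MODIS aerosol product
    "temporal": "2020-01-01T00:00:00Z,2020-01-02T00:00:00Z",
    "bounding_box": "-110,35,-105,40",  # lon/lat box: W,S,E,N
    "page_size": 10,
}

response = requests.get(CMR_GRANULE_SEARCH, params=params, timeout=60)
response.raise_for_status()

# The JSON follows an Atom-style layout: feed -> list of entries.
for granule in response.json()["feed"].get("entry", []):
    print(granule["title"], granule.get("time_start", ""))
```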
Like his colleagues, Dr. King notes how MODIS data and capabilities complement those of other Terra instruments. “The MODIS cloud mask and clear sky data are used regularly in combination with MOPITT data for carbon monoxide monitoring and for knowing if MOPITT data are being collected through clouds or clear sky,” he says. “CERES data are used with MODIS data all the time. They use the aerosol optical product from MODIS and they use a lot of the MODIS spectral bands to identify the scene and determine cloud coverage or if the scene includes water clouds or ice clouds.”
MISR has similar capabilities as MODIS, and the two instruments are well-suited for use in aerosol monitoring as well as in studies of Polar winds. While ASTER imagery is acquired at a higher resolution than MODIS imagery, the ASTER instrument does not continually collect data like MODIS. This means that MODIS can be used to find targets for ASTER. This is especially useful when using MODIS thermal anomaly data to pinpoint the location of heat sources that could be wildfires or volcanic eruptions. ASTER can then be targeted to capture higher-resolution imagery of the MODIS-detected heat source.
Another important use of MODIS data is their adaptation into low-latency, real-time and near real-time data. “There’s direct broadcast, with stations around the world that can download raw MODIS data in very much real-time directly from the satellite,” says Dr. King. “Separate from this is NASA’s Land, Atmosphere Near Real-time Capability for EOS, or LANCE, which is able to provide several MODIS products generally within three hours of observation.” While these products do not have the extensive processing required for use in scientific research, their rapid availability make them valuable tools for monitoring on-going events like wildfires, volcanic eruptions, ice concentrations, and air quality.
For Dr. King, the significance of the MODIS data record is quite clear. “Because we have 20 years of MODIS data, you can see the evolution of change over a long time,” he observes. “It’s quite powerful to see these data over 20 years.”
MOPITT: Measuring carbon monoxide in lower levels of the atmosphere
It’s not enough to know that microscopic suspended particles are present in the atmosphere; you also need to know the source of these particles. For example, if you know aerosols are coming from biomass burning, you can expect them to be in the form of organic carbon and black carbon. Power plants, on the other hand, emit a lot of sulfur dioxide, but not much carbon monoxide. Terra’s MOPITT instrument helps determine the sources of aerosols, along with other information about atmospheric composition.
MOPITT is a joint venture of NASA and the Canadian Space Agency. As noted by Dr. Helen Worden, the MOPITT U.S. PI, MOPITT is the only Terra instrument focused on trace gas pollution, specifically carbon monoxide. Comparing MOPITT data with data from MODIS and MISR, both of which measure aerosol optical depth, provides more information on the sources of atmospheric aerosols. “We were the first continuous global observations of carbon monoxide,” says Dr. Worden. “This showed how pollution from large urban and biomass burning sources, like fires in the Amazon, is transported globally. People take this for granted now, but this wasn’t the case until you had the satellite view of carbon monoxide transport.”
Carbon monoxide in the atmosphere is unique in that it’s much shorter-lived in the atmosphere than gasses like methane and carbon dioxide. As Dr. Worden notes, carbon monoxide plumes can travel around the world and still be easily tracked since it is possible to detect the enhancements caused by large sources. “With methane it’s harder to see this because it stays in the atmosphere around 11 years and the background levels are higher from all the methane that’s accumulated,” Dr. Worden explains. “And carbon dioxide stays in the atmosphere even longer, so satellite instruments need higher precision to see an enhancement [of carbon dioxide] against the background in the atmosphere.”
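To see why a short lifetime makes carbon monoxide plumes easy to track against the background, it helps to compare simple e-folding decay for the two gases. The snippet below is only an illustration: the roughly two-month carbon monoxide lifetime is a commonly cited ballpark figure (actual lifetimes vary with season and latitude), and the 11-year methane figure comes from the quote above.

```python
import math

def fraction_remaining(months_elapsed, lifetime_months):
    """Fraction of an initial pulse left after months_elapsed,
    assuming simple exponential (e-folding) decay."""
    return math.exp(-months_elapsed / lifetime_months)

# Illustrative lifetimes only: CO ~2 months, methane ~11 years.
print(f"CO after 6 months:  {fraction_remaining(6, 2):.2f}")        # ~0.05
print(f"CH4 after 6 months: {fraction_remaining(6, 11 * 12):.2f}")  # ~0.96
```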
The MOPITT instrument was constructed by a consortium of Canadian companies and funded by the Space Science Division of the Canadian Space Agency. MOPITT instrument operations and data processing are divided between U.S. and Canadian teams. “The Canadian MOPITT team does all the instrument commanding and engineering,” says Dr. Worden. “The U.S. team works with NASA’s ASDC to do all the data processing, with algorithm updates and data validation done at the National Center for Atmospheric Research (NCAR) in Boulder, Colorado.”
In a practical use of MOPITT data, Dr. Worden describes a study on which she worked looking at changes in carbon monoxide during the 2008 Summer Olympics in Beijing, China. The Chinese government significantly limited traffic during the Olympics. The result was a dramatic reduction in carbon monoxide emissions from automobiles and trucks. “You can see the effects of pollution reduction from space using MOPITT and use these data to make projections about the transportation sector and impacts on both carbon monoxide and carbon dioxide emissions,” she says.
MOPITT is currently the longest running record of carbon monoxide concentrations collected from space. “This is a great success story,” says Dr. Worden. “In addition to understanding decadal scale trends in carbon monoxide, when a new satellite instrument measuring carbon monoxide goes up, they have a reliable record against which they can compare to verify that their instrument is performing properly and collecting reasonable data.”
Five instruments; one data record
When Terra was launched more than 20 years ago, it was expected to be the first of three satellites designed to compile a 15-year record of Earth processes. The single Terra data record now stretches beyond 20 years. All five instruments have performed far beyond their design expectations and continue, with only minor issues, to provide a steady stream of data that forms the foundation of a monumental climate data record.
The value of these data is evident in the amount of peer-reviewed research conducted using Terra data, such as the 15,185 unique peer-reviewed publications based on MODIS data (with 1,867 articles published in 2019, according to MODIS science team metrics); the 479 peer-reviewed publications based on MOPITT data as of January 22, 2020; or the more than 1,820 peer-reviewed CERES-based publications. As Terra data continue to be added to NASA’s EOSDIS collection, so does the research conducted using these data—research that furthers our understanding of Earth’s vast array of interconnected processes.
From five instruments, one data record. From the many, one. E pluribus, unum.
E pluribus … Terra.
Read more about 20 years of Terra | <urn:uuid:f71f6704-4a6a-44db-be1d-772343687506> | CC-MAIN-2022-33 | https://www.earthdata.nasa.gov/learn/articles/terra-at-20 | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571692.3/warc/CC-MAIN-20220812105810-20220812135810-00097.warc.gz | en | 0.928259 | 5,667 | 3.703125 | 4 |
Hydration is Key
At BottlePro, our motto is Health Through Hydration. One major health risk can be avoided through proper planning: running out of water on your hike.
We've lived in the desert in Utah and western Colorado for 10 years and have done a LOT of hikes in areas where planning water needs is absolutely critical. We learned early on how essential it is to bring more water than you think you'll need.
In this video, we talk through how much water you should bring on a hike, including best practices and recommendations so you can adventure safely.
One major recommendation we didn't state explicitly in the video is to time your hike properly. If it's July and you're in the hot desert, you might want to start hiking before sunrise and finish by noon. We touched on this indirectly when talking about taking temperatures into account, but we wish we had made the point more clearly. You wouldn't believe the number of hikers we've seen around here start long trails in the middle of the summer heat with just a small disposable water bottle!
Hiking is a great activity both for your physical and mental health, but if not planned properly, hiking can be dangerous. Every year, there are stories about people who have close calls or even die while hiking due to dehydration or hyperthermia, aka an overheated body.
Most incidents involve people who are hiking a new trail and may be unfamiliar with the area and the climate. This is especially true with tourists in desert areas like in Arizona, California, Utah, New Mexico, and Colorado, but it can happen to anyone, anywhere.
Here we’ll review best practices and guidelines to help you stay properly hydrated on your next hiking adventure.
Step 1: Research Your Route
The first rule of hiking is to plan ahead and know how long you’ll be gone. There’s a big difference between a 3 mile flat hike in the forest and a 3 mile hike with 2000’ of elevation gain and no shade in the desert.
Always look up the trail details on a site like AllTrails.com. (Its Mt. Garfield hike entry is a good example of the kind of information you'll find.)
Step 2: Estimate How Long You'll Be Hiking
In general, it takes most people between 30 and 60 minutes to hike 1 mile. That’s a pretty big range, and your rate depends on a variety of factors including your own personal fitness, the elevation gain, the terrain (like if it’s sandy or involves scrambling), and the weather. And if you have children in your group or if you like to stop to take a lot of pictures, it will almost certainly take longer.
Again, Alltrails.com is a great resource you can use to estimate the hiking time, and it’s based on results from other hikers so it takes factors like elevation gain and terrain into account. But it may still be a good idea to plan on needing more time if you’re not in the best shape or if you’re hiking a new trail.
Step 3: Estimate How Much Water You’ll Need
According to REI, a good rule-of-thumb is to have roughly 17 ounces (a half-liter) of water for each hour of moderate activity in moderate temperatures.
You’ll have to use your own judgement on how to adjust that number based on factors for each hike, like your familiarity with the hike, your fitness level and health, your age, the temperature and humidity, and the elevation gain and terrain.
If you’re new to hiking or are trying a new route, we recommend doubling the rule-of-thumb and bringing 34 ounces, or roughly 1 liter, per hour that you expect to be hiking, especially if temperatures will be over 80 degrees Fahrenheit.
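If you like to see the math spelled out, here's a minimal Python sketch of the estimate described above. The pace, ounces-per-hour rate, and doubling factor are just the rules of thumb from this post (the function name and defaults are ours, not an official standard), so treat the output as a starting point rather than a guarantee.

```python
def estimate_water_oz(miles, minutes_per_mile=45, oz_per_hour=17, hot_or_unfamiliar=False):
    """Rough water estimate for a hike, using the rules of thumb above.

    minutes_per_mile: most hikers fall between 30 and 60; 45 is a middle value.
    oz_per_hour: roughly 17 oz (about half a liter) per hour of moderate activity.
    hot_or_unfamiliar: double the total for heat (80°F+) or a trail that's new to you.
    """
    hours = miles * minutes_per_mile / 60
    ounces = hours * oz_per_hour
    if hot_or_unfamiliar:
        ounces *= 2
    return round(ounces)

# Example: a 6-mile desert hike on an unfamiliar trail in summer heat
print(estimate_water_oz(6, minutes_per_mile=50, hot_or_unfamiliar=True))  # about 170 oz (~5 liters)
```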
Once you get through these steps, you should have a good idea on how much water you should bring on your next adventure. Check out our next video to see our recommendations on the best water bottles and bladders to bring on hikes.
Which Hydro Flask Lid Should You Get?
Congrats, you have a Hydro Flask (or are thinking about getting one soon)! It's hard enough deciding which Hydro Flask bottle to get, but have you also thought about which lid you'll use?
Hydro Flask has four (4) lids available right now on Amazon: Flex Cap, Flex Sip, Flex Straw, and Straw Lid.
In this video, we go over how each one works, what we like and don't like about each design, and how we use them.
If you'd rather read through the review than watch the video, see the written summary below.
Hydro Flask has four lid options, and you might be asking which lid is best for you. We've reviewed and tested each lid extensively, and here's what we think. Links are in the description.
Which one is best for you? Well, it depends on how you want to use it, but we can tell you how we like to use them.
Make It Even Better with a SplashPro Splash Guard
Make It Even Better with a FlavorFuze Steel
Make It Even Better with a FlavorFuze Straw Infuser
Straw Lid (Original Design)
Also works with a FlavorFuze Straw Infuser
What's a Hydro Flask Flex Sip Lid?
Hydro Flask released the Flex Sip lid in early 2020, and it's a fantastic addition to their product lineup. Now you can take a sip with a quarter-turn of the lid, instead of having to take it all the way off.
But the lid has quite a few moving parts, and knowing how to take it apart for cleaning and then put it back together may not be intuitive at first. Our quick 60 second review will show you how to use, disassemble, clean, and reassemble your Hydro Flask Flex Sip lid.
Where Can You Buy a Flex Sip Lid?
With summer in full gear and news of heat waves across the country, make sure you stay safe out there. Keep cool by adding ice to your bottle, and if you're using a Hydro Flask with the Flex Cap, use a SplashPro to keep your ice at bay. It's designed specifically to fit wide mouth Hydro Flasks, and it also fits Iron Flasks and Takeyas, but not Nalgenes.
Follow along as we tackle this tough, but fun hike!
Located in Palisade, Colorado off of G Road.
Hydration products we used (follow the links to Amazon)
1) Hydro Flask 40oz Wide Mouth
2) BottlePro Cup Holder Adapter
3) SplashPro Splash Guard
4) HikerPouch Leather Bottle Sling
Here's what we think about Hydro Flask's new Flex Straw Lid
We've seen a lot of questions like:
With all of Hydro Flask's cap options (not to mention 3rd party versions), it can be a little confusing. To help keep things straight, we're reviewing Hydro Flask's new Flex Straw cap and will let you know if it's worth a buy.
See our video review below
And if you'd rather read the review, keep scrolling past the video.
What's Different with the New Design?
At first the new lid may not seem very different, but there are a few key changes.
Change #1: The Handle
The previous straw lid had a hard plastic handle on one side that was basically a finger loop. A common complaint was that it was uncomfortable, especially when you're juggling other items at the same time.
The Flex Straw cap solves many of these criticisms. The handle is the same style as their Flex Cap and Flex Sip lids.
This is a much improved handle design that we are totally on board with.
Change #2: Easier Cleaning
Change #3: Leak Resistance
Another common complaint about the previous straw lid was that it could leak fairly easily. The spout wouldn't "snap" into place, so if it accidentally wasn't pushed all the way down, you could experience some leaking if your bottle tipped over.
Based on our initial tests, the new Flex Straw lid seems to be less likely to leak. The main reason is that the spout "snaps" into the locked position. Once you hear the click, then you should be good to go and not have to worry about leaks.
We really like this change. The lid just feels a lot more secure and better-engineered than before.
Change #4: Insulation
The old cap really didn't insulate very well at all, but the new Flex Straw cap changes that. Hydro Flask added their Honeycomb(TM) insulation to the design, and the silicone insert also helps.
One of the main reasons you probably have a Hydro Flask is to have an insulated bottle, so this is a welcome improvement.
Want to Make Your Flex Straw Lid Even Better?
Try out our FlavorFuze Straw fruit infuser. It's a clip-on infuser that is compatible with both the old and new Hydro Flask straw lid designs.
Loose Leaf Tea in Hydro Flasks - A Match Made in Heaven
Many people search Google for things like:
We get a lot of questions about this too. Hydro Flasks are primarily used for water, but plenty of people would love it if they could have different flavors, like fruit infused water, coffee, or tea. In particular, tea is what we're focusing on today.
And check out our FlavorFuze Steel Mini demo video at the end!
Can You Make Hot Tea in a Hydro Flask?
And unlike some of the concerns raised about plastic and even aluminum, from what we have found, stainless steel won't leach chemicals or pollutants into your beverage. Flaske has a great article covering the question "Are Stainless Steel Water Bottles Safe to Drink From?" So does Elemental Bottles, which recommends looking for bottles made from either #304 or 18/8 stainless steel (Hydro Flasks are made from 18/8). We highly recommend checking these articles out if you have any other questions or concerns.
Best Hydro Flask Bottles for Tea
You should also consider what type of Hydro Flask you want to use, since that can have an effect on which type of tea infuser will work best.
Tea Infusers for Wide Mouth Hydro Flasks (like the 12/16/20oz Coffee Bottles and also 32oz/40oz Bottles)
Now that we know putting hot tea in stainless steel bottles like Hydro Flasks is safe, let's look at the best ways to do it!
OPTION #1: MAKE IT SEPARATELY
Historically, the most common way to enjoy tea in your Hydro Flask has been to brew it outside of your Hydro Flask first. Then just pour the tea into your flask, and you're good to go. This is great for many people because they already have tea-making equipment.
You'll also need to go this route if you are using a narrow-mouth Hydro Flask bottle.
OPTION #2: MAKE IT IN YOUR HYDRO FLASK
You can save yourself some extra dishes and time by brewing your tea right in your Hydro Flask!
But this option can be a little trickier because not all tea infusers and strainers will fit in Hydro Flask bottles. The inside diameter of wide mouth Hydro Flasks is right around 2.1 inches across, so be sure that your strainer is smaller so it can fit!
FlavorFuze Steel Mini Demo
My Soda Habit Story
I'll be 35 years old tomorrow, and I've had a soda habit since I was a kid. Growing up, it wasn't uncommon for me to drink 2, 3, or even 4 sodas per day. As you might guess, I've also been overweight most of my life too, which is certainly not a coincidence. But at 35, I finally kicked my soda habit. I'm down almost 10 pounds this year, and best of all, I feel like what I'm doing is sustainable.
Here's what's working for me, and hopefully it'll help you on your journey too.
Step 1: Know the Problem
If you're reading this, then you've likely already heard or read about the major health issues that can result from regularly eating or drinking high levels of sugar.
And a lot of other people have too, judging by Google Trends. The interest over time for "Low Sugar" has been slowly but steadily increasing over the last several years.
Google Trends - Searches for "Low Sugar" for the Previous Five (5) Years
Of all the ways that consumers regularly ingest sugar, sugary beverages are the primary culprits. These include:
According to the American Heart Association, the maximum recommended sugar intake is 36 grams of sugar per day for men. For women, it's 25 grams. Each one of the drink examples above is either right at those limits or way above them, all from one drink.
Since bottle and can sizes vary, it's also interesting to look at the sugar concentration, as shown below.
One of the more surprising realizations for most people is that most fruit juices really aren't good for you. Sure, they provide some benefits like vitamins and other nutrients, so in that way they are better than sodas. But the sugar content per ounce of Minute Maid orange juice is essentially the same as a Coke!
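To see how quickly a single drink eats into those daily limits, here's a small Python sketch. The gram and ounce figures are rounded, illustrative values we've plugged in for the example, not official numbers, so always check the nutrition label of the specific product you're drinking.

```python
# Rounded, illustrative label values: name -> (grams of sugar, fluid ounces)
drinks = {
    "Cola (12 oz can)": (39, 12),
    "Orange juice (12 oz)": (36, 12),
    "Sweet tea (16 oz)": (44, 16),
}

AHA_LIMIT_MEN = 36     # grams of added sugar per day
AHA_LIMIT_WOMEN = 25

for name, (grams, ounces) in drinks.items():
    per_oz = grams / ounces
    print(f"{name}: {per_oz:.1f} g of sugar per ounce, "
          f"{grams / AHA_LIMIT_WOMEN:.0%} of the daily limit for women, "
          f"{grams / AHA_LIMIT_MEN:.0%} for men")
```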
There are plenty of people smarter than me that study this topic for their day jobs, so I'll leave it to them to provide additional details and research about sugar and health. Here are some of the more helpful articles we've read.
The Centers for Disease Control and Prevention: Guidance on Added Sugars
WebMD: How Sugar Affects Your Body
American Heart Association: How Much Sugar is too Much?
Healthline: 11 Reasons Why Too Much Sugar is Bad for You
And keep in mind that diet sodas aren't free-and-clear of problems either. There is growing evidence that drinks with artificial sweeteners like aspartame and sucralose are bad for you as well, as discussed here.
Step 2: Find Your Real Motivation
You can always have a combination of motivations, and many of these are related (like weight loss and long-term health). But whatever the situation, the key is to figure out what primary focus and goal really motivates you.
A Personal Story - My New Motivation
After reading articles about how sugar essentially acts like a poison and how sodas in particular are "empty calories" that provide no nutritional benefit, I knew it was in my best interest to quit.
Each time I tried to change, I would start off a few days or weeks without sodas, but then I would slip and start drinking them again. The most common backfiring strategies I used were:
The reason these strategies backfired on me was that I hadn't yet figured out my real motivation. I said I wanted to lose weight, and that can work for many people. But the problem, for me, is that I'm fairly comfortable in my own skin already. Also, I have always been overweight, so it's not easy for me to truly imagine how much better I might feel at a healthy weight. It was more abstract.
But as I got older, something happened. I started thinking more about how little time we truly have, and how chronic diseases that we always read about and learned of back in school are very real. And then someone very close to me passed away. He smoked most of his life, and though he was finally able to quit a few years ago, by then the damage was done. He developed cancer and passed away earlier this year. He urged me to improve my habits now and to learn from his life experiences. This changed my motivational focus.
Instead of just wanting to lose weight, my new focus became achieving better long-term health. This seemingly small change in my focus and goals made all the difference for me.
Step 3: Strategies for Change
Once you know what really motivates you, it's time to start thinking about how you'll make changes.
There are many strategies that you can use to cut out soda from your diet. Here are a few.
A Personal Story - My New Routine
Honestly, I've used all of the strategies listed above to varying degrees. But the ones that helped me the most are #1 and #4. By drinking more water, I've been able to feel fuller and am less likely to drive to the store for a soda. And by using a flavored drink alternative, I can still take a break from "boring" water each day and satisfy my need for flavor.
This is what's working for me.
Need another bottle? Check out Hydro Flask's Amazon store.
Step #4 (If Needed): Don't Be Afraid to Reset
Stopping any habit can have its ups and downs. It took me over a dozen attempts over the years before I reached sustainable change. Don't feel ashamed if you don't succeed initially. Re-evaluate your motivations and strategies, and keep trying.
After debuting just over 10 years ago, Hydro Flasks quickly gained traction as the go-to insulated water bottle. For many people, the simple benefit of having a bottle that is vacuum-insulated was enough of a selling point. For others, it's the clean, yet stylish design. Whatever the reason, Hydro Flask continues to build its following and shows no signs of slowing down.
It's only natural what happened next. A whole range of accessories have been developed with the goal of making life with these amazing but cumbersome bottles a little easier.
We here at BottlePro got involved in this niche early with our cup holder adapter, so we've watched it grow over the years, with notable new accessories becoming available fairly often. Here are some of our favorite Hydro Flask accessories for 2022 (focusing on 32 and 40 ounce bottles).
ACCESSORIES FOR GETTING AROUND
The most common accessories for Hydro Flasks involve making it easier to bring your bottle wherever your adventures take you. These include cup holders, bottle slings, and handles.
#1: Cup Holder Adapter
#2: Stylish Bottle Sling
#3: Heavy-Duty Bottle Sling
#4: Leather Bottle Sling
#5: Paracord Handle
ACCESSORIES FOR PROTECTION
Next, consider investing in something that can help keep your bottle looking great for years to come.
#6: Bottle Sleeve
#7: Bottle Boot
ACCESSORIES FOR FLAVOR AND ICE
Now that your bottle is easier to bring along with you on your adventures, it's time to think about ways to improve what you're actually drinking!
#8: Flavor Infuser
#9: Ice Alternative
#10: Splash Guard / Ice Stopper
ACCESSORIES FOR CLEANING
It's not the sexiest category, but you should certainly put some thought into cleaning your Hydro Flask.
#11: Brush Kit
Bonus: Bottle Tablet Cleaners
We reviewed four popular adapters on the market today. This video will help you decide which cup holder adapter to purchase for your bottle. Clicking the links will take you to Amazon so you can check prices.
If you'd rather read the review, we've included a transcript of this video below for reference.
(And if you purchase something, we get a referral fee as an Amazon Associate! Thanks for your support!)
And while we focus on a few types of Hydro Flasks, this review is also applicable to other large bottles like Nalgenes, YETIs, Klean Kanteens, Simple Moderns, Takeyas, Thermoflasks, Iron Flasks, Fifty/Fifty, Swig, and many other popular bottles on the market today.
If you're not sure whether you need an adapter at all, we highly recommend you visit the blog post referenced at the beginning of the video so you can determine if your car's cup holders will likely work with any of these adapters. Or if you're using a smaller bottle, you may not even need an adapter! Just click the link below to view that post.
Which Hydro Flasks Fit in Cup Holders? - The Ultimate Guide (Updated for 2022)
Hydro Flasks are great bottles, but many of them are so big that they don’t fit in standard cup holders. So we’re going to review four of the most popular cup holders on the market today, and give you our thoughts and recommendations. Links to purchase are in the description.
This video focuses on comparing cup holder adapters needed to use larger Hydro Flasks, and we assume you already know you need an adapter. But you may not need one at all if you have a smaller Hydro Flask, like a 21oz. Check out our blog post for a full step-by-step guide. Link in the description.
Let’s get started.
First we have Amazon Basics. As with many popular categories, Amazon released its own cup holder adapter and has undercut most other adapters on the market. But it's still very functional and a good option on a budget.
Next up we have BottlePro, which is our cup holder adapter.
In summary, BottlePro is a great budget alternative to Amazon Basics for 32 and 40 ounce Hydro Flasks, but for smaller diameter bottles, you might want to look at an adapter with centralizer tabs. And keep an eye out for our upcoming 3rd version, which will have many improvements.
Next is Swigzy, which is a great premium option.
Last is Joytutus. This cup holder is a good option overall.
So that’s it! We hope this review has been helpful. Don’t forget to check out our website at bottlepro.net, where we have other products like infusers and splash guards for Hydro Flasks. And subscribe to our blog for more content like this, hydration news, and updates on product deals. Thanks for watching.
Follow us for more hydration-focused updates!
Amazon Associates Program
BottlePro is part of the Amazon Services LLC Associates Program. We strive to provide helpful information and product recommendations, and we receive a commission on purchases made after you click through our links. | <urn:uuid:30ead12d-6102-44aa-9250-15d61c4e38e1> | CC-MAIN-2022-33 | https://www.bottlepro.net/hydration-blog | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571472.69/warc/CC-MAIN-20220811133823-20220811163823-00697.warc.gz | en | 0.948585 | 4,745 | 2.59375 | 3 |
How to find: Press “Ctrl + F” in the browser and fill in whatever wording is in the question to find that question/answer.
- What is an example of an M2M connection in the IoT?
- A user sends an email over the Internet to a friend.
- Sensors in a warehouse communicate with each other and send data to a server block in the cloud.*
- Redundant servers communicate with each other to determine which server should be active or standby.
- An automated alarm system in a campus sends fire alarm messages to all students and staff.
The Internet of Things (IoT) connects devices that traditionally are not connected to the Internet, such as sensors and actuators. A machine-to-machine (M2M) connection is unique to the IoT in that devices are connected together and communicate with each other. These devices can send data to a server block in the cloud for analysis and further operation change.
- What is the term for the extension of the existing Internet structure to billions of connected devices?
The Internet of Things (IoT) refers to the interconnection of billions of things, or "smart dust." SCADA refers to a type of IoT system applied to the industrial Internet. Digitization has several meanings. It can refer to the process of converting analog to digital, or it can refer to the process by which an organization modernizes by planning and ultimately building a sophisticated and forward-thinking IT network ecosystem that will allow for greater connectivity, productivity, and security. Finally, M2M refers to communication from machine to machine.
- Which statement describes the Cisco IoT System?
- It is a switch operating system to integrate many Layer 2 security features.
- It is an advanced routing protocol for cloud computing.
- It is an infrastructure to manage large scale systems of very different endpoints and platforms.*
- It is a router operating system combining IOS and Linux for fog computing.
Cisco developed the Cisco IoT System to help organizations and industries adopt IoT solutions. The IoT system provides an infrastructure to manage large scale systems of very different endpoints and platforms, and the huge amount of data that they create. Cisco IOx combines IOS and Linux to support fog computing.
- Which three network models are described in the fog computing pillar of the Cisco IoT System? (Choose three.)
- fog computing*
- client/server*
- cloud computing*
- enterprise WAN
The network models describe how data flows within a network. The network models described in the Fog computing pillar of the Cisco IoT System include:
Client/Server model – Client devices request services of servers. Servers are often located locally and managed by the organization.
Cloud computing model – a newer model where servers and services are dispersed globally in distributed data centers. Data is synchronized across multiple servers.
Fog computing – This model identifies a distributed computing infrastructure closer to the network edge. It enables edge devices to run applications locally and make immediate decisions.
- Which IoT pillar extends cloud connectivity closer to the network edge?
- network connectivity pillar
- fog computing pillar*
- management and automation pillar
- application enablement platform pillar
By running distributed computing infrastructure closer to the network edge, fog computing enables edge devices to run applications locally and make immediate decisions.
- Which cybersecurity solution is described in the security pillar of the Cisco IoT System to address the security of power plants and factory process lines?
- IoT network security
- cloud computing security
- operational technology specific security*
- IoT physical security
The Cisco IoT security pillar offers scalable cybersecurity solutions that include the following:
Operational Technology specific security – the hardware and software that keeps the power plants running and manages factory process lines
IoT Network security – network and perimeter security devices such as switches, routers, and ASA Firewall devices
IoT Physical Security – include Cisco Video Surveillance IP Cameras that enable surveillance in a wide variety of environments
- Which cloud computing opportunity would provide the use of network hardware such as routers and switches for a particular company?
- infrastructure as a service (IaaS)*
- software as a service (SaaS)
- browser as a service (BaaS)
- wireless as a service (WaaS)
This item is based on information contained in the presentation.
Routers, switches, and firewalls are infrastructure devices that can be provided in the cloud.
- What technology allows users to access data anywhere and at any time?
- Cloud computing*
- data analytics
Cloud computing allows organizations to eliminate the need for on-site IT equipment, maintenance, and management. Cloud computing allows organizations to expand their services or capabilities while avoiding the increased costs of energy and space.
- The exhibit is not required to answer the question. The exhibit shows a fog covering trees on the side of a mountain.What statement describes Fog computing?
- It requires Cloud computing services to support non-IP enabled sensors and controllers.
- It supports larger networks than Cloud computing does.
- It creates a distributed computing infrastructure that provides services close to the network edge.*
- It utilizes a centralized computing infrastructure that stores and manipulates big data in one very secure data center.
Three of the defining characteristics of Fog computing are as follows:
its proximity to end-users
its distributed computing infrastructure that keeps it closer to the network edge
its enhanced security since data is not released into the Cloud
- Which Cloud computing service would be best for a new organization that cannot afford physical servers and networking equipment and must purchase network services on-demand?
Infrastructure as a service (IaaS) provides an environment where users have an on-demand infrastructure on which they can install any platform as needed.
- Which cloud model provides services for a specific organization or entity?
- a public cloud
- a hybrid cloud
- a private cloud*
- a community cloud
Private clouds are used to provide services and applications to a specific organization and may be set up within the private network of the organization or managed by an outside organization.
- How does virtualization help with disaster recovery within a data center?
- improvement of business practices
- supply of consistent air flow
- support of live migration*
- guarantee of power
Live migration allows moving of one virtual server to another virtual server that could be in a different location that is some distance from the original data center.
- What is a difference between the functions of Cloud computing and virtualization?
- Cloud computing separates the application from the hardware whereas virtualization separates the OS from the underlying hardware.*
- Cloud computing requires hypervisor technology whereas virtualization is a fault tolerance technology.
- Cloud computing utilizes data center technology whereas virtualization is not used in data centers.
- Cloud computing provides services on web-based access whereas virtualization provides services on data access through virtualized Internet connections.
Cloud computing separates the application from the hardware. Virtualization separates the OS from the underlying hardware. Virtualization is a typical component within cloud computing. Virtualization is also widely used in data centers. Although the implementation of virtualization facilitates an easy server fault tolerance setup, it is not a fault tolerance technology by design. The Internet connection from a data center or service provider needs redundant physical WAN connections to ISPs.
- Which two business and technical challenges does implementing virtualization within a data center help businesses to overcome? (Choose two.)
- physical footprint*
- server hardware needs
- virus and spyware attacks
- power and air conditioning*
- operating system license requirements
Traditionally, one server was built within one machine with one operating system. This server required power, a cool environment, and a method of backup. Virtualized servers require more robust hardware than a standard machine because a computer or server that is in a virtual machine commonly shares hardware with one or more servers and operating systems. By placing multiple servers within the same physical case, space is saved. Virtualized systems still need the proper licenses for operating systems or applications or both and still need the proper security applications and settings applied.
- When preparing an IoT implementation, what type of network will devices be connected to in order to share the same infrastructure and facilitate communications, analytics, and management?
- Which type of Hypervisor is implemented when a user with a laptop running the Mac OS installs a Windows virtual OS instance?
- virtual machine
- bare metal
- type 2*
- type 1
Type 2 hypervisors, also known as hosted hypervisors, are installed on top of an existing operating system, such as Mac OS, Windows, or Linux.
- Which statement describes the concept of cloud computing?
- separation of operating system from hardware
- separation of management plane from control plane
- separation of application from hardware*
- separation of control plane from data plane
Cloud computing is used to separate the application or service from hardware. Virtualization separates the operating system from the hardware.
- Which is a characteristic of a Type 2 hypervisor?
- best suited for enterprise environments
- installs directly on hardware
- does not require management console software*
- has direct access to server hardware resources
Type 2 hypervisors are hosted on an underlaying operating system and are best suited for consumer applications and those experimenting with virtualization. Unlike Type 1 hypervisors, Type 2 hypervisors do not require a management console and do not have direct access to hardware.
- Which is a characteristic of a Type 1 hypervisor?
- does not require management console software
- installed directly on a server*
- installed on an existing operating system
- best suited for consumers and not for an enterprise environment
Type 1 hypervisors are installed directly on a server and are known as “bare metal” solutions giving direct access to hardware resources. They also require a management console and are best suited for enterprise environments.
- How is the control plane modified to operate with network virtualization?
- Control plane redundancy is added to each network device.
- The control plane on each device is interconnected to a dedicated high-speed network.
- A hypervisor is installed in each device to allow multiple instances of the control plane.
- The control plane function is consolidated into a centralized controller.*
In network virtualization design, the control plane function is removed from each network device and is performed by a centralized controller. The centralized controller communicates control plane functions to each network device and each device focuses on forwarding data.
- Which technology virtualizes the network control plane and moves it to a centralized controller?
- fog computing
- cloud computing
- SDN*
Networking devices operate in two planes: the data plane and the control plane. The control plane maintains Layer 2 and Layer 3 forwarding mechanisms using the CPU. The data plane forwards traffic flows. SDN virtualizes the control plane and moves it to a centralized network controller.
- Which two layers of the OSI model are associated with SDN network control plane functions that make forwarding decisions? (Choose two.)
- Layer 1
- Layer 2*
- Layer 3*
- Layer 4
- Layer 5
The SDN control plane uses the Layer 2 ARP table and the Layer 3 routing table to make decisions about forwarding traffic.
- What pre-populates the FIB on Cisco devices that use CEF to process packets?
- the adjacency table
- the routing table*
- the DSP
- the ARP table
CEF uses the FIB and adjacency table to make fast forwarding decisions without control plane processing. The adjacency table is pre-populated by the ARP table and the FIB is pre-populated by the routing table.
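As an illustration of the idea of pre-populated forwarding tables, the toy Python sketch below models a FIB built from a routing table and an adjacency table built from ARP entries. It is only a conceptual model of the lookup flow, with made-up prefixes and MAC addresses; it is not how Cisco IOS implements CEF internally.

```python
import ipaddress

# Control plane structures, built by routing protocols and ARP
routing_table = {
    "192.168.10.0/24": ("10.0.0.2", "Gig0/1"),   # prefix -> (next hop, exit interface)
    "0.0.0.0/0":       ("10.0.0.1", "Gig0/0"),   # default route
}
arp_table = {
    "10.0.0.1": "aaaa.bbbb.0001",
    "10.0.0.2": "aaaa.bbbb.0002",
}

# Data plane structures, pre-populated from the tables above
fib = {ipaddress.ip_network(prefix): entry for prefix, entry in routing_table.items()}
adjacency_table = dict(arp_table)   # next-hop IP -> Layer 2 rewrite info

def forward(dst_ip: str):
    """Longest-prefix match against the FIB, then fetch the Layer 2 rewrite."""
    dst = ipaddress.ip_address(dst_ip)
    matches = [net for net in fib if dst in net]
    best = max(matches, key=lambda net: net.prefixlen)
    next_hop, interface = fib[best]
    return interface, adjacency_table[next_hop]

print(forward("192.168.10.25"))   # ('Gig0/1', 'aaaa.bbbb.0002')
```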
- Which type of hypervisor would most likely be used in a data center?
- Type 1*
- Type 2
The two type of hypervisors are Type 1 and Type 2. Type 1 hypervisors are usually used on enterprise servers. Enterprise servers rather than virtualized PCs are more likely to be in a data center.
- What component is considered the brains of the ACI architecture and translates application policies?
- the Application Network Profile endpoints
- the Nexus 9000 switch
- the hypervisor
- the Application Policy Infrastructure Controller*
The ACI architecture consists of three core components: the Application Network Profile, the Application Policy Infrastructure Controller, which serves as the brains of the ACI architecture, and the Cisco Nexus 9000 switch.
- Fill in the blank.
In an IoT implementation, devices will be connected to a ______ network to share the same infrastructure and to facilitate communications, analytics, and management.
Correct Answer: converged
Explain:
Currently, many things are connected using a loose collection of independent use-specific networks. In an IoT implementation, devices will be connected to a converged network to share the same infrastructure and to facilitate communications, analytics, and management.
- Fill in the blank.
In a scenario where a user with a laptop running the Mac OS installs a Windows virtual OS instance, the user is implementing a Type ______ hypervisor.
Correct Answer: 2
Explain:
Type 2 hypervisors, also known as hosted hypervisors, are installed on top of an existing operating system, such as Mac OS, Windows, or Linux.
- A network design engineer is planning the implementation of a cost-effective method to interconnect multiple networks securely over the Internet. Which type of technology is required?
- a GRE IP tunnel
- a leased line
- a VPN gateway*
- a dedicated ISP
- What is one benefit of using VPNs for remote access?
- lower protocol overhead
- ease of troubleshooting
- potential for reduced connectivity costs*
- increased quality of service
- How is “tunneling” accomplished in a VPN?
- New headers from one or more VPN protocols encapsulate the original packets.*
- All packets between two hosts are assigned to a single physical medium to ensure that the packets are kept private.
- Packets are disguised to look like other types of traffic so that they will be ignored by potential attackers.
- A dedicated circuit is established between the source and destination devices for the duration of the connection.
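To picture what "new headers encapsulate the original packets" means, here is a toy Python sketch of tunneling in general. The addresses and the serialization format are invented for the example; this does not implement GRE or IPsec and adds no encryption.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src: str
    dst: str
    payload: bytes

def encapsulate(inner: Packet, tunnel_src: str, tunnel_dst: str) -> Packet:
    """Wrap the original packet inside a new outer packet (tunneling).

    The transit network only sees tunnel_src and tunnel_dst; the inner
    addresses and payload ride along as opaque data.
    """
    inner_bytes = f"{inner.src}|{inner.dst}|".encode() + inner.payload
    return Packet(src=tunnel_src, dst=tunnel_dst, payload=inner_bytes)

def decapsulate(outer: Packet) -> Packet:
    """Strip the outer header at the far end of the tunnel to recover the original packet."""
    src, dst, payload = outer.payload.split(b"|", 2)
    return Packet(src=src.decode(), dst=dst.decode(), payload=payload)

# A packet between two private branch LANs rides inside a tunnel across the Internet
original = Packet("192.168.1.10", "192.168.2.20", b"hello from the branch office")
tunneled = encapsulate(original, tunnel_src="203.0.113.1", tunnel_dst="198.51.100.1")
assert decapsulate(tunneled) == original
print(tunneled)
```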
- Two corporations have just completed a merger. The network engineer has been asked to connect the two corporate networks without the expense of leased lines. Which solution would be the most cost effective method of providing a proper and secure connection between the two corporate networks?
- Cisco AnyConnect Secure Mobility Client with SSL
- Cisco Secure Mobility Clientless SSL VPN
- Frame Relay
- remote access VPN using IPsec
- site-to-site VPN*
- Which two scenarios are examples of remote access VPNs? (Choose two.)
- A toy manufacturer has a permanent VPN connection to one of its parts suppliers.
- All users at a large branch office can access company resources through a single VPN connection.
- A mobile sales agent is connecting to the company network via the Internet connection at a hotel.*
- A small branch office with three employees has a Cisco ASA that is used to create a VPN connection to the HQ.
- An employee who is working from home uses VPN client software on a laptop in order to connect to the company network.*
- Which statement describes a feature of site-to-site VPNs?
- The VPN connection is not statically defined.
- VPN client software is installed on each host.
- Internal hosts send normal, unencapsulated packets.*
- Individual hosts can enable and disable the VPN connection.
- What is the purpose of the generic routing encapsulation tunneling protocol?
- to provide packet level encryption of IP traffic between remote sites
- to manage the transportation of IP multicast and multiprotocol traffic between remote sites*
- to support basic unencrypted IP tunneling using multivendor routers between remote sites
- to provide fixed flow-control mechanisms with IP tunneling between remote sites
- Which remote access implementation scenario will support the use of generic routing encapsulation tunneling?
- a mobile user who connects to a router at a central site
- a branch office that connects securely to a central site
- a mobile user who connects to a SOHO site
- a central site that connects to a SOHO site without encryption*
- Refer to the exhibit. A tunnel was implemented between routers R1 and R2. Which two conclusions can be drawn from the R1 command output? (Choose two.)
- This tunnel mode is not the default tunnel interface mode for Cisco IOS software.
- This tunnel mode provides encryption.
- The data that is sent across this tunnel is not secure.*
- This tunnel mode does not support IP multicast tunneling.
- A GRE tunnel is being used.*
- Refer to the exhibit. Which IP address would be configured on the tunnel interface of the destination router?
- Which statement correctly describes IPsec?
- IPsec works at Layer 3, but can protect traffic from Layer 4 through Layer 7.*
- IPsec uses algorithms that were developed specifically for that protocol.
- IPsec implements its own method of authentication.
- IPsec is a Cisco proprietary standard.
- Which function of IPsec security services allows the receiver to verify that the data was transmitted without being changed or altered in any way?
- anti-replay protection
- data integrity*
- Which statement describes a characteristic of IPsec VPNs?
- IPsec is a framework of Cisco proprietary protocols.
- IPsec can secure traffic at Layers 1 through 3.
- IPsec encryption causes problems with routing.
- IPsec works with all Layer 2 protocols.*
- What is an IPsec protocol that provides data confidentiality and authentication for IP packets?
- What two encryption algorithms are used in IPsec VPNs? (Choose two.)
- AES *
- Which algorithm is an asymmetrical key cryptosystem?
- Which two algorithms use Hash-based Message Authentication Code for message authentication? (Choose two.)
- MD5 *
- Which three statements describe the building blocks that make up the IPsec protocol framework? (Choose three.)
- IPsec uses encryption algorithms and keys to provide secure transfer of data.*
- IPsec uses Diffie-Hellman algorithms to encrypt data that is transferred through the VPN.
- IPsec uses 3DES algorithms to provide the highest level of security for data that is transferred through a VPN.
- IPsec uses secret key cryptography to encrypt messages that are sent through a VPN.*
- IPsec uses Diffie-Hellman as a hash algorithm to ensure integrity of data that is transmitted through a VPN.
- IPsec uses ESP to provide confidential transfer of data by encrypting IP packets.*
- A network design engineer is planning the implementation of an IPsec VPN. Which hashing algorithm would provide the strongest level of message integrity?
- 512-bit SHA*
- What is the purpose of utilizing Diffie-Hellman (DH) algorithms as part of the IPsec standard?
- DH algorithms allow unlimited parties to establish a shared public key that is used by encryption and hash algorithms.
- DH algorithms allow two parties to establish a shared secret key that is used by encryption and hash algorithms.*
- DH algorithms allow unlimited parties to establish a shared secret key that is used by encryption and hash algorithms.
- DH algorithms allow two parties to establish a shared public key that is used by encryption and hash algorithms.
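The following Python sketch shows the DH idea with numbers far too small to be secure: each side publishes g raised to its private value mod p, and both compute the same shared secret without ever transmitting it. Real IPsec uses standardized DH groups with much larger primes, so treat this strictly as an illustration.

```python
import secrets

# Toy public parameters. Real DH groups use primes that are thousands of bits long.
p = 4294967291   # the largest prime below 2**32, for illustration only
g = 5

# Each side picks a private value and shares only g**private mod p
a_private = secrets.randbelow(p - 2) + 1
b_private = secrets.randbelow(p - 2) + 1
a_public = pow(g, a_private, p)
b_public = pow(g, b_private, p)

# Both sides arrive at the same shared secret without ever sending it
shared_a = pow(b_public, a_private, p)
shared_b = pow(a_public, b_private, p)
assert shared_a == shared_b
print(shared_a)
```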
- What is the purpose of a message hash in a VPN connection?
- It ensures that the data cannot be read in plain text.
- It ensures that the data has not changed while in transit.*
- It ensures that the data is coming from the correct source.
- It ensures that the data cannot be duplicated and replayed to the destination.
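As a concrete illustration of the integrity idea, the Python sketch below uses the standard library's hmac and hashlib modules: a keyed hash travels with the message, and the receiver recomputes it to confirm nothing changed in transit. The key and message here are invented for the example; IPsec performs the equivalent check per packet inside AH/ESP rather than in application code.

```python
import hashlib
import hmac

shared_key = b"pre-shared-key-negotiated-out-of-band"   # example value only
message = b"routing update: advertise 192.168.10.0/24"

# Sender attaches a keyed hash (HMAC) of the message
digest = hmac.new(shared_key, message, hashlib.sha512).hexdigest()

# Receiver recomputes the HMAC and compares in constant time
def verify(msg: bytes, received_digest: str) -> bool:
    expected = hmac.new(shared_key, msg, hashlib.sha512).hexdigest()
    return hmac.compare_digest(expected, received_digest)

print(verify(message, digest))                           # True: unchanged in transit
print(verify(message + b" and 10.9.9.0/24", digest))     # False: altered on the way
```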
- Which Cisco VPN solution provides limited access to internal network resources by utilizing a Cisco ASA and provides browser-based access only?
- clientless SSL VPN*
- client-based SSL VPN
- What key question would help determine whether an organization should use an SSL VPN or an IPsec VPN for the remote access solution of the organization?
- Is a Cisco router used at the destination of the remote access tunnel?
- What applications or network resources do the users need for access?
- Are both encryption and authentication required?
- Do users need to be able to connect without requiring special VPN software?*
- Open the PT Activity. Perform the tasks in the activity instructions and then answer the question. What problem is preventing the hosts from communicating across the VPN tunnel?
- The EIGRP configuration is incorrect.
- The tunnel destinations addresses are incorrect.
- The tunnel IP addresses are incorrect.*
- The tunnel source interfaces are incorrect
- Which critical function that is provided by IPsec ensures that data has not been changed in transit between the source and destination?
- anti-replay protection
- Which service of IPsec verifies that secure connections are formed with the intended sources of data?
- data integrity
- Fill in the blank.
“__GRE__” is a site-to-site tunnel protocol developed by Cisco to allow multiprotocol and IP multicast traffic between two or more sites.
- What is an advantage of using the Cisco Secure Mobility Clientless SSL VPN?
- Security is provided by prohibiting network access through a browser.
- Any device can connect to the network without authentication.
- Clients do not require special software.*
- Clients use SSH to access network resources.
- How can the use of VPNs in the workplace contribute to lower operating costs?
- VPNs prevent connectivity to SOHO users.
- VPNs can be used across broadband connections rather than dedicated WAN links.*
- VPNs require a subscription from a specific Internet service provider that specializes in secure connections.
- High-speed broadband technology can be replaced with leased lines.
- Which two characteristics describe IPsec VPNs? (Choose two.)
- Key lengths range from 40 bits to 256 bits.
- IPsec authentication is one-way or two-way.
- Specific PC client configuration is required to connect to the VPN.*
- IPsec is specifically designed for web-enabled applications.
- IPsec authenticates by using shared secrets or digital certificates.*
Influential Black Virginians
There are many powerful stories of Black men and women who courageously rose above adversity and hardship. Virginia is home to many Black history figures, some well-known and others not widely heard, but all have contributed greatly to our culture. From early freedom seekers, educators, inventors, and public figures to today's greatest athletes and musicians, we celebrate the following Black Virginians for their remarkable accomplishments and the legacies they have left or will leave behind.
Sally Hemings (1773-1835) had at least six children fathered by her owner, Thomas Jefferson. She left no written records of her life; therefore, details are extracted from plantation records, descendant stories, and most importantly, recollections written by her son, Madison. Hemings's relations with Jefferson are believed to have begun when she was between the ages of 14 and 16 during Jefferson's term as Minister to France. In Paris, Hemings was legally a free servant and refused to return to Monticello once Jefferson's term ended in 1789. She returned only after negotiating freedom for their unborn children, and decades later Jefferson freed all of Hemings's children once they reached 21 years of age. Jefferson did not grant freedom to any other enslaved family unit. Learn more when visiting The Life of Sally Hemings exhibit at Monticello, which illuminates the life of one of the most famous and least understood Black women in U.S. history.
Nat Turner (1800-1831) was an enslaved mystical preacher who led a two-day rebellion, known as the Nat Turner Rebellion, of both enslaved and free Black people in Southampton County, Virginia. Beginning August 21, 1831, the rebellion caused the death of approximately 60 white men, women and children and 120 slaves and free Blacks. The rebellion was suppressed at Belmont Plantation on August 23, 1831 and resulted in state legislatures passing new laws prohibiting education of slaves and free Black people, restricting right of assembly and other civil liberties for free Black people, and requiring white ministers to be present at all worship services. Turner survived in hiding until later captured and hanged for “conspiring to rebel and making insurrection.”
Henry "Box" Brown (1815-unknown) was an enslaved man who shipped himself to freedom in a wooden box. Born on a plantation in Louisa, Virginia, Henry was sent to work in a tobacco factory in Richmond at age 15. He had a wife and four children who were later sold to a plantation in North Carolina. This tragic, irrevocable loss of his family fueled his ambition for freedom. In 1849, Henry made a 27-hour journey from Richmond to Philadelphia crammed inside a box labeled "dry goods." Henry's story of perseverance is one that is not widely known. Not only did he accomplish an unfathomable feat, but he then turned his freedom narrative into an anti-slavery stage show as Henry "Box" Brown. In remembrance of Henry "Box" Brown, a metal replica of the box he escaped in can be viewed on Richmond's Canal Walk, not far from the tobacco factory in which he worked.
Mary Richards Bowser (1846-unknown) was born a slave to the Van Lew family in Richmond. After being freed by her owner, and following her schooling in Philadelphia, Mary returned to Richmond where she posed as Van Lew’s servant and joined her ring of spies. Bowser made her way into the Confederate White House as a full-time servant to Jefferson Davis during the Civil War. As Davis’s servant, Bowser read plans and documents laid out or hidden throughout the house and relayed her findings to Van Lew. Jefferson Davis knew there was a leak of information coming from his house but never suspected Bowser. In 1995, Mary was honored by the U.S. Government for her spying during the Civil War and inducted into the U.S. Army Military Intelligence Corps Hall of Fame.
John Dabney (1824-1900) spent the first 41 years of his life enslaved while simultaneously acquiring his reputation as a renowned chef and bartender in Richmond. To earn money, Dabney's owner would hire him out to work in restaurants and hotels, allowing him to earn tips. He was able to marry and purchase his wife's freedom and was in the process of purchasing his own freedom when slavery was abolished. Although free, he continued to pay off his last bit of debt to his previous owner, which earned him a strong reputation and the ability to secure credit at any bank in Richmond. Learn more about John Dabney's legacy as a renowned 19th-century chef and bartender through the film "The Hail-Storm".
Booker T. Washington — (1856-1915) Hardy; Educator, Founder of Tuskegee Institute.
Robert Russa Moton — (1867-1940) Amelia County; Educator, Lawyer, successor to Booker T. Washington as President of Tuskegee Institute.
Virginia Randolph — (1870-1958) Richmond; African-American educator. She was named the United States' first "Jeanes Supervising Industrial Teacher” and was posthumously honored by the Library of Virginia as one of their "Virginia Women in History" for her career and contributions to education.
Roger Arliner Young — (1889-1964) Clifton Forge; Zoologist; The first African-American woman to be awarded a Ph.D. in zoology.
Henrietta Lacks — (1920-1951) Roanoke; The progenitor of the HeLa cell line, one of the most notable cell research discoveries ever made. Her cells lead to many important breakthroughs in biomedical research, including the polio vaccine. Today, the HeLa cell line has been recognized as a globally significant contribution to medicine and research.
Mildred Loving — (1939-2008) Mildred Loving and her husband Richard, an interracial couple living in Caroline County, were plaintiffs in the landmark U.S. Supreme Court case Loving v. Virginia. With the help of the American Civil Liberties Union (ACLU), in 1967 the Supreme Court ruled in their favor and ordered Virginia’s Racial Integrity Act and all state anti-miscegenation laws as unconstitutional violations of the Fourteenth Amendment. The case remains relevant today and frequently cited in the 2015 U.S. Supreme Court ruling that legalized same-sex marriage. Learn more about Mildred and Richard’s unwavering love and battle for justice in the film, Loving.
Maggie L. Walker — (1864* –1934) Richmond; First woman bank president in America, Advocate of Black women's rights
* A discovery in 2009 revealed bank records showing Walker's birth earlier than the widely believed 1867 date.
L. Douglas Wilder — (1931-present) Richmond; First elected African-American Governor in U.S. history.
Barbara Johns — (1935-1991) Born in New York City, but grew up in Farmville, Prince Edward County. As a sixteen-year-old junior at Robert Russa Moton High School, she organized a student strike for a new school building (1951). The NAACP advised the students to sue for integration. The Farmville case was one of the five eventually rolled into the Brown v. Board of Education of Topeka case that declared segregation unconstitutional (1954).
Oliver White Hill — (1907-2007) Richmond; Lead attorney with Davis v. County School Board of Prince Edward County, which was consolidated with Brown v. Board of Education at the Supreme Court; First African American Richmond City Council member (1949). Highly decorated, with the top prize being the Presidential Medal of Freedom, bestowed by President William J. Clinton (1999).
Vernon Johns — (1892-1965) Prince Edward; Uncle of Barbara Johns, minister at several Black churches in the South, and a pioneer of the civil rights movement. He is best known as the pastor (1947-52) of the Dexter Avenue Baptist Church in Montgomery, Alabama. He was succeeded by Martin Luther King Jr.
Royal L. Bolling — (1920-2002) Dinwiddie; Massachusetts legislator and father of Boston politicians Bruce and Royal Jr. While serving in the Massachusetts House of Representatives in 1965, he sponsored the state's Racial Imbalance Act, which led to the desegregation of Boston's public schools.
Henry L. Marsh III — (1933-present) Isle of Wight County; Attorney involved with Brown v. Board of Education on the Virginia front; first African American mayor of Richmond (1977-82); Virginia State Senator (1991-2014).
Joseph Jenkins Roberts — (1809-1876) Norfolk; Born free at a time when many were born into slavery. Boarded a ship to Liberia with his mother, six siblings, and wife in 1829. Became a sheriff in 1833; first Black governor of the colony of Liberia (1840-1847); elected the first president of the new Republic of Liberia (1848). Served as president twice (1848-1856, 1872-1876). Helped found Liberia College (1851); served as a professor and president.
Randall Robinson — (1941-present) Richmond; African-American lawyer, author and activist, noted as the founder of TransAfrica. He is known particularly for his impassioned opposition to apartheid, and for his advocacy on behalf of Haitian immigrants and Haitian president Jean-Bertrand Aristide.
Anthony W. Gardiner — (1820-1885) Southampton; Gardiner was born in Virginia, but relocated with his family to Liberia. He became Vice President of Liberia in 1871 and was elected President of the country from 1878 to 1883.
William Harvey Carney — (1840-1908) Norfolk; First Black Medal of Honor recipient, decorated for his "extraordinary heroism on 18 July 1863 ... when the color sergeant was shot down, Sergeant Carney grasped the flag, led the way ... was twice severely wounded."
Adam Clayton Powell, Sr. — (1865-1953) Franklin County; Yale University graduate, prominent minister for Abyssinian Baptist Church in Harlem, New York between 1908 and 1936. By the time his son took over the pastorate in 1937, church membership totaled 7,000, making it one of the largest Protestant churches in the world.
Bill "Bojangles" Robinson — (1878-1949) Richmond; Dancer, stage and screen actor in early 1900s.
Ella Fitzgerald — (1917-1996) Newport News; "The First Lady of Song;" Grammy Award-winning Jazz Singer.
Pearl Bailey — (1918-1988) Newport News; Actress, Singer and Author; Tony Award (1967); Medal of Freedom Award (1988).
Lonnie Liston Smith — (1940-present) Richmond; Jazz pianist and keyboardist recording with notable musicians Pharaoh Sanders and Miles Davis. Meshed jazz with rap in the 90s.
Don Pullen — (1941-1995) Roanoke; Jazz pianist, organist, and composer. Well-received in Europe for his avant-garde jazz.
Tremaine "Trey" Aldon Neverson, aka Trey Songz — (1984-present) Petersburg; Singer-Songwriter, Rapper, Producer, and Actor.
Tim Reid — (1944-present) Norfolk; Actor, writer, director, producer. WKRP in Cincinnati, Simon & Simon, Sister, Sister. Co-founder of New Millennium Studios in Petersburg, VA.
Joseph B. Jefferson — Richmond/Petersburg; songwriter. "One of a Kind (Love Affair)" performed by The Spinners and released in 1972, topping the R&B Singles Chart and reaching number eleven on the Billboard Pop Singles chart in 1973. Other songs include "Games People Play," and "Sadie," sampled in "Dear Mama" by Tupac Shakur.
Wanda Sykes — (1964-present) Portsmouth; Comedienne and actress. Film and television credits include The Wanda Sykes Show, The New Adventures of Old Christine, Evan Almighty, Monster-in-Law, Nutty Professor 2, Chris Rock Show; Emmy Award Winner (1999, 2002, 2004, 2005).
Blair Underwood — (1964-present) Born in Tacoma, WA; moved among several states before graduating from Petersburg High School; Actor, film and television credits include Dirty Sexy Money, Full Frontal, Rules of Engagement, City of Angels, LAX, L.A. Law, The Event; NAACP Image Award Winner (1992, '95, '99, 2001).
Jesse L. Martin — (1969-present) Rocky Mount; Actor. Broadway and television credits include Rent, New York Undercover, 413 Hope Street, Alley McBeal, Law & Order, and Smash.
Missy Elliott — (1971-present) Portsmouth; Songwriter, Producer, Arranger, Talent Scout, Record Mogel. Considered the top female hip-hop artist of all time. Four-time Grammy Award Winner (2001, 2002, 2003, 2005).
Timothy Z. Mosley, aka Timbaland — (1972-present) Norfolk; Songwriter, Producer, Rapper. One of the highest paid musicians according to a 2007 Forbes article, "Hip Hop Cash Kings." Grammy Award Winner (2006)
Pharrell Williams — (1973-present) Virginia Beach; Composer, Singer, Producer, Rapper, Fashion Designer. Part of hip-hops most successful production team, The Neptunes, and the performing group N*E*R*D. Three-time Grammy Award Winner (two Grammy Awards in 2003, and one in 2006).
Maxie Cleveland "Max" Robinson, Jr. — (1939-1988) Richmond; First African-American broadcast journalist in the U.S., most notably serving as co-anchor on ABC World News Tonight alongside Frank Reynolds and Peter Jennings from 1978 until 1983. Robinson was a founder of the National Association of Black Journalists.
Spencer Christian — (1947-present) Charles City; TV weatherman for ABC's "Good Morning America"
Chris Brown — (1989-present) Tappahannock; Singer-Songwriter, Dancer, Actor.
D’Angelo (Michael Archer) — (1974-present) Richmond; American R&B singer, songwriter, and record producer. Four-time Grammy Award Winner (two in 2001, two in 2016).
Caressa Cameron — (1987-present) Fredericksburg; 2010 Miss America.
Dr. Robert Walter “Whirlwind” Johnson — (1899-1971) was the force behind integrating tennis. As his nickname “Whirlwind” suggests, he stormed across the American tennis landscape for three decades (1940-1970) and changed tennis forever. The former football All-American built a tennis dynasty in Lynchburg, VA, that produced the first two African-American grand slam champions, Althea Gibson and Arthur Ashe.
Arthur Ashe — (1943-1993) Richmond; Tennis player, writer, commentator; Wimbledon champion (1975); Medal of Freedom Award (1993).
Wendell Scott — (1921-1990) Danville; First (and only, as of this publication) African-American to win a NASCAR race (1963); "State Hero" Resolution by the Virginia General Assembly (1991); International Motorsports Hall of Fame Inductee (1999); Virginia Sports Hall of Fame Inductee (2000).
Pernell Whitaker — (1964-2019) Norfolk; Boxer; Olympic Gold Medalist (1984); member, International Boxing Hall of Fame (2006).
LaShawn Merritt — (1986-present) Portsmouth; Olympic Gold Medalist (Beijing 2008)
Gabrielle Douglas — (1995-present) Virginia Beach; Gymnast. Olympic Gold Medalist (London 2012). First African-American all-around gymnastics champion.
Allen Iverson — (1975-present) Hampton; Basketball player; Philadelphia 76ers (1996-2006; Rookie of the Year, 2001 league MVP); Denver Nuggets (2007-2008); Detroit Pistons (2009). Olympic Bronze Medalist (Athens 2004)
Dr. Edwin B. Henderson — (1883-1977) Washington, DC, settled in Falls Church for more than 50 years; "Grandfather of Black Basketball;" Introduced basketball to African-Americans on a wide-scale, organized basis in 1904; Author of The Negro in Sports (1939); Principle organizer of the first rural branch of the NAACP.
Moses Malone — (1955-2015) Petersburg; basketball player; ABA Utah Stars, St. Louis Spirits (1974-76); NBA Buffalo Braves, Houston Rockets, Philadelphia 76ers (1976-95); NBA MVP (1979, '82, '83); named "One of the 50 Greatest Players in NBA History" (1996); member, Naismith Memorial Basketball Hall of Fame (2001).
Ralph Sampson — (1960-present) Harrisonburg; basketball player. First Overall Draft Pick (NBA Draft, 1983). Houston Rockets (1983-1987), Golden State Warriors (1987-1989), Sacramento Kings (1989-1990), Washington Bullets (1991), Unicaja Ronda of Spain (1992), Rockford Lightning of CBA (1994-1995). Four-time NBA All-Star (1984-1987), NBA Rookie of the Year (1984), NBA All-Star Game MVP (1985).
J.R. Reid — (1968-present) Virginia Beach; American Basketball player; Charlotte Hornets (1989–1992); San Antonio Spurs (1992–1996); New York Knicks (1996); Paris Basket Racing (1996–1997); Charlotte Hornets (1997–1999); Los Angeles Lakers (1999); Milwaukee Bucks (1999–2000); Cleveland Cavaliers (2000–2001); Strasbourg (2001–2002); Baloncesto León (2002–2003). Assistant coach for the Monmouth Hawks. Third-team All-American – NABC (1989); First-team All-ACC (1988); Second-team All-ACC (1987); ACC Tournament MVP (1989); ACC Rookie of the Year (1987) and more.
Alonzo Mourning — (1970-present) Chesapeake; Basketball player; Second Overall Draft Pick (NBA Draft, 1992), NBA Charlotte Hornets (1992-95), Miami Heat (1995-2002, 2004-08), New Jersey Nets (2003-04); Olympic Gold Medalist (2000).
Willie Lanier — (1945-present) Clover; Football player, Kansas City (1967-77); member, pro football Hall of Fame (1986).
Lawrence Taylor — (1959-present) Williamsburg; football player; Second Overall Draft Pick (NFL Draft, 1981), New York Giants (1981-93); member, pro football Hall of Fame (1999).
Charles Haley — (1964-present) Gladys; American Football Player; San Francisco 49ers (1986-1991); Dallas Cowboys (1992-1996). The first five-time Super Bowl champion and second only to Tom Brady who has six, Haley attended William Campbell High School and James Madison University.
Bruce Smith — (1963-present) Norfolk; American Football Player; Outland Trophy winner at Virginia Tech (1984) , First Overall Draft Pick (NFL Draft, 1985), Buffalo Bills, Washington Redskins; member, Virginia Sports Hall of Fame (2005); member, College Football Hall of Fame (2006).
D.J. Dozier — (1965-present) Norfolk; American Football Player; Minnesota Vikings (1987-1990); Detriot Lions (1991); Heisman Trophy finalist (1986); Dozier scored the winning touchdown of Penn State's 1987 National Championship Fiesta Bowl victory over Miami.
Jamie Sharper — (1974-present) Richmond; Baltimore Ravens (1997-2001); Houston Texans (2002-2004); Seattle Seahawks (2005)
Atiim Kiambu Hakeem-ah "Tiki" Barber — (1975-present) Roanoke; American Football player, sports broadcaster, author; NFL New York Giants (1997-2006); NBC's "Today," "Football Night in America" and "Sunday Night Football" (2007-present); identical twin brother of "Ronde" Barber.
Jamael Orondé "Ronde" Barber — (1975-present) Roanoke; American Football player, author; NFL Tampa Bay Buccaneers (1997-2012); Super Bowl XXXVII winner vs. Oakland Raiders (2003); identical twin brother of "Tiki" Barber.
James Farrior — (1975-present) Ettrick; American Football player; New York Jets (1997-2001); Pittsburgh Steelers (2002-2012); Super Bowl XL and XLIII winner
Darren Sharper — (1975-present) Richmond; American Football Player; Green Bay Packers (1997-2004); Minnesota Vikings (2005-2008); New Orleans Saints (2009-2010)
Plaxico Burress — (1977-present) Norfolk; American Football Player; Pittsburgh Steelers (2000-2004, 2012-2013); New York Giants (2005-2008); New York Jets (2011); Super Bowl XLII winner.
Erron Kinney — (1977-present) Ashland; American Football Player; Tennessee Titans (2000-2005)
Damien Woody — (1977-present) Beaverdam; American Football Player. New England Patriots (1999—2003), Detroit Lions (2004—2007), New York Jets (2008—2011); Super Bowl XXXVI and XXXVIII champion; Pro Bowl 2002 and 2004. On August 5, 2011, he joined ESPN as an NFL analyst and can be seen on SportsCenter, NFL Live and other shows.
Michael Vick — (1980-present) Newport News; American Football player; Atlanta Falcons (2001-2006); Philadelphia Eagles (2009-2013); New York Jets (2014); Pittsburgh Steelers (2015); First African-American quarterback to be drafted first overall in an NFL Draft (2001). First Archie Griffin Award winner (1999). NFL Pro Bowl (2003, 2004, 2005)
DeAngelo Hall — (1983-present) Chesapeake; American Football Player; Atlanta Falcons (2004-2007); Oakland Raiders (2008); Washington Redskins (2008-2017)
Michael Robinson — (1983-present) Richmond; American Football Player; San Francisco 49ers (2006-2009); Seattle Seahawks (2010-2014); Super Bowl XLVIII winner.
Melvin Upton — (1984-present) Norfolk; Baseball Player; Tampa Bay Rays (2004-2012); Atlanta Braves (2013-2014); San Diego Padres (2015-2016); Toronto Blue Jays (2016)
Kendall Langford — (1986-present) Petersburg; American Football player; NFL Miami Dolphins (2008-2011); NFL St. Louis Rams (2012-2014); Indianapolis Colts (2015-2016); Houston Texans (2017)
Kam Chancellor — (1988-present) Norfolk; American Football player; Seattle Seahawks (2010-2017); Super Bowl XLVIII winner.
Percy Harvin — (1988-present) Virginia Beach; American Football player; Minnesota Vikings (2009-2012); Seattle Seahawks (2013-2014); New York Jets (2014); Buffalo Bills (2015-2016); Super Bowl XLVIII winner.
Russell Wilson — (1988-present) Richmond; American Football player; Seattle Seahawks (2012-present); Super Bowl XLVIII winner.
Tyrod Taylor — (1989-present) Hampton; American Football player; Baltimore Ravens (2011-2014); Buffalo Bills (2015-2017); Cleveland Browns (2018-2019; Los Angeles Chargers 2019-present
Chandler Fenner — (1990-present) Virginia Beach; Football player; Seattle Seahawks (2012-2013); New York Giants (2014-2015); Bristish Columbia Lions (2016-2017); Winnipeg Blue Bombers (2018-present); Super Bowl XLVIII winner. | <urn:uuid:575b569e-bce1-4fd5-81ec-61f13a9033a2> | CC-MAIN-2022-33 | https://www.virginia.org/plan-your-trip/about-virginia/famous-virginians/influential-black-virginians/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571097.39/warc/CC-MAIN-20220810010059-20220810040059-00295.warc.gz | en | 0.907553 | 4,990 | 3.765625 | 4 |
The principal aim of this educational research lies in promoting the development of readingskill by implementing activestrategies such as meaningful activities among learners of 8th year EGB at “Mercedes Moreno Irigoyen” public high school, Zone 8, District 3, Province of Guayas, Guayaquil Canton, during school year 2016 - 2017. Considering the social impact, it can be mentioned that the following work consists in facilitating reading competence for English Language Learners of 8th year EGB at “Mercedes Moreno Irigoyen”, who will get the major benefits of this project since they will be encouraged to practice reading exercises through fun and interactive activities such as comics which will allow them to feel motivated during the learning process and get over their reading difficulties. Hence, students will be able to raise their scholastic accomplishment. Moreover, appropriate didactic resources will be available by teachers in order to foster the teaching process of readingskill.
It is known that second language learning implies the mastering of the four language skills; speaking, writing, listening and reading. However, according to general knowledge in the field of English as a foreign language teaching, the most frequent language skill that students find difficult to master seems to be reading. This may be because they find words that are unknown for them. Therefore, reading a text in English is a difficult task for them. Furthermore, one crucial factor for students to become proficient in reading texts in English is the amount of vocabulary they possess. Research shows that learners need to know approximately 98 percent of the words in written or spoken discourse in order to understand it well (Nation, 2006, in Schmitt, 2008). However, besides these facts, vocabulary acquisition or learning can be regarded as the biggest problem for most learners (Cheung, 2004). Therefore, it seems that vocabulary learning is currently receiving attention in second language pedagogy and research (Hatch, 1983; Zimmerman, 1997, in Bornay, 2011). But it is still a contentious issue how learners learn vocabulary effectively and efficiently or how it can best be taught. In the light of these arguments, it is necessary to explore how foreign language learners deal with unknown vocabulary, especially secondary school students in Mexico. This is because to date there is no published empirical studies on Mexican secondary school students’ use of vocabulary strategies, so this study pretends to shed light on this subject.
c. Read to critically evaluate a text or book. Previous educational experiences (your previous academic preparation) should help you develop opinions about the facts. When reading different points of view, be impartial and once you know the consistency of the author's ideas, judge them or assess them objectively. You must discover the ideological influences or implications that it presents, to weigh the validity and foundations of the partial theses. The important thing is to read with an open attitude. When possible, consult at least two points of view before forming a definitive opinion on the subject. d. Read to understand the contents of the topics that make up a text or book. It is the type of reading that is done with the purpose of acquiring new knowledge, which implies the realization of a series of activities, such as writing notes, consulting the dictionary, reviewing, etc. These activities that provide an understanding of the contents will be discussed extensively later.
18 While reading consists of highlighting important facts by coloring the story elements, highlight important vocabulary; highlight the main idea and details. Students can use post- its by putting the post-it along the edge of the page to mark the important facts. Moreover, students can use graphic organizers to give sequence and organize thoughts about a story as they read it. Other visual aids can be to use different pictures so that, students can order them according to the story. The teacher can stop and ask students to read a sentence and describe what it looks like in their mind and ask them to say if some part of the story is true or false. In post-reading, students can create visual drawings or story maps of what happened in the course of the story after they have read it. They can also do role-play activities from the text by illustrating the images they have seen while reading the state. They can draw the story on their own, write small sentences in it, or drawing a special character from the text, and write its characteristics. Duke, Pearson, and Whitney (2002).
Inference and the development of critical thinking favor intellectual development and autonomous learning in a changing and full of opportunities world for those who have the capacity to take them, allowing students to understand in greater depth the growing information available in society, analyze it, discover its meaning, evaluate its accuracy, relevance or validity, and make judgments based on criteria based on the reading of texts and contexts. To understand, it is necessary todevelop several mental skills or use different cognitive strategies, such as: sampling, that allows selecting meaningful information and constructing meaning, predicting anticipating what the writing will say, providing previous knowledge to make hypotheses and making inferences to understand what is recommended, deduce the verifiable data, develop a significance, check and self- correct to check whether what was anticipated and inferred is right, and so on.
The first step focused on the individual through task planning. The kids with these kind of problems were able to control themselves in certain moments of the activity thanks to the micro-instructions offered (using Total Physical Response Method). Most of the times, they needed help to control their emotions and actions, that was achieved through the language provided by the teacher, little by little this help was supplied by the other classmates, who encouraged them to keep trying doing the task. At the same time, we design and implemented motor skill activities addressed to a small group of eight students, some of them SEN students, in the classes of Alternative to Religious Education. With this individualized instruction, we could see improvements in our students either their motor skills, their capacity to understand rules as well as their control of the emotions.
The readingskill is paramount for language learners because it aids them to broaden their lexicon, reinforce grammatical structures, and become more critical. However, this skill might be one of the most challenging for students since it requires knowing a wide range of vocabulary, identifying general and specific ideas, inferring implicit messages, establishing relationships, following instructions, among others. In this sense, the present study aims at helping nursing and physical therapy students from Manuela Beltran University improve their reading comprehension through the instruction of readingstrategies based on specific content related to health. The implementation was carried out during six sessions in which the students did not have the normal English classes but worked on the CLIL worksheets that I designed.
language, either in an academic or in a daily life context. For this, there are several types of strategiesto be successful in the different situations that a student may face, and consequently, they can be taught to help students develop both academic, and life skills. In this study, it is expected that from the development of one language ability (reading), there can be trends (from the acquisition of learning strategies) to help students have a better academic learning process (understanding the influence of reading in their lives as students) and performance in their daily lives. Even though this topic has been thoroughly studied, it is relevant to continue studying it because as Harmer (2007) mentions “...many of them want to be able to read texts in English either for their careers, for study purposes or simply for pleasure. Anything we can do to make reading easier for them must be a good idea (pp. 68). Generations change, the perceptions of learning change, and it is important to keep track of the thinking that different generations possess.
T his study deals with the implementation of the Reciprocal Teaching Model (RT) and its relation to the development of writing skills in the tenth graders of a public school in Cartagena, Colombia. The participants were selected according to Cozby’s (2008) convenience sampling, which considers availability, schedule, members, and characteristics.The Action Research approach related to the quelitaive research allowed to identify the problem, gather data, interpret , to act on evidence and to evaluate results. Consequently, a diagnostic stage was carried out which indicated difficulties in generating thoughts, translating ideas into readable texts, using accurate grammar, vocabulary, and punctuation, and establishing cohesion and coherence. Therefore, it was clear the need for the implementation of strategiesto improve the writing skills in this school. This introspection led the researchers to consider the use of the RT Model because it encourages students to take into consideration their own thinking processes during reading and it helps them to be actively involved in their comprehension process, which is reflected in their written production. The outcomes of the study reported that through the implementation of the workshops under the RT Model, the students developed and improved their writing skills in English. The findings established the usefulness of this model since it raised the confidence of the students towards writing which contributed to the improvement of the skill. Additionally, the practicality of portfolios and the collaborative and cooperative strategies allowed students to learn from their peers and teacher by recognizing writing as a more meaningful and pleasant.
Ostenson (2010) also reported the same strategies but he approached them by focusing on the topic of critical evaluation skills when reading online, he pointed out the importance of making judgments about credibility of the sources everybody can find on the Internet. The main purpose of this study was to examine the effects of targeted instruction on students’ ability to read critically and evaluatively the sources of information that they found on the Internet, one using a checklist with some important criteria for the evaluation and the another one basically background knowledge of the topic being studied, as well as relying on the use of strategies of sourcing and corroborating, comparing information using other tools to corroborate trustworthiness of the information. Future research suggested can be done on students who are less likely to have access to Internet. O’Byrne and McVerry (2009) also investigated an instrument designed to gauge the dispositions and capabilities necessary for online reading comprehension, these ones were included: persistence, flexibility, collaboration, reflection, and critical stance.
This final project is titled: “The reading storm.” As an Infant Education Degree student specialized in English language, I have chosen to design a reading workshop proposal because reading at early stages is very important in order to get a good comprehension level. Reading is one of the things that worries every teacher and family the most because reading comprehension is a very important skill that everyone needs to have to be an active citizen. I have also chosen these ages and English students because I have been working in England with these ages for a long time and I have also observed that some of these students are not very interested in reading.
It is possible that due to insufficiencies in didactic resources at Unidad Educativa "Camilo Ponce Enriquez" the teaching of English is being limited, in consequence, students are using the same material and the same strategiesto learn English all the time, it also reduces the students’ interest by learning. Therefore, with the purpose to find out the application of English modern music lyrics as a tool todevelop the writing skillto work with worksheets, the songs that could be listened to the radio, a CD, a flash memory or other storage device previously, in the criteria to choose the songs, the pupils are interested by the musical current panorama and teachers should follow this trending, too. Besides, my years of practice like teacher have allowed me to penetrate into the interests of the pupils of different ages. The classroom in that the didactic offer is going to take place is very spacious, which allows realizing psychomotor education activities. We have a normal blackboard, a speaker, many CDs or flash memories with different current styles of songs.
Reading is an essential part for all communities todevelop in their daily activities and the purpose of this is to understand written texts, for example. to relate the symbols with words, and in turn with sounds. The mental exercise that learners of a second language perform while reading, is usually to translate and at the same time try to understand what is being read. The cognitive process that the brain performs during readings is to create meaning through the interaction of the reader and the text. Reading is not a simple activity that is learned in the early school years, as students’ progress in their academic preparation reading implies learning new vocabulary and is related to new knowledge that is why teachers must choose the readings according to the previous knowledge of the students.
Apart from that, computer science has also produced advances, particularly in the field of machine learning. Surprise has been expressed for the limited use of machine learning methods in ITS, in comparison to other fields (H¨am¨al¨ainen & Vinni, 2006; Hamburger & Tecuci, 1998). There has been some research on machine learning methods for classifica- tion of educational data. In distance learning, the Naive Bayes algorithm has proven useful for predicting student dropout (Kotsiantis, Pierrakeas, & Pintelas, 2003); while the combi- nation of multiple classifiers, and feature weighting by means of a genetic algorithm have been used as successful strategies for predicting student final grades from logged data in an educational web application (Minaei-Bidgoli, Kashy, Kortemeyer, & Punch, 2003). The big size of distance learning datasets is a key aspect for this research. Other educational data sets are typically much smaller. A study dealing with this problem concluded that the Naive Bayes approach is applicable in such cases after careful preprocessing (H¨am¨al¨ainen & Vinni, 2006).
Being conscious that English education is a priority nowadays, to enhance the economic and, social and technological development of the country. The government has implemented the English teaching from the primary school to strengthen students‘ English skills. However, receiving English instruction five years earlier doesn‘t guarantee for successful language learning, what is important is that EFL students should be taught how to learn English strategically focused on readingto learn rather than learning to read. Because the inability to read English effectively has not only caused students to experience barriers to academic success, but has also disadvantaged them in their career performance. Therefore, how to assist EFL students in taking control of their own reading process while fostering success and positive attitudes toward EFL reading has become one of the most urgent tasks facing teachers who teach English.
Investigating potential connections between reading abilities, metacognitive awareness, and readingstrategies, Sheorey and Mokhtari’s (2001) study establishes that there is a direct correlation between low/high ability readers and their low/high metacognitive awareness, as well as their reading strategy. Drawing on these results, the same authors develop and test a special instrument, the Survey of ReadingStrategies (SORS), for measuring the metacognitive awareness of L2 readers, in particular for determining “the type and frequency of readingstrategies that adolescent and adult ESL students perceive they use while reading academic materials in English” (Mokhtari and Sheorey, 2002: 4). The SORS is comprised of three groups of readingstrategies: (i) global readingstrategies (different techniques to keep the purpose in mind, to manage the reading process, etc.); (ii) problem solving strategies (differ- ent actions/procedures used while reading, to improve comprehension, for example, adjusting the speed of reading, rereading, etc.); and (iii) support strategies (activities aimed at aiding the reader in comprehending the text, for example, taking notes, underlining, etc.). The complete list of these strategies is given in Appendix 1. Mokhtari and Sheorey (2002: 4-6) point out that the SORS is not only beneficial for L2 learners, but also for teachers, since it can be used as an instrument for identifying the reading needs, as well as making readers more conscientious about the reading process. In fact, the authors (2002: 6-8) accentuate the practical value of SORS, and the importance of teachers in raising learners’ metacognitive awareness, as well as introducing different reading techniques into their teaching practices, inevitably resulting in higher reading abilities among learners. 1
"Comprehension can be seen as the process of using one’sown prior experiences and the writer’s cues to infer the authors intend the meaning" (Dahler, Fauzan, Ridho & M. Zaim, 2019, p. 211). The meaning of process is the realization of several activities to achieve a purpose. According to Tyson is cited by Scott (2018) as a process who studies a foreign language must have enough time to analyze a text and understand its meaning. To that end, it requires the implementation before, during and after the reading of active techniques and strategies that allow the student to increase his / her knowledge of the language. Also, this process is interactive, which is generally given in the social and cultural environment that shapes and is shaped on what the reader knows in advance, its practice, the intention to read, the knowledge contained in the text and the context in which the reading is done.
Similarly, Fuenzalida (2011) concluded that the use of RSs have been shown to be effective in enhancing reading comprehension. It was reported that these strategies provided students with the necessary tools to reach a better understanding of the texts, since students started to use them strategically when they found barriers to understanding what they were reading. Ozek and Civelek (2006) reported that the effects of cognitive strategies on reading performance suggests that relating pictures and background knowledge to the text, guessing, re- reading, making notes and summaries of important information are the strategies that help readers to improve their reading ability significantly, providing rich information about how learners can solve problems and when they can use them to better understand the texts.
Learning strategies have been divided into two groups direct and indirect. Direct strategies include memory, cognitive, and compensation strategies. Memory strategies help foster particular aspects of competence (grammatical, sociolinguistic, discourse, etc.) by using imaginary and structured review. Cognitive strategies strengthen grammatical accuracy by reasoning deductively and using contrastive analysis. Compensation strategies help develop strategic competence by using inference and guessing when the meaning is not known, using synonyms or gestures to express meaning of an unknown word or expression. Indirect strategies include metacognitive, affective, social strategies. Metacognitive strategies help students to regulate their own cognitive processes and to focus, plan and evaluate their progress as they move toward communicative competence. Affective strategiesdevelop the self-confidence and perseverance needed for learners to be actively involved in language learning. Social strategies provide increased interaction and more emphatic understanding with others.
Given the importance of listening in language learning and teaching it is essential for both teachers and learners to take appropriate actions that foster understanding of the listening skill. The teacher researcher of this study considers that listening is not an easy skilltodevelop, but he believes that if we equip students with a repertory of metacognitive strategies for listening, learners will become more aware of their listening process. Consequently, they will be capable of employing different strategies before, during and after a listening task. The use of metacognitive strategies makes a difference between successful and less successful listeners. According to Yang (2009), successful listeners plan before the listening task, direct their attention to specific information ignoring irrelevant distractions during listening, and evaluate their performance after listening. Moreover, the use of these strategies has demonstrated to have a positive impact in the improvement of the listening comprehension (Vandergrift, 2004). Similarly, Bedoya (2012) indicated that these strategies support active student participation in listening. In sum, it can be stated that the use metacognitive strategies play an important role in the development of the listening skill. | <urn:uuid:fa82e763-67d6-4ae9-ab6b-e3e8fefcc209> | CC-MAIN-2022-33 | https://1library.co/title/active-strategies-to-develop-reading-skill | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570765.6/warc/CC-MAIN-20220808031623-20220808061623-00297.warc.gz | en | 0.954493 | 4,101 | 3.375 | 3 |
- Getting your classes organized
- Planning lessons and teaching classes
- Grading assignments and exams
- Meeting and collaborating with colleagues
- Prioritizing professional development
Teaching can be an incredibly rewarding career. From instilling a love of learning to helping students master complex concepts, the “aha!” moments make the high stress worth it.
However, teaching and face time (or, now, Zoom time) with your students is typically only a small part of being an educator. With multiple priorities competing for your time, knowing where to focus in order to have the biggest impact is a challenge. How do you manage your time to balance work and life?
Having a productivity system, powered by a task manager that works as hard as you do, is a great first step. That’s where Todoist comes in. Whether you have an hour to quietly plan between classes, or you’re on the go between hectic meetings, a reliable productivity app will help you to quickly capture, organize, and prioritize everything you need to do. This Educator’s Guide to Todoist will provide workflow tips to help you get ahead your lesson plans, stay on top of your grading, and keep up to date with the new developments in your field to keep your students informed.
Get your classes organized
The time between the first day of class to final exams often feels like a whirlwind. In between grading papers and helping students back-to-back during office hours, it’s hard to find the time to get everything in order. Stay ahead of the busyness of the semester by getting your classes organized in Todoist before they even begin.
Create a project for each course
Whether you’re a teacher or a TA, create a Todoist project for each of your classes to organize related tasks. You can create projects for the distinct subjects you teach (e.g. Math and Chemistry) or the same course across different sections or times (e.g., History 120A and History 120B).
If a course has multiple components to it, keep them separate using sub-projects. For instance, you may teach the lecture and run the seminar of an English course.
Divide your project into sections
Every class you teach probably requires a range of work –– from assigning homework and grading exams to providing student evaluations and creating assessments. To organize these different kinds of tasks, split your Todoist project into different segments using sections. For instance you could divide your course into the following sections:
- Resources: Including your course syllabus, any important teaching materials, or any links you regularly access as an educator.
- Lectures: Create a section dedicated to any tasks related to your lectures, for instance, updating any slides, creating or revising existing lesson plans, or reminders to share helpful resources with your students.
- Grading: A specific section for “grading” will help you separate out any tasks that are relevant to marking exams, assessing term papers, and evaluating exams.
- Office Hours & Support: Keep helping students central to your courses, by creating dedicated space for student support in each of your class projects. Create tasks to keep you abreast of your office hours, or add students’ emails as tasks so they don’t get lost in your inbox.
- Administrative: From staff meetings to institutional to-dos like expense reports, add a dedicated section for administrative tasks to avoid them slipping to the bottom of your to-do list.
Capture and organize all your tasks, big and small
With your Todoist account structured around your teaching roster, you’re ready to add tasks. At the start of the semester or quarter, add tasks and key dates that you’re aware of in advance — important exams, project deadlines, field trip days, professional development conferences, and other events you’ll need to plan for.
Of course, no matter how well you plan in advance, new things come up all the time. Trying to keep everything straight in your head is stressful and often leads to important tasks getting missed. Assign a due date to each task to help you keep track of upcoming deadlines and plan accordingly.
Organize work and life.
Join over 30 million people who trust Todoist to stay on top of it all, from work projects to birthday reminders and everything in between.Try Todoist
This is a strategy that has helped Lisa Dumicich, an E-Learning Coordinator from Melbourne, Australia:
“I input all meetings and classes in Todoist, then plan tasks around them so I know what I can realistically do in a day. It keeps me sane.”
With Todoist, you can capture and organize your tasks from wherever you are — Todoist’s apps are available on over 10 platforms, including iOS and Android. That means you can effortlessly add tasks the moment they come to you whether you’re on the computer in a classroom, on your tablet while facilitating a study session, or on your phone during your train commute home.
Dr. Robyn Wiens, principal of Hawthorn Leadership School for Girls, says the ability to quickly add tasks from anywhere helps her be more present while away from work:
“If I just got home from work and I’m with the family and I remember I have to tell someone a few different things, I can just quickly grab my cell phone and pop those into my Todoist. That way when I’m back into work mode the next day, I can remember to get those things done. I basically have Todoist on every single device that I touch.”
Set task reminders
If you’re a Todoist Pro user, stay on top of your meetings and other time-specific tasks by setting reminders in Todoist on your desktop or phone. You can set date and time reminders or location reminders:
- On a time and date: Send a reminder to your phone 30 minutes before your office hours start to prompt you to head back to your office.
- At a specific location: Receive a reminder text to remind you to pick up classroom supplies while you’re at your local art supply store.
Tali Lerner, a Math and Science teacher from La Jolla, California uses this particular feature to stay ahead of his workload:
“I use Todoist to keep track of incidents, emails and calls that need follow-up. I have reminders to upload project pages and I keep notes for my weekly email.”
Sync your to-do list with your calendar
If you’re like most teachers, your day revolves around your calendar. With the Google Calendar and Todoist integration, your schedule and your tasks can go hand-in-hand.
This integration automatically adds all your Todoist tasks to your calendar and all your calendar events to Todoist as new tasks. Any changes to a task you make in your calendar, such as a date or time, will automatically be displayed in Todoist.
Turn on this integration by navigating to Settings and then Integrations while logged in to the Todoist web app.
Set and assess your goals regularly
Setting goals as an educator –– whether that’s raising your classes’ overall average or returning graded assignments back within two weeks –– is a great first step. However, adding in an accountability system to hold you to those goals is what really makes a difference.
Craig McClennan, an Elementary School Teacher from Nashville, Tennessee, finds that Todoist Karma has been invaluable for setting productivity goals and assessing his progress regularly: “Just having daily goals in Karma helps me keep my head above water.”
Here’s how to use Todoist’s Your Productivity tab and Karma to stay on top of your goals::
- Set productivity goals for your day and week: Use Todoist Karma to set daily and weekly goals for the number of tasks you want to complete. You’ll build streaks and earn Karma points as you achieve your goals. It’s a fun way to hold yourself accountable for making progress every day.
- Review the # of tasks you’ve completed over time, color-coded by project: Are you giving your classes equal attention? Does a new class require more of your time while one you’ve taught before is less demanding? Are you more productive some days than others? Keep track of your productivity over time at a glance.
- View all of your completed tasks: Having a chronological list of everything you’ve completed is valuable when answering the question, “Where did all my time go?” Browse through to see the days where meetings and coordination might have eaten into your planning and prep.
Planning lessons and teaching classes
Setting up new lesson plans and finding the right way to teach concepts to your class can be overwhelming. Focus on what’s important and excel as an educator by using Todoist to help you stay organized — from breaking down large projects into smaller tasks to prioritizing efficiently when you can’t get everything done.
Break big tasks into more manageable ones
Creating a lesson plan from scratch is no easy feat. Developing one for a course could include setting objectives, defining learning outcomes, creating relevant assignments and assessments, acquiring any necessary resource materials, and developing key talking points that align with the curriculum. Simply adding one task to your list to encapsulate all of those items can be a recipe for feeling stressed and overwhelmed. That’s where Todoist sub-tasks can help.
Avoid feeling overwhelmed by giant tasks that leave you without a clear direction. Instead, break down your lesson planning — or any other tasks that loom large on your list — into smaller, actionable sub-tasks.
Make your to-do list as efficient as possible by adding all the information you need to every task. When adding a to-do in Todoist, open up the task view to add all the details.
With time constraints and competing priorities, it’s impossible to do everything. Make room to do your best teaching by focusing on the activities that will most impact your class.
Prioritize your tasks in Todoist by selecting one of four priority levels. For each task, you have the option of categorizing it as Priority 1, 2, 3, or 4, with 1 being the most important and 4 being the least important. When you view your tasks for the day, your highest priority tasks, marked in red, will appear at the top.
Use your time efficiently
As an educator, you probably don’t have the luxury of large chunks of uninterrupted time. You need to be able to efficiently use the small amounts of free time you have to get things done.
You can use Todoist labels to quickly group the tasks across all of your projects that take 15 minutes or less. Just type @15min into the task field, and hit enter. The label will automatically be added to the task. Whenever you find yourself with a small 15-minute block of open time, search for “@15min” to pull up all of the associated tasks in seconds.
Save your ideas in task comments
Whether you’re actively working on a lesson plan or doing something else entirely, inspiration can strike at any time. A unique example to clarify a concept can come to mind, or a fellow teacher can throw a great idea your way. Jot down ideas as task comments in Todoist that you can come back and browse when you get the chance to sit down and tackle an item.
Here are a few examples of comments you can add to a specific task:
- New talking points on an upcoming lecture
- A current events topic that fits perfectly into your Powerpoint presentation
- A question asked by a student that you want to answer more fully
Attach the files you need to get work done
Keep your resources and notes in one place by attaching them to your tasks. This way, when you’re ready to face your to-do list head-on, everything you need will be right there. If you have a task to revise an existing lesson plan, attach the old one. If you’re building a presentation for next week, attach an image of the textbook diagram now so you don’t need to flip back and find it later.
You can attach items to your tasks from:
- Your computer or phone’s file storage or your Dropbox or Google Drive accounts: Attach files, PDFs, images, presentations, or anything else you need to the relevant tasks so you can find them again quickly.
- Photo Library (mobile apps): Attach snapshots from your phone. For example, diagrams that help illustrate a core concept or a student’s work you want to save as an example.
Turn emails into tasks
Students will often reach out to you over email with burning questions about an upcoming midterm or needing help to catch up on concepts from a class they missed. Don’t let these emails linger in your inbox! Instead, answer quick emails immediately, and send the rest to Todoist as tasks with due dates to follow up on later.
Grading assignments and exams
When you’re not in front of the classroom (or leading a Zoom call), chances are you have homework of your own: grading assignments and marking exams. Make this process as painless as possible with the magic of automation and the power of recurring tasks.
Grade assignments more efficiently by automating the submission flow with Zapier, a service that lets you connect the apps you already use and set up automated workflows with Todoist — all without writing a line of code. Rather than digging through your inbox to find and sort email assignment submissions from students, follow these steps:
- Require your students to submit assignments to you with a specific naming convention in the subject of the email (e.g., Firstname_Lastname_CourseCode-SectionCode_Assignment Name)
- Set up the following Zap on Zapier: “Add emails matching certain conditions to Todoist as tasks”
- Use a search query that matches the naming convention you selected, i.e. subject:”hist290″ and “a2” and “term paper”
- Ensure you select “attachments” to display in the task comments
Now, every email matching your subject criteria, along with the actual assignment attachment, will be visible on your Todoist task list in the project of your choice.
Set recurring tasks
As a teacher, grading quizzes, assignments, and exams is an ongoing task. Reflect those repeating tasks in Todoist with recurring due dates. Determine when you’re regularly in grading mode and schedule accordingly.
Here are a few examples of what that might look like:
- Grade post lab assignments every Wednesday and Thursday at 2PM
- Grade English papers every day for 5 days
- Grade mid semester submissions next week Monday, Wednesday, and Friday
Todoist’s smart Quick Add will automatically recognize the due date as you type it in and schedule your recurring tasks accordingly. Whenever you complete a recurring task, it will automatically reschedule for the next due date.
Meeting and collaborating with colleagues
While you’ve probably assigned a group project or two to your students, from time-to-time you’ll have to do some colleague collaboration of your own. Use Todoist to make teamwork easier.
Share projects and assign tasks
From school-wide initiatives to staff projects, there are quite a few scenarios where you might have to collaborate both inside and outside of the classroom.
Make collaboration and communication simpler by sharing projects and assigning tasks in Todoist. Everyone will be able to see who needs to do what and when. You can even discuss details and share files in task and project comments. Your teammates will get notified via push notification or email about any new comments. Unlike email, everything stays organized and accessible for everyone.
To get started, create a new project for your team, and click on the “share” icon in the top-right corner in project view. Then, simply enter the email addresses of the people you want to collaborate with. They’ll receive an email inviting them to view the project in Todoist (and to create a Todoist account if they don’t already have one).
Make meetings more efficient
You probably have one or more staff meetings throughout the week. Todoist can help get everyone on the same page beforehand and ensure that agreed upon actions are completed afterward. Here’s how to do it:
- Create a new meeting project, and share it with the relevant people.
- Create a new task for each meeting with the associated due date and time (unassigned tasks in a shared project that have a date and time will show up on everyone’s to-do lists).
- Before the meeting, attach the agenda to the task comments, and post a comment asking for additional questions or topics that need to be discussed.
- During the meeting, as new action items come up, create new tasks and assign them to the person responsible.
- Afterward, post the meeting minutes to the task so everyone will have a record of what was discussed.
Prioritizing professional development
While working to develop the expertise of your students, don’t forget to make time to develop yourself professionally –– whether that’s keeping up with industry publications or attending conferences with people in your field.
Save articles to read later
Staying up to date on new developments related to your field can help you retain your expertise (and continually improve your students’ learning). Naturally, one of the best ways to hone your expertise and stay abreast of the latest in your field is through reading.
Use Todoist’s extensions for Chrome, Firefox, or Safari to save links to interesting articles and papers that you find in academic journals, industry email newsletters, and authoritative websites directly to your Todoist task list for later reading.
Pursue opportunities to follow up on
Aside from self-directed professional development, there are countless conferences, conventions, events, webinars, and workshops that aim to equip you with information about the best teaching styles to support students or the most recent developments in your field. Seeking out these opportunities can be invaluable in pushing your career forward and informing the way you approach teaching.
- Create a Todoist project called “Professional Development”: Add all the professional development opportunities advertised through your institution, posted to your school’s online portal, or discovered online as tasks.
- Assign a deadline: Once you decide on the ones you would like to pursue, add a due date and begin working on any necessary applications or letters of intent.
- Get the most out of it: While you’re away from the classroom at a professional development event, stay alert for any action items that you could implement immediately. This could be a suggestion from a speaker or an idea explored during a break-out session. Add these action items to a specific project or to your Todoist Inbox to sort through later.
Being responsible for teaching anywhere from a handful to hundreds of students can be exhausting. This can be especially the case during the pandemic, where teaching over Zoom — with student cameras often off — can feel like speaking into the void. It’s important to take the time to relax and do the things that re-energize you outside of your physical or virtual classroom. Use Todoist to help make them a priority.
- Add your routines to Todoist: Ease into the morning or wind down at night by including your routines as tasks. For instance, create a recurring “morning routine” task with subtasks like “stretching,” “coffee,” and “play with the kids.”
- Create projects and tasks for your personal life: Create projects like “Cooking” to save new recipes and “Movies” to add must-see flicks. Add tasks with specific due dates and times for things that you know boost your mood — for example, exercise, reading, or spending time with friends and family.
- Add your personal projects to Todoist favorites: Stay balanced and keep your personal life top-of-mind by keeping projects like “Health” and “Hobbies” where you can always see them.
Hans Smits, a Special Education teacher has found that using Todoist has brought him a level of calm in a high-stress job:
“Many of my colleagues are under pressure, but thanks to Todoist I am the one who is ready at 3pm.”
If you have tips that have helped you stay organized and productive as a teacher, we want to hear about them in the comments below!
Ready to get organized for the coming school year? Create your own Todoist for free. Could your students use a little help in the organization department? Share our accompanying Student’s Guide to Todoist. | <urn:uuid:729dcc86-2221-4af2-8850-7329ac0ad53a> | CC-MAIN-2022-33 | https://blog.doist.com/an-educators-guide-to-todoist/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573029.81/warc/CC-MAIN-20220817153027-20220817183027-00296.warc.gz | en | 0.930897 | 4,431 | 2.703125 | 3 |
In mathematics, curvature is any of several strongly related concepts in geometry. Intuitively, the curvature is the amount by which a curve deviates from being a straight line, or a surface deviates from being a plane.
For curves, the canonical example is that of a circle, which has a curvature equal to the reciprocal of its radius. Smaller circles bend more sharply, and hence have higher curvature. The curvature at a point of a differentiable curve is the curvature of its osculating circle, that is the circle that best approximates the curve near this point. The curvature of a straight line is zero. In contrast to the tangent, which is a vector quantity, the curvature at a point is typically a scalar quantity, that is, it is expressed by a single real number.
For surfaces (and, more generally, for higher-dimensional manifolds) that are embedded in a Euclidean space, the concept of curvature is more complex, as it depends on the choice of a direction on the surface or manifold. This leads to the concepts of maximal curvature, minimal curvature, and mean curvature.
For Riemannian manifolds (of dimension at least two) that are not necessarily embedded in a Euclidean space, one can define the curvature intrinsically, that is without referring to an external space. See Curvature of Riemannian manifolds for the definition, which is done in terms of lengths of curves traced on the manifold, and expressed, using linear algebra, by the Riemann curvature tensor.
In Tractatus de configurationibus qualitatum et motuum, the 14th-century philosopher and mathematician Nicole Oresme introduces the concept of curvature as a measure of departure from straightness; for circles he takes the curvature to be inversely proportional to the radius, and he attempts to extend this idea to other curves as a continuously varying magnitude.
The curvature of a differentiable curve was originally defined through osculating circles. In this setting, Augustin-Louis Cauchy showed that the center of curvature is the intersection point of two infinitely close normal lines to the curve.
Intuitively, the curvature describes for any part of a curve how much the curve direction changes over a small distance travelled (e.g. angle in rad/m), so it is a measure of the instantaneous rate of change of direction of a point that moves on the curve: the larger the curvature, the larger this rate of change. In other words, the curvature measures how fast the unit tangent vector to the curve rotates (fast in terms of curve position). In fact, it can be proved that this instantaneous rate of change is exactly the curvature. More precisely, suppose that the point is moving on the curve at a constant speed of one unit, that is, the position of the point P(s) is a function of the parameter s, which may be thought of as the time or as the arc length from a given origin. Let T(s) be a unit tangent vector of the curve at P(s), which is also the derivative of P(s) with respect to s. Then, the derivative of T(s) with respect to s is a vector that is normal to the curve and whose length is the curvature.
To be meaningful, the definition of the curvature and its different characterizations require that the curve is continuously differentiable near P, so that it has a tangent that varies continuously; they also require that the curve is twice differentiable at P, to ensure the existence of the involved limits and of the derivative of T(s).
The characterization of the curvature in terms of the derivative of the unit tangent vector is probably less intuitive than the definition in terms of the osculating circle, but formulas for computing the curvature are easier to deduce. Therefore, and also because of its use in kinematics, this characterization is often given as a definition of the curvature.
Historically, the curvature of a differentiable curve was defined through the osculating circle, which is the circle that best approximates the curve at a point. More precisely, given a point P on a curve, every other point Q of the curve defines a circle (or sometimes a line) passing through Q and tangent to the curve at P. The osculating circle is the limit, if it exists, of this circle when Q tends to P. Then the center and the radius of curvature of the curve at P are the center and the radius of the osculating circle. The curvature is the reciprocal of the radius of curvature. That is, the curvature is

κ = 1/R,

where R is the radius of curvature (the whole circle has this curvature; it can be read as a turn of 2π over the length 2πR).
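The limit definition can be illustrated numerically. The following minimal Python sketch is not part of the original article; the parabola y = x², the step sizes, and the helper circumradius are choices made purely for illustration. It estimates the curvature of y = x² at the origin as the reciprocal of the radius of the circle through three nearby points; as the points close in on the origin, the estimate approaches the exact value κ = 2.

```python
# Illustrative sketch: approximate the osculating circle of y = x^2 at the
# origin by the circle through three nearby points and report 1/radius.
import math

def circumradius(p, q, r):
    # Radius of the circle through three points: R = abc / (4 * area).
    a = math.dist(q, r)
    b = math.dist(p, r)
    c = math.dist(p, q)
    area = abs((q[0] - p[0]) * (r[1] - p[1]) - (r[0] - p[0]) * (q[1] - p[1])) / 2
    return a * b * c / (4 * area)

f = lambda x: x * x
for h in (0.5, 0.1, 0.01):
    points = [(-h, f(-h)), (0.0, f(0.0)), (h, f(h))]
    print(h, 1 / circumradius(*points))   # tends to 2 as h shrinks
```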
This definition is difficult to manipulate and to express in formulas. Therefore, other equivalent definitions have been introduced.
Every differentiable curve can be parametrized with respect to arc length. In the case of a plane curve, this means the existence of a parametrization γ(s) = (x(s), y(s)), where x and y are real-valued differentiable functions whose derivatives satisfy

x′(s)² + y′(s)² = 1.

This means that the tangent vector

T(s) = (x′(s), y′(s))

has a norm equal to one and is thus a unit tangent vector.
If the curve is twice differentiable, that is, if the second derivatives of x and y exist, then the derivative of T(s) exists. This vector is normal to the curve, its norm is the curvature κ(s), and it is oriented toward the center of curvature. That is,
Moreover, as the radius of curvature is
and the center of curvature is on the normal to the curve, the center of curvature is the point
If N(s) is the unit normal vector obtained from T(s) by a counterclockwise rotation of π/2, then
with k(s) = ± κ(s). The real number k(s) is called the oriented curvature or signed curvature. It depends on both the orientation of the plane (definition of counterclockwise), and the orientation of the curve provided by the parametrization. In fact, the change of variable s → –s provides another arc-length parametrization, and changes the sign of k(s).
Let γ(t) = (x(t), y(t)) be a proper parametric representation of a twice differentiable plane curve. Here proper means that on the domain of definition of the parametrization, the derivative dγ/dt is defined, differentiable and nowhere equal to the zero vector.
With such a parametrization, the signed curvature is
where primes refer to derivatives with respect to t. The curvature κ is thus
These can be expressed in a coordinate-free way as
These formulas can be derived from the special case of arc-length parametrization in the following way. The above condition on the parametrisation imply that the arc length s is a differentiable monotonic function of the parameter t, and conversely that t is a monotonic function of s. Moreover, by changing, if needed, s to –s, one may suppose that these functions are increasing and have a positive derivative. Using notation of the preceding section and the chain rule, one has
and thus, by taking the norm of both sides
where the prime denotes differentiation with respect to t.
The curvature is the norm of the derivative of T with respect to s. By using the above formula and the chain rule this derivative and its norm can be expressed in terms of γ′ and γ″ only, with the arc-length parameter s completely eliminated, giving the above formulas for the curvature.
The graph of a function y = f(x), is a special case of a parametrized curve, of the form
As the first and second derivatives of x are 1 and 0, previous formulas simplify to
for the curvature, and to
for the signed curvature.
In the general case of a curve, the sign of the signed curvature is somewhat arbitrary, as it depends on the orientation of the curve. In the case of the graph of a function, there is a natural orientation by increasing values of x. This makes significant the sign of the signed curvature.
The sign of the signed curvature is the same as the sign of the second derivative of f. If it is positive then the graph has an upward concavity, and, if it is negative the graph has a downward concavity. It is zero, then one has an inflection point or an undulation point.
When the slope of the graph (that is the derivative of the function) is small, the signed curvature is well approximated by the second derivative. More precisely, using big O notation, one has
It is common in physics and engineering to approximate the curvature with the second derivative, for example, in beam theory or for deriving wave equation of a tense string, and other applications where small slopes are involved. This often allows systems that are otherwise nonlinear to be considered as linear.
If a curve is defined in polar coordinates by the radius expressed as a function of the polar angle, that is r is a function of θ, then its curvature is
where the prime refers to differentiation with respect to θ.
This results from the formula for general parametrizations, by considering the parametrization
For a curve defined by an implicit equation F(x, y) = 0 with partial derivatives denoted Fx, Fy, Fxx, Fxy, Fyy, the curvature is given by
The signed curvature is not defined, as it depends on an orientation of the curve that is not provided by the implicit equation. Also, changing F into –F does not change the curve, but changes the sign of the numerator if the absolute value is omitted in the preceding formula.
A point of the curve where Fx = Fy = 0 is a singular point, which means that the curve is not differentiable at this point, and thus that the curvature is not defined (most often, the point is either a crossing point or a cusp).
Above formula for the curvature can be derived from the expression of the curvature of the graph of a function by using the implicit function theorem and the fact that, on such a curve, one has
It can be useful to verify on simple examples that the different formulas given in the preceding sections give the same result.
A common parametrization of a circle of radius r is γ(t) = (r cos t, r sin t). The formula for the curvature gives
It follows, as expected, that the radius of curvature is the radius of the circle, and that the center of curvature is the center of the circle.
The circle is a rare case where the arc-length parametrization is easy to compute, as it is
It is an arc-length parametrization, since the norm of
is equal to one. This parametrization gives the same value for the curvature, as it amounts to division by r3 in both the numerator and the denominator in the preceding formula.
The same circle can also be defined by the implicit equation F(x, y) = 0 with F(x, y) = x2 + y2 – r2. Then, the formula for the curvature in this case gives
Consider the parabola y = ax2 + bx + c.
It is the graph of a function, with derivative 2ax + b, and second derivative 2a. So, the signed curvature is
It has the sign of a for all values of x. This means that, if a > 0, the concavity is upward directed everywhere; if a < 0, the concavity is downward directed; for a = 0, the curvature is zero everywhere, confirming that the parabola degenerates into a line in this case.
The (unsigned) curvature is maximal for x = –b/2a, that is at the stationary point (zero derivative) of the function, which is the vertex of the parabola.
Consider the parametrization γ(t) = (t, at2 + bt + c) = (x, y). The first derivative of x is 1, and the second derivative is zero. Substituting into the formula for general parametrizations gives exactly the same result as above, with x replaced by t. If we use primes for derivatives with respect to the parameter t.
The same parabola can also be defined by the implicit equation F(x, y) = 0 with F(x, y) = ax2 + bx + c – y. As Fy = –1, and Fyy = Fxy = 0, one obtains exactly the same value for the (unsigned) curvature. However, the signed curvature is meaningless here, as –F(x, y) = 0 is a valid implicit equation for the same parabola, which gives the opposite sign for the curvature.
The expression of the curvature In terms of arc-length parametrization is essentially the first Frenet–Serret formula
where the primes refer to the derivatives with respect to the arc length s, and N(s) is the normal unit vector in the direction of T′(s).
As planar curves have zero torsion, the second Frenet–Serret formula provides the relation
For a general parametrization by a parameter t, one needs expressions involving derivatives with respect to t. As these are obtained by multiplying by ds/dt the derivatives with respect to s, one has, for any proper parametrization
As in the case of curves in two dimensions, the curvature of a regular space curve C in three dimensions (and higher) is the magnitude of the acceleration of a particle moving with unit speed along a curve. Thus if γ(s) is the arc-length parametrization of C then the unit tangent vector T(s) is given by
and the curvature is the magnitude of the acceleration:
The direction of the acceleration is the unit normal vector N(s), which is defined by
The plane containing the two vectors T(s) and N(s) is the osculating plane to the curve at γ(s). The curvature has the following geometrical interpretation. There exists a circle in the osculating plane tangent to γ(s) whose Taylor series to second order at the point of contact agrees with that of γ(s). This is the osculating circle to the curve. The radius of the circle R(s) is called the radius of curvature, and the curvature is the reciprocal of the radius of curvature:
The tangent, curvature, and normal vector together describe the second-order behavior of a curve near a point. In three dimensions, the third-order behavior of a curve is described by a related notion of torsion, which measures the extent to which a curve tends to move as a helical path in space. The torsion and curvature are related by the Frenet–Serret formulas (in three dimensions) and their generalization (in higher dimensions).
For a parametrically-defined space curve in three dimensions given in Cartesian coordinates by γ(t) = (x(t), y(t), z(t)), the curvature is
where the prime denotes differentiation with respect to the parameter t. This can be expressed independently of the coordinate system by means of the formula
where × denotes the vector cross product. This last formula is valid for the curvature of curves in a Euclidean space of any dimension:
Given two points P and Q on C, let s(P,Q) be the arc length of the portion of the curve between P and Q and let d(P,Q) denote the length of the line segment from P to Q. The curvature of C at P is given by the limit
where the limit is taken as the point Q approaches P on C. The denominator can equally well be taken to be d(P,Q)3. The formula is valid in any dimension. Furthermore, by considering the limit independently on either side of P, this definition of the curvature can sometimes accommodate a singularity at P. The formula follows by verifying it for the osculating circle.
For broader coverage of this topic, see Differential geometry of surfaces.
The curvature of curves drawn on a surface is the main tool for the defining and studying the curvature of the surface.
For a curve drawn on a surface (embedded in three-dimensional Euclidean space), several curvatures are defined, which relates the direction of curvature to the surface's unit normal vector, including the:
Any non-singular curve on a smooth surface has its tangent vector T contained in the tangent plane of the surface. The normal curvature, kn, is the curvature of the curve projected onto the plane containing the curve's tangent T and the surface normal u; the geodesic curvature, kg, is the curvature of the curve projected onto the surface's tangent plane; and the geodesic torsion (or relative torsion), τr, measures the rate of change of the surface normal around the curve's tangent.
Let the curve be arc-length parametrized, and let t = u × T so that T, t, u form an orthonormal basis, called the Darboux frame. The above quantities are related by:
Main article: Principal curvature
All curves on the surface with the same tangent vector at a given point will have the same normal curvature, which is the same as the curvature of the curve obtained by intersecting the surface with the plane containing T and u. Taking all possible tangent vectors, the maximum and minimum values of the normal curvature at a point are called the principal curvatures, k1 and k2, and the directions of the corresponding tangent vectors are called principal normal directions.
Curvature can be evaluated along surface normal sections, similar to § Curves on surfaces above (see for example the Earth radius of curvature).
Some curved surfaces, such as those made from a smooth sheet of paper, can be flattened down into the plane without distorting their intrinsic features in any way. Such developable surfaces have zero Gaussian curvature (see below).
Main article: Gaussian curvature
In contrast to curves, which do not have intrinsic curvature, but do have extrinsic curvature (they only have a curvature given an embedding), surfaces can have intrinsic curvature, independent of an embedding. The Gaussian curvature, named after Carl Friedrich Gauss, is equal to the product of the principal curvatures, k1k2. It has a dimension of length−2 and is positive for spheres, negative for one-sheet hyperboloids and zero for planes and cylinders. It determines whether a surface is locally convex (when it is positive) or locally saddle-shaped (when it is negative).
Gaussian curvature is an intrinsic property of the surface, meaning it does not depend on the particular embedding of the surface; intuitively, this means that ants living on the surface could determine the Gaussian curvature. For example, an ant living on a sphere could measure the sum of the interior angles of a triangle and determine that it was greater than 180 degrees, implying that the space it inhabited had positive curvature. On the other hand, an ant living on a cylinder would not detect any such departure from Euclidean geometry; in particular the ant could not detect that the two surfaces have different mean curvatures (see below), which is a purely extrinsic type of curvature.
Formally, Gaussian curvature only depends on the Riemannian metric of the surface. This is Gauss's celebrated Theorema Egregium, which he found while concerned with geographic surveys and mapmaking.
An intrinsic definition of the Gaussian curvature at a point P is the following: imagine an ant which is tied to P with a short thread of length r. It runs around P while the thread is completely stretched and measures the length C(r) of one complete trip around P. If the surface were flat, the ant would find C(r) = 2πr. On curved surfaces, the formula for C(r) will be different, and the Gaussian curvature K at the point P can be computed by the Bertrand–Diguet–Puiseux theorem as
The integral of the Gaussian curvature over the whole surface is closely related to the surface's Euler characteristic; see the Gauss–Bonnet theorem.
The discrete analog of curvature, corresponding to curvature being concentrated at a point and particularly useful for polyhedra, is the (angular) defect; the analog for the Gauss–Bonnet theorem is Descartes' theorem on total angular defect.
Because (Gaussian) curvature can be defined without reference to an embedding space, it is not necessary that a surface be embedded in a higher-dimensional space in order to be curved. Such an intrinsically curved two-dimensional surface is a simple example of a Riemannian manifold.
Main article: Mean curvature
The mean curvature is an extrinsic measure of curvature equal to half the sum of the principal curvatures, k1 + k2/2. It has a dimension of length−1. Mean curvature is closely related to the first variation of surface area. In particular, a minimal surface such as a soap film has mean curvature zero and a soap bubble has constant mean curvature. Unlike Gauss curvature, the mean curvature is extrinsic and depends on the embedding, for instance, a cylinder and a plane are locally isometric but the mean curvature of a plane is zero while that of a cylinder is nonzero.
Main article: Second fundamental form
The intrinsic and extrinsic curvature of a surface can be combined in the second fundamental form. This is a quadratic form in the tangent plane to the surface at a point whose value at a particular tangent vector X to the surface is the normal component of the acceleration of a curve along the surface tangent to X; that is, it is the normal curvature to a curve tangent to X (see above). Symbolically,
where N is the unit normal to the surface. For unit tangent vectors X, the second fundamental form assumes the maximum value k1 and minimum value k2, which occur in the principal directions u1 and u2, respectively. Thus, by the principal axis theorem, the second fundamental form is
Thus the second fundamental form encodes both the intrinsic and extrinsic curvatures.
Further information: Shape operator
An encapsulation of surface curvature can be found in the shape operator, S, which is a self-adjoint linear operator from the tangent plane to itself (specifically, the differential of the Gauss map).
For a surface with tangent vectors X and normal N, the shape operator can be expressed compactly in index summation notation as
(Compare the alternative expression of curvature for a plane curve.)
The Weingarten equations give the value of S in terms of the coefficients of the first and second fundamental forms as
The principal curvatures are the eigenvalues of the shape operator, the principal curvature directions are its eigenvectors, the Gauss curvature is its determinant, and the mean curvature is half its trace.
Further information: Curvature of Riemannian manifolds
"Curvature of space" redirects here. Not to be confused with Curvature of space-time.
By extension of the former argument, a space of three or more dimensions can be intrinsically curved. The curvature is intrinsic in the sense that it is a property defined at every point in the space, rather than a property defined with respect to a larger space that contains it. In general, a curved space may or may not be conceived as being embedded in a higher-dimensional ambient space; if not then its curvature can only be defined intrinsically.
After the discovery of the intrinsic definition of curvature, which is closely connected with non-Euclidean geometry, many mathematicians and scientists questioned whether ordinary physical space might be curved, although the success of Euclidean geometry up to that time meant that the radius of curvature must be astronomically large. In the theory of general relativity, which describes gravity and cosmology, the idea is slightly generalised to the "curvature of spacetime"; in relativity theory spacetime is a pseudo-Riemannian manifold. Once a time coordinate is defined, the three-dimensional space corresponding to a particular time is generally a curved Riemannian manifold; but since the time coordinate choice is largely arbitrary, it is the underlying spacetime curvature that is physically significant.
Although an arbitrarily curved space is very complex to describe, the curvature of a space which is locally isotropic and homogeneous is described by a single Gaussian curvature, as for a surface; mathematically these are strong conditions, but they correspond to reasonable physical assumptions (all points and all directions are indistinguishable). A positive curvature corresponds to the inverse square radius of curvature; an example is a sphere or hypersphere. An example of negatively curved space is hyperbolic geometry. A space or space-time with zero curvature is called flat. For example, Euclidean space is an example of a flat space, and Minkowski space is an example of a flat spacetime. There are other examples of flat geometries in both settings, though. A torus or a cylinder can both be given flat metrics, but differ in their topology. Other topologies are also possible for curved space. See also shape of the universe.
The mathematical notion of curvature is also defined in much more general contexts. Many of these generalizations emphasize different aspects of the curvature as it is understood in lower dimensions.
One such generalization is kinematic. The curvature of a curve can naturally be considered as a kinematic quantity, representing the force felt by a certain observer moving along the curve; analogously, curvature in higher dimensions can be regarded as a kind of tidal force (this is one way of thinking of the sectional curvature). This generalization of curvature depends on how nearby test particles diverge or converge when they are allowed to move freely in the space; see Jacobi field.
Another broad generalization of curvature comes from the study of parallel transport on a surface. For instance, if a vector is moved around a loop on the surface of a sphere keeping parallel throughout the motion, then the final position of the vector may not be the same as the initial position of the vector. This phenomenon is known as holonomy. Various generalizations capture in an abstract form this idea of curvature as a measure of holonomy; see curvature form. A closely related notion of curvature comes from gauge theory in physics, where the curvature represents a field and a vector potential for the field is a quantity that is in general path-dependent: it may change if an observer moves around a loop.
Two more generalizations of curvature are the scalar curvature and Ricci curvature. In a curved surface such as the sphere, the area of a disc on the surface differs from the area of a disc of the same radius in flat space. This difference (in a suitable limit) is measured by the scalar curvature. The difference in area of a sector of the disc is measured by the Ricci curvature. Each of the scalar curvature and Ricci curvature are defined in analogous ways in three and higher dimensions. They are particularly important in relativity theory, where they both appear on the side of Einstein's field equations that represents the geometry of spacetime (the other side of which represents the presence of matter and energy). These generalizations of curvature underlie, for instance, the notion that curvature can be a property of a measure; see curvature of a measure.
Another generalization of curvature relies on the ability to compare a curved space with another space that has constant curvature. Often this is done with triangles in the spaces. The notion of a triangle makes senses in metric spaces, and this gives rise to CAT(k) spaces. | <urn:uuid:69041914-41bf-4250-9812-b8e1ff8a7391> | CC-MAIN-2022-33 | https://db0nus869y26v.cloudfront.net/en/Curvature | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572908.71/warc/CC-MAIN-20220817122626-20220817152626-00097.warc.gz | en | 0.924527 | 6,069 | 3.953125 | 4 |
Triage is the medical assessment of patients to establish their priority for treatment. When medical resources are limited and immediate treatment of all patients is impossible, patients are sorted in order to use the resources most effectively. The process of triage was first developed and refined in military medicine, and later extended to disaster and emergency medicine.
In recent years, it has become common to use the term triage in a wide variety of contexts where decisions are made about allocating scarce medical resources. However, triage should not be confused with more general expressions such as allocation or rationing (Childress). Triage is a process of screening patients on the basis of their immediate medical needs and the likelihood of medical success in treating those needs. Unlike the everyday practice of allocating medical resources, triage usually takes place in urgent circumstances, requiring quick decisions about the critical care of a pool of patients. Generally, these decisions are controlled by a mixture of utilitarian and egalitarian considerations.
Baron Dominique Jean Larrey, Napoleon's chief medical officer, is credited with organizing the first deliberate plan for classifying military casualties (Hinds, 1975). Larrey was proud of his success in treating battle casualties despite severe scarcity of medical resources. He insisted that those who were most seriously wounded be treated first, regardless of rank (Larrey). Although there is no record of Larrey's using the term triage, his plan for sorting casualties significantly influenced later military medicine.
The practice of systematically sorting battle casualties first became common during World War I. It was also at this time that the term triage entered British and U.S. military medicine from the French (Lynch, Ford, and Weed). Originally, triage (from the French verb trier, "to sort") referred to the process of sorting agricultural products such as wool and coffee. In military medicine, triage was first used both for the process of prioritizing casualty treatment and for the place where such screening occurred. At the poste de triage (casualty clearing station), casualties were assessed for the severity of their wounds and the need for rapid evacuation to hospitals in the rear. The emphasis was on determining need for immediate treatment and the feasibility of transport.
The following triage categories have become standard, even though terminology may vary:
- Minimal. Those whose injuries are slight and require little or no professional care.
- Immediate. Those whose injuries, such as airway obstruction or hemorrhaging, require immediate medical treatment for survival.
- Delayed. Those whose injuries, such as burns or closed fractures of bones, require significant professional attention that can be delayed for some period of time without significant increase in the likelihood of death or disability.
- Expectant. Those whose injuries are so extensive that there is little or no hope of survival, given the available medical resources.
First priority is given to those in the immediate group. Next, as time and resources permit, care is given to the delayed group. Little, beyond minimal efforts to provide comfort care, is given to those in the expectant category. Active euthanasia for expectant casualties has been considered but is almost never mentioned in triage proposals (British Medical Association, 1988). Those in the minimal group are sent to more distant treatment facilities or left to take care of themselves until all other medical needs are met.
From the beginning, the expressed reasons for such sorting were a blend of utilitarian and egalitarian considerations. Larrey stressed equality of care for casualties sorted into the same categories. On the other hand, one early text on military medicine advised, "The greatest good of the greatest number must be the rule" (Keen, p. 13). Over the years, it also became clear that the utilitarian principle could be interpreted in different ways. The most obvious meaning was that of limited medical utility: The good to be sought was saving the greatest number of casualties' lives.
But the principle could also be construed to mean doing the greatest good for the military effort. When interpreted this way, triage could produce very different priorities. For example, it was sometimes proposed that priority be given to the least injured in order to return them quickly to battle (Lee). An oft-cited example of the second use of the utilitarian principle for triage occurred during World War II (Beecher). Commanders of U.S. forces in North Africa had to decide how to use their extremely limited supply of penicillin. The choice was between battle casualties with infected wounds and soldiers with gonorrhea. The decision was made to give priority to those with venereal disease, on the grounds that they could most quickly be returned to battle preparedness. A similar decision was made in Great Britain to favor members of bomber crews who had contracted venereal disease, because they were deemed most valuable to the continuation of the war effort (Hinds, 1975).
As military triage has evolved during the twentieth century, the goal of maintaining fighting strength has increasingly become the dominant, stated goal. In the words of surgeons Gilbert W. Beebe and Michael E. DeBakey, "Traditionally, the military value of surgery lies in the salvage of battle casualties. This is not merely a matter of saving life; it is primarily one of returning the wounded to duty, and the earlier the better" (p. 216).
The nuclear weapons used at the end of World War II introduced unprecedented destructive power. In the nuclear age, triage plans have had to include the possibility of overwhelming numbers of hopelessly injured civilians. In earlier days, it was not uncommon to plan for 1,000 or 2,000 casualties from a single battle. Now, triage planners must consider the likelihood that a single nuclear weapon could produce a hundred times as many casualties or more. At the same time a single blast could destroy much of a community's medical capacity. Such probabilities have led some analysts to wonder if triage would be a realistic expectation following a nuclear attack (British Medical Association, 1983).
Triage has moved from military into civilian medicine in two prominent areas: the care of disaster victims and the operation of hospital emergency departments. In both areas, the categories and many of the strategies of military medicine have been adopted.
The necessity of triage in hospital emergency departments is due, in part, to the fact that a number of patients needing immediate emergency care may arrive almost simultaneously and temporarily overwhelm the hospital's emergency resources (Kipnis). More often, however, the need for triage in hospital emergency departments stems from the fact that the majority of patients are waiting for routine care and do not have emergent conditions. Thus, screening patients to determine which ones need immediate treatment has become increasingly important. Emergency-department triage is often conducted by specially-educated nurses using elaborate methods of scoring for severity of injury or illness (Purnell; Wiebe and Rosen; Grossman).
The traditional ethic of medicine obligates healthcare professionals to protect the interests of patients as individuals and to treat people equally on the basis of their medical needs. These same commitments to fidelity and equality have, at times, been prescribed for the treatment of war casualties. For example, the Geneva Conventions call for medical treatment of all casualties of war strictly on the basis of medical criteria, without regard for any other considerations (International Committee of the Red Cross; Baker and Strosberg). However, this principle of equal treatment based solely on medical needs and the likelihood of medical success has competed with utilitarian considerations in military medicine. In such triage, healthcare professionals have sometimes thought of patients as aggregates and given priority to goals such as preserving military strength; loyalty to the individual patient has, at times, been set aside in order to accomplish the most good or prevent the most harm. The good that might have been accomplished for one has been weighed against what the same amount of effort and resources could do for others. The tension between keeping faith with the individual patient and the utilitarian goal of seeking the greatest good for the greatest number is the primary ethical issue arising from triage.
Triage generates a number of additional ethical questions. To what extent are the utilitarian goals of military or disaster triage appropriate in the more common circumstances of allocating everyday medical care, such as beds in an intensive care unit? If some casualties of war or disaster are categorized as hopeless, what care, if any, should they be accorded? Should their care include active euthanasia? Should healthcare professionals join in the triage planning for nuclear war if they are morally opposed to the policies that include the possibility of such war (Leaning, 1988)? What new issues arise for triage in a time of global terrorism (Kipnis)?
Triage is a permanent feature of contemporary medical care in military, disaster, and emergency settings. As medical research continues to produce new and costly therapies, it will continue to be tempting to import the widely accepted principles of triage for decisions about who gets what care. Indeed, whenever conditions of scarcity necessitate difficult decisions about the distribution of burdens and benefits, the language and tenets of medical triage may present an apparently attractive model. This is true for issues as far from medical care as world hunger and population control (Hardin; Hinds, 1976). The moral wisdom of appropriating the lessons of medical triage for such diverse social problems is doubtful and should be carefully questioned. Otherwise, utilitarian considerations often associated with triage may dominate issues better addressed in terms of loyalty, personal autonomy, or distributive justice (Baker and Strosberg).
gerald r. winslow (1995)
revised by author
Baker, Robert, and Strosberg, Martin. 1992. "Triage and Equality: An Historical Reassessment of Utilitarian Analyses of Triage." Kennedy Institute of Ethics Journal 2: 103–123.
Beebe, Gilbert W., and DeBakey, Michael E. 1952. Battle Casualties: Incidence, Mortality, and Logistic Considerations. Springfield, IL: Charles C. Thomas.
Beecher, Henry K. 1970. "Scarce Resources and Medical Advancement." In Experimentation with Human Subjects, ed. Paul A. Freund. New York: George Braziller.
British Medical Association. 1983. The Medical Effects of Nuclear War. Chichester, UK: John Wiley and Sons.
British Medical Association. 1988. Selection of Casualties for Treatment After Nuclear Attack: A Document for Discussion. London: Author.
Burkle, Frederick M. 1984. "Triage." In Disaster Medicine: Application for the Immediate Management and Triage of Civilian and Military Disaster Victims, ed. Frederick M. Burkle, Jr., Patricia H. Sanner, and Barry W. Wolcott. New Hyde Park, NY: Medical Examination.
Childress, James F. 1983. "Triage in Neonatal Intensive Care: The Limitations of a Metaphor." Virginia Law Review 69: 547–561.
Grossman, Valerie G.A. 1999. Quick Reference to Triage. Philadelphia: Lippincott Williams and Wilkins.
Hardin, Garrett. 1980. Promethean Ethics: Living with Death, Competition, and Triage. Seattle: University of Washington Press.
Hinds, Stuart. 1975. "Triage in Medicine: A Personal History." In Triage in Medicine and Society: Inquiries into Medical Ethics, ed. George R. Lucas, Jr. Houston, TX: Institute of Religion and Human Development.
Hinds, Stuart. 1976. "Relations of Medical Triage to World Famine: A History." In Lifeboat Ethics: The Moral Dilemmas of World Hunger, ed. George R. Lucas, Jr., and Thomas W. Ogletree. New York: Harper and Row.
International Committee of the Red Cross. 1977. "Geneva Conventions: Protocol I, Additional to the Geneva Conventions of 12 August 1949, Relating to the Protection of Victims of International Armed Conflicts (1977)." In Encyclopedia of Human Rights, ed. Edward Lawson. New York: Taylor and Francis.
Keen, William W. 1917. The Treatment of War Wounds. Philadelphia: W. B. Saunders.
Kipnis, Kenneth. 2003. "Overwhelming Casualties: Medical Ethics in a Time of Terror." In After the Terror: Medicine and Morality in a Time of Crisis, ed. Jonathan D. Moreno. Cambridge, MA: MIT Press.
Larrey, Dominique Jean. 1832. Surgical Memoirs of the Campaign in Russia, tr. J. Mercer. Philadelphia: Cowley and Lea.
Leaning, Jennifer. 1986. "Burn and Blast Casualties: Triage in Nuclear War." In The Medical Implications of Nuclear War, ed. Fredric Solomon and Robert Q. Marston. Washington, D.C.: National Academy Press.
Leaning, Jennifer. 1988. "Physicians, Triage, and Nuclear War." Lancet 2(8605): 269–270.
Lee, Robert I. 1917. "The Case for the More Efficient Treatment of Light Casualties in Military Hospitals." Military Surgeon 42: 283–286.
Lynch, Charles; Ford, J. H.; and Weed, F. W. 1925. Field Operations: In General View of Medical Department Organization. Vol. 8 of The Medical Department of the United States Army in the World War. Washington, D.C.: U.S. Government Printing Office.
O'Donnell, Thomas J. 1960. "The Morality of Triage." Georgetown Medical Bulletin 14(1): 68–71.
Purnell, Larry D. 1991. "A Survey of Emergency Department Triage in 185 Hospitals." Journal of Emergency Nursing 17(6): 402–407.
Rund, Douglas A., and Rausch, Tondra S. 1981. Triage. St. Louis, MO: Mosby.
Vickery, Donald M. 1975. Triage: Problem-Oriented Sorting of Patients. Bowie, MD: Robert J. Brady.
Wiebe, Robert A., and Rosen, Linda M. 1991. "Triage in the Emergency Department." Emergency Medicine Clinics of North America 9(3): 491–503.
Winslow, Gerald. 1982. Triage and Justice. Berkeley: University of California Press.
The metaphor "triage" (a French word meaning "to pick or sort according to quality") gained entry into medical parlance from a military context in which Napoleon's chief surgeon, Jean Larrey, found it necessary to categorize wounded soldiers needing treatment according to a utilitarian principle: those whose wounds, even if left untreated, were such as not to preclude a return to the battlefield; those sustaining mortal wounds for whom treatment would be futile; those needing immediate attention for whom there would be hope for survival and eventual return to active duty. Only the last group would be given medical attention when human, medicinal, and facility resources had to be rationed. Strategies for "triaging" in times of warfare, natural disasters (e.g., earthquakes, famines, etc.), and civil defense planning have marked the modern era. Similarly and more routinely, contemporary health care practice necessitates the application of triage where patients must be sorted or prioritized because of restricted medical resources. Hospital emergency rooms often designate a triage nurse whose task it is to order those seeking treatment according to greatest need and best potential for benefit.
Medical Care. The highly technical nature of modern medicine has further contributed to the complexity of selecting patients for treatment. For example, advances in organ transplant technology utilizing both natural and artificial organs offer new hope to patients with life threatening vital organ failure, but the supply of transplantable organs remains limited and the selection of recipients presents an ethical as well as a logistical dilemma. In organ allocation the utilitarian questions of "Who has greatest need?" and "Who might benefit most?" are further complicated by possible considerations of social worth and equality of persons. Should younger patients with as yet untapped potential for social contribution be chosen over the retired, or those with disabling mental or physical handicap? If three patients are equal in need and in their potential to benefit from treatment, and there are resources for treating only one, what criteria or selection principle will accord with the traditional Christian belief in a fundamental obligation in justice to recognize the irreducible, inalienable equality of all persons?
Some ethicists (e.g., Joseph Fletcher), appealing to a pragmatic distributive or allocative justice, propose that we choose on the basis of the good of the greatest number or the social interest. Thus, a bank president and father of four children would be chosen to receive treatment over an unemployed single person or a prison inmate. Paul Ramsey, and most Roman Catholic moral theologians, espousing a principle of the absolute equality of persons (commutative justice), argues that selection among medically equal and suitable patients be by random choice (e.g., lottery, choosing straws, or "firstcome, first-served") so as to avoid reducing the value of persons to their social worth. To do otherwise, it is argued, is to enter upon a "slippery slope" with implications unacceptable in a Christian ethic. Decisions based on social worth criteria are highly relative and rooted in a value system in which power and material things take precedence over persons. Further, the power entrusted to selected decision or policy makers, who would be calculating and evaluating the social value of another, raises disturbing ethical questions about who decides and who decides who decides.
The power at stake here is not just power for persons but power over persons. Ramsey and others object that there are some things we can do which we ought not to do, things which in the extended calculus hold potential for disproportionate harm to the humanum which is to be sustained by a Christian ethic. In a more positive vein, Ramsey observes that blind or lottery selection of persons to benefit from rationed medical resources emulates God's own indiscriminate care for us.
The social distribution of health care also invokes the ethical consideration of triage when a choice must be made between providing for a few patients whose need is critical and those for whom there is immediate, though limited, potential for benefit; expensive, even esoteric, treatments (e.g., the artificial heart); and supplying a large number of persons, especially the poor and underprivileged, with more routine medical care and preventive medicine (e.g., vaccines, dietary supplements). Many Christian ethicists maintain that in public policy concerning health care
priority ought to be given to that kind of preventive medicine or treatment of acute disease which will raise the general standards of health, especially for the young, over elaborate modes of treatment for the aged or seriously handicapped (Ashley and O'Rourke, 240).
A factor in this position is a recognized distinction between Biblical justice and the justice prevalent in secular society. The latter is avowedly impartial and favors individualistic opportunism. Those who find access and the financial means to pay have a right to benefit. Biblical justice, on the other hand, is not impartial and individualistic, but biased in favor of the poor and decidedly social in its thrust (see option for the poor).
Social Triage. The "lifeboat ethics" conundrum is yet another example of the metaphor of triage, here, social triage. The world population explosion, with attendant world hunger, confronts the developed nations with a disturbing specter: providing medical aid and food to underdeveloped countries will insure burgeoning population growth and, ultimately, increased starvation, unless such aid is contingent upon compulsory population control. Garrett Hardin (1980) argues for such contingencies in his "lifeboat ethics" proposal, cautioning the developed countries against lowering their own standards of living and health care lest their children, who ensure the future of the human race, become similarly deprived and lose their edge. Hardin contends that no amount of aid can reverse the plight of the underdeveloped nations. His utilitarian ethic effectively dictates that one save oneself even at the cost of sacrificing the other.
Hardin's assessment of the imminence of the over-population crisis is disputed by others who, nonetheless, do acknowledge a significant socio-economic and political problem confronting the world community. Some Catholic ethicists contend that
the advanced countries by introducing modern medicine [into underdeveloped nations] … upset the ecological balance and produced a rapid population growth, without at the same time producing the standard of living which in developed countries motivates and facilitates responsible parenthood (Ashley and O'Rourke, 241).
Rather than "sailing away," the developed nations are bound by principles of distributive and Biblical justice to restore the balance which they helped to destroy by raising the standards of living and education in the underdeveloped world. When resources are scarce those who stand to benefit most from enhanced opportunity are those whose need is greatest.
Bibliography: b. m. ashley and k. d. o'rourke, Health Care Ethics: A Theological Analysis (rev. ed. St. Louis 1982). j. f. childress, "Who Shall Live When Not All Can Live?" Readings on Ethical and Social Issues in Biomedicine, r. w. wertz, ed. (New Jersey 1973) 143–153. r. a. mccormick, "Justice in Health Care," Health and Medicine in the Catholic Tradition (New York 1984) 75–85. g. outka, "Social Justice and Equal Access to Health Care," On Moral Medicine: Theological Perspectives in Medical Ethics, s. lammers and a. verhey, eds. (Michigan 1987) 632–642. a. verhey, "Sanctity and Scarcity: The Makings of Tragedy," ibid., 653–657. p. ramsey, The Patient as Person (New Haven, 1970); Ethics at the Edges of Life (New Haven 1977).
[r. m. friday]
tri·age / trēˈäzh; ˈtrēˌäzh/ • n. 1. the action of sorting according to quality.2. (in medical use) the assignment of degrees of urgency to wounds or illnesses to decide the order of treatment of a large number of patients or casualties.• v. [tr.] assign degrees of urgency to (wounded or ill patients).ORIGIN: early 18th cent.: from French, from trier ‘separate out.’ The medical sense dates from the 1930s, from the military system of assessing the wounded on the battlefield. | <urn:uuid:61f52ada-b0c1-4299-b5ce-0e2b0f756348> | CC-MAIN-2022-33 | https://www.encyclopedia.com/social-sciences-and-law/political-science-and-government/military-affairs-nonnaval/triage | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572163.61/warc/CC-MAIN-20220815085006-20220815115006-00696.warc.gz | en | 0.931724 | 4,674 | 3.65625 | 4 |
Rosa Luxemburg — Fragment on War, National Questions and Revolution
Translated and introduced by Rida Vaquas for Cosmonaut
It has long become a truism that Marxism failed to grasp the problem of nationalism, particularly as the second half of the twentieth century saw national revolutions flourish whilst socialist movements collapsed. As national identity cements itself as a political force in our times, the Communist Manifesto’s declaration that “national one-sidedness and narrowmindedness become more and more impossible” can strike some as impossibly glib. The globalization of capital, far from diminishing the prospects of the nation-state, has instead spawned many nationalisms and even shaken the stability of ‘settled’ nation-states. Both Britain and Spain have faced secessionist movements in recent years. In the wake of this theoretical “failure” of Marxism, the response of Marxists has too frequently been to pack up and go home, taking the failure for granted. Nowadays the claim of “the right of nations to self-determination” is the accepted solution to the national question, even when no plausible working out has been shown. The “Leninist position” has become reified as part of socialist political programmes in the 21st century, even as very little sets it apart from the principle of national self-determination advocated by the Democratic President Woodrow Wilson.
After more than half a century of socialists firmly embracing nation-states, perhaps it is time to re-evaluate this “failure”. Rather than understanding the principle of national self-determination as a necessary filling for a hole in Marxist theory, we should understand it as blasting the hole itself and calling on the bourgeoisie to fill it. It is time to shed light on the debates that took place within socialism before the principle of national self-determination became widely accepted as a necessary part of socialist programmes, in the period of the Second Socialist International between 1890 and 1914. This means examining the national question from the standpoint of peripheral socialist parties rather than the centers in Germany and Russia. Seriously appraising the defeated alternatives to national self-determination allows us to appreciate that the nation-state is not the final word in history.
Much of the historiography of the national question debate in this period frames it as a dispute between two leading personalities: Rosa Luxemburg and Vladimir Ilyich Lenin. Rosa Luxemburg, as co-founder of the Social Democracy of the Kingdom of Poland and Lithuania (henceforth SDKPiL), positioned her party against the social patriotism of the rival Polish Socialist Party (henceforth PPS), which demanded the restoration of Poland, then partitioned between Germany, Austria-Hungary, and Russia. She fought against the Polish claim to independence at the London Congress of the Second International in 1896 and consistently argued for the Polish socialists in Prussia to be integrated into the German Social Democratic Party (henceforth SPD) rather than organized as a separate party. Vladimir Lenin, on the other hand, writing from the heart of the Tsarist empire, understood the right of nations to self-determination as a matter of “special urgency” in a land where “subject peoples” on the peripheries of Great Russia experienced greater national oppression than was common in Europe.1
Rosa Luxemburg’s position has recently been characterized as effectively forming a bloc with the chauvinist bureaucracy of the SPD.2 Luxemburg has further been accused of underestimating the force of national oppression and hence of “international proletariat fundamentalism”.3 By examining the debate as it took place in the Second International as a whole, we can see that these assessments of the case against national self-determination are unsatisfactory, and re-appraise the positive legacy of revolutionary internationalism.
Meanwhile, Lenin’s position has been praised as “championing the rights of oppressed nations”,4 particularly as socialists came to identify with the national liberation movements of the 20th century. In this framework, the “liberation” of oppressed nations is the precondition of international working-class unity, and national struggle therefore clears the way for class struggle.
However, it is important to interrogate the consistency of Lenin’s position, and hence to dismantle the idea of a coherent Leninist position that follows from its conclusions. The right of nations to self-determination, as Lenin took care to emphasize, could not be equated with support for secessionist movements. In 1903, as the Russian Social Democratic Labour Party adopted national self-determination as part of its programme, Lenin argued against the calls of the PPS for the restoration of Poland that “it is only in isolated and exceptional cases that we can advance and actively support demands conducive to the establishment of a new class state”.5 This practical lack of sympathy for struggles of national independence was noted by contemporary socialist supporters of nationalism: the Ukrainian socialist Yurkevych polemicized that Lenin supported the right of national self-determination “for appearances’ sake” whilst in actuality being a “fervent defender of her [Russia’s] unity”.6 If the exercise of the right of national self-determination naturally leads to the formation of an independent state, then Lenin was politically opposed to it in many of the same cases as Luxemburg. This distinction may be lost upon later “Leninists”, such as the Scottish Socialist Party, which assumes Scottish independence to be an extension of national self-determination, but it should not be obscured from our view.
Moreover, Lenin’s position changed over the course of his political experience. The early Soviet government’s policy on nationalities required that “we must maintain and strengthen the union of socialist republics”.7 Instead of promoting secession, the Bolsheviks pursued a policy of Korenizatsiya (nativization), in which national minorities were promoted within their local bureaucracies and administrative institutions operated in the minority language. Hence national autonomy within a larger state was seen as an adequate guarantor of national rights for oppressed nations. Far from a consistent “Leninist position” of supporting the exercise of the right of national self-determination in nearly every case in which it is raised, what emerges as Lenin’s actual position is a theoretical “right” whose exercise is only very rarely legitimated in practice by historical conditions and the interests of the working class, even where popular nationalist movements exist. Is this “right” really so far from the metaphysical formula that Rosa Luxemburg derided the principle of self-determination as?
Having dealt with the historical misapprehensions of Lenin’s position, it is time to reappraise the perspective of Rosa Luxemburg. Whilst her position is frequently presented as a theoretical innovation on her part, Luxemburg herself noted a longer anti-national heritage. Assessing the legacy of the earlier conspiratorial Polish socialist party, the Proletariat, she argued that they “fought nationalism by all available means and invariably regarded national aspirations as something which can only distract the working class from their own goals”.8 Far from national self-determination being an accepted orthodox Marxist position, we should keep in mind that the PPS had to argue for it at multiple congresses of the Second International in the case of Poland. After the revolutionary upsurges in Russia in 1905, a considerable segment of the PPS formed the PPS-Left, which similarly disavowed national independence as an immediate goal for socialists.
There were three core strands to Luxemburg’s opposition to national self-determination. Firstly, it was materially unviable given that no new nation could achieve economic independence owing to the spread of capitalism. Secondly, pursuing national self-determination in the form of supporting independence struggles did not make strategic sense for socialists as it inhibited them from placing political demands upon existing states. Finally, and most saliently for socialists today, even if national self-determination was politically and economically more than a utopian pipe-dream, it would still be against the interests of the working class to pursue it.
The latter two strands are the more decisive for understanding Rosa Luxemburg’s position, and are what make it more than a miscalculation rooted in economic determinism. Luxemburg herself appreciated the separation of the “economic” from the “political” under capitalism, arguing that capitalism “annihilated Polish national independence but at the same time created modern Polish national culture”.9 Far from being a national nihilist, Luxemburg stated that the proletariat “must fight for the defense of national identity as a cultural legacy, that has its own right to exist and flourish”.10 The 20th century has proven that political independence is materially possible. It has not shown that independence is a remedy for national oppression, or that it is a worthy goal for socialists.
National self-determination, in Luxemburg’s words, “gives no practical guidelines for the day to day politics of the proletariat, nor any practical solution of nationality problems”.11 As we can observe from Lenin’s policies on nationalities, no consistent conclusion follows from the acknowledgment of this “right”. The only real conclusion is that affairs must be settled by the relevant nationality, which is presented as a homogeneous socio-political entity rather than as a site of class struggle in itself. Luxemburg was not alone in resisting this impractical formula: the Latvian socialist Fritz Rozins, criticizing Lenin’s position in 1902, argued that several nations can occupy the same territory, which problematizes the demand for national self-determination.12
Contemporary manifestations of the national problem throw these issues into sharper focus. In the case of Israel and Palestine, the framework of two competing claims to national self-determination that must be reconciled with each other ultimately leads to endorsing the indefinite political and economic subordination of one nation by another. Some sections of the modern Left attempt to address this by treating one nation’s claim (Israel’s) as inherently illegitimate, on account of its annexationist political project and racist domestic policy, and hence by denying that Hebrew Jewish people constitute a national people with particular rights. However, making the right of national self-determination contingent upon the political project of its claimants would leave very few nations, if any, with this “right” at all, as its claimants tend to be an aspirational national bourgeoisie whose class interests are tied to the continued subjugation of the working-class peoples within a territory, including working-class national minorities. The best way forward is to abandon altogether a “right” which assumes a basic unity of interests between oppressor and oppressed within the same nation. The question should instead be examined from the perspective of the common interests of the Israeli and Palestinian working classes against the Israeli state.
Abandoning national self-determination as a democratic “right”, that is, ceasing to guarantee it as a part of socialist programmes, is often equated with opposing national struggles in all cases. Rosa Luxemburg’s attitude to Armenia at the beginning of the 20th century demonstrates that this is not so. Unlike a number of her contemporaries, who were concerned that the disintegration of the Ottoman Empire would only strengthen the hand of Tsarist Russia, Luxemburg argued emphatically that “the aspirations to freedom can here make themselves felt only in a national struggle” and hence that Social Democracy must “stand for the insurgents”.13 In her reasoning, the national struggle was appropriate for Armenia in a way that it was not for Poland, as the Armenian territories lacked a working class and were bound to the Ottoman Empire not by capitalist economic development but by brute force. Perhaps ironically, this put her at odds with the Armenian Social Democrat David Ananoun, who rejected national secession on the grounds that new nation-states could not guarantee the rights of the national minorities within them. The Armenian Social Democrats “always subordinated the solution of the national problem to the victory of the proletarian revolution”, including by rejecting the specificity of the Armenian situation.14 One could say they surpassed the supposed “international proletariat fundamentalism” of Rosa Luxemburg.
Both Ananoun and Luxemburg rejected territorial national self-determination as a framework, yet drew different conclusions in the specific case of Armenia. Why is that? By moving away from the idea that national oppression can be resolved by emergent nations, settling national oppression becomes the affair of the working class. Franz Mehring, on the left-wing of the SPD, clarified this in the case of Poland: “The age when a bourgeois revolution could create a free Poland is over, today the rebirth of Poland is only possible through a social revolution in which the modern proletariat breaks its chains”.15 Supporting a nationalist movement for Luxemburg only became tenable in the absence of an organized working class, and nationalism could not be the slogan raised to lead it. For Ananoun, conscious of the lack of capacity of forming coherent territorial states along ethnic lines in heterogeneous Transcaucasia, his position was conditioned by the concern of maintaining the rights and cultures of national minorities in territories that were necessarily going to contain multiple nationalities. This reveals the national question as it should be for the socialist movement: a question of the interests and the capacities of the working classes to place their demands upon bourgeois class states, and hence, the conquest of political power by the working classes. The maintenance of nationalities, in the form of culture and language, is part of the political and social rights that the working class wins through struggle against class states, not by creating them. Rather than debating whether a nation ought to exercise a “right” of self-determination, socialists should see the nation itself as a veil, under which contending classes are hidden.
What fundamentally determined Rosa Luxemburg’s attitude was understanding that nationalism was not an empty vessel in which socialists could pour in proletarian content. The ideology of nationhood intrinsically demands temporary class collaboration, at the very least, to the advantage of the ruling classes. An article she penned in January 1918, intended as friendly criticism of the early Soviet government’s policy on nationalities, most clearly articulates this perspective:
“The “right of nations to self-determination” is a hollow phrase which in practice always delivers the masses of people to the ruling classes.
Of course, it is the task of the revolutionary proletariat to implement the most expansive political democracy and equality of nationalities, but it is the least of our concerns to delight the world with freshly baked national class states. Only the bourgeoisie in every nation is interested in the apparatus of state independence, which has nothing to do with democracy. After all, state independence itself is a dazzling thing which is often used to cover up the slaughter of people.”16
This has been vindicated by historical experience. When we look at Poland today, a right-wing government is installing “Independence Benches” that play nationalist speeches.17 The speeches were delivered by none other than Józef Piłsudski, a former leader of the PPS who later abandoned socialism altogether. The warning of the Polish Communist Party, published in 1919, a year after Polish independence, that bourgeois “independence” in reality meant “the brutal dictatorship of the bourgeoisie over the proletariat” has proven more correct than any fantasy about the achievement of independence offering a permanent resolution to the national question, opening up the battlefield of class struggle.18 The formation of new class states does not resolve national oppression, so much as redistribute it.
Revolutionary internationalism, or the so-called “international proletariat fundamentalism”, stands as a rejoinder to those who seek shortcuts to social revolution by the construction of nation-states. Yet it also allows for a more positive assessment of nationalities. Rather than being bound to the political form of territorial states responsible for the oppression of millions across centuries, the traditions, institutions, and languages associated with nationalities can become part of a universal cultural legacy and human inheritance that requires neither the violence of borders nor of class rule. We can be moved by the words of the poet Adam Mickiewicz without scrambling to statehood. Capitalist development has made the endgame of the exercise of national self-determination, the nation-state, a dead-end for socialists. It is now necessary to pose the national question once more and seek different answers.
Fragment on War, National Questions and Revolution
When hatred of the proletariat and the imminent social revolution is absolutely decisive for the bourgeoisie in all their deeds and activities, in their peace programme and in their policies for the future: what is the international proletariat doing? Completely blind to the lessons of the Russian Revolution, forgetting the ABCs of socialism, it pursues the same peace programme as the bourgeoisie, it elevates it to its own programme! Hail Wilson and the League of Nations! Hail national self-determination and disarmament! This is now the banner that suddenly socialists of all countries are uniting under — together with the imperialist governments of the Entente, with the most reactionary parties, the government socialist boot-lickers, the ‘true in principle’ oppositional swamp socialists, bourgeois pacifists, petty-bourgeois utopians, nationalist upstart states, bankrupt German imperialists, the Pope, the Finnish executioners of the revolutionary proletariat, the Ukrainian sugar babies of German militarism.
In Poland the Daszyńskis are in a cozy union with the Galician slaughterers and Warsaw’s big bourgeoisie, in German Austria, Adler, Renner, Otto Bauer, and Julius Deutsch are arm-in-arm with the Christian Socials, the landowners and the German Nationals, in Bohemia the Soukup and the Nemec are in a close phalanx with all the bourgeois parties — a touching reconciliation of the classes. And everywhere the national drunkenness: the international banner of peace! The socialists are pulling the bourgeoisie’s chestnuts out of the fire. They are helping, using their ideology and their authority, to cover up the moral bankruptcy of bourgeois society and to save it. They are helping to renovate and consolidate bourgeois class rule.
And the first practical coronation of this unctuous policy — the defeat of the Russian Revolution and the partition of Russia.
It is the politics of 4th August 1914, only turned upside down in the concave mirror of peace. The capitulation of class struggle, the coalition with each national bourgeoisie for the reciprocal wartime slaughter transformed into an international world coalition for a ‘negotiated peace’. The cheapest, the corniest old wives’ tale, a movie melodrama — that’s what they’re falling for: Capital suddenly vanished, class oppositions null and void. Disarmament, peace, democracy, and harmony of nations. Power bows before justice, the weak straighten their backs up. Krupp instead of cannons will produce Christmas lights, the American city Gari [?] will be turned into a Fröbel kindergarten. Noah’s Ark, where the lamb grazes peacefully next to the wolf, the tiger purrs and blinks like a big house cat, while the antelope crawls with horns tucked behind the ear, the lions and goats play with blind cows. And all that with the help of the magic formula of Wilson, of the president of the American billionaires, all that with the help of Clemenceau, Lloyd George and the Prince Max of Baden! Disarmament, after England and America are two new military powers! Disarmament, after the technology has immeasurably advanced. After all, states sit in the pocket of arms and finance capital through national debt! After colonies — colonies remain. The ideas of class struggle formally capitulate to national ideas here. The harmony of classes in every nation appears as the condition for and expansion of the harmony of nations that should emerge out of the world war in a ‘League of Nations’.
Nationalism is an instant trump card. From all sides, nations and nationettes stake out a claim for their right to state formation. Rotted corpses rise out of hundred-year-old graves, filled with fresh spring shoots, and “historyless” peoples, who never formed an independent state entity up until now, feel a violent urge towards state formation. Poland, Ukraine, Belarussians, Lithuanians, Czechs, Yugoslavia, ten new nations of the Caucasus. Zionists are already erecting their Palestine Ghetto, provisionally in Philadelphia. It’s Walpurgis Night at Blockula today!
Broom and pitch-fork, goat and prong… To-night who flies not, never flies.
But nationalism is only a formula. The core, the historical content that is planted in it, is as manifold and rich in connections as the formula of ‘national self-determination’, under which it is veiled, is hollow and sparse.
As in every great revolutionary period the most varied range of old and new scores come to be settled, oppositions are brought to their conclusions: antiquated remnants of the past, the most pressing issues of the present and the barely born problems of the future whirl together. The collapse of Austria and Turkey is the final liquidation of the feudal Middle Ages, an addendum to the work of Napoleon. In this context, however, Germany’s breakdown and diminution is the bankruptcy of the most recent and newest imperialism and its plans for world mastery, first formed in war. It is equally only the bankruptcy of a specific method of imperialist rule: by East Elbian reaction and military dictatorship, by siege and extermination methods, first used against the Hereros in the Kalahari Desert, now carried over to Europe. The disintegration of Russia, outwardly and in its formal results: the formation of small nation-states, analogous to the collapse of Austria and Turkey, poses the opposite problem: on the one hand, capitulation of proletarian politics on a national scale before imperialism, and on the other capitalist counterrevolution against the proletarian seizure of power.
A K. [Kautsky] sees in this, in his pedantic, school-masterly schematism, the triumph of ‘democracy’, whose component parts and manifestation form are simply the nation-state. The washed-out petty-bourgeois formalist naturally forget to look into the inner historical core, forgets, as an appointed temple guard of historical materialism, that the ‘nation-state’ and ‘nationalism’ are empty pods into which each historical epoch and set of class relations pour their particular material content. German and Italian ‘nation-states’ in the 1870s were the slogan and the programme of the bourgeois state, of bourgeois class rule. Its leadership was directed against medieval, feudal past, the patriarchal, bureaucratic state and the fragmentation of economic life. In Poland the ‘nation-state’ was the traditional slogan of agrarian-noble and petty-bourgeois opposition to modern capitalist development. It was a slogan whose leadership was directed against the modern phenomena of life: both bourgeois liberalism and its antipode, the socialist workers movement. In the Balkans, in Bulgaria, Serbia and Romania, nationalism, the powerful outbreak of which was displayed in the two bloody Balkan wars as a prelude to the world war, was one hand an expression of aspirational capitalist development and bourgeois class rule in all these states, it was an expression of the conflicting interests of the bourgeoisie among themselves as well as the clash of their development tendency with Austrian imperialism. Simultaneously, the nationalism of these countries, although at heart only the expression of a quite young, germ-like capitalism, was and is colored in the general atmosphere of imperialist development, even with distinct imperial tendencies. In Italy, nationalism is already thoroughly and exclusively a company plaque for a purely imperialist colonial appetite. The nationalism of the Tripolitan war and the Albanian appetite has as little in common with the Italian nationalism of the 1850s and 1860s as Mr. Sonnino has with Giuseppe Garibaldi.
In Russian Ukraine, up until the October uprising in 1917, nationalism was nothing, a bubble, the arrogance of roughly a dozen professors and lawyers who mostly couldn’t speak Ukrainian themselves. Since the Bolshevik Revolution it has become the very real expression of the petty-bourgeois counterrevolution, whose head is directed against the socialist working class. In India, nationalism is the expression of an emerging domestic bourgeoisie, which aims for independent exploitation of the country on its account instead of only serving as an object for English capital to leech. This nationalism, therefore, corresponds with its social content and its historical stage like the emancipation struggles of the United States of America at the outset of the 18th century.
So nationalism reflects back all conceivable interests, nuances, historical situations. It shines in all colors. It is everything and nothing, a mere shell. Everything hangs on it to assert its own particular social core.
So the universal, immediate world explosion of nationalism brings with it the most colorful confusion of special interests and tendencies in its bosom. But there is an axis that gives all these special interests a direction, a universal interest created by the particular historical situation: the apex against the threatening world revolution of the proletariat.
The Russian Revolution, with the Bolshevik rule it brought forth, has put the problem of social revolution on the agenda of history. It has pushed the class contradictions between capital and labor to the most extreme heights. In one swoop, it has opened up a gaping chasm between both classes in which volcanic fumes boil and fierce flames blaze. Just as the June Rebellion of the Paris proletariat and the June massacres split bourgeois society into two classes for the first time between which there can only be one law: a struggle of life and death, Bolshevik rule in Russia has placed bourgeois society face to face with the final struggle of life and death. It has destroyed and blown away the fiction of the tame working class that is relatively peacefully organized by socialism, which bragged in theoretical, harmless phrases but practically worshipped the principle: live and let live — that fiction, which was what the practice of German Social Democracy and in its footsteps, the entire International, consisted of for the last thirty years. The Russian Revolution instantly destroyed the modus vivendi between socialism and capitalism, created out of the last half-century of parliamentarism, with a rough fist and transformed socialism from the harmless phrases of electoral agitation, the blue skies of the distant future, into a bloodily serious problem of the present, of today. It has brutally ripped open the old, terrible wounds of bourgeois society that had been healing since the June Days in Paris in 1848.
All of this, of course, is initially only in the consciousness of the ruling classes. Just as the June Days, with the power of an electric shock, immediately imprinted the consciousness of an irreconcilable class opposition to the working class upon the bourgeoisie of all nations and cast a deadly hatred of the proletariat in their hearts whilst workers of all nations needed decades in order to adopt the same lessons of the June days for themselves, the consciousness of class opposition, it now repeats itself: The Russian Revolution has awakened a fuming, foaming, trembling fear and hatred of the threatening spectre of proletarian dictatorship in the entirety of the possessing classes in every single nation. It can only be compared with the sentiments of the Paris bourgeoisie during the June slaughters and the butchery of the Commune. ‘Bolshevism’ has become the catchword for practical, revolutionary socialism, for all endeavors of the working class to conquer power. In this rupturing of the social abyss within bourgeois society, in the international deepening and sharpening of class antagonism is the historical achievement of Bolshevism, and in this work — like in all great historical contexts — all errors and mistakes of Bolshevism vanish without a trace.
These sentiments are now the deepest heart of the nationalist delirium in which the capitalist world has seemingly fallen, they are the objective historical content to which the many-colored cards of announced nationalisms are reduced. These small, young bourgeoisie that are now striving for independent existence, are not merely trembling with the desire for winning unrestricted and untrammeled class rule but also for the long-awaited delight of the single-handed strangling of their mortal enemy: the revolutionary proletariat. This is a function they had to concede up until now to the disjointed state apparatus of foreign rule. Hate, like love, is only grudgingly left to a third wheel. Mannerheim’s blood orgies, the Finnish Gallifet, show how much that the blazing heat of hate that has sprouted up in the hearts of all small nations in the last few years, all the Poles, Lithuanians, Romanians, Ukrainians, Czechs, Croats, etc., only waited for the opportunity to finally disembowel the proletariat with ‘national’ means. From all these young nations, which like white and innocent lambs hopped along in the grassy meadows of world history, the carbuncle-like eyes of the grim tiger are already looking out and waiting to “settle the accounts” with the first stirrings of “Bolshevism”. Behind all of the idyllic banquets, the roaring festivals of brotherhood in Vienna, in Prague, in Zagreb, in Warsaw, Mannerheim’s open graves are already yawning and the Red Guards have to dig them themselves! The gallows of Charkow shimmer like faint silhouettes and the Lubinskys and Holubowitsches invited the German ‘liberators’ to Ukraine for their erection.
And the same fundamental idea reigns in the entire peace programme of Wilson. The “League of Nations” in the atmosphere of Anglo-American imperialism being drunk on victory and the frightening spectre of Bolshevism traversing the world stage can only bring forth one thing: a bourgeois world alliance for the repression of the proletariat. The first blood-soaked sacrifice that the High Priest Wilson, atop his omens, will make in front of the Ark of ‘The League of Nations’ will be Bolshevik Russia. The ‘self-determined nations’, victors and vanquished together, will overthrow it.
The ruling classes once again show their unerring instinct for their class interests, their wonderfully fine sensitivity for the dangers surrounding them. Whilst on the surface, the bourgeoisie are enjoying the loveliest weather and the proletarians of all countries are getting drunk on nationalist and ‘League of Nations’ spring breezes, bourgeois society is being torn limb from limb which heralds the impending change of seasons as the historical barometer falls. Whilst socialists are foolishly eager to pull their chestnuts of peace out of the fire of world war, as ‘national ministers’, they can’t help but see the inevitable, imminent fate behind their backs: the terrible rising spectre of social world revolution that has already silently stepped onto the back of the stage.
It is the objective insolvability of the tasks bourgeois society faces that makes socialism a historical necessity and world revolution unavoidable.
No one can predict how long this final period will last and what forms it will take. History has already left the well-trodden path and the comfortable routine. Every new step, every new turn of the road opens up new perspectives and new scenery.
What is important is to understand the real problem of the period. The problem is called: the dictatorship of the proletariat, the realization of socialism. The difficulties of the task do not lie in the strength of the opponent, the resistance of bourgeois society. Its ultima ratio: the army is useless for the suppression of the proletariat as a result of the war, it has even become revolutionary itself. Its material basis for existence: the maintenance of society has been shattered by the war. Its moral basis for existence: tradition, routine, and authority have all been blown away by the wind. The whole structure has become loosened, fluid, movable. The conditions for struggle have never been so favourable for any emergent class in world history. It can fall into the lap of the proletariat like a ripe fruit. The difficulty lies in the proletariat itself, in its lack of maturity, or rather, the immaturity of its leaders, the socialist parties. The working class balks, it recoils before the uncertain enormity of its duty again and again. But it must, it must. History takes away all of its excuses: to lead us out of the night and horror of oppressed humanity into the light of liberation.
- Lenin ‘On The Right of Nations of Self-Determination’ 1914
- Blanc, E., ‘The Rosa Luxemburg Myth: A Critique of Luxemburg’s Politics in Poland (1893–1919)’ , Historical Materialism, 26/1 (2018)
- Kasprak, M., ‘Dancing with the Devil: Rosa Luxemburg’s Conception of the Nationality Question in Polish Socialism’, Critique, 40/3 (2012), p. 433
- D’Amato, P. https://socialistworker.org/2014/03/25/what-do-we-say-about-the-national-question
- Lenin, V.I. ‘The National Question in Our Programme’ 1903
- Luxemburg, R. ‘Socialism in Poland’ (1897)
- Luxemburg, R. ‘The National Question’ (1907)
- Luxemburg, R., ‘The National Question’ (1907)
- Ijab, I., ‘The Nation of Socialist Intelligentsia’, p. 190
- Luxemburg, R., ‘Social Democracy and The National Struggles in Turkey’ (1902)
- Minassian, A. T., ‘Le mouvement révolutionnaire arménien, 1890–1903’, p. 558
- Mehring, F., ‘Die polnische Frage’
- Luxemburg, R. ‘Nicht nach Schema F’ | <urn:uuid:6c28532c-8eb2-4421-b868-bf1943d3df11> | CC-MAIN-2022-33 | https://theacheron.medium.com/rosa-luxemburg-fragment-on-war-national-questions-and-revolution-6db2c0d9cee2?source=user_profile---------8---------------------------- | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572089.53/warc/CC-MAIN-20220814234405-20220815024405-00097.warc.gz | en | 0.939936 | 7,322 | 2.953125 | 3 |
This article shows the VHDL coding of a very simple 8-bit CPU. The code was simulated using the Quartus tool and should be used mainly as a didactic reference to understand how a CPU actually runs a sequence of instructions.
Nine years ago, in a post-graduation program at CEFET-SC, I was introduced, in the computer architecture course, to the hypothetical CPUs created by professor Raul Fernando Weber at UFRGS (Neander, Ahmes, Ramses, and Cesar).
Later on, in the programmable logic course, professor Edson Nogueira de Melo proposed that each student design circuits or practical applications using programmable logic and VHDL. Although that wasn't my first contact with VHDL (I had already attended a mini-course by Augusto Einsfeldt), the truth was that I had never implemented anything using programmable logic or VHDL.
That's when I realized this was the chance to glue things together and make an old dream come true: using VHDL, it would be possible to implement a basic CPU capable of running a small instruction set and of demonstrating the basics of sequential code execution!
My friend Roberto Amaral and I then decided to use VHDL to implement one of the UFRGS hypothetical CPUs we had studied in the computer architecture course. The machine of choice was Ahmes, as it is a very small and simple CPU, yet flexible and functional.
The Ahmes Machine
The Ahmes programming model is absolutely simple: it is an 8-bit architecture with a set of 24 instructions, 3 registers, and a single addressing mode!
Due to its simplicity, there is no stack implementation and subroutines cannot be used directly (although they can be emulated with self-modifying code). There is also a limitation in the maximum addressing capability of only 256 bytes of memory (a single memory space shared by code and data, following a von Neumann architecture).
The available registers are: an 8-bit accumulator (AC), a 5-bit status register (N-negative, Z-zero, C-carry, B-borrow, and V-overflow), and an 8-bit program counter (PC).
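To make the programming model concrete, the fragment below shows one way this register set could be declared in VHDL. It is only an illustrative sketch: the names and widths mirror the description above but are assumptions, not a copy of the actual Ahmes source.

-- Illustrative sketch of the Ahmes register set (names and types are assumptions)
signal AC : STD_LOGIC_VECTOR(7 DOWNTO 0);   -- 8-bit accumulator
signal PC : STD_LOGIC_VECTOR(7 DOWNTO 0);   -- 8-bit program counter
signal N, Z, C, B, V : STD_LOGIC;           -- status flags: negative, zero, carry, borrow, overflow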
The complete 24 instruction set is shown below:
|Opcode|Mnemonic|Operation|Description|
|0000 0000|NOP|no operation|no operation|
|0001 0000|STA addr|MEM(addr) ← AC|store the accumulator contents at the specified memory address|
|0010 0000|LDA addr|AC ← MEM(addr)|load the accumulator with the value from the memory address|
|0011 0000|ADD addr|AC ← MEM(addr) + AC|add the accumulator to the content of the memory address|
|0100 0000|OR addr|AC ← MEM(addr) OR AC|logical OR|
|0101 0000|AND addr|AC ← MEM(addr) AND AC|logical AND|
|0110 0000|NOT|AC ← NOT AC|logical complement of the accumulator|
|0111 0000|SUB addr|AC ← AC – MEM(addr)|subtract the memory content from the accumulator|
|1000 0000|JMP addr|PC ← addr|unconditional jump to addr|
|1001 0000|JN addr|if N=1 then PC ← addr|jump if negative (N=1)|
|1001 0100|JP addr|if N=0 then PC ← addr|jump if positive (N=0)|
|1001 1000|JV addr|if V=1 then PC ← addr|jump if overflow (V=1)|
|1001 1100|JNV addr|if V=0 then PC ← addr|jump if not overflow (V=0)|
|1010 0000|JZ addr|if Z=1 then PC ← addr|jump if zero (Z=1)|
|1010 0100|JNZ addr|if Z=0 then PC ← addr|jump if not zero (Z=0)|
|1011 0000|JC addr|if C=1 then PC ← addr|jump if carry (C=1)|
|1011 0100|JNC addr|if C=0 then PC ← addr|jump if not carry (C=0)|
|1011 1000|JB addr|if B=1 then PC ← addr|jump if borrow (B=1)|
|1011 1100|JNB addr|if B=0 then PC ← addr|jump if not borrow (B=0)|
|1110 0000|SHR|C ← AC(0); AC(i-1) ← AC(i); AC(7) ← 0|logical shift to the right|
|1110 0001|SHL|C ← AC(7); AC(i) ← AC(i-1); AC(0) ← 0|logical shift to the left|
|1110 0010|ROR|C ← AC(0); AC(i-1) ← AC(i); AC(7) ← C|rotate right through carry|
|1110 0011|ROL|C ← AC(7); AC(i) ← AC(i-1); AC(0) ← C|rotate left through carry|
|1111 0000|HLT|halt|halts operation (not implemented)|
Table 1 – Ahmes Instruction Set
Due to the lack of any instruction timing constraints or specifications, we had the freedom to choose the easiest implementation (according to our limited VHDL and programmable logic knowledge).
Figure 1 shows the Ahmes high-level schematic. There are three main blocks: the CPU, the ALU (ULA in Portuguese), and the memory block. Please note that the memory block is here for the sake of simulation and validation only. In a real implementation it would be replaced by external (or internal) RAM/ROM/flash memories.
Figure 1 – Ahmes high level block diagram
The VHDL implementation was split into two parts: the ALU (Arithmetic and Logic Unit), responsible for the logical and arithmetic operations, was designed as a separate block with its own code, which allowed us to test and validate it independently from the CPU code.
The ALU has a 4-bit bus for operation selection, two 8-bit input operand buses, and one 8-bit output operand bus. There are also five output lines for the N, Z, C, B, and V flags, and one additional input line for the carry input needed by the ROR and ROL instructions.
The ALU's VHDL code is fairly simple, as it is mostly combinational logic implementing a handful of operations.
LIBRARY ieee;
USE ieee.std_logic_1164.all;
USE ieee.std_logic_unsigned.all;

ENTITY ula IS
    PORT (
        operacao  : IN STD_LOGIC_VECTOR(3 DOWNTO 0);      -- operation selection
        operA     : IN STD_LOGIC_VECTOR(7 DOWNTO 0);      -- operand A
        operB     : IN STD_LOGIC_VECTOR(7 DOWNTO 0);      -- operand B
        Result    : buffer STD_LOGIC_VECTOR(7 DOWNTO 0);  -- operation result
        Cin       : IN STD_LOGIC;                         -- carry input (used by the rotate operations)
        N,Z,C,B,V : buffer STD_LOGIC                      -- status flags
    );
END ula;

ARCHITECTURE ula1 OF ula IS
    -- operation selection codes
    constant ADIC : STD_LOGIC_VECTOR(3 DOWNTO 0) := "0001";
    constant SUB  : STD_LOGIC_VECTOR(3 DOWNTO 0) := "0010";
    constant OU   : STD_LOGIC_VECTOR(3 DOWNTO 0) := "0011";
    constant E    : STD_LOGIC_VECTOR(3 DOWNTO 0) := "0100";
    constant NAO  : STD_LOGIC_VECTOR(3 DOWNTO 0) := "0101";
    constant DLE  : STD_LOGIC_VECTOR(3 DOWNTO 0) := "0110";
    constant DLD  : STD_LOGIC_VECTOR(3 DOWNTO 0) := "0111";
    constant DAE  : STD_LOGIC_VECTOR(3 DOWNTO 0) := "1000";
    constant DAD  : STD_LOGIC_VECTOR(3 DOWNTO 0) := "1001";
BEGIN
    process (operA, operB, operacao, result, Cin)
        variable temp : STD_LOGIC_VECTOR(8 DOWNTO 0);
    begin
        case operacao is
            when ADIC =>    -- addition with carry and overflow detection
                temp := ('0'&operA) + ('0'&operB);
                result <= temp(7 DOWNTO 0);
                C <= temp(8);
                if (operA(7)=operB(7)) then
                    if (operA(7) /= result(7)) then V <= '1'; else V <= '0'; end if;
                else
                    V <= '0';
                end if;
            when SUB =>     -- subtraction with borrow and overflow detection
                temp := ('0'&operA) - ('0'&operB);
                result <= temp(7 DOWNTO 0);
                B <= temp(8);
                if (operA(7) /= operB(7)) then
                    if (operA(7) /= result(7)) then V <= '1'; else V <= '0'; end if;
                else
                    V <= '0';
                end if;
            when OU  => result <= operA or operB;     -- logical OR
            when E   => result <= operA and operB;    -- logical AND
            when NAO => result <= not operA;          -- logical complement
            when DLE =>     -- shift left, LSB receives Cin (rotate left through carry)
                C <= operA(7);
                result(7) <= operA(6); result(6) <= operA(5);
                result(5) <= operA(4); result(4) <= operA(3);
                result(3) <= operA(2); result(2) <= operA(1);
                result(1) <= operA(0); result(0) <= Cin;
            when DAE =>     -- shift left, LSB receives '0' (logical shift left)
                C <= operA(7);
                result(7) <= operA(6); result(6) <= operA(5);
                result(5) <= operA(4); result(4) <= operA(3);
                result(3) <= operA(2); result(2) <= operA(1);
                result(1) <= operA(0); result(0) <= '0';
            when DLD =>     -- shift right, MSB receives Cin (rotate right through carry)
                C <= operA(0);
                result(0) <= operA(1); result(1) <= operA(2);
                result(2) <= operA(3); result(3) <= operA(4);
                result(4) <= operA(5); result(5) <= operA(6);
                result(6) <= operA(7); result(7) <= Cin;
            when DAD =>     -- shift right, MSB receives '0' (logical shift right)
                C <= operA(0);
                result(0) <= operA(1); result(1) <= operA(2);
                result(2) <= operA(3); result(3) <= operA(4);
                result(4) <= operA(5); result(5) <= operA(6);
                result(6) <= operA(7); result(7) <= '0';
            when others =>
                result <= "00000000";
                Z <= '0'; N <= '0'; C <= '0'; V <= '0'; B <= '0';
        end case;
        -- the Z and N flags are derived from the result for every operation
        if (result="00000000") then Z <= '1'; else Z <= '0'; end if;
        N <= result(7);
    end process;
END ula1;
The Ahmes CPU VHDL code is slightly more complex and larger, so we are going to take a look at the implementation of three kinds of instructions: a data-manipulation instruction, an arithmetic instruction (which makes use of the ALU), and a jump instruction.
The instruction decoding is performed by a finite state machine controlled by the internal CPU_STATE variable. A rising edge on the CPU clock input advances the state machine.
Notice that in the first two stages (clock cycles) of instruction decoding, the CPU must fetch the opcode from memory and feed it to the decoder. The current opcode is stored in the INSTR register for later decoding.
VARIABLE CPU_STATE : TCPU_STATE;                 -- CPU state variable
begin
    if (reset='1') then                          -- reset operations
        CPU_STATE := BUSCA;                      -- set CPU state machine to fetch
        PC := "00000000";                        -- set PC to zero
        MEM_WRITE <= '0';                        -- disable memory write signal
        ADDRESS_BUS <= "00000000";               -- set the address bus to zero
        DATA_OUT <= "00000000";                  -- set the DATA_OUT bus to zero
        OPERACAO <= "0000";                      -- no operation on ALU
    ELSIF ((clk'event and clk='1')) THEN         -- if it's a clock rising edge
        CASE CPU_STATE IS                        -- select the current state of the CPU's state machine
            WHEN BUSCA =>                        -- first fetch cycle
                ADDRESS_BUS <= PC;               -- load address bus with the PC content
                ERROR <= '0';                    -- disable the error output
                CPU_STATE := BUSCA1;             -- next state = BUSCA1
            WHEN BUSCA1 =>                       -- second fetch cycle
                INSTR := DATA_IN;                -- read the instruction and store it into INSTR
                CPU_STATE := DECOD;              -- next state = DECOD
            WHEN DECOD =>                        -- now we will start decoding the instruction
                CASE INSTR IS                    -- decode the INSTR content
                    -- NOP - no operation, only increment PC
                    WHEN NOP =>
                        PC := PC + 1;            -- add 1 to PC
                        CPU_STATE := BUSCA;      -- restart instruction fetch
                    -- STA - store the AC into the specified memory
                    WHEN STA =>
                        ADDRESS_BUS <= PC + 1;   -- fetch the operand
                        CPU_STATE := DECOD_STA1; -- proceed on STA decoding
Starting from the third clock pulse, the actual decoding process takes place. In the code snippet above we can see the NOP instruction and its related procedure (which is nothing: only the PC is incremented so that it points to the next instruction). We can also see the first part of the STA decoding: the address bus is loaded with PC+1 in order to fetch the operand from memory, and the state machine advances to the next decoding stage (DECOD_STA1).
Please keep in mind that this code is just a first, unpretentious VHDL implementation. Although functional, there are many improvements that could make the decoding process more efficient; one of them would be incrementing the PC right in the second decoding stage, as sketched below.
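The fragment below is only an illustration of that idea, reusing the state and signal names from the snippets in this article; it is an assumption, not part of the actual listing:

WHEN BUSCA1 =>                   -- second fetch cycle
    INSTR := DATA_IN;            -- latch the opcode
    PC := PC + 1;                -- increment the PC here, saving a step in the individual decoders
    CPU_STATE := DECOD;          -- next state = DECOD

With this change, each instruction decoder would have to be revised so that the PC is not incremented twice.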
Proceeding with the STA decode, let’s take a look at its VHDL code:
WHEN DECOD_STA1 =>               -- STA decoding
    TEMP := DATA_IN;             -- reads the operand (the target address)
    CPU_STATE := DECOD_STA2;
WHEN DECOD_STA2 =>
    ADDRESS_BUS <= TEMP;         -- select the target memory address
    DATA_OUT <= AC;              -- put the accumulator on the data_out bus
    PC := PC + 1;                -- add 1 to the PC
    CPU_STATE := DECOD_STA3;
WHEN DECOD_STA3 =>
    MEM_WRITE <= '1';            -- write to the memory
    PC := PC + 1;                -- add 1 to the PC
    CPU_STATE := DECOD_STA4;
WHEN DECOD_STA4 =>
    MEM_WRITE <= '0';            -- disable memory write signal
    CPU_STATE := BUSCA;          -- restart instruction fetch
In the fourth decoding stage, the input operand (the memory address which will receive the accumulator contents) is read and stored into a temporary variable (TEMP). Then it is placed on the address bus (in order to select the desired memory address) and the accumulator contents are placed on the output data bus.
In the sixth stage the data is written to the memory, and in the seventh and last stage the write line is disabled and the state machine restarts from the fetch state.
Now let's take a look at the ADD instruction decoding. The first two stages are the same as for the STA instruction. From the third stage on, the actual ADD decoding takes place, and the operand is read from memory:
WHEN ADD =>
    ADDRESS_BUS <= PC + 1;       -- fetch the operand
    CPU_STATE := DECOD_ADD1;     -- proceed with ADD decoding
The next stages proceed with the decoding process. The sixth stage (DECOD_ADD3) is where the real "magic" takes place: the operands are loaded into the ALU inputs and the ADD operation is selected on the ALU operation lines. The last decoding stage (DECOD_STORE), which is common to several instructions, is where the result is written to the accumulator.
WHEN DECOD_ADD1 =>               -- ADD decoding
    TEMP := DATA_IN;             -- load the operand address
    CPU_STATE := DECOD_ADD2;
WHEN DECOD_ADD2 =>
    ADDRESS_BUS <= TEMP;         -- load the address bus with the operand address
    CPU_STATE := DECOD_ADD3;
WHEN DECOD_ADD3 =>
    OPER_A <= DATA_IN;           -- load ALU's OPER_A input with the data read from memory
    OPER_B <= AC;                -- load ALU's OPER_B input with the data from the accumulator
    OPERACAO <= ULA_ADD;         -- select ULA's ADD operation
    PC := PC + 1;                -- add 1 to the PC
    CPU_STATE := DECOD_STORE;
WHEN DECOD_STORE =>
    AC := RESULT;                -- load the accumulator from the result
    PC := PC + 1;                -- increments the PC
    CPU_STATE := BUSCA;          -- restart instruction decoding
The last instruction we are going to see is JZ (not the famous rapper), which is jump if zero. Its decoding is very simple, with the first two stages common to every other instruction, as we have already seen. The third stage has the following code:
WHEN JZ =>                       -- jump if zero
    IF (Z='1') THEN              -- if Z=1
        ADDRESS_BUS <= PC + 1;   -- fetch the operand
        CPU_STATE := DECOD_JMP;  -- proceed as a JMP
    ELSE                         -- if Z=0
        PC := PC + 2;            -- add 2 to the PC
        CPU_STATE := BUSCA;      -- restart instruction fetch
    END IF;
The next stage (fourth) is common to any jump instruction and loads the PC with the operand, thus making the actual jump.
WHEN DECOD_JMP =>                -- JMP decoding
    PC := DATA_IN;               -- load PC with the operand data
    CPU_STATE := BUSCA;          -- restart instruction decoding
The remaining instructions follow the same philosophy shown here, and I believe the fully commented VHDL listing can help in understanding how Ahmes operates. As an illustration, the decoding of a register-only instruction such as NOT is sketched below.
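This fragment is only a sketch following the pattern of the snippets above; the constant names (INOT, ULA_NAO) and the exact flow are assumptions, and the real implementation is in the published listing:

WHEN INOT =>                     -- NOT: logical complement of the accumulator
    OPER_A <= AC;                -- the accumulator is the only operand
    OPERACAO <= ULA_NAO;         -- select the ALU's NOT operation (name assumed)
    CPU_STATE := DECOD_STORE;    -- reuse the common stage that writes RESULT back to AC

Since NOT is a single-byte instruction, the single PC increment performed in DECOD_STORE is enough to make it point to the next opcode.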
As I already mentioned, the Ahmes CPU was tested using the simulation tool of Altera's Quartus II. In order to ease our tests, we designed a simple VHDL memory which works as both ROM and RAM. Addresses 0 to 24 are loaded with a small assembly program on reset, and addresses 128 to 132 work as constant and variable storage. The VHDL listing follows:
LIBRARY ieee;
USE ieee.std_logic_1164.all;
USE ieee.std_logic_unsigned.all;

ENTITY memoria IS
    PORT (
        address_bus : IN INTEGER RANGE 0 TO 255;
        data_in     : IN INTEGER RANGE 0 TO 255;
        data_out    : OUT INTEGER RANGE 0 TO 255;
        mem_write   : IN std_logic;
        rst         : IN std_logic
    );
END memoria;

ARCHITECTURE MEMO OF MEMORIA IS
    -- Ahmes opcodes expressed as integers
    constant NOP  : INTEGER := 0;
    constant STA  : INTEGER := 16;
    constant LDA  : INTEGER := 32;
    constant ADD  : INTEGER := 48;
    constant IOR  : INTEGER := 64;
    constant IAND : INTEGER := 80;
    constant INOT : INTEGER := 96;
    constant SUB  : INTEGER := 112;
    constant JMP  : INTEGER := 128;
    constant JN   : INTEGER := 144;
    constant JP   : INTEGER := 148;
    constant JV   : INTEGER := 152;
    constant JNV  : INTEGER := 156;
    constant JZ   : INTEGER := 160;
    constant JNZ  : INTEGER := 164;
    constant JC   : INTEGER := 176;
    constant JNC  : INTEGER := 180;
    constant JB   : INTEGER := 184;
    constant JNB  : INTEGER := 188;
    constant SHR  : INTEGER := 224;
    constant SHL  : INTEGER := 225;
    constant IROR : INTEGER := 226;
    constant IROL : INTEGER := 227;
    constant HLT  : INTEGER := 240;

    TYPE DATA IS ARRAY (0 TO 255) OF INTEGER;
BEGIN
    process (mem_write, rst)
        VARIABLE DATA_ARRAY : DATA;
    BEGIN
        IF (RST='1') THEN
            -- down counter from 10 (contents of address 130) to 0
            DATA_ARRAY(0) := LDA;   -- load A with contents of address 130 (A=10)
            DATA_ARRAY(1) := 130;
            DATA_ARRAY(2) := SUB;   -- subtract the content of address 132 from A (A=A-1)
            DATA_ARRAY(3) := 132;
            DATA_ARRAY(4) := JZ;    -- jump to address 8 if A=0
            DATA_ARRAY(5) := 8;
            DATA_ARRAY(6) := JMP;   -- jump to address 2 (loop)
            DATA_ARRAY(7) := 2;
            -- we finished the down counting, now add the contents of addresses 130 and 131 and store into 128
            DATA_ARRAY(8) := LDA;   -- load A with the contents of 130 (A=10)
            DATA_ARRAY(9) := 130;
            DATA_ARRAY(10) := ADD;  -- add A with the contents of 131 (A=10+18)
            DATA_ARRAY(11) := 131;
            DATA_ARRAY(12) := STA;  -- store A into 128
            DATA_ARRAY(13) := 128;
            -- logical OR of (128) with (129) shifted 4 bits to the left, store the result into (133)
            DATA_ARRAY(14) := LDA;  -- load A with (129)
            DATA_ARRAY(15) := 129;
            DATA_ARRAY(16) := SHL;  -- shift one bit to the left (LSB is zero)
            DATA_ARRAY(17) := SHL;  -- shift one bit to the left (LSB is zero)
            DATA_ARRAY(18) := SHL;  -- shift one bit to the left (LSB is zero)
            DATA_ARRAY(19) := SHL;  -- shift one bit to the left (LSB is zero)
            DATA_ARRAY(20) := IOR;  -- logical OR with address 128
            DATA_ARRAY(21) := 128;
            DATA_ARRAY(22) := STA;  -- store the result into 133
            DATA_ARRAY(23) := 133;
            DATA_ARRAY(24) := HLT;  -- halts the CPU
            -- Variables and constants
            DATA_ARRAY(128) := 0;
            DATA_ARRAY(129) := 5;
            DATA_ARRAY(130) := 10;
            DATA_ARRAY(131) := 18;
            DATA_ARRAY(132) := 1;
        ELSIF (RISING_EDGE(MEM_WRITE)) THEN
            DATA_ARRAY(ADDRESS_BUS) := DATA_IN;
        END IF;
        DATA_OUT <= DATA_ARRAY(ADDRESS_BUS);
    end process;
END MEMO;
The assembly listing for the code stored in the memory is this:
LDA 130
SUB 132
JZ 8
JMP 2
LDA 130
ADD 131
STA 128
LDA 129
SHL
SHL
SHL
SHL
IOR 128
STA 133
HLT
For simulation purposes, we also used a waveform file with two signals: a clock and a reset pulse:
Figure 2 – Waveform file ahmes1.vwf
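As an alternative to drawing a waveform file by hand, the same two stimuli can be generated by a small testbench. The sketch below is an assumption for illustration only: the top-level entity and port names are invented and not taken from the project files.

LIBRARY ieee;
USE ieee.std_logic_1164.all;

ENTITY ahmes_tb IS
END ahmes_tb;

ARCHITECTURE sim OF ahmes_tb IS
    SIGNAL clk   : STD_LOGIC := '0';
    SIGNAL reset : STD_LOGIC := '1';
BEGIN
    clk   <= NOT clk AFTER 10 ns;    -- free-running simulation clock (20 ns period)
    reset <= '0' AFTER 25 ns;        -- release the reset shortly after the first rising edge
    -- the top-level design (CPU + ALU + memory) would be instantiated here, e.g.:
    -- uut : ENTITY work.ahmes PORT MAP (clk => clk, reset => reset);
END sim;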
Compiling the project results in the following numbers: the CPU uses 222 logic elements and 99 registers, while the ALU uses another 82 logic elements, for a total of 284 logic elements and 99 registers for the whole design (excluding the memory). That means only 6.2% of the logic elements and 2.1% of the available registers of an Altera Cyclone II EP2C5 FPGA, the device used for the simulation shown here.
Notice that I used Quartus II Web Edition version 9.1sp2 instead of the latest Quartus version (Prime 16.0). That was because I had some issues with the install, especially regarding the ModelSim simulation tool (probably a conflict with another piece of software on my notebook).
Figure 3 shows part of the simulation output from Ahmes, covering the complete LDA 130 sequence:
Figure 3 – Simulation results on Quartus II
This article showed that implementing a CPU in VHDL isn't so difficult, and although we didn't synthesize Ahmes on a real FPGA, all the basics for learning how a microprocessor operates can be seen here.
I hope this article can inspire others to study the operation and implementation of CPUs using VHDL (or Verilog, or any other HDL) because, at least for me, studying, understanding and creating CPUs is something really exciting.
All the files regarding Ahmes are available for download at my GitHub account. | <urn:uuid:55be2d79-2678-4632-9b54-ff669d683585> | CC-MAIN-2022-33 | https://embeddedsystems.io/ahmes-a-simple-8-bit-cpu-in-vhdl/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571584.72/warc/CC-MAIN-20220812045352-20220812075352-00696.warc.gz | en | 0.747503 | 5,485 | 3.359375 | 3 |
Primary sclerosing cholangitis (PSC) is a chronic, cholestatic liver disease of unknown etiology, characterized by inflammation, destruction, and eventual fibrosis of intrahepatic and extrahepatic bile ducts which can lead to end-stage liver disease.
- Focal strictures of the biliary tree lead to cholestasis and a characteristic beaded appearance on cholangiography.
- The disease may progress silently, or with recurrent episodes of cholangitis characterized by right upper quadrant pain, fever, and jaundice.
- Insidious, but continuous, progression to cirrhosis with concomitant portal hypertension and liver failure is typical.
- PSC is much less common than alcoholic liver disease; nonetheless, because it often affects otherwise healthy young people, it is the fourth most common indication for liver transplantation in the United States.
- After Delbet first described the syndrome of PSC in 1924, the disease was considered a rare medical curiosity, with fewer than 100 cases reported up until 1970.
- With the advent of improved imaging techniques, particularly endoscopic retrograde cholangiography (ERC) in 1974, the number of cases diagnosed in most major centers increased.
- Reports from the Mayo Clinic and the Royal Free Hospital in London increased interest in the disease, as it was quickly realized that the disorder had an association with inflammatory bowel disease (IBD), more often affecting young males with ulcerative colitis.
- In one study it was found that between the years 1976 and 2000 the incidence of PSC in men (1.25/100,000 person-years) was twice that of women (0.54/100,000 person-years).
- The prevalence of PSC, during the same time period, was three times greater in men (20.9/100,000 versus 6.3/100,000) than women. The same study confirmed the findings that 73% of cases have IBD, most of them ulcerative colitis.
- One of the reasons why the prevalence of this disease appears to be increasing is that the availability of diagnostic tests has increased.
- Many patients may simply have mildly increased liver enzymes and through thorough investigations be found to have PSC.
- The widespread implementation of ERCP and MRCP has likely led to a greater number of patients being diagnosed at an earlier stage of the disease, which has also contributed to an improved understanding of the disorder.
PSC and inflammatory bowel disease —
- UC has been reported in 25 to 90 percent of patients with PSC.
- A survey of 23 hospitals in Spain, for example, examined the reported cases of PSC from 1984 to 1988; UC was present in 44 percent.
- It is likely that this figure is an underestimate, since the colonic mucosa may be grossly normal in appearance despite the presence of histologic colitis.
- The true prevalence of UC in PSC is probably closer to 90 percent when rectal and sigmoid biopsies are routinely obtained.
- A survey of 1500 patients with UC in Sweden, for example, found that 72 (5 percent) had an elevated serum alkaline phosphatase; endoscopic retrograde cholangiopancreatography (ERCP) was performed in 65, of whom 55 (85 percent) had evidence of PSC.
- PSC was more prevalent in patients with pancolitis than in those with distal colitis (5.5 versus 0.5 percent).
- It was also more common in men than women.
- Another report found that more than 7 percent of patients with UC may have PSC.
PSC also occurs in patients with Crohn's disease.
- In one report of 262 patients with Crohn's disease, 38 (15 percent) had long-standing abnormal liver biochemical tests and underwent endoscopic cholangiography and liver biopsy.
- Nine of these patients (3.4 percent) were diagnosed with PSC.
- Approximately 70 percent of patients with PSC are men, with a mean age at diagnosis of 40 years, even though the sex distribution is equal between men and women in the overall UC population.
- However, in the small subset of patients without UC, the male:female ratio is lower (0.8:1) and patients are diagnosed at an older age.
- Women with PSC are generally diagnosed at an older age.
CLASSIFICATION
The early classifications of PSC were very rigid and excluded patients with gallstones, previous biliary tract surgery, inflammatory bowel disease, and retroperitoneal fibrosis.
- Additionally, progression of disease over a 2-year time period was mandatory prior to the diagnosis.
- These strict criteria seem unjustified and present classification schemes divide sclerosing cholangitis into
- primary (of unknown etiology) and
- secondary (with a known or suspected underlying cause).
Criteria for the diagnosis of primary sclerosing cholangitis.
Source: Porayko et al.
- Presence of typical cholangiographic abnormalities of PSC (involving bile ducts segmentally or extensively)
- Compatible clinical, biochemical, and hepatic histologic findings (recognizing that they are nonspecific)
- Exclude the following in most instances.
- Biliary calculi (unless related to stasis)
- Biliary tract surgery (other than simple cholecystectomy)
- Congenital abnormalities of the biliary tract
- AIDS-associated cholangiopathy
- Ischemic strictures
- Bile duct neoplasms (unless PSC previously established)
- Exposure to irritant chemicals (floxuridine, formalin)
- Evidence of another type of liver disease, such as primary biliary cirrhosis or chronic active hepatitis.
The most common secondary causes of sclerosing cholangitis include
- ischemia (arising from operative trauma, hepatic arterial infusion of floxuridine, allograft rejection),
- recurrent biliary sepsis,
- multifocal cholangiocarcinoma,
- AIDS, and
- toxic agents (formaldehyde, absolute alcohol).
Radiographically, secondary causes of sclerosing cholangitis simulate PSC but the clinical course and therapeutic options may differ considerably.
Caroli and Rosner developed an anatomical classification in which the condition is divided according to whether involvement of the biliary tree is diffuse or segmental.
- Segmental involvement could further be divided into disease that affects the hepatic duct junction, the common hepatic duct, or the common bile duct.
Longmire’s classification of primary sclerosing cholangitis.
|Type|Frequency (%)|Clinical/radiological features|
|Type 1|5–10|Affecting primarily the distal common bile duct|
|Type 2|5–10|Occurring soon after an attack of acute necrotizing cholangitis|
|Type 3|40–50|Chronic diffuse|
|Type 4|40–50|Chronic diffuse, associated with inflammatory bowel disease|
|Type of duct|Classification|Cholangiographic appearance|
|Intrahepatic ducts|Type I|Multiple strictures, normal caliber of bile ducts|
|Intrahepatic ducts|Type II|Multiple strictures, saccular dilations, decreased arborization|
|Intrahepatic ducts|Type III|Only central branches filled, severe pruning|
|Extrahepatic ducts|Type I|Slight irregularity of duct contour, no stricture|
|Extrahepatic ducts|Type II|Segmental stricture|
|Extrahepatic ducts|Type III|Stricture of almost the entire length of the duct|
|Extrahepatic ducts|Type IV|Extremely irregular margin, diverticulum outpouchings|
The classic onion-skin lesions are rarely seen on percutaneous biopsy of the liver; therefore, the diagnosis has usually been made through cholangiography.
Histologically, PSC tends to gradually progress through four reasonably well-characterized stages.
Stage 1 is the earliest, characterized by degeneration of epithelial cells in the bile duct and an inflammatory infiltrate localized to the portal triads.
In stage 2, fibrosis and inflammation infiltrate the hepatic parenchyma with subsequent destruction of periportal hepatocytes resulting in piecemeal necrosis and loss of bile ducts.
In stage 3, cholestasis becomes more prominent and portal-to-portal fibrotic septa are characteristic.
In stage 4, frank cirrhosis develops, with histological features similar to other causes of cirrhosis.
The most common association is with inflammatory bowel disease, which affects up to 75% of patients with PSC.
- Of these patients, over 80% have ulcerative colitis (UC) and less than 20% have Crohn’s disease.
- Conversely, only 2.5 to 7.5% of patients with UC have or will develop PSC.
- The true prevalence is likely much higher, but because many patients with UC are asymptomatic and show only minimal elevation in liver enzymes, cholangiography is not performed and they may remain undiagnosed.
Many other disorders, particularly inflammatory disorders, show an association with PSC. These include
- hypereosinophilic syndrome,
- Sjögren’s syndrome,
- systemic sclerosis,
- celiac disease,
- Behçet’s syndrome,
- histiocytosis X,
- sicca complex,
- rheumatoid arthritis,
- systemic mastocytosis, and
- Riedel's thyroiditis.
The cause of PSC is unknown, and multiple mechanisms are likely to play a role.
- The tight association between PSC and UC (a known autoimmune disease) suggests an autoimmune process. However, other mechanisms are likely to be important since only a minority of patients with UC have PSC.
- An inflammatory reaction in the liver and bile ducts may be induced by chronic or recurrent entry of bacteria into the portal circulation. Liver damage may also result from the accumulation of toxic bile acids that are abnormally produced by colonic bacteria or chronic viral infection.
- Ischemic damage to the bile ducts may occur.
Given the close association of PSC with ulcerative colitis, early investigators postulated that recurrent portal bacteremia might be an important factor in the development of the disorder.
- Recurrent portal infection could lead to chronic biliary tract infection, inflammation, and subsequent fibrosis and classical stricture formation.
- One study even found that portal bacteremia was present in patients who had colonic surgery.
- Subsequent studies, however, could not confirm the findings of portal vein phlebitis.
- Furthermore, if recurrent colitis leads to portal vein phlebitis, colectomy (or at least controlled colonic disease) should have a protective effect.
- This has not been demonstrated to be true.
- Additionally, hepatic histology does not support portal venous infection since the hallmark of this disorder, portal phlebitis, is mild or absent in most patients with PSC.
- Thus, there is little evidence to support the colonic-bacterial infection hypothesis.
If portal bacteremia from a colonic source is not a critical factor, then toxins that might be released from a diseased colon could be suspect.
- Theoretically, toxic bile acids such as lithocholic acid, which arise from bacterial activity within the colon, can be absorbed through a diseased colon with its increased mucosal permeability.
- Lithocholic acid is formed from chenodeoxycholic acid by bacterial 7-α-dehydroxylation in the colon, and it has even been shown to be hepatotoxic in animals.
- Unfortunately, abnormalities in bile acid metabolism in PSC or UC patients have not been demonstrated.
- Furthermore, in human tissue, lithocholic acid is rapidly sulfated and rendered nontoxic, a process which does not occur in animal models.
- Other toxic substances that have been considered more recently are N-formylated chemotactic peptides, produced by enteric flora, which have been shown in animal studies to induce fibrosis and damage to major bile ducts through colonic absorption and enterohepatic circulation.
- Increased biliary excretion of these peptides has been shown in experimentally induced colitis in animal models.
- Further investigation to delineate the role of these peptides in the etiology of PSC is required.
The major criticism of the theories of colonic toxins causing PSC comes from studies looking at the natural history of the disorder.
- It has been demonstrated that the severity of the colitis bears little relation to the development or severity of PSC.
- Furthermore, patients who have a colectomy show no change in their PSC natural history.
- Some patients develop PSC years after a colectomy or even prior to the onset of their colitis.
- Some patients who develop PSC never even have inflammatory bowel disease.
The association of appendectomy with IBD is an interesting one.
- Appendectomy has been demonstrated to have some interesting associations with UC and UC-associated PSC.
- In a case–control study in Australia, patients with PSC/UC, PSC alone, and UC were matched to controls with regard to the effects of appendectomy and smoking, and PSC was assessed with regard to disease onset, severity, and extent.
- Appendectomy rates in PSC patients were no different from controls; however, the appendectomy rates in those with UC were four times less than controls, suggesting a protective effect of appendectomy in this patient population.
- Additionally, appendectomy in both the PSC and UC groups was associated with a 5-year delay in the onset of either intestinal or biliary symptoms.
Abnormalities of copper metabolism have also been implicated in the pathogenesis of PSC.
- Several authors have noted that liver samples from patients with PSC show an excess of hepatic copper, which is known to be hepatotoxic.
- However, unlike other disorders with excessive copper deposition, treatment with chelating agents (penicillamine) has not been shown to have any benefit.
- Likely, as with many cholestatic disorders, copper accumulation is the result of poor biliary excretion, rather than a primary inciting event critical to the pathogenesis of the disorder.
Chronic infection of the biliary tree has been implicated in the pathogenesis of PSC through several observations.
- Longmire, who noted that some patients appear to develop PSC after an initial episode of acute necrotizing cholangitis, classified this group as a separate category (type 2) of PSC.
Patients with acquired immunodeficiency syndrome (AIDS) have been noted to have a sclerosing cholangitis that is felt to be caused by opportunistic infection (i.e. cytomegalovirus, cryptosporidium).
- Unfortunately, an extensive investigation of 37 PSC patients showed evidence of cytomegalovirus (by polymerase chain reaction (PCR) testing of liver tissue) in only one patient.
- Although reversibility of sclerosing disease in an infectious environment has been demonstrated in immunocompromised patients who have the underlying infection treated, this has not been demonstrated in normal hosts who have a fully functional immune system.
- Early reports suggested that patients with PSC had a significant increase in antibody titers to Reovirus compared to controls.
- More recent data, however, show no difference in prevalence or titers of Reovirus between controls and PSC patients.
Rubella can also cause an obliterative cholangitis of the intrahepatic ducts in the fetus, although the histological picture differs from that of PSC. Despite these negative studies, an infectious etiological agent that alters antigenic determinants has yet to be excluded in PSC.
Immune activation —
There are multiple lines of evidence supporting an immunopathogenic cause for PSC. A number of abnormalities in humoral immunity have been described in these patients:
- Up to 50 percent have an elevated IgM level, and some may also have an increased IgG fraction.
- Autoantibodies are frequently present in patients with PSC, with titers in the range associated with autoimmune hepatitis. The most common are antismooth muscle antibodies and antinuclear antibodies, which are found in approximately 75 percent of patients
- Antibodies directed against cytoplasmic and nuclear antigens of neutrophils with a characteristic perinuclear staining pattern (P-ANCA) are found in up to 80 percent of adults with PSC.
- The antibodies appear to be directed against a myeloid 50 kilodalton nuclear envelope protein, not myeloperoxidase as in typical P-ANCA antibodies. In one report, P-ANCA had a 100 percent specificity for PSC compared to controls with other liver diseases; P-ANCA is also found in 25 to 30 percent of unaffected first degree family members of patients with PSC. P-ANCA has also been identified in children with PSC but not in those with UC alone.
- These antibodies are not related to the presence or absence of UC.
Abnormalities of the cellular immune response have also been described in patients with PSC:
- There are conflicting data reporting either an increase or decrease in the total number of circulating T cells; however, the number of CD4 positive T-cells in the liver is increased.
- Bile duct epithelial cells in PSC may be targets for immune-mediated attack by T cells.
- The bile duct cells in PSC express antigens which cross-react with colonic epithelial cells.
- Bile duct cells aberrantly express HLA class II antigens, and ICAM (intercellular adhesion molecule)-1 is expressed by ductular epithelial cells.
- There may be a genetic predisposition to PSC since these patients have an increased prevalence of HLA-B8, -DR3, and -DRw52a .
- One study, for example found that HLA DRw52a was present in 100 percent of patients with PSC.
- Subsequent reports, however, have only found a 50 percent prevalence of this haplotype.
- Both HLA-DRw52a and -DR4, which occurs less frequently, appear to increase the risk for severe or progressive disease.
Ischemic ductal injury —
- Ischemic injury to the bile ducts results in a clinical, biochemical, and cholangiographic picture similar to PSC.
- Intraarterial infusion of floxuridine also results in a comparable appearance.
- Thus, it is possible that ischemic injury to peribiliary arterioles and capillaries may be involved in the pathogenesis of PSC.
- However, there are no data to support this hypothesis, or to demonstrate that hepatic or biliary blood flow is deficient in patients with this disorder.
Cystic fibrosis transmembrane conductance regulator mutations —
- Because of the radiologic and histologic similarities between PSC and cystic fibrosis, mutations in the cystic fibrosis transmembrane conductance regulator (CFTR) have been sought in patients with PSC.
- One preliminary study suggested that a subset of patients with PSC had evidence of CFTR-mediated ion transport dysfunction; affected patients had a chloride secretory response intermediate between patients with cystic fibrosis and controls. | <urn:uuid:424c5b33-e5e7-460a-95dc-fa081cbdc071> | CC-MAIN-2017-51 | http://surgerysearch.blogspot.com/2008/05/primary-sclerosing-cholangitis-psc.html | s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948522343.41/warc/CC-MAIN-20171213084839-20171213104839-00527.warc.gz | en | 0.93793 | 4,158 | 2.609375 | 3 |
|ORIGINAL RESEARCH ARTICLE
|Year : 2016 | Volume
| Issue : 2 | Page : 95-106
Using scientific inquiry to increase knowledge of vaccine theory and infectious diseases
Zachary F Walls1, John B Bossaer2, David Cluck3
1 Department of Pharmaceutical Sciences, Gatton College of Pharmacy, East Tennessee State University; Center of Excellence for Inflammation, Infectious Disease and Immunity, Johnson City, TN, USA
2 Department of Pharmacy Practice, Gatton College of Pharmacy, East Tennessee State University, Johnson City, TN, USA
3 Center of Excellence for Inflammation, Infectious Disease and Immunity; Department of Pharmacy Practice, Gatton College of Pharmacy, East Tennessee State University, Johnson City, TN, USA
|Date of Web Publication||19-Aug-2016|
Zachary F Walls
Department of Pharmaceutical Sciences, Gatton College of Pharmacy, East Tennessee State University, P. O. Box 70594, Johnson City, TN 37614
Source of Support: None, Conflict of Interest: None
Background: The aim of this study was to design and evaluate a laboratory activity based on scientific inquiry to educate first-year pharmacy students in the U.S. about vaccination theory and the attributes of common pathogens. Methods: The laboratory activity had two principal sections. The first consisted of an interactive game during which students rolled a die to determine outcomes based on a set of pre-determined criteria. In the second section, students generated and tested hypotheses about vaccine theory using a computer simulation that modeled disease transmission within a large population. In each section students were asked to evaluate epidemiological data and make inferences pertinent to vaccination effectiveness. Results: Mean scores on a knowledge-based assessment given immediately before and immediately after the activity increased from 46% to 71%. Discussion: A laboratory activity designed to stimulate scientific inquiry within pharmacy students enabled them to increase their knowledge of common vaccines and infectious diseases.
Keywords: Active learning, computer simulation, laboratory activity, pharmacy students, role-playing game
|How to cite this article:|
Walls ZF, Bossaer JB, Cluck D. Using scientific inquiry to increase knowledge of vaccine theory and infectious diseases. Educ Health 2016;29:95-106
| Background|| |
Vaccines represent one of the crowning achievements of medical technology. The development of vaccines has reduced the incidence of infection by a myriad of pathogens that until recently had plagued society throughout recorded history. And yet, admiration and adoption of vaccines is not universal. This is due in part to their unique position in the pharmaceutical landscape, and, unfortunately, in part to popular misconceptions linking vaccines to various maladies, including autism. In the United States, pharmacists administer many vaccines to the public and thus must be properly informed so that they can discuss the risks and benefits of vaccines with their patients.
It is also important to encourage and fortify the scientific literacy of student pharmacists. In the U.S., all pharmacists entering practice must possess a Doctorate of Pharmacy (PharmD) degree. This degree is conferred following successful completion of a four-year program during which students receive didactic instruction in basic, pharmaceutical, and clinical sciences, as well as experiential education in multiple settings (hospital, community, etc.). Due to the ever-expanding catalog of medicinal drugs and the particulars associated with their therapeutic applications, the PharmD curriculum often requires a reduction in time devoted to scientific inquiry and experimentation for the sake of didactic instruction. This paradigm generally results in pharmacists being experts in the facts of pharmaceutical therapy but not being clinicians capable of investigating complex problems. The laboratory environment, due to its physical and temporal properties, has the potential for dissemination of information via scientific inquiry. The goal of the laboratory exercise described herein was to educate students on vaccine effectiveness and herd immunity by using the basic tenets of the scientific method.
This goal is highly significant in relation to the standards put forth by the agency responsible for pharmacy school accreditation in the U.S., the Accreditation Council for Pharmacy Education (ACPE). The 2016 standards state that content areas such as the "properties of microorganisms responsible for human disease", the "augmentation of the human immune system to prevent disease", and the "cause and effect patterns of health and disease in large populations" are central to a "contemporary, high quality pharmacy education". This laboratory experience represents an innovative implementation of these standards.
There is a body of literature on inquiry-based laboratory exercises, although most published exercises have been designed for the undergraduate level. Several reports dealing specifically with inquiry-based education in pharmacy education have also been published, though largely in the context of didactic instruction. , In addition, a handful of articles detailing the use of laboratory research in the pharmacy curriculum to increase understanding of the scientific method are available. , All of these articles extoll the virtue of inquiry-based learning and confirm its value in increasing understanding and knowledge retention.
The purpose of this educational innovation and its evaluation was to develop and validate a laboratory exercise that reinforced and augmented instruction on vaccines and their related pathogens covered briefly in other didactic courses in the curriculum, e.g., immunology. The approaches chosen to achieve this goal were guided scientific inquiry and computer-simulated experimentation in order to challenge first year students to use the scientific method to answer complex questions. Guided scientific inquiry refers to activities designed to help students arrive at a specified answer by engaging in scientific processes. These methods are based on the educational principle of social constructivism, which argues that knowledge is constructed in the mind of the learner rather than transferred from the instructor.
This laboratory exercise occurs at the end of the spring semester of the first year, approximately 8 months into the curriculum. It was specifically scheduled for this time to take advantage of concepts introduced throughout the first year and to integrate several disciplines, including biopharmaceutics and immunology.
| Methods|| |
The laboratory exercise included 82 students enrolled in their first year of a four-year Doctor of Pharmacy (PharmD) degree program. All students had previously completed 61 credit hours of specified undergraduate courses prior to entering the pharmacy program. The students were divided into two sections for the Integrated Environment for Applied Learning and Skills (IdEALS) course. This course is the second in a 6 course sequence designed to span the length of the didactic PharmD program. The goals of the sequence are to provide an opportunity for hands-on learning and integrate aspects of basic science, pharmaceutical science, and pharmacy practice. Each course within the sequence is assigned 1 credit hour, and each laboratory session typically spans 4 hours once a week.
The vaccine lab consisted of two parts. Part 1 was designed to imitate a role-playing game in which a player's fate is determined by the roll (or rolls) of a die (e.g., Dungeons and Dragons). Dice are small cubes with a different number of dots on each face, ranging from 1 to 6. They are commonly used in children's games and familiar to all U.S. students. Students were numbered 1 through 6 and then given a single 6-sided die and a laminated note card containing demographic information of a fictitious character [Figure 1]. Additionally, a handout was distributed detailing the rules of the game [Appendix 1 [Additional file 1]]. Based on their number, students were identified as vaccinated or non-vaccinated, and infected or uninfected. Eight scenarios in total were played, with each scenario varying either the pathogen or the percentage of the population vaccinated. Four pathogens were chosen for the exercise: Influenza virus, measles virus, Bordetella pertussis, and Ebola virus. The influenza virus was chosen because the influenza vaccine is the most commonly administered immunization by pharmacists in the U.S. The measles virus was chosen because of measles' highly contagious properties, controversy over the MMR (measles, mumps, and rubella) combined vaccine, and recent outbreaks of measles in the U.S. due to a reduction in vaccine coverage. Bordetella pertussis was chosen for its severity in children and recent U.S. outbreaks. The Ebola virus was chosen for its newsworthiness and its greatly different characteristics compared to the other pathogens chosen.
|Figure 1: Example of laminated card with fictitious character demographics and die given to each student|
Click here to view
Game play consisted of multiple rounds, with each round containing 2 steps. The first step determined the player's viability (chances of living if infected), which was influenced by the particular pathogen and the character's age. It was calculated by throwing the die three times, and then matching the sum of the rolls to the corresponding variables [Table 1]. The second step was performed between two players and determined whether an infected player passed the pathogen on to an uninfected player. Transmission was influenced by the particular pathogen and the player's vaccination status. It was calculated by throwing each players' die twice, and then matching the sum of their rolls to the corresponding variables [Table 1]. Players continued these 2-step rounds until all pairwise interactions between players had been made.
Following completion of each scenario, students were instructed to enter the results of their character into a cloud-based spreadsheet using a link that had been disseminated via the course's learning management system (Desire 2 Learn) website [Figure 2]. This method of data collection permitted real-time analysis of each variable's effect on various outcomes. For example, as shown in [Figure 1], scenarios 1 and 2 compared the outcomes of an influenza virus outbreak when approximately half of the population is vaccinated (scenario 1) versus when approximately 90 percent of the population is vaccinated (scenario 2). These results were used to illustrate the impact of herd immunity and the effectiveness of the influenza vaccine.
|Figure 2: Real time analysis of data entered by students. Each student was assigned an arbitrary number and then asked to record the results of their "character" once the scenario had ended by answering yes (Y) or no (N) to several questions. The graphs were linked to the responses and updated in real time so that results could be discussed and compared as soon as the scenario ended. This graph shows the results of two influenza scenarios. In scenario 1, approximately half of the participants were vaccinated. In scenario 2, approximately 90% of the participants were vaccinated|
Click here to view
Part 2 of the vaccine lab consisted of students using a spreadsheet-based model of disease propagation and vaccine effectiveness. The model used Bayesian probability and estimates of vaccine effectiveness, pathogen mortality, secondary household attack rate, and the duration of infectivity were based on information provided by the U.S. Centers for Disease Control and Prevention (CDC). The model consisted of 10,000 cells representing a closed population of 10,000 individuals. Each cell was designated at random as either "vaccinated" or "unvaccinated" based on the "% vaccinated" initial condition. The status of individual cells was visualized using conditional formatting. A small percentage of the population (0.1%) was chosen at random to be "sick". The rate of disease propagation depended on the variables mentioned above as well as a random variable governing interaction between neighboring cells. An example of the visual output of the model for measles virus as a function of percent of the population vaccinated can be seen in [Figure 3]. The file containing the model, along with a separate file containing instructions on how to modify variable and perform the necessary calculations were distributed via the course's learning management system (Desire 2 Learn) website (Supplementary File).
|Figure 3: Vaccine computer simulator output. The series of images depict the progression of the measles virus as a function of both time (iterations) and vaccination coverage. The simulation was written in a Microsoft Excel spreadsheet to facilitate manipulation by students. Conditional formatting was used to represent the status of individuals within a population. Green = sick, red = vaccinated, yellow = unvaccinated, black = dead, red with white "X" = recovered from natural infection|
Click here to view
Students were guided through several different scenarios dealing with the four selected pathogens. With each subsequent scenario, the students were given fewer of the starting variables and asked to generate hypotheses about the necessary vaccine effectiveness and/or vaccination rate in order to protect a certain percentage of the population. The students were then able to test their hypotheses by running the simulations and recording the outcomes. All results from the simulation experiments were entered into a separate cloud-based spreadsheet so that results could be analyzed in real time [Figure 4]. The real time reporting allowed the instructor to monitor group progress as well as emphasize trends in the data to the entire class.
|Figure 4: Real time analysis of data entered by students. The chart reflects the results of simulations carried out by students using the computer simulator to identify the optimal level of vaccination against measles in order to protect unvaccinated individuals. Students altered the percent of the population vaccinated against the measles virus and then ran the simulator until the outbreak was contained. Graphs such as this were linked to the responses from the entire class and updated in real time so that the results could be discussed and compared continuously|
Click here to view
Students were assessed with a knowledge-based multiple choice quiz. Quiz questions were generated by the authors of this study and evaluated for content and consistency by group consensus. The mean scores of the pre-assessment and post-assessment were compared using the paired t-test (two-tailed), and the median scores were compared using the Wilcoxon matched-pairs signed-rank test. Both parametric and non-parametric analyses were performed to account for possible non-normally distributed data. All statistical analysis was performed using Prism 6 (GraphPad). Statistical significance for individual questions was determined using McNemar's test (P < 0.05). This study was approved as exempt research by the Institutional Review Board of East Tennessee State University.
| Results|| |
To evaluate the effectiveness as this laboratory exercise, students were given a pre-lab assessment at the beginning of the class period and again assessed using the same 15 item tool following completion of the laboratory exercise. Eighty-one students completed the pre-assessment and 82 completed the post-assessment (99%). Eleven of the 15 knowledge-based assessment questions (73%) showed a statistically significant improvement in student performance in the post-lab assessment relative to the pre-lab assessment. One question, #15, showed a statistically significant change in percent correct, but student performance declined rather than increased. This question, which asked the students to identify the pathogen responsible for the greatest number of U.S. deaths in 2014, was answered in one of the student handouts, but was not emphasized by any of the instructors. In addition, both the mean and median assessment score improved significantly. Of the 81 students that completed both the pre and post assessments, 75 (93%) had scores that improved (the scores of 3 did not change, and the scores of 3 others decreased) [Table 2].
| Discussion|| |
A strong record of using game play to educate students in the life sciences can be found in the literature. Most reports detail the creation of card or board games, and at least one reports the results of using an interactive video game. ,, Evaluations of all of these innovations demonstrate the advantage of using games to increase student understanding and knowledge of complex biological concepts. Additionally, Donohoe and colleagues have described a laboratory exercise that deals exclusively with vaccines. ,,, The authors report on the implementation and results of a well-designed laboratory exercise that improves students' knowledge of three common vaccines (influenza, pneumococcal, and shingles) and the practical concerns regarding their administration.
The laboratory exercise described above based on game play and computer simulations also significantly increased students' knowledge of facts and statistics related to vaccines. Additionally, it provided an opportunity for students to use the scientific method to test hypotheses. Taking place at the end of their first year of instruction, it took advantage of other topics taught during that year, including immunology and biopharmaceutics.
The students generally appeared to enjoy the laboratory exercise, although this was not assessed formally. Anecdotally, instructors observed that some individuals responded positively to the dice game and caught on to the rules and scoring quite quickly, while others responded to the computer simulation more favorably. As this lab occurred twice (once each for two different sections) with different instructors for each section, it was found that the dice game benefitted from input from a practicing clinical pharmacist who moderated one of the sections. Students seemed to respond enthusiastically as the game was compared to real-life statistics and observations from practice. Conversely, it was found that the computer simulations benefitted from group guidance by the program's author, who moderated the other section. Students were able to receive feedback about their hypotheses with respect to the variables in the program and gain an appreciation for experimental repetition during the section led by the instructor who wrote the code.
Although statistical analysis of pre and post assessments indicated that the students increased their knowledge of the subjects addressed, it is unclear if this increase is permanent, or whether it reflects a transient retention of facts. A longitudinal study would be necessary to ascertain the lasting value of this laboratory exercise.
Given the strong improvement in student assessment scores, the faculty members involved in this course will continue to include this exercise as part of the semester's instruction, however, certain changes and improvements are desired. With respect to the dice game, an additional layer of game theory will be implemented in future years. Game players will be given a finite number of vaccines and must decide as a collective how to distribute them, given the susceptibility to disease and geographical restrictions of certain players. It is anticipated that this will provoke different hypotheses within the group about how best to protect the greatest number of players. The game will then be played out as before with the results recorded and results analyzed to determine the most viable hypothesis.
With respect to the computer simulation, certain changes will be made to the underlying code. For instance, as the code is currently written, the duration of transmissibility competes with the mortality rate as determined by Bayesian probability based on reported values. The models would most likely be more faithful if the chance of death was only calculated after a constant incubation period. For pathogens with a low mortality rate (such as the measles virus), this change will not have a large effect, but for pathogens with a long incubation period and a high mortality rate (such as the Ebola virus), this change could have large ramifications. Additionally, the model will be changed to incorporate demographic information, and random ages will be assigned to each individual and their probabilities for infection and mortality will change based on that assignment. These changes will provide greater value to the lab and enhance the students' understanding of vaccines and public health.
Lastly, although this laboratory exercise was designed primarily for first year pharmacy students, it could be easily adapted for medical students, graduate students studying public health, or undergraduate students studying immunology and microbiology. Knowledge of vaccines and infectious diseases remains important today, and it is critical that the next generation of life scientists and healthcare professionals are properly educated on these subjects so that they may counteract the array of misinformation pervasive in society. Furthermore, educating students at all levels to use the scientific method will help create a scientifically literate populace, an admirable goal in itself.
The authors would like to acknowledge Dr. David Roane, Chair of the Department of Pharmaceutical Sciences, for his general support and encouragement.
Financial support and sponsorship
This research was supported by gifts to the Bill Gatton College of Pharmacy.
Conflicts of interest
There are no conflicts of interest.
| References|| |
Pulendran B, Ahmed R. Immunological mechanisms of vaccination. Nat Immunol 2011;12:509-17.
Scully T. The age of vaccines. Nature 2014;507:S2-3.
Mormann M, Gilbertson C, Milavetz G, Vos S. Dispelling vaccine myths: MMR and considerations for practicing pharmacists. J Am Pharm Assoc 2012;52:e282-6.
Edwards N, Gorman Corsten E, Kiberd M, Bowles S, Isenor J, Slayter K, et al.
Pharmacists as immunizers: A survey of community pharmacists′ willingness to administer adult immunizations. Int J Clin Pharm 2015;37:292-5.
Accreditation Council for Pharmacy Education. Accreditation Standards and Key Elements for the Professional Program in Pharmacy Leading to the Doctor of Pharmacy Degree. Effective July 1, 2016. http://www.acpe-accredit.org/pdf/Standards2016FINAL
pdf. [Last accessed on 2016 Jun 10].
Kolkhorst FW, Mason CL, DiPasquale DM, Patterson P, Buono MJ. An inquiry-based learning model for an exercise physiology laboratory course. Adv Physiol Educ 2001;25:117-22.
Soltis R, Verlinden N, Kruger N, Carroll A, Trumbo T. Process-oriented guided inquiry learning strategy enhances students′ higher level thinking skills in a pharmaceutical sciences course. Am J Pharm Educ 2015;79:11.
Brown SD. A process-oriented guided inquiry approach to teaching medicinal chemistry. Am J Pharm Educ 2010;74:121.
Vaidean GD, Vansal SS, Moore RJ, Feldman S. Student scientific inquiry in the core curriculum. Am J Pharm Educ 2013;77:176.
Ramsauer VP. An elective course to engage pharmacy students in research activities. Am J Pharm Educ 2011;75:138.
Furtak EM. The problem with answers: An exploration of guided scientific inquiry teaching. Sci Educ 2006;90:453.
Bodner G, Klobuchar M, Geelan D. The many forms of constructivism. J Chem Educ 2001;78:1107.
Gygax G, Arneson D. Dungeons & Dragons. Lake Geneva, WI: Tactical Studies Rule 1974.
U.S. Department of Health and Human Services. Annual Pharmacy-Based Influenza and Adult Immunization Survey 2013. National Vaccine Program Office; December 2013.
Clemmons NS, Gastanaduy PA, Fiebelkorn AP, Redd SB, Wallace GS; Centers for Disease Control and Prevention (CDC). Measles - United States, January 4-April 2, 2015. MMWR Morb Mortal Wkly Rep 2015;64:373-6.
Winter K, Glaser C, Watt J, Harriman K; Centers for Disease Control and Prevention (CDC). Pertussis epidemic - California, 2014. MMWR Morb Mortal Wkly Rep 2014;63:1129-32.
Incident Management System Ebola Epidemiology Team, CDC; Guinea Interministerial Committee for Response Against the Ebola Virus; World Health Organization; CDC Guinea Response Team; Liberia Ministry of Health and Social Welfare; CDC Liberia Response Team; Sierra Leone Ministry of Health and Sanitation; CDC Sierra Leone Response Team; Viral Special Pathogens Branch, National Center for Emerging and Zoonotic Infectious Diseases, CDC; Centers for Disease Control and Prevention (CDC). Update: Ebola virus disease epidemic - West Africa, February 2015. MMWR Morb Mortal Wkly Rep 2015;64:186-7.
U.S. Department of Helath and Human Services. Centers for Disease Control and Prevention; 2015.
Bochennek K, Wittekindt B, Zimmermann SY, Klingebiel T. More than mere games: A review of card and board games for medical education. Med Teach 2007;29:941-8.
Gutierrez AF. Development and effectiveness of an educational card game as supplementary material in understanding selected topics in biology. CBE Life Sci Educ 2014;13:76-82.
Su T, Cheng MT, Lin SH. Investigating the effectiveness of an educational card game for learning how human immunology is regulated. CBE Life Sci Educ 2014;13:504-15.
Boeker M, Andel P, Vach W, Frankenschmidt A. Game-based e-learning is more effective than a conventional instructional method: A randomized controlled trial with third-year medical students. PLoS One 2013;8:e82328.
Donohoe KL, Mawyer TM, Stevens JT, Morgan LA, Harpe SE. An active-learning laboratory on immunizations. Am J Pharm Educ 2012;76:198.
Peffer ME, Beckler ML, Schunn C, Renken M, Revak A. Science classroom inquiry (SCI) simulations: A novel method to scaffold science learning. PLoS One 2015;10:e0120638.
Rowe MP, Gillespie BM, Harris KR, Koether SD, Shannon LJ, Rose LA. Redesigning a General Education Science Course to Promote Critical Thinking. CBE Life Sci Educ 2015;14. pii: Ar30.
Stone EM. Guiding students to develop an understanding of scientific inquiry: A science skills approach to instruction and assessment. CBE Life Sci Educ 2014;13:90-101.
[Figure 1], [Figure 2], [Figure 3], [Figure 4]
[Table 1], [Table 2] | <urn:uuid:8fd60bb7-cc01-4b60-b906-80a511c35051> | CC-MAIN-2022-33 | https://educationforhealth.net/article.asp?issn=1357-6283;year=2016;volume=29;issue=2;spage=95;epage=106;aulast=Walls;type=3 | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572908.71/warc/CC-MAIN-20220817122626-20220817152626-00097.warc.gz | en | 0.935367 | 5,617 | 2.953125 | 3 |
Copyright 2020 Melanie Spiller. All rights reserved.
Instrument Biography: The Lyre
MelanieSpiller and Coloratura Consulting
The lyre was ubiquitous from ancient times until the Middle Ages. It was present in ancient Egypt and Mesopotamia, endured in Asia, prospered in Africa, and wandered all over Europe and Great Britain. Even so, it’s been nearly completely absent from musical experience for the last 600 years. But that doesn’t make it irrelevant. Without it, the harp, zither, lute, guitar, violin, vielle, and countless other instruments would never have been invented. Back in ancient Egypt, the instruments in an “orchestra” (this term meant something different back then) were very quiet, like the lyre, harp, and flute. Middle Eastern groups of musicians came to resemble noisier Asian orchestras around 3000 BCE with the influx of newly conquered peoples and their instruments. By 1700-1500-BCE, this change affected the social standing of musicians—where once music had been a hobby for the elite, under the New Empire, music became the purview of professionals, often of ill repute. Upper-class conservatives preserved the old music in temples and schools, leaving noisier music to the general population—just like today! The instruments adopted or developed by Egyptians during this period of transition were lyres (during the Hellenistic period), kitharas (a posh version of the lyre), lutes,harps,flutes, reed instruments (similar to oboes and clarinets), castanets, cymbals, bells, drums, and rattles. There are examples of failed attempts to make trumpets from this time as well.It’s probable that the development of all these other instruments began because even the largest lyre couldn’t play more than two octaves. In fact, most could only play less than one octave because they had only three, five, or six strings. It’s also probable that this had long been an acceptable range because singers would have been all male, rendering a broader range unnecessary. (Because women often sing in both head and chest voice, even an untrained woman usually has nearly double the range of most men. It’s not a judgement fellahs, it’s the great estrogen/testosterone divide.)Some musicologists think of the lyre as part of the zither family (which also includes lutes, guitars, kantele, and psalteries). Other musicologists insist that they’re not in the same family because zithers (and lutes, guitars, kanteles, and psalteries) have strings that cross the soundboard for their entire length or nearly the entire length, whereas a lyre’s strings cover the soundboard for half or less of their length. The poetic recitations of the ancient Greeks were accompanied by lyres. Apparently, the Greek god Apollo played one, and for a while, the instrument became a cult favorite in ancient Greece during the rise of Apollo’s cult. An account by Homer credits the invention of the lyre to the Greek god Hermes, but a Thracian account claims that they had used the lyre long before the Greeks. In truth, this ancient stringed instrument was, along with the kithara, the most important stringed instrument of both ancient Greece and ancient Rome, not to mention Asia, Africa, and Egypt. Although its popularity waned a bit by the Middle Ages, its association with King David brought a small resurgence in popularity in Europe, and a lyre often appeared in illustrations of musicians and angels from the late 7th century onward.The Judeo-Christian Bible mentions the lyre in 42 places. The Septuagent translates the word for lyre 20 times as kithara(in Psalms, Job, and Isaiah), 17 times as kinnyra (in Samuel, Kings, and Chronicles), and several other times in Greek forms. 
The Vulgate translates 37 of the 42 times as cithara, once as cithara pro octava, in two places as psalterium, once as organum, and twice as lyra. The Aquila, Symmachos, and Theodotion (versions of the Bible) use either kithara or psalterion.The expression “lyric music” originally meant “music sung to the lyre.” Betcha didn’t know that! Oh, and just so we’re all playing in the same band, the difference between a harp and a lyre is that the lyre has a soundboard with two arms sticking out of it, roughly parallel like a U-shape, and with a crossbar connecting the two arms. The strings of a lyre run from the soundboard to the crossbar, parallel to the arms and across the face of the soundboard. The harp is a triangle and the strings are perpendicular to the soundboard, sticking out of it rather than running across it. Neither has strings that can be depressed to change the note.
There isn’t much evidence of lyres in Mesopotamia before the Greeks came, but if flourished after that. Curt Sachs, one of the world’s greatest musicologists, said that there is no evidence of lyres anywhere until about the 15th century BCE, about 1200 or 1300 years after harps appeared. However, archaeological evidence disputes this. For instance, 20thcentury archaeologists exploring royal tombs at Ur, a Sumerian city on the Euphrates, found several lyres and harps, as well as paintings of them being played, from around 2500 BCE.From the times of the pharaohs, around 1900 BCE, there are lyres in paintings (frescos), as played by Semitic or possibly Hebrew nomads, who came to ask for royal permission to settle in Egypt. A painting from c1650 BCE of the Hyksos depicts a Bedouin coming to visit the governor while playing a lyre of the same type that was brought to Mesopotamia by Semitic people. During Akhenaton’s time (the 1330s BCE), Syrian girls played lyres with fingers or a plectrum according to tomb paintings. And, from about 1200 BCE, there’s a piece of ivory carved with a Canaanite king surrounded by luxury and lions (!) with a musician playing a lyre for his entertainment.In the time of Ramses III (around the 1160s BCE) at Thebes, the usually seven-stringed lyre took a new form as the two gracefully curved arms were made in different lengths so that the crossbar was not quite parallel to the top of the soundbox. The arms had carved animal heads at their ends.A vase from Megiddo depicts a lyre from around 1025 BCE, thought to be in the style that King David would have played. It was either brought to Egypt by the Israelites or the Canaanites and was discovered by the Hebrews in their new homeland. There are surviving instruments from the end of the first millennium BCE in the Cairo Museum and one in the Metropolitan Museum in New York.In Greece, a form of lyre was called a phorminx and like other lyres, was chiefly used as an accompanying instrument. Learning to play the lyre was considered a core element of education in Athens. Both men and women played the lyre, and it was used to accompany dancing, singing, and recitation of epic poetry, such as Homer’s “Iliad” and “Odyssey.” The lyre was also used in ceremonies, like weddings, and sometimes they used it just for fun. The Greek form of lyre called the kithara would have been played by a professional who performed at public ceremonies. A lyre, on the other hand, would have been played by amateurs—free-born men who didn’t earn their livings by playing and performing. The Egyptians adopted Asian instruments during the Hellenistic period (between 323 BCE and the first century CE), including lutes, kitharas, lyres, flutes, clarinet-types, and oboe-types, castanets, cymbals, bells, drums, and rattles, including sistrums. On the Isle of Skye in Scotland, a lyre from 300 BCE has been found. In India, there were paintings made of dancing girls playing lyres (and harps and drums), until, in the 1st century CE in the Indo-Scythic courts, images of men appeared with lutes, lyres, and double oboes. The lyres and oboes disappeared fairly fast, as the Greek influence on Indian music was minimal.Clement of Alexandria (c150-c200 CE) approved of the lyre and the kithara because they’d been played by King David, but in general disapproved of instruments for Christian music. He feared that the pagan influence was too strong in those other instruments. 
He also admonished his fellow Christians to avoid the chromatic and theatrical melodies of the heathens (meaning the Greeks), and advised them to return to the spiritual songs, the traditional psalm singing of David. He cites one example in an ancient Greek drinking song:Among the Ancient Greeks, in their banquets over brimming cups, a song was called skolion, after the manner of the Hebrew psalms, all together raising the paean with the voice, and sometimes taking turns in the song while they drank to everyone’s health, while those that were more musical than the rest sang to the lyre. But the Christians weren’t alone in looking at music as worship. A passage in the Talmud encourages people to sing in celebration:The song of thanksgiving was sung to the accompaniment of lutes, lyres, and cymbals at every corner and upon every great stone in Jerusalem. Diodorus Siculus, in the 1st century BCE, used a lyre-like instrument to accompany Celtic songs. The Celtic version had an arched yoke to which the strings were attached rather than to a crossbar. The Celtic name was the crot or the cruit, which later evolved into the crwth in Welsh and the crowd in English. Crwths have six strings, four of which run across the fingerboard; the other two act as drones.It seems that the Celtic north developed lyres independent of the Greek and Roman lyres. They were found in drawings from the 8th century CE, and looked surprisingly like Sumerian instruments. They were used by Anglo-Saxon minstrels and their continental contemporaries. Similar instruments were found all over the Europe.In the 11th century CE, inventors combined the yoke of the lyre with the neck of fingerboard instruments, eventually evolving it into the stringed instrument we know today as the guitar. The lyre-player’s function was to perform a free and florid version of the same melody that was sung—not harmony or accompaniment but heterophony, which anticipated ornamental variation but didn’t provide counterpoint. There was a lot going on in the early Middle Ages regarding music innovation. In particular, harmonies, rhythms, and chords resulted as part of the development of music notation. (For more on this, see The History of Music Notation.) By the late Middle Ages, the lyre had become less popular than other plucked or bowed instruments, because they had greater flexibility in tone, tuning, and playing multiple notes simultaneously. For instance. the fiedel or vielle, with its fingerboard and bow, appeared around then. Its descendents, like the gamba and the violin, are still popular. (There will be a blog on this someday.)People still play lyres in North-Eastern Africa, but you’ll be hard-pressed to find them anywhere else.
The lyre has a soundbox, two arms, a crossbar that connects the two arms, and gut strings that are attached at the base of the soundbox, cross the length of the soundbox, and stretch across an open space to be attached at the crossbar. There’s a second bar, parallel to the crossbar, that functions like a bridge to raise the strings above the surface of the soundbox. The strings stretch from the bridge to the crossbar, and are held there by strips of fatty ox hide. Twisting the fatty hide changes the pitch by tightening or loosening the strings. The soundbox is hollow, often made of wood or tortoiseshell, and the arms can be made from the same piece of wood as the soundbox, added pieces of wood, or occasionally horns, antlers, or branches. Sometimes these arms have carvings at their ends. The crossbar can be made of wood, branches, metal, wire, or antler and can be parallel to the top edge of the soundbox, at an angle, or curved away from the soundbox.Most lyres are small, from half a foot wide and a foot-and-a-quarter long, to about four feet long and a foot wide. They were meant to be played while seated or standing, and occasionally from horseback. The lyre was held in the left hand, resting on the left hip, perpendicular to the body.A lyre has from five to seven strings, although there are instruments with fewer and some with more. The strings are all of equal width and length and a change in pitch is the result of varying the tension of the strings. If they’re too thick or too loosely strung, they sound feeble, if too thin or too tightly strung, they break. In comparison, the harp’s strings are of different lengths, and a harp has more notes to offer; that’s probably why the harp’s popularity has endured and the lyre’s hasn’t.Like the harp, the string with the deepest note on the lyre is furthest from the player’s body. The lyre is played by placing the fingers of the left hand on certain strings to stop them from sounding, and strumming or plucking the strings with a plectrum held in the right hand. Mycenaean (Greek) examples include two ivory lyres, with their crossbars pierced for eight strings. These pieces further the general belief that Greece got lyres from Egyptians and Phoenicians. Earlier forms, from the 8th century BCE, were small, with round bases and four strings. Slightly later, around the end of the 8th century, there’s a Hittite relief that shows a six-stringed lyre. By the end of the 7th century BCE, there are images of seven-stringed instruments, played with a plectrum.In ancient Greece, the tuning would have been E G A B D (five strings—or pentatonic tuning—in intervals of a third, three seconds and another third). I didn’t find any details on other tunings.Tuning pegs developed in the early middle ages, but interest in the lyre was already fading, so this development didn’t catch on.A tether (leather or cloth) attaches the bottom of the lyre to the left wrist, helping to balance the lyre on the left hip when the player stands. The wrist strap sometimes extends to be more of a sling, with decorative tassels and other ornaments. Specific fingers on the left hand are used to pluck or damp specific strings. The right hand wields the plectrum, which looks like a small spoon and dangles from the instrument by a small cord in some instances. The right hand was used to pluck and strum the instrument with the plectrum and with bare fingertips.The plectrum is made of animal horn. 
Playing close to the bridge (on the soundbox) produces a bright, loud sound, with harmonics and sympathetic strings sounding as a result of the strings vibrating. The plectrum is used for introductory passages—it produces too loud a sound to accompany the voice—and the strings are plucked with bare fingers during speaking or singing. The lyre doubles the voice part or plays it at the octave rather than providing harmonies or accompaniment.The lyra, which was a variation of the lyre, was a lyre-shaped instrument made of a tortoise shell with a tympanum (the top surface) of ox hide. A yoke was attached to the shell to form the arms; the older ones were made of antelope horns and later, they were made from curved pieces of wood. There were seven gut strings (or fewer) and, like the lyre, it was played with bare fingers or a plectrum. The difference is that the bare left hand plucked the melody and the right hand, with a plectrum fastened to it by a thong, swept across all of the strings rhythmically during the breaks between sung choruses. Some musicologists assert that the lyra was brought by the Hellenes when they migrated into Greece from the north of the Balkan peninsular and Hungary. Similar instruments were played by Egyptians, Jews, Hittites, Elamites, and Assyrians, so Greece was sort of forced to join in the fun.From the Sennacherib period (705-681 BCE) in Assyria, there are pictures of lyres with straight but unequal arms and others with gracefully curved arms, like the barbiton. The barbiton is a lyre with long arms that angle slightly outward until they curve suddenly, at the very top, back toward one another. The arms are connected by a short crossbar. The barbiton has a very small soundbox and is played with the fingers of the left hand and a plectrum held in the right hand, just like the rest.The kithara is a large lyre, used in processions and sacred ceremonies as well as in the Greek theater, and was always played with the musician standing. Kithara players who sang as they played were called kitharodes. A Sumerian instrument from Ur, called a bull lyre, had religious significance. It looks kind of like a model of a ship, with a figurehead on the bow end that’s carved to look like a bull, and the horns of the bull forming the arms of the lyre, smoothly carved into cylinders and at a slight outward angle. The strings radiated from a single point in the center of the soundbox and attached to a smooth cylindrical crossbar. The number of strings would have varied, and they were knotted around sticks that that could be turned to change the tension/tuning at the crossbar. Replicas of this instrument are very pretty.The early Medieval lyre in Europe was smallish and was made entirely from a single piece of wood. It had six or seven strings running from pegs on the crossbar and attached to a tailpiece on the soundboard. If you do a search for the Sutton Hoo lyre, this is the type that you’ll see. Although eschewed by Archilochus (c680-645 BCE), the lyre was the preferred accompaniment of Sappho (c620-c570 BCE) and Alcaeus (c620-the 6th century BCE). 
Nothing remains of the melodies; only the lyrics remain.Philo of Alexandria (c20 BCE-50 CE), who was an early Jewish philosopher, saw the seven strings of the lyre as representing the seven planets.The NameThis instrument has had many names:Arabian peninsula: tanburaBangladesh: ektaraEgypt: kissar, tanbura, simsimiyya, k-nn-rEnglish: roteOld English: crowdOld Irish: cruit or crotEstonia: talharpa Ethiopia: begena, dita, krarFinland: jouhikkoGerman: cythara teutonicaGreece: barbiton, kithara, lyra, phorminx, kinnyraIndia: ektaraIraq: sammu, tanbura, zami, zinarIsrael: kinnor Kenya: kibugander, litungu, nyatiti, obokanoNepal: sarangiNorway: gigaPakistan: barbat, ektara, tanburaPersian: kunnarScottish: gue Semitic: kenanawr around 1200 BCESiberia: nares-juxSudan: kissar, tanburaSyrian: kenaraTanzania: litunguUganda: endongo, ntongoliWelsh: crwthYemen: tanbura, simsimiyyaIn the third chapter of the Book of Daniel of the Judeo-Christian Bible, King James translators named the instrument the quyteros, which translates to a kithara or lyre. The Book of Daniel was written in the 2nd century BCE.Homer (who lived sometime between the 9th and the 12th century BCE) called a four- or six-stringed lyre the phorminx. That’s what Apollo is playing at the end of the First Book of the “Iliad.” When Odysseus and his companions visit Achilles in his tent, they find him singing and accompanying himself on a phorminx that has a silver crossbar. Phemius in the First Book of the “Odyssey” and blind Demodocus in the Eighth Book sing as they accompany themselves on the phorminx. Both the Syrian kenara and the Arabic-Persian kunar are thought by some experts to be the root etymology of the term kinnor, but other experts disagree and say that the origin is unclear. The Phoenicians played a kinnor too, and it’s possible that they got the name from the Greeks, who had a kinnyra, from which the word kinnyrai (to lament) was derived. As an unusual linguistic peculiarity, kinnor has two plural forms, one masculine—kinnorim—and one feminine—kinnorot. It’s unexplainable, and not found in the names of any other instruments.
Because music notation was just taking off when interest in the lyre was waning, there isn’t much evidence of compositions specifically for the lyre. I found only two citations.A Syrian fellow called Bardesanes (154-233 CE) and his son Harmonios composed a complete gnostic psalter of 150 psalms to be sung to the lyre (ad lyra cantum) in the “Jewish” fashion.Italian Baroque composer Jacopo Peri (1561-1633 CE) wrote the lyre into accompaniments where two choirs were doubled—the first was doubled by lyre, harp, large lute, and “sotto Basso di Viola” and the second choir was doubled by lyre, harp, chitarrone, and “Basso di Viola.”
I found only one famous lyre player, besides the one’s in Homer’s works: Italian philosopher Marsilio Ficino (1433-1499), who regarded the lyre as therapeutic. Oh, and while I have your attention, when I was reading all this varied material, I came across this caution for musicians in general:“Whoever drinks (especially wine) to the accompaniment of four musical instruments brings five punishments to the world. Woe unto them that rise up early in the morning, that they may follow strong drink, that tarry late into the night, ‘til wine inflame them! And the harp, and the lute, the tabaret and the pipe, and wine, are in their feasts, but they regard not the work of the Lord.” From the Babylonian Talmud: Tractate Sotah, folio 48a, lines 43-44.
“Musical Instruments; Their History in Western Culture from the Stone Age to the Present Day,” by Karl Geiringer, translated by Bernard Miall. George Allen & Unwin LTD, London, 1949.“Companion to Medieval & Renaissance Music,” edited by Tess Knighton and David Fallows. University of California Press, Berkeley, 1997. “Music in the Middle Ages,” by Gustave Reese. W.W. Norton and Company, New York, 1940.“The Concise Oxford History of Music,” by Gerald Abraham. Oxford University Press, Oxford, 1979.“A History of Western Music,” J. Peter Berkholder, Donald Jay Grout, Claude V. Palisca. W.W. Norton & Company, New York, 2010.“A Dictionary of Early Music,” by Jerome and Elizabeth Roche. Oxford University Press, New York, 1981.“Music in Ancient Israel,” by Alfred Sendrey. Philosophical Library, New York, 1969.“The Music of the Jews in the Diaspora,” by Alfred Sendrey. Thomas Yoselof, New York, 1970.“The Rise of Music in the Ancient World: East and West,” Curt Sachs. Dover Publications, Inc., Mineola, 1971.“Music in Ancient Greece and Rome,” by John G. Landers. Routledge, London, 1999.“Women in Music,” edited by Carol Neuls-Bates. Northeastern University Press, Boston, 1996.“Music, Body, and Desire in Medieval Culture; Hildegard of Bingen to Chaucer,” by Bruce W. Holsinger. Stanford University Press, Stanford, 2001. | <urn:uuid:8f7472d6-e5a9-4a8c-893b-5fe01827e481> | CC-MAIN-2022-33 | http://melaniespiller.com/inst%20lyre.htm | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572221.38/warc/CC-MAIN-20220816060335-20220816090335-00097.warc.gz | en | 0.96094 | 5,461 | 3.828125 | 4 |
A few years ago, Gene Robinson, of Urbana, Illinois, asked some associates in southern Mexico to help him kidnap some 1,000 newborns. For their victims they chose bees. Half were European honeybees, Apis mellifera ligustica, the sweet-tempered kind most beekeepers raise. The other half were ligustica’s genetically close cousins, Apis mellifera scutellata, the African strain better known as killer bees. Though the two subspecies are nearly indistinguishable, the latter defend territory far more aggressively. Kick a European honeybee hive and perhaps a hundred bees will attack you. Kick a killer bee hive and you may suffer a thousand stings or more. Two thousand will kill you.
Working carefully, Robinson’s conspirators—researchers at Mexico’s National Center for Research in Animal Physiology, in the high resort town of Ixtapan de la Sal—jiggled loose the lids from two African hives and two European hives, pulled free a few honeycomb racks, plucked off about 250 of the youngest bees from each hive, and painted marks on the bees’ tiny backs. Then they switched each set of newborns into the hive of the other subspecies.
Robinson, back in his office at the University of Illinois at Urbana-Champaign’s Department of Entomology, did not fret about the bees’ safety. He knew that if you move bees to a new colony in their first day, the colony accepts them as its own. Nevertheless, Robinson did expect the bees would be changed by their adoptive homes: He expected the killer bees to take on the European bees’ moderate ways and the European bees to assume the killer bees’ more violent temperament. Robinson had discovered this in prior experiments. But he hadn’t yet figured out how it happened.
He suspected the answer lay in the bees’ genes. He didn’t expect the bees’ actual DNA to change: Random mutations aside, genes generally don’t change during an organism’s lifetime. Rather, he suspected the bees’ genes would behave differently in their new homes—wildly differently.
This notion was both reasonable and radical. Scientists have known for decades that genes can vary their level of activity, as if controlled by dimmer switches. Most cells in your body contain every one of your 22,000 or so genes. But in any given cell at any given time, only a tiny percentage of those genes is active, sending out chemical messages that affect the activity of the cell. This variable gene activity, called gene expression, is how your body does most of its work.
Sometimes these turns of the dimmer switch correspond to basic biological events, as when you develop tissues in the womb, enter puberty, or stop growing. At other times gene activity cranks up or spins down in response to changes in your environment. Thus certain genes switch on to fight infection or heal your wounds—or, running amok, give you cancer or burn your brain with fever. Changes in gene expression can make you thin, fat, or strikingly different from your supposedly identical twin. When it comes down to it, really, genes don’t make you who you are. Gene expression does. And gene expression varies depending on the life you live.
Every biologist accepts this. That was the safe, reasonable part of Robinson’s notion. Where he went out on a limb was in questioning the conventional wisdom that environment usually causes fairly limited changes in gene expression. It might sharply alter the activity of some genes, as happens in cancer or digestion. But in all but a few special cases, the thinking went, environment generally brightens or dims the activity of only a few genes at a time.
Robinson, however, suspected that environment could spin the dials on “big sectors of genes, right across the genome”—and that an individual’s social environment might exert a particularly powerful effect. Who you hung out with and how they behaved, in short, could dramatically affect which of your genes spoke up and which stayed quiet—and thus change who you were.
Robinson was already seeing this in his bees. The winter before, he had asked a new post-doc, Cédric Alaux, to look at the gene-expression patterns of honeybees that had been repeatedly exposed to a pheromone that signals alarm. (Any honeybee that detects a threat emits this pheromone. It happens to smell like bananas. Thus “it’s not a good idea,” says Alaux, “to eat a banana next to a bee hive.”)
To a bee, the pheromone makes a social statement: Friends, you are in danger. Robinson had long known that bees react to this cry by undergoing behavioral and neural changes: Their brains fire up and they literally fly into action. He also knew that repeated alarms make African bees more and more hostile. When Alaux looked at the gene-expression profiles of the bees exposed again and again to alarm pheromone, he and Robinson saw why: With repeated alarms, hundreds of genes—genes that previous studies had associated with aggression—grew progressively busier. The rise in gene expression neatly matched the rise in the aggressiveness of the bees’ response to threats.
Robinson had not expected that. “The pheromone just lit up the gene expression, and it kept leaving it higher.” The reason soon became apparent: Some of the genes affected were transcription factors—genes that regulate other genes. This created a cascading gene-expression response, with scores of genes responding.
This finding inspired Robinson’s kidnapping-and-cross-fostering study. Would moving baby bees to wildly different social environments reshape the curves of their gene-expression responses? Down in Ixtapan, Robinson’s collaborators suited up every five to 10 days, opened the hives, found about a dozen foster bees in each one, and sucked them up with a special vacuum. The vacuum shot them into a chamber chilled with liquid nitrogen. The intense cold instantly froze the bees’ every cell, preserving the state of their gene activity at that moment. At the end of six weeks, when the researchers had collected about 250 bees representing every stage of bee life, the team packed up the frozen bees and shipped them to Illinois.
There, Robinson’s staff removed the bees’ sesame-seed-size brains, ground them up, and ran them through a DNA microarray machine. This identified which genes were busy in a bee’s brain at the moment it met the bee-vac. When Robinson sorted his data by group—European bees raised in African hives, for instance, or African bees raised normally among their African kin—he could see how each group’s genes reacted to their lives.
Robinson organized the data for each group onto a grid of red and green color-coded squares: Each square represented a different gene, and its color represented the group’s average rate of gene expression. Red squares represented genes that were especially active in most of the bees in that group; the brighter the red, the more bees in which that gene had been busy. Green squares represented genes that were silent or underactive in most of the group. The printout of each group’s results looked like a sort of cubist Christmas card.
When he got the cards, says Robinson, “the results were stunning.” For the bees that had been kidnapped, life in a new home had indeed altered the activity of “whole sectors” of genes. When their gene expression data was viewed on the cards alongside the data for groups of bees raised among their own kin, a mere glance showed the dramatic change. Hundreds of genes had flipped colors. The move between hives didn’t just make the bees act differently. It made their genes work differently, and on a broad scale.
What’s more, the cards for the adopted bees of both species came to ever more resemble, as they moved through life, the cards of the bees they moved in with. With every passing day their genes acted more like those of their new hive mates (and less like those of their genetic siblings back home). Many of the genes that switched on or off are known to affect behavior; several are associated with aggression. The bees also acted differently. Their dispositions changed to match that of their hive mates. It seemed the genome, without changing its code, could transform an animal into something very like a different subspecies.
These bees didn’t just act like different bees. They’d pretty much become different bees. To Robinson, this spoke of a genome far more fluid—far more socially fluid—than previously conceived.
Robinson soon realized he was not alone in seeing this. At conferences and in the literature, he kept bumping into other researchers who saw gene networks responding fast and wide to social life. David Clayton, a neurobiologist also on the University of Illinois campus, found that if a male zebra finch heard another male zebra finch singing nearby, a particular gene in the bird’s forebrain would "re up—and it would do so differently depending on whether the other finch was strange and threatening, or familiar and safe.
Others found this same gene, dubbed ZENK ramping up in other species. In each case, the change in ZENK's activity corresponded to some change in behavior: a bird might relax in response to a song, or become vigilant and tense. Duke researchers, for instance, found that when female zebra finches listened to male zebra finches’ songs, the females’ ZENK gene triggered massive gene-expression changes in their forebrains—a socially sensitive brain area in birds as well as humans. The changes differed depending on whether the song was a mating call or a territorial claim. And perhaps most remarkably, all
of these changes happened incredibly fast—within a half hour, sometimes within just five minutes.
ZENK, it appeared, was a so-called “immediate early gene,” a type of regulatory gene that can cause whole networks of other genes to change activity. These sorts of regulatory gene-expression response had already been identified in physiological systems such as digestion and immunity. Now they also seemed to drive quick responses to social conditions.
One of the most startling early demonstrations of such a response occurred in 2005 in the lab of Stanford biologist Russell Fernald. For years, Fernald had studied the African cichlid Astatotilapia burtoni, a freshwater fish about two inches long and dull pewter in color. By 2005 he had shown that among burtoni, the top male in any small population lives like some fishy pharaoh, getting far more food, territory, and sex than even the No. 2 male. This No. 1 male cichlid also sports a bigger and brighter body. And there is always only one No. 1.
I wonder, Fernald thought, what would happen if we just removed him?
So one day Fernald turned out the lights over one of his cichlid tanks, scooped out big flashy No. 1, and then, 12 hours later, flipped the lights back on. When the No. 2 cichlid saw that he was now No. 1, he responded quickly. He underwent massive surges in gene expression that immediately blinged up his pewter coloring with lurid red and blue streaks and, in a matter of hours, caused him to grow some 20 percent. It was as if Jason Schwartzman, coming to work one day to learn the big office stud had quit, morphed into Arnold Schwarzenegger by close of business.
These studies, says Greg Wray, an evolutionary biologist at Duke who has focused on gene expression for over a decade, caused quite a stir. “You suddenly realize birds are hearing a song and having massive, widespread changes in gene expression in just 15 minutes? Something big is going on.”
This big something, this startlingly quick gene-expression response to the social world, is a phenomenon we are just beginning to understand. The recent explosion of interest in “epigenetics”—a term literally meaning “around the gene,” and referring to anything that changes a gene’s effect without changing the actual DNA sequence—has tended to focus on the long game of gene-environment interactions: how famine among expectant mothers in the Netherlands during World War II, for instance, affected gene expression and behavior in their children; or how mother rats, by licking and grooming their pups more or less assiduously, can alter the wrappings around their offspring’s DNA in ways that influence how anxious the pups will be for the rest of their lives. The idea that experience can echo in our genes across generations is certainly a powerful one. But to focus only on these narrow, long-reaching effects is to miss much of the action where epigenetic influence and gene activity is concerned. This fresh work by Robinson, Fernald, Clayton, and others—encompassing studies of multiple organisms, from bees and birds to monkeys and humans—suggests something more exciting: that our social lives can change our gene expression with a rapidity, breadth, and depth previously overlooked.
Why would we have evolved this way? The most probable answer is that an organism that responds quickly to fast-changing social environments will more likely survive them. That organism won’t have to wait around, as it were, for better genes to evolve on the species level. Immunologists discovered something similar 25 years ago: Adapting to new pathogens the old-fashioned way—waiting for natural selection to favor genes that create resistance to specific pathogens—would happen too slowly to counter the rapidly changing pathogen environment. Instead, the immune system uses networks of genes that can respond quickly and flexibly to new threats.
We appear to respond in the same way to our social environment. Faced with an unpredictable, complex, ever-changing population to whom we must respond successfully, our genes behave accordingly—as if a fast, fluid response is a matter of life or death.
About the time Robinson was seeing fast gene expression changes in bees, in the early 2000s, he and many of his colleagues were taking notice of an up-and-coming UCLA researcher named Steve Cole.
Cole, a Californian then in his early 40s, had trained in psychology at the University of California-Santa Barbara and Stanford; then in social psychology, epidemiology, virology, cancer, and genetics at UCLA. Even as an undergrad, Cole had “this astute, fine-grained approach,” says Susan Andersen, a professor of psychology now at NYU who was one of his teachers at UC Santa Barbara in the late 1980s. “He thinks about things in very precise detail.”
"If you actually measure stress, using our best available instruments, it can't hold a candle to social isolation. Social isolation is the best-established, most robust social or psychological risk factor for disease out there. Nothing can compete."
In his post-doctoral work at UCLA, Cole focused on the genetics of immunology and cancer because those fields had pioneered hard-nosed gene-expression research. After that, he became one of the earliest researchers to bring the study of whole-genome gene-expression to social psychology. The gene’s ongoing, real-time response to incoming information, he realized, is where life works many of its changes on us. The idea is both reductive and expansive. We are but cells. At each cell’s center, a tight tangle of DNA writes and hands out the cell’s marching orders. Between that center and the world stand only a series of membranes.
“Porous membranes,” notes Cole.
“We think of our bodies as stable biological structures that live in the world but are fundamentally separate from it. That we are unitary organisms in the world but passing through it. But what we’re learning from the molecular processes that actually keep our bodies running is that we’re far more fluid than we realize, and the world passes through us.”
Cole told me this over dinner. We had met on the UCLA campus and walked south a few blocks, through bright April sun, to an almost empty sushi restaurant. Now, waving his chopsticks over a platter of urchin, squid, and amberjack, he said, “Every day, as our cells die off, we have to replace one to two percent of our molecular being. We’re constantly building and re-engineering new cells. And that regeneration is driven by the contingent nature of gene expression.
“This is what a cell is about. A cell,” he said, clasping some amberjack, “is a machine for turning experience into biology.”
When Cole started his social psychology research in the early 1990s, the microarray technology that spots changes in gene expression was still in its expensive infancy, and saw use primarily in immunology and cancer. So he began by using the tools of epidemiology—essentially the study of how people live their lives. Some of his early papers looked at how social experience affected men with HIV. In a 1996 study of 80 gay men, all of whom had been HIV-positive but healthy nine years earlier, Cole and his colleagues found that closeted men succumbed to the virus much more readily.
He then found that HIV-positive men who were lonely also got sicker sooner, regardless of whether they were closeted. Then he showed that closeted men without HIV got cancer and various infectious diseases at higher rates than openly gay men did. At about the same time, psychologists at Carnegie Mellon finished a well-controlled study showing that people with richer social ties got fewer common colds.
Something about feeling stressed or alone was gumming up the immune system—sometimes fatally.
“You’re besieged by a virus that’s going to kill you,” says Cole, “but the fact that you’re socially stressed and isolated seems to shut down your viral defenses. What’s going on there?”
He was determined to find out. But the research methods on hand at the time could take him only so far: “Epidemiology won’t exactly lie to you. But it’s hard to get it to tell you the whole story.” For a while he tried to figure things out at the bench, with pipettes and slides and assays. “I’d take norepinephrine [a key stress hormone] and squirt it on some infected T-cells and watch the virus grow faster. The norepinephrine was knocking down the antiviral response. That’s great. Virologists love that. But it’s not satisfying as a complete answer, because it doesn’t fully explain what’s happening in the real world.
“You can make almost anything happen in a test tube. I needed something else. I had set up all this theory. I needed a place to test it.”
His next step was to turn to rhesus monkeys, a lab species that allows controlled study. In 2007, he joined John Capitanio, a primatologist at the University of California-Davis, in looking at how social stress affected rhesus monkeys with SIV, or simian immunodeficiency virus, the monkey version of HIV. Capitanio had found that monkeys with SIV fell ill and died faster if they were stressed out by constantly being moved into new groups among strangers—a simian parallel to Cole’s 1996 study on lonely gay men.
Capitanio had run a rough immune analysis that showed the stressed monkeys mounted weak antiviral responses. Cole offered to look deeper. First he tore apart the lymph nodes—“ground central for infection”—and found that in the socially stressed monkeys, the virus bloomed around the sympathetic nerve trunks, which carry stress signals into the lymph node.
“This was a hint,” says Cole: The virus was running amok precisely where the immune response should have been strongest. The stress signals in the nerve trunks, it seemed, were getting either muted en route or ignored on arrival. As Cole looked closer, he found it was the latter: The monkeys’ bodies were generating the appropriate stress signals, but the immune system didn’t seem to be responding to them properly. Why not? He couldn’t find out with the tools he had. He was still looking at cells. He needed to look inside them.
Finally Cole got his chance. At UCLA, where he had been made a professor in 2001, he had been working hard to master gene-expression analysis across an entire genome. Microarray machines—the kind Gene Robinson was using on his bees—were getting cheaper. Cole got access to one and put it to work.
Thus commenced what we might call the lonely people studies.
First, in collaboration with University of Chicago social psychologist John Cacioppo, Cole mined a questionnaire about social connections that Cacioppo had given to 153 healthy Chicagoans in their 50s and 60s. Cacioppo and Cole identified the eight most socially secure people and the six loneliest and drew blood samples from them. (The socially insecure half-dozen were lonely indeed; they reported having felt distant from others for the previous four years.) Then Cole extracted genetic material from the blood’s leukocytes (a key immune-system player) and looked at what their DNA was up to.
He found a broad, weird, strongly patterned gene-expression response that would become mighty familiar over the next few years. Of roughly 22,000 genes in the human genome, the lonely and not-lonely groups showed sharply different gene-expression responses in 209. That meant that about one percent of the genome—a considerable portion—was responding differently depending on whether a person felt alone or connected. Printouts of the subjects’ gene-expression patterns looked much like Robinson’s red-and-green readouts of the changes in his cross-fostered bees: Whole sectors of genes looked markedly different in the lonely and the socially secure. And many of these genes played roles in inflammatory immune responses.
Now Cole was getting somewhere.
Normally, a healthy immune system works by deploying what amounts to a leashed attack dog. It detects a pathogen, then sends inflammatory and other responses to destroy the invader while also activating an anti-inflammatory response—the leash—to keep the inflammation in check. The lonely Chicagoans’ immune systems, however, suggested an attack dog off leash—even though they weren’t sick. Some 78 genes that normally work together to drive inflammation were busier than usual, as if these healthy people were fighting infection. Meanwhile, 131 genes that usually cooperate to control inflammation were underactive. The underactive genes also included key antiviral genes.
This opened a whole new avenue of insight. If social stress reliably created this gene-expression profile, it might explain a lot about why, for instance, the lonely HIV carriers in Cole’s earlier studies fell so much faster to the disease.
But this was a study of just 14 people. Cole needed more.
Over the next several years, he got them. He found similarly unbalanced gene-expression or immune-response profiles in groups including poor children, depressed people with cancer, and people caring for spouses dying of cancer. He topped his efforts off with a study in which social stress levels in young women predicted changes in their gene activity six months later. Cole and his collaborators on that study, psychologists Gregory Miller and Nicolas Rohleder of the University of British Columbia, interviewed 103 healthy Vancouver-area women aged 15 to 19 about their social lives, drew blood, and ran gene-expression profiles, and after half a year drew blood and ran profiles again. Some of the women reported at the time of the initial interview that they were having trouble with their love lives, their families, or their friends. Over the next six months, these socially troubled subjects took on the sort of imbalanced gene-expression profile Cole found in his other isolation studies: busy attack dogs and broken leashes. Except here, in a prospective study, he saw the attack dog breaking free of its restraints: Social stress changed these young women’s gene-expression patterns before his eyes.
In early 2009, Cole sat down to make sense of all this in a review paper that he would publish later that year in Current Directions in Psychological Science. Two years later we sat in his spare, rather small office at UCLA and discussed what he’d found. Cole, trimly built but close to six feet tall, speaks in a reedy voice that is slightly higher than his frame might lead you to expect. Sometimes, when he’s grabbing for a new thought or trying to emphasize a point, it jumps a register. He is often asked to give talks about his work, and it’s easy to see why: Relaxed but animated, he speaks in such an organized manner that you can almost see the paragraphs form in the air between you. He spends much of his time on the road. Thus the half-unpacked office, he said, gesturing around him. His lab, down the hall, “is essentially one really good lab manager”—Jesusa M. Arevalo, whom he frequently lists on his papers—“and a bunch of robots,” the machines that run the assays.
“We typically think of stress as being a risk factor for disease,” said Cole. “And it is, somewhat. But if you actually measure stress, using our best available instruments, it can’t hold a candle to social isolation. Social isolation is the best-established, most robust social or psychological risk factor for disease out there. Nothing can compete.”
This helps explain, for instance, why many people who work in high-stress but rewarding jobs don’t seem to suffer ill effects, while others, particularly those isolated and in poverty, wind up accruing lists of stress-related diagnoses—obesity, Type 2 diabetes, hypertension, atherosclerosis, heart failure, stroke.
Despite these well-known effects, Cole said he was amazed when he started finding that social connectivity wrought such powerful effects on gene expression.
“Or not that we found it,” he corrected, “but that we’re seeing it with such consistency. Science is noisy. I would’ve bet my eyeteeth that we’d get a lot of noisy results that are inconsistent from one realm to another. And at the level of individual genes that’s kind of true—there is some noise there.” But the kinds of genes that get dialed up or down in response to social experience, he said, and the gene networks and gene-expression cascades that they set off, “are surprisingly consistent—from monkeys to people, from five-year-old kids to adults, from Vancouver teenagers to 60-year-olds living in Chicago.”
Cole's work carries all kinds of implications—some weighty and practical, some heady and philosophical. It may, for instance, help explain the health problems that so often haunt the poor. Poverty savages the body. Hundreds of studies over the past few decades have tied low income to higher rates of asthma, flu, heart attacks, cancer, and everything in between. Poverty itself starts to look like a disease. Yet an empty wallet can’t make you sick. And we all know people who escape poverty’s dangers. So what is it about a life of poverty that makes us ill?
Cole asked essentially this question in a 2008 study he conducted with Miller and Edith Chen, another social psychologist then at the University of British Columbia. The paper appeared in an odd forum: Thorax, a journal about medical problems in the chest. The researchers gathered and ran gene-expression profiles on 31 kids, ranging from nine to 18 years old, who had asthma; 16 were poor, 15 well-off. As Cole expected, the group of well-off kids showed a healthy immune response, with elevated activity among genes that control pulmonary inflammation. The poorer kids showed busier inflammatory genes, sluggishness in the gene networks that control inflammation, and—in their health histories—more asthma attacks and other health problems. Poverty seemed to be mucking up their immune systems.
Cole, Chen, and Miller, however, suspected something else was at work—something that often came with poverty but was not the same thing. So along with drawing the kids’ blood and gathering their socioeconomic information, they showed them films of ambiguous or awkward social situations, then asked them how threatening they found them.
The poorer kids perceived more threat; the well-off perceived less. This difference in what psychologists call “cognitive framing” surprised no one. Many prior studies had shown that poverty and poor neighborhoods, understandably, tend to make people more sensitive to threats in ambiguous social situations. Chen in particular had spent years studying this sort of effect.
But in this study, Chen, Cole, and Miller wanted to see if they could tease apart the effect of cognitive framing from the effects of income disparity. It turned out they could, because some of the kids in each income group broke type. A few of the poor kids saw very little menace in the ambiguous situations, and a few well-off kids saw a lot. When the researchers separated those perceptions from the socioeconomic scores and laid them over the gene-expression scores, they found that it was really the kids’ framing, not their income levels, that accounted for most of the difference in gene expression. To put it another way: When the researchers controlled for variations in threat perception, poverty’s influence almost vanished. The main thing driving screwy immune responses appeared to be not poverty, but whether the child saw the social world as scary.
But where did that come from? Did the kids see the world as frightening because they had been taught to, or because they felt alone in facing it? The study design couldn’t answer that. But Cole believes isolation plays a key role. This notion gets startling support from a 2004 study of 57 school-age children who were so badly abused that state social workers had removed them from their homes. The study, often just called “the Kaufman study,” after its author, Yale psychiatrist Joan Kaufman, challenges a number of assumptions about what shapes responses to trauma or stress.
The Kaufman study at first looks like a classic investigation into the so-called depression risk gene—the serotonin transporter gene, or SERT—which comes in both long and short forms. Any single gene’s impact on mood or behavior is limited, of course, and these single-gene, or “candidate gene,” studies must be viewed with that in mind. Yet many studies have found that SERT's short form seems to render many people (and rhesus monkeys) more sensitive to environment; according to those studies, people who carry the short SERT are more likely to become depressed or anxious if faced with stress or trauma.
Kaufman looked first to see whether the kids’ mental health tracked their SERT variants. It did: The kids with the short variant suffered twice as many mental-health problems as those with the long variant. The double whammy of abuse plus short SERT seemed to be too much.
Then Kaufman laid both the kids’ depression scores and their SERT variants across the kids’ levels of “social support.” In this case, Kaufman narrowly defined social support as contact at least monthly with a trusted adult figure outside the home. Extraordinarily, for the kids who had it, this single, modest, closely defined social connection erased about 80 percent of the combined risk of the short SERT variant and the abuse. It came close to inoculating kids against both an established genetic vulnerability and horrid abuse.
"A cell," Steve Cole said, clasping some amberjack, "is a machine for turning experience into biology."
Or, to phrase it as Cole might, the lack of a reliable connection harmed the kids almost as much as abuse did. Their isolation wielded enough power to raise the question of what’s really most toxic in such situations. Most of the psychiatric literature essentially views bad experiences—extreme stress, abuse, violence—as toxins, and “risk genes” as quasi-immunological weaknesses that let the toxins poison us. And abuse is clearly toxic. Yet if social connection can almost completely protect us against the well-known effects of severe abuse, isn’t the isolation almost as toxic as the beatings and neglect?
The Kaufman study also challenges much conventional Western thinking about the state of the individual. To use the language of the study, we sometimes conceive of “social support” as a sort of add-on, something extra that might somehow fortify us. Yet this view assumes that humanity’s default state is solitude. It’s not. Our default state is connection. We are social creatures, and have been for eons. As Cole’s colleague John Cacioppo puts it in his book Loneliness, Hobbes had it wrong when he wrote that human life without civilization was “solitary, poor, nasty, brutish, and short.” It may be poor, nasty, brutish, and short. But seldom has it been solitary.
Toward the end of the dinner I shared with Cole, after the waiter took away the empty platters and we sat talking over green tea, I asked him if there was anything I should have asked but had not. He’d been talking most of three hours. Some people run dry. Cole does not. He spoke about how we are permeable fluid beings instead of stable unitary isolates; about recursive reconstruction of the self; about an engagement with the world that constantly creates a new you, only you don’t know it, because you’re not the person you would have been otherwise—you’re a one-person experiment that has lost its control.
He wanted to add one more thing: He didn’t see any of this as deterministic.
We were obviously moving away from what he could prove at this point, perhaps from what is testable. We were in fact skirting the rabbit hole that is the free-will debate. Yet he wanted to make it clear he does not see us as slaves to either environment or genes.
“You can’t change your genes. But if we’re even half right about all this, you can change the way your genes behave—which is almost the same thing. By adjusting your environment you can adjust your gene activity. That’s what we’re doing as we move through life. We’re constantly trying to hunt down that sweet spot between too much challenge and too little.
“That’s a really important part of this: To an extent that immunologists and psychologists rarely appreciate, we are architects of our own experience. Your subjective experience carries more power than your objective situation. If you feel like you’re alone even when you’re in a room filled with the people closest to you, you’re going to have problems. If you feel like you’re well supported even though there’s nobody else in sight; if you carry relationships in your head; if you come at the world with a sense that people care about you, that you’re valuable, that you’re okay; then your body is going to act as if you’re okay—even if you’re wrong about all that.”
Cole was channeling John Milton: “The mind is its own place, and in itself can make a heaven of hell, a hell of heaven.”
Of course I did not realize that at the moment. My reaction was more prosaic.
“So environment and experience aren’t the same,” I offered.
“Exactly. Two people may share the same environment but not the same experience. The experience is what you make of the environment. It appears you and I are both enjoying ourselves here, for instance, and I think we are. But if one of us didn’t like being one-on-one at a table for three hours, that person could get quite stressed out. We might have much different experiences. And you can shape all this by how you frame things. You can shape both your environment and yourself by how you act. It’s really an opportunity.”
Cole often puts it differently at the end of his talks about this line of work. “Your experiences today will influence the molecular composition of your body for the next two to three months,” he tells his audience, “or, perhaps, for the rest of your life. Plan your day accordingly.” | <urn:uuid:46e64897-8480-43cd-a6da-4043fefa9e82> | CC-MAIN-2022-33 | https://psmag.com/social-justice/the-social-life-of-genes-64616 | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573197.34/warc/CC-MAIN-20220818124424-20220818154424-00697.warc.gz | en | 0.96542 | 7,818 | 3.171875 | 3 |
In 1929, Hilaire Belloc published, both in England and in America, a novel entitled The Missing Masterpiece, with illustrations by G.K. Chesterton. This forgotten potboiler concerns a highly successful, pompous, unprincipled art dealer in London named Sir Henry Bessington, who is confronted with two versions of a “Symbolist” painting called Âme Bourgeoise—Middle-Class Soul. (The humor throughout the book is at that English public school level.) Eventually, after a prolonged series of unlikely events involving each of the paintings, the owners of them go to court to determine which is the authentic picture, with arguments made to a jury distinguished by its ignorance.
At the trial, which is the climactic moment of the novel, the chief expert witness is the “Curator of the Oil Paintings Department in the Imperial Museum,” Dr. Edward Mowlem, who is described as possessing “a reputation and a bundle of facts so considerable that he was already in the first rank of the profession.” But Mowlem’s claims to expertise are called into serious question when he is forced to admit that he has never actually laid eyes on the version he adamantly claims is the original. The jury’s verdict decrees that both pictures are the original, that both owners had behaved badly, and that each should be fined £20,000. Shortly thereafter, a third copy of the painting is discovered, which of course adds to the general confusion at the novel’s end.
Belloc wrote his novel to capitalize on the widespread interest in a real trial, held in New York City that same year, which became one of the most celebrated trials in the history of art—a trial in which two paintings were compared for authenticity and where the validity of experts’ claims was recurrently questioned. The enlightened reader in 1929 would quickly have discerned that Belloc’s book was actually a sort of roman à clef, in which Sir Henry Bessington is a thinly disguised Sir Joseph Duveen and Edward Mowlem is based on Bernard Berenson, both of whom played major roles in the 1929 New York trial over two versions of a picture called La Belle Ferronnière, one of which may or may not have been painted by Leonardo da Vinci.
The central figure in the New York trial, the fabled art dealer Joseph Duveen, was a flamboyant figure who continues to fascinate the public. In 1952, S.N. Behrman published a highly successful portrait of Duveen based on articles he had written for The New Yorker, and as recently as 2004, Simon Gray wrote The Old Masters, his last play, which Harold Pinter directed in London. The play is not about the Belle Ferronnière controversy but depicts the later confrontation between Duveen and Berenson over the so-called Allendale Nativity, which ultimately led to their conclusive parting of the ways, Berenson insisting it was a Titian, when Duveen wanted it labeled a (more valuable) Giorgione.
John Brewer, a historian of seventeenth- and eighteenth-century England teaching at Caltech who has now written the most recent, most thorough account of the people and events surrounding the New York trial, doesn’t refer to Belloc’s novel but says that there was a revue in Paris in 1924 about La Belle Ferronnière with the title Oh the Pretty Girls; in 1993 there was also a documentary on BBC2, directed by Christopher Spencer and entitled Every Picture Tells a Story: The Two Belles. Why is the story of such continuing interest? Chiefly, I suspect, it’s because of the enormous sums of money involved, but also the central characters themselves are intriguing—powerful, foolish, pretentious. At the heart of the story, however, there are serious, engrossing issues about how we perceive works of art.
Harry Hahn, a poor boy from Kansas and the litigious protagonist of Brewer’s tale, seems to have been stricken early on with delusions of grandeur, and the information he provided about himself in later years appears to be as unreliable as much of the other “factual” evidence in this affair. On different occasions, he claimed to have been born in different towns in Kansas; in 1917 he enlisted in the army, serving first in Texas and then in France; and although he boasted that he was a highly decorated captain and an aviator, there is reason to suppose he was actually a sergeant and a mechanic. In 1919 he married a French girl named Andrée Ladoux, who lived not with her parents but with her so-called (but not actual) godmother, Josephine Massot, a milliner, in Dinard.
One of Josephine’s friends was an eccentric woman of dubious aristocracy named Louise de Montaut, whom Andrée came to call her aunt, even though they weren’t related. Mme de Montaut possessed (although it is not clear it was rightfully hers) a painting that she had always been told was by Leonardo da Vinci, and when Andrée and Harry Hahn got married on July 12, 1919, she—amazingly—gave them this picture of potentially immense value as a wedding present, because, as she loftily explained later, one doesn’t sell one’s inherited family possessions, one gives them away. There is, however, some reason to doubt that she ever actually gave it to the Hahns as a gift.
This painting, La Belle Ferronnière, was brought to America, not by the Hahns when they returned to Junction City, Kansas, in 1919, where Harry became a car salesman, but by Mme de Montaut, who arrived in New York in June 1920. Later, when they were concerned to confer drama on their undertakings, the Hahns claimed that to get it successfully out of France the painting had first been smuggled by Josephine Massot into Belgium, Falstaff-like, in a basket of laundry.
Even before they had left France, however, Harry and Andrée Hahn had begun to make efforts to sell the painting in America. Only three days after Mme de Montaut and the painting arrived in America, Joseph Duveen received a phone call from a reporter at the New York World, who asked his opinion of the version of La Belle Ferronnière that had been offered to the Kansas City Art Institute for something on the order of $250,000. Although he had never seen the Hahn picture, Duveen did not hesitate to declare it a fake, pointing out that the original was, after all, in the Louvre, and hence this could only be a copy.
The painting in Paris Duveen referred to is a late-fifteenth-century portrait of a woman recognizable as being from the court of Milan—perhaps, it is thought, Lucrezia Crivelli, who was the mistress of the Duke of Milan, Ludovico il Moro, or possibly his wife, Beatrice d'Este. (Another of his mistresses, Cecilia Gallerani, is depicted by Leonardo as the Lady with an Ermine, now in Kraków.) The painting apparently entered the French royal collection at the end of the fifteenth century during the reign of Louis XII and subsequently passed to François I, who brought Leonardo to France for the last three years of his life. From very early on, the picture was known as La Belle Ferronnière, whether because it was mistakenly confused with another portrait of the wife of a man named Le Ferron (she was the mistress of François I) or because the band with a jewel that adorns the subject's forehead was called a ferronnière. The Louvre painting most recently made an appearance at the beginning of the film The Da Vinci Code.[1]
The Hahns engaged a lawyer with the aromatic name of Hyacinthe Ringrose, who brought suit against Duveen in New York, asking the inconceivably large sum, in those days, of $500,000 in compensation for his “slander of title.” Very quickly, Duveen assembled eight experts, including Harvard professor Edward Forbes and Princeton professor Frank Jewett Mather, to examine the picture in Ringrose’s office; two of them were undecided, but the rest judged against it. Duveen, however, not content with this victory, proceeded to have a number of photographs taken of the painting, which he then sent to some of the outstanding experts in Europe, including Sir Charles Holmes (the director of the National Gallery in London), Wilhelm von Bode (the director of the Kaiser Friedrich Museum in Berlin), Roger Fry, Salomon Reinach, Adolfo Venturi, and Bernard Berenson. They unanimously rejected the attribution to Leonardo.
At that point, unable to leave well enough alone, Duveen decided, with the consent of the Hahns, to place the two paintings side by side in the Louvre so that experts could compare them without resorting to photographs. On September 15, 1923, a group composed of Holmes, Fry, Venturi, and several other experts, including Arthur Pillans Laurie, a professor of chemistry who had studied painters and paintings from his scientific perspective, as well as Duveen, Mrs. Hahn, Ringrose, and a few others, assembled to compare the two paintings. Berenson, shy of the publicity involved, had done so privately a few days before.
With the exception of Laurie, who reserved judgment, the experts all agreed that the Hahn picture was not by Leonardo and that the Louvre one was. Prior to that, however, most of them had in fact expressed doubts about Leonardo as the painter of the Louvre picture. As Duveen himself had written to a prominent lawyer in Kansas City:
The Louvre picture is not passed by the most eminent connoisseurs as having been painted by Leonardo da Vinci, and I may say that I am in accord with their opinion. It is suggested that the Louvre picture is very close to Leonardo da Vinci, but is not by his hand—probably it was painted by [Leonardo’s pupil] Boltraffio.
He was soon to regret ever having made such a statement; for when the “eminent connoisseurs” in Paris all changed their minds and agreed that the Louvre picture was by Leonardo, it looked to Hahn and his lawyers suspiciously like collusion.
After some delay, the Hahn case was brought to trial in New York City on February 6, 1929, before a jury that knew almost nothing about art, art history, or the art market. Duveen’s arrogant testimony assured the jurors of his vast experience (“my study of all the great pictures of the world”) and expertise (“I do not recall ever making [a] mistake in the authorship of a picture”).
He lectured condescendingly to the jury, instructing them about connoisseurship and contemptuously denigrating the Hahn picture. The trial went on for some three weeks, during which the Hahns’ attorney attempted to discredit Duveen, and Duveen’s lawyers attempted to discredit the Hahn painting.
One side argued that Duveen had maliciously ruined the Hahns’ chances of selling their Leonardo; the other side maintained that the painting couldn’t be sold because it was only a worthless copy. After fourteen hours of deliberation, however, the jury, which was required to deliver a unanimous verdict, found itself unable to, nine members voting in favor of Hahn and three in favor of Duveen; so the judge was obliged to order a retrial. But that second trial never took place, because Duveen, fatigued by it all and not wishing to have the matter prolonged, settled with the Hahns out of court, paying them $60,000 but nonetheless insisting that their painting was not by Leonardo.
Most of the testimony by Duveen’s experts during the trial had been read into the record, because many of them were in Europe and did not wish to come to America to testify. They may also have wanted to avoid trying to defend the art of connoisseurship, always somewhat nebulous and hard to define with any precision, in the pragmatic setting of the law court. Like Duveen’s testimony, some of their depositions seem snobbish and patronizing, and Berenson especially, like many shy and nervous people under pressure, comes off as imperious and solipsistically self-assured.
He explained that he had originally doubted that the Louvre Belle Ferronnière was by Leonardo but now he was convinced it was—an opinion he continued to hold for the rest of his life. When Ringrose asked Berenson if he had notified the authorities in the Louvre of his revised opinion, he curtly replied, “There are no authorities in the Louvre.” (Sometimes criticized for changing his attributions, he once observed that “consistency requires you to be as ignorant today as you were a year ago.”) When asked whether the Louvre picture was painted on wood or canvas, Berenson, who boasted that he had looked at the painting “a thousand times,” replied that he didn’t know. “What?” exclaimed Ringrose, “you claim to have studied it so much, and you can’t answer a simple question?” To which Berenson unfortunately riposted, “It’s as if you asked me on what kind of paper Shakespeare wrote his immortal sonnets.” It is “not interesting,” he insisted: “It is not interesting on what paper Shakespeare wrote Hamlet.”
The imperfection of his analogy must have been evident even to him. But Berenson was always so focused on the artistic expression of a work of art that he often revealed a surprising nonchalance about its physical state, and even Duveen's excessive attempts to restore paintings to pristine condition in order to please his clients don't seem, so far as I can tell, to have bothered Berenson as much as they might have.[2] Berenson admitted to Ringrose that he was not an expert on the chemical composition of pigments or on the ways artists paint. He said that a lifetime of looking had given him a "sixth sense" that enabled him to attribute paintings to specific schools, eras, or artists—a claim to which both the Hahns' lawyer and the judge in the case took exception because it was indefinable and had no scientific verification. As Berenson explained to the lawyer who questioned him, to authenticate a painting one had to have a comprehensive knowledge of all the accepted works by the artist. "You then get," he said,
a sense, if you have had sufficiently long training…. This is not a matter for beginners. It takes a very long training before you get this sort of sixth sense that comes from accumulated knowledge.
The expert connoisseurs were generally dismissive of scientific evidence; but the simple fact is that in the 1920s the use of scientific methods in attributing pictures was still in its infancy, so much so that when X-rays of the Hahn painting were presented in court, a radiologist had to read them, because none of the art historians could. Later, however, Duveen, who maintained that he didn’t believe in the evidence of X-rays, managed to find a young man at Harvard’s Fogg Museum, Alan Burroughs, who was beginning to pioneer the use of X-ray evidence and had actually taken an X-ray of the Louvre Belle Ferronnière. Burroughs managed to give concrete, scientific evidence suggesting that the Hahn picture was a copy of the one in the Louvre.
Harry Hahn, denied by the jury the attribution he wanted, concluded that he was the victim of a conspiracy by what he called “the art racket.” Over the next three or four years, he wrote an intemperate, tendentious book, entitled The Rape of La Belle, for which he didn’t manage to find a publisher until 1946. In his angry account of the dishonest intrigue he claims Duveen had mounted against him, Berenson is, of course, one of the chief villains, and like the judge at the trial, Hahn could not abide Berenson’s claim to have a sixth sense:
One thing is certain, however, the sixth sense divining faculty possessed by Mr. Berenson has been very rewarding. It has also been of inestimable value in composing that attribution music from which the maestro, Sir Joseph Duveen, did his highly profitable fiddling. Science may here be out, but fat cash is surely in. If Mr. Berenson is crazy, maybe it is with the craziness of a fox; if he is deficient in common sense, he has an insect’s compensating instinct for fixing up a nice nest.
Following the account of the trial in 1929, Brewer gives, for the next 150 pages or so, an exhaustive, exhausting chronicle, in perhaps more detail than many readers will desire, of the subsequent fate of the Hahn picture. We learn that for most of the past eight decades it has sat unseen in storage, first in New York City until after World War II, then in Kansas, and then in Nebraska. During this time, the family squabbled over its possession, Harry Hahn struggled to get his book published and the painting sold, and a wide assortment of people attempted to make a fortune from the picture. The Hahns used it as collateral to pay their legal and other expenses, and they used Duveen's $60,000 to move back to France for a few years, all the while trying to authenticate and sell the painting. We are told about the origins of the Nelson-Atkins Art Museum in Kansas City, about the Hahns' divorce and their respective remarriages, as well as about the famous trials concerning Otto Wacker's fake Van Goghs in 1932 and Han van Meegeren's forged Vermeers in 1945.[3]
A lengthy chapter details the vicissitudes of Harry Hahn’s manuscript of The Rape of La Belle and the decisive parts played in its eventual publication by a Kansas City businessman, Frank Glenn, and the populist American artist Thomas Hart Benton, who wrote a feisty introduction for the published book. The repeated attempts to prove the Hahn picture’s authenticity, which came to involve Maurits van Dantzig, a self-proclaimed expert on forgeries, Helmut Ruhemann, the highly controversial restorer at London’s National Gallery, Kenneth Clark, and Philip Hendy, are painstakingly recounted, and we are taken step by detailed step through the labyrinth of pecuniary skullduggery, as one person after another tried to profit financially from the Hahn painting and, as Brewer puts it with prodigious understatement, “the financial arrangements surrounding the painting [became] increasingly Byzantine.” By 1996, there were twenty-nine liens worth almost $42 million on the painting.
Since Brewer published his book, the Hahn heirs, who lost control over the painting for a while, came into possession of it once again, as the result of a confidential legal agreement executed in March 2009, and it was auctioned at Sotheby’s on January 28. According to Sotheby’s catalog, recent technical examination of the “[in]famous portrait,” including pigment analysis, indicates that the Hahn painting dates from the seventeenth century, thus confirming the opinion expressed by the Oxford professor Martin Kemp, one of the world’s leading Leonardo scholars, who examined it in 1993, said it was not by Leonardo, dated it to the first half of the seventeenth century, and thought it might possibly have been painted by the French baroque painter Laurent de La Hyre. Although the presale estimate was $300,000–$500,000, the painting actually sold at auction for $1,300,000; with the addition of the buyer’s premium fee, the American private collector who bought it paid just over a million and a half dollars for the picture that once belonged to Harry and Andrée Hahn.
Connoisseurship, though difficult to describe, is not the mystery it is often thought to be. Rather, it is a skill acquired through a great deal of hard work, discipline, study, and sophistication. It is not hocus-pocus, sleight of hand, or casual guessing, although it may appear so to the uninitiated. But we recognize our friends’ voices on the telephone or their handwriting on envelopes, and most listeners can distinguish between a piece for piano by Debussy and one by Mozart. All these are forms of connoisseurship, and they are acquired skills. When I was an undergraduate, I once met a curator of decorative arts at a major museum who told me, to my amazement and amusement, “the only things I can’t date are stoves.” And at Harvard, we regularly taught students in literature to identify passages of prose or poetry, if possible by specific author, but if not, at least by period. These are merely more sophisticated, intellectual forms of connoisseurship than recognizing your friend’s voice when you answer the phone. All are based on experience and comparison, memory and intelligence, but there are no scientific formulae to describe them or scientific aids to achieve them.
In October 1988, at the Gabinetto Vieusseux in Florence, the late Sydney Freedberg, one of the most accomplished and highly regarded connoisseurs of Italian painting, gave a lecture in which he attempted to explain how the connoisseur works.[4] Defining connoisseurship as "the use of expert knowledge of a field…to identify objects in it, determine their quality, and assess their character," Freedberg was at pains to "demystify" connoisseurship, insisting that it "is not a product of that unfathomable, non-rational and animal-seeming thing that is called 'intuition,'" but rather the result of vast, intense, informed visual experience carefully stored in the memory.
In his essay, too dense and complex to be quickly summarized here, he stresses the supreme importance of a capacious visual memory. He also takes account of the various scientific aids (“radiography in its multiple forms, infrared and ultraviolet devices, and macro-photographs…analyses of pigments and of varnishes, dendrochronology and thermoluminescence”) that have been developed in recent years; yet he emphasizes that they are only “ancillary to the connoisseur’s mode of operation,” helping him to confirm or deny the evidence of his eye. Science can tell us that a painting had to be painted before 1400, but only connoisseurship can give us the understanding that it was painted by Giotto.
Kenneth Clark, when asked in 1960 to state his opinion of both the Louvre and the Hahn paintings, craftily avoided stating categorically that the Louvre picture was by Leonardo but he did say that it “is the original of the Fifteenth Century, and the Hahn picture a post-Raphaelesque copy.” He then went on to invoke a characteristic “mode of operation” of connoisseurship:
I believe that by taking a group of authentic drawings and pictures by Leonardo, and demonstrating his type of modelling and then taking a number of post-Raphaelesque heads and showing their type of modelling, it would be possible to prove that the Louvre picture fell into the first category and the Hahn picture into the second, even though the Hahn picture is an extremely close and skilful copy….
Brewer, who does not mention Freedberg’s important essay, nevertheless seems to be in general agreement with it. As a professor of the humanities at Caltech, he is understandably interested in the relationship between connoisseurship and science, and he is eloquent about the ways that “ordinary folk, common sense, science and objectivity” can be intimidated when confronted with the elitist world of privileged collectors, high culture, and arcane expertise. But in the end he endorses Helmut Ruhemann’s vision, in which, as Brewer says, “the admittedly subjective talents of the trained and experienced eye worked together with the ‘objective’ expertise of science.” However, it is worth noting that Brewer also quotes Ruhemann more explicitly, stating, “I believe that all these [scientific] devices, even if they become more perfect with time, will never be able to compete with the instinct of the true connoisseur, his unaided eye will always be the decisive factor.” Surely not only Freedberg but Berenson himself would have agreed with that.
In a personal “Afterword,” Brewer describes how, with considerable difficulty, he finally managed to get access to the Hahn Belle Ferronnière at an undisclosed location in Omaha. There is a poignant moment when the picture is finally unveiled and he realizes that, despite all his research and everything he has learned over the years about the Hahn picture, he lacks expertise as a connoisseur and doesn’t really know what he’s looking at: “Seeing the picture,” he says, “was in some respects an empty gesture…. I was just another of those people who stand before a portrait and ask themselves, ‘Is it a masterpiece? Is it a Leonardo? How do I know?'”
—January 28, 2010
[1] For a full discussion of the complex matter of the provenance and name of the Louvre painting, see Janet Cox-Rearick, The Collection of Francis I: Royal Treasures (Fonds Mercator/Abrams, 1996), pp. 145–146.
[2] For the most recent discussion of Duveen and his methods, see the fascinating article by Jonathan Brown in the Metropolitan Museum of Art's just-published pamphlet Velázquez Rediscovered (2009), entitled "A Restored Velázquez, a Velázquez Restored."
[4] "Berenson, Connoisseurship, and the History of Art," reprinted both in The New Criterion (February 1989) and in I Tatti Studies: Essays in the Renaissance, Vol. 3 (1989).
What are cephalic disorders?
Cephalic disorders are congenital conditions that stem from damage to, or abnormal development of, the budding nervous system. Cephalic is a term that means "head" or "head end of the body." Congenital means the disorder is present at, and usually before, birth. Although there are many congenital developmental disorders, this fact sheet briefly describes only cephalic conditions.
Cephalic disorders are not necessarily caused by a single factor but may be influenced by hereditary or genetic conditions or by environmental exposures during pregnancy, such as medication taken by the mother, maternal infection, or exposure to radiation. Some cephalic disorders occur when the cranial sutures (the fibrous joints that connect the bones of the skull) join prematurely. Most cephalic disorders are caused by a disturbance that occurs very early in the development of the fetal nervous system.
The human nervous system develops from a small, specialized plate of cells on the surface of the embryo. Early in development, this plate of cells forms the neural tube, a narrow sheath that closes between the third and fourth weeks of pregnancy to form the brain and spinal cord of the embryo. Four main processes are responsible for the development of the nervous system: cell proliferation, the process in which nerve cells divide to form new generations of cells; cell migration, the process in which nerve cells move from their place of origin to the place where they will remain for life; cell differentiation, the process during which cells acquire individual characteristics; and cell death, a natural process in which cells die. Understanding the normal development of the human nervous system, one of the research priorities of the National Institute of Neurological Disorders and Stroke, may lead to a better understanding of cephalic disorders.
Damage to the developing nervous system is a major cause of chronic, disabling disorders and, sometimes, death in infants, children, and even adults. The degree to which damage to the developing nervous system harms the mind and body varies enormously. Many disabilities are mild enough to allow those afflicted to eventually function independently in society. Others are not. Some infants, children, and adults die, others remain totally disabled, and an even larger population is partially disabled, functioning well below normal capacity throughout life.
What are the different kinds of cephalic disorders?
ANENCEPHALY is a neural tube defect that occurs when the cephalic (head) end of the neural tube fails to close, usually between the 23rd and 26th days of pregnancy, resulting in the absence of a major portion of the brain, skull, and scalp. Infants with this disorder are born without a forebrain - the largest part of the brain consisting mainly of the cerebrum, which is responsible for thinking and coordination. The remaining brain tissue is often exposed - not covered by bone or skin.
Infants born with anencephaly are usually blind, deaf, unconscious, and unable to feel pain. Although some individuals with anencephaly may be born with a rudimentary brainstem, the lack of a functioning cerebrum permanently rules out the possibility of ever gaining consciousness. Reflex actions such as breathing and responses to sound or touch may occur. The disorder is one of the most common disorders of the fetal central nervous system. Approximately 1,000 to 2,000 American babies are born with anencephaly each year. The disorder affects females more often than males.
The cause of anencephaly is unknown. Although it is believed that the mother's diet and vitamin intake may play a role, scientists agree that many other factors are also involved.
There is no cure or standard treatment for anencephaly and the prognosis for affected individuals is poor. Most infants do not survive infancy. If the infant is not stillborn, then he or she will usually die within a few hours or days after birth. Anencephaly can often be diagnosed before birth through an ultrasound examination.
Recent studies have shown that the addition of folic acid to the diet of women of child-bearing age may significantly reduce the incidence of neural tube defects. Therefore it is recommended that all women of child-bearing age consume 0.4 mg of folic acid daily.
COLPOCEPHALY is a disorder in which there is an abnormal enlargement of the occipital horns - the posterior or rear portion of the lateral ventricles (cavities or chambers) of the brain. This enlargement occurs when there is an underdevelopment or lack of thickening of the white matter in the posterior cerebrum. Colpocephaly is characterized by microcephaly (abnormally small head) and delayed development. Other features may include motor abnormalities, muscle spasms, and seizures.
Although the cause is unknown, researchers believe that the disorder results from an intrauterine disturbance that occurs between the second and sixth months of pregnancy. Colpocephaly may be diagnosed late in pregnancy, although it is often misdiagnosed as hydrocephalus (excessive accumulation of cerebrospinal fluid in the brain). It may be more accurately diagnosed after birth when signs of microcephaly, delayed development, and seizures are present.
There is no definitive treatment for colpocephaly. Anticonvulsant medications can be given to prevent seizures, and doctors try to prevent contractures (shrinkage or shortening of muscles). The prognosis for individuals with colpocephaly depends on the severity of the associated conditions and the degree of abnormal brain development. Some children benefit from special education.
HOLOPROSENCEPHALY is a disorder characterized by the failure of the prosencephalon (the forebrain of the embryo) to develop. During normal development the forebrain is formed and the face begins to develop in the fifth and sixth weeks of pregnancy. Holoprosencephaly is caused by a failure of the embryo's forebrain to divide to form bilateral cerebral hemispheres (the left and right halves of the brain), causing defects in the development of the face and in brain structure and function.
There are three classifications of holoprosencephaly. Alobar holoprosencephaly, the most serious form in which the brain fails to separate, is usually associated with severe facial anomalies. Semilobar holoprosencephaly, in which the brain's hemispheres have a slight tendency to separate, is an intermediate form of the disease. Lobar holoprosencephaly, in which there is considerable evidence of separate brain hemispheres, is the least severe form. In some cases of lobar holoprosencephaly, the patient's brain may be nearly normal.
Holoprosencephaly, once called arhinencephaly, consists of a spectrum of defects or malformations of the brain and face. At the most severe end of this spectrum are cases involving serious malformations of the brain, malformations so severe that they are incompatible with life and often cause spontaneous intrauterine death. At the other end of the spectrum are individuals with facial defects - which may affect the eyes, nose, and upper lip - and normal or near-normal brain development. Seizures and impairment of cognitive development may occur.
The most severe of the facial defects (or anomalies) is cyclopia, an abnormality characterized by the development of a single eye, located in the area normally occupied by the root of the nose, and a missing nose or a nose in the form of a proboscis (a tubular appendage) located above the eye.
Ethmocephaly is the least common facial anomaly. It consists of a proboscis separating narrow-set eyes with an absent nose and microphthalmia (abnormal smallness of one or both eyes). Cebocephaly, another facial anomaly, is characterized by a small, flattened nose with a single nostril situated below incomplete or underdeveloped closely set eyes.
The least severe in the spectrum of facial anomalies is the median cleft lip, also called premaxillary agenesis.
Although the causes of most cases of holoprosencephaly remain unknown, researchers know that approximately one-half of all cases have a chromosomal cause. Such chromosomal anomalies as Patau's syndrome (trisomy 13) and Edwards' syndrome (trisomy 18) have been found in association with holoprosencephaly. There is an increased risk for the disorder in infants of diabetic mothers.
There is no treatment for holoprosencephaly and the prognosis for individuals with the disorder is poor. Most of those who survive show no significant developmental gains. For children who survive, treatment is symptomatic. Although it is possible that improved management of diabetic pregnancies may help prevent holoprosencephaly, there is no means of primary prevention.
HYDRANENCEPHALY is a rare condition in which the cerebral hemispheres are absent and replaced by sacs filled with cerebrospinal fluid. Usually the cerebellum and brainstem are formed normally. An infant with hydranencephaly may appear normal at birth. The infant's head size and spontaneous reflexes such as sucking, swallowing, crying, and moving the arms and legs may all seem normal. However, after a few weeks the infant usually becomes irritable and has increased muscle tone (hypertonia). After several months of life, seizures and hydrocephalus may develop. Other symptoms may include visual impairment, lack of growth, deafness, blindness, spastic quadriparesis (paralysis), and intellectual deficits.
Hydranencephaly is an extreme form of porencephaly (a rare disorder, discussed later in this fact sheet, characterized by a cyst or cavity in the cerebral hemispheres) and may be caused by vascular insult (such as stroke) or injuries, infections, or traumatic disorders after the 12th week of pregnancy.
Diagnosis may be delayed for several months because the infant's early behavior appears to be relatively normal. Transillumination, an examination in which light is passed through body tissues, usually confirms the diagnosis. Some infants may have additional abnormalities at birth, including seizures, myoclonus (involuntary sudden, rapid jerks), and respiratory problems.
There is no standard treatment for hydranencephaly. Treatment is symptomatic and supportive. Hydrocephalus may be treated with a shunt.
The outlook for children with hydranencephaly is generally poor, and many children with this disorder die before age 1. However, in rare cases, children with hydranencephaly may survive for several years or more.
INIENCEPHALY is a rare neural tube defect that combines extreme retroflexion (backward bending) of the head with severe defects of the spine. The affected infant tends to be short, with a disproportionately large head. Diagnosis can be made immediately after birth because the head is so severely retroflexed that the face looks upward. The skin of the face is connected directly to the skin of the chest and the scalp is directly connected to the skin of the back. Generally, the neck is absent.
Most individuals with iniencephaly have other associated anomalies such as anencephaly, cephalocele (a disorder in which part of the cranial contents protrudes from the skull), hydrocephalus, cyclopia, absence of the mandible (lower jaw bone), cleft lip and palate, cardiovascular disorders, diaphragmatic hernia, and gastrointestinal malformation. The disorder is more common among females.
The prognosis for those with iniencephaly is extremely poor. Newborns with iniencephaly seldom live more than a few hours. The distortion of the fetal body may also pose a danger to the mother's life.
LISSENCEPHALY, which literally means "smooth brain," is a rare brain malformation characterized by microcephaly and the lack of normal convolutions (folds) in the brain. It is caused by defective neuronal migration, the process in which nerve cells move from their place of origin to their permanent location.
The surface of a normal brain is formed by a complex series of folds and grooves. The folds are called gyri or convolutions, and the grooves are called sulci. In children with lissencephaly, the normal convolutions are absent or only partly formed, making the surface of the brain smooth.
Symptoms of the disorder may include unusual facial appearance, difficulty swallowing, failure to thrive, and severe psychomotor retardation. Anomalies of the hands, fingers, or toes, muscle spasms, and seizures may also occur.
Lissencephaly may be diagnosed at or soon after birth. Diagnosis may be confirmed by ultrasound, computed tomography (CT), or magnetic resonance imaging (MRI).
Lissencephaly may be caused by intrauterine viral infections or viral infections in the fetus during the first trimester, insufficient blood supply to the baby's brain early in pregnancy, or a genetic disorder. There are two distinct genetic causes of lissencephaly - X-linked and chromosome 17-linked.
The spectrum of lissencephaly is only now becoming more defined as neuroimaging and genetics provide more insights into migration disorders. Other causes that have not yet been identified are also likely.
Lissencephaly may be associated with other diseases including isolated lissencephaly sequence, Miller-Dieker syndrome, and Walker-Warburg syndrome.
Treatment for those with lissencephaly is symptomatic and depends on the severity and locations of the brain malformations. Supportive care may be needed to help with comfort and nursing needs. Seizures may be controlled with medication and hydrocephalus may require shunting. If feeding becomes difficult, a gastrostomy tube may be considered.
The prognosis for children with lissencephaly varies depending on the degree of brain malformation. Many individuals show no significant development beyond a 3- to 5-month-old level. Some may have near-normal development and intelligence. Many will die before the age of 2. Respiratory problems are the most common causes of death.
MEGALENCEPHALY, also called macrencephaly, is a condition in which there is an abnormally large, heavy, and usually malfunctioning brain. By definition, the brain weight is greater than average for the age and gender of the infant or child. Head enlargement may be evident at birth or the head may become abnormally large in the early years of life.
Megalencephaly is thought to be related to a disturbance in the regulation of cell reproduction or proliferation. In normal development, neuron proliferation - the process in which nerve cells divide to form new generations of cells - is regulated so that the correct number of cells is formed in the proper place at the appropriate time.
Symptoms of megalencephaly may include delayed development, convulsive disorders, corticospinal (brain cortex and spinal cord) dysfunction, and seizures. Megalencephaly affects males more often than females.
The prognosis for individuals with megalencephaly largely depends on the underlying cause and the associated neurological disorders. Treatment is symptomatic. Megalencephaly may lead to a condition called macrocephaly (defined later in this fact sheet). Unilateral megalencephaly or hemimegalencephaly is a rare condition characterized by the enlargement of one-half of the brain. Children with this disorder may have a large, sometimes asymmetrical head. Often they suffer from intractable seizures and mental retardation. The prognosis for those with hemimegalencephaly is poor.
MICROCEPHALY is a neurological disorder in which the circumference of the head is smaller than average for the age and gender of the infant or child. Microcephaly may be congenital or it may develop in the first few years of life. The disorder may stem from a wide variety of conditions that cause abnormal growth of the brain, or from syndromes associated with chromosomal abnormalities.
Infants with microcephaly are born with either a normal or reduced head size. Subsequently the head fails to grow while the face continues to develop at a normal rate, producing a child with a small head, a large face, a receding forehead, and a loose, often wrinkled scalp. As the child grows older, the smallness of the skull becomes more obvious, although the entire body also is often underweight and dwarfed. Development of motor functions and speech may be delayed. Hyperactivity and cognitive impairment are common occurrences, although the degree of each varies. Convulsions may also occur. Motor ability varies, ranging from clumsiness in some to spastic quadriplegia in others.
Generally there is no specific treatment for microcephaly. Treatment is symptomatic and supportive.
In general, life expectancy for individuals with microcephaly is reduced and the prognosis for normal brain function is poor. The prognosis varies depending on the presence of associated abnormalities.
PORENCEPHALY is an extremely rare disorder of the central nervous system involving a cyst or cavity in a cerebral hemisphere. The cysts or cavities are usually the remnants of destructive lesions, but are sometimes the result of abnormal development. The disorder can occur before or after birth.
Porencephaly most likely has a number of different, often unknown causes, including absence of brain development and destruction of brain tissue. The presence of porencephalic cysts can sometimes be detected by transillumination of the skull in infancy. The diagnosis may be confirmed by CT, MRI, or ultrasonography.
More severely affected infants show symptoms of the disorder shortly after birth, and the diagnosis is usually made before age 1. Signs may include delayed growth and development, spastic paresis (slight or incomplete paralysis), hypotonia (decreased muscle tone), seizures (often infantile spasms), and macrocephaly or microcephaly.
Individuals with porencephaly may have poor or absent speech development, epilepsy, hydrocephalus, spastic contractures (shrinkage or shortening of muscles), and cognitive impairment. Treatment may include physical therapy, medication for seizure disorders, and a shunt for hydrocephalus. The prognosis for individuals with porencephaly varies according to the location and extent of the lesion. Some patients with this disorder may develop only minor neurological problems and have normal intelligence, while others may be severely disabled. Others may die before the second decade of life.
SCHIZENCEPHALY is a rare developmental disorder characterized by abnormal slits, or clefts, in the cerebral hemispheres. Schizencephaly is a form of porencephaly. Individuals with clefts in both hemispheres, or bilateral clefts, are often developmentally delayed and have delayed speech and language skills and corticospinal dysfunction. Individuals with smaller, unilateral clefts (clefts in one hemisphere) may be weak on one side of the body and may have average or near-average intelligence. Patients with schizencephaly may also have varying degrees of microcephaly, delayed development and cognitive impairment, hemiparesis (weakness or paralysis affecting one side of the body), or quadriparesis (weakness or paralysis affecting all four extremities), and may have reduced muscle tone (hypotonia). Most patients have seizures and some may have hydrocephalus.
In schizencephaly, the neurons border the edge of the cleft, implying a very early disruption in development. A genetic origin has now been identified for one type of schizencephaly. Causes of this type may include environmental exposures during pregnancy such as medication taken by the mother, exposure to toxins, or a vascular insult. Often there are associated heterotopias (isolated islands of neurons) which indicate a failure of migration of the neurons to their final position in the brain.
Treatment for individuals with schizencephaly generally consists of physical therapy, treatment for seizures, and, in cases that are complicated by hydrocephalus, a shunt.
The prognosis for individuals with schizencephaly varies depending on the size of the clefts and the degree of neurological deficit.
What are other less common cephalies?
ACEPHALY literally means absence of the head. It is a much rarer condition than anencephaly. The acephalic fetus is a parasitic twin attached to an otherwise intact fetus. The acephalic fetus has a body but lacks a head and a heart; the fetus's neck is attached to the normal twin. The blood circulation of the acephalic fetus is provided by the heart of the twin. The acephalic fetus cannot exist independently of the fetus to which it is attached.
EXENCEPHALY is a condition in which the brain is located outside of the skull. This condition is usually found in embryos as an early stage of anencephaly. As an exencephalic pregnancy progresses, the neural tissue gradually degenerates. It is unusual to find an infant carried to term with this condition because the defect is incompatible with survival.
MACROCEPHALY is a condition in which the head circumference is larger than average for the age and gender of the infant or child. It is a descriptive rather than a diagnostic term and is a characteristic of a variety of disorders. Macrocephaly also may be inherited. Although one form of macrocephaly may be associated with developmental delays and cognitive impairment, in approximately one-half of cases mental development is normal. Macrocephaly may be caused by an enlarged brain or hydrocephalus. It may be associated with other disorders such as dwarfism, neurofibromatosis, and tuberous sclerosis.
MICRENCEPHALY is a disorder characterized by a small brain and may be caused by a disturbance in the proliferation of nerve cells. Micrencephaly may also be associated with maternal problems such as alcoholism, diabetes, or rubella (German measles). A genetic factor may play a role in causing some cases of micrencephaly. Affected newborns generally have striking neurological defects and seizures. Severely impaired intellectual development is common, but disturbances in motor functions may not appear until later in life.
OTOCEPHALY is a lethal condition in which the primary feature is agnathia - a developmental anomaly characterized by total or virtual absence of the lower jaw. The condition is considered lethal because of a poorly functioning airway. In otocephaly, agnathia may occur alone or together with holoprosencephaly.
Another group of less common cephalic disorders are the craniostenoses. Craniostenoses are deformities of the skull caused by the premature fusion or joining together of the cranial sutures. Cranial sutures are fibrous joints that join the bones of the skull together. The nature of these deformities depends on which sutures are affected.
BRACHYCEPHALY occurs when the coronal suture fuses prematurely, causing a shortened front-to-back diameter of the skull. The coronal suture is the fibrous joint that unites the frontal bone with the two parietal bones of the skull. The parietal bones form the top and sides of the skull.
OXYCEPHALY is a term sometimes used to describe the premature closure of the coronal suture plus any other suture, or it may be used to describe the premature fusing of all sutures. Oxycephaly is the most severe of the craniostenoses.
PLAGIOCEPHALY results from the premature unilateral fusion (joining of one side) of the coronal or lambdoid sutures. The lambdoid suture unites the occipital bone with the parietal bones of the skull. Plagiocephaly is a condition characterized by an asymmetrical distortion (flattening of one side) of the skull. It is a common finding at birth and may be the result of brain malformation, a restrictive intrauterine environment, or torticollis (a spasm or tightening of neck muscles).
SCAPHOCEPHALY applies to premature fusion of the sagittal suture. The sagittal suture joins together the two parietal bones of the skull. Scaphocephaly is the most common of the craniostenoses and is characterized by a long, narrow head.
TRIGONOCEPHALY is the premature fusion of the metopic suture (part of the frontal suture which joins the two halves of the frontal bone of the skull) in which a V-shaped abnormality occurs at the front of the skull. It is characterized by the triangular prominence of the forehead and closely set eyes.
What research is being done?
Within the Federal Government, the National Institute of Neurological Disorders and Stroke (NINDS), one of the National Institutes of Health (NIH), has primary responsibility for conducting and supporting research on normal and abnormal brain and nervous system development, including congenital anomalies. The National Institute of Child Health and Human Development, the National Institute of Mental Health, the National Institute of Environmental Health Sciences, the National Institute on Alcohol Abuse and Alcoholism, and the National Institute on Drug Abuse also support research related to disorders of the developing nervous system. Gaining basic knowledge about how the nervous system develops and understanding the role of genetics in fetal development are major goals of scientists studying congenital neurological disorders.
Scientists are rapidly learning how harmful insults at various stages of pregnancy can lead to developmental disorders. For example, a critical nutritional deficiency or exposure to an environmental insult during the first month of pregnancy (when the neural tube is formed) can produce neural tube defects such as anencephaly.
Scientists are also concentrating their efforts on understanding the complex processes responsible for normal early development of the brain and nervous system and how the disruption of any of these processes results in congenital anomalies such as cephalic disorders. Understanding how genes control brain cell migration, proliferation, differentiation, and death, and how radiation, drugs, toxins, infections, and other factors disrupt these processes will aid in preventing many congenital neurological disorders.
Currently, researchers are examining the mechanisms involved in neurulation - the process of forming the neural tube. These studies will improve our understanding of this process and give insight into how the process can go awry and cause devastating congenital disorders. Investigators are also analyzing genes and gene products necessary for human brain development to achieve a better understanding of normal brain development in humans.
Where can I get more information?
For more information on neurological disorders or research programs funded by the National Institute of Neurological Disorders and Stroke, contact the Institute's Brain Resources and Information Network (BRAIN) at:
P.O. Box 5801
Bethesda, MD 20824
"Cephalic Disorders Fact Sheet", NINDS, Publication date September 2003.
NIH Publication No. 98-4339
Office of Communications and Public Liaison
National Institute of Neurological Disorders and Stroke
National Institutes of Health
Bethesda, MD 20892
NINDS health-related material is provided for information purposes only and does not necessarily represent endorsement by or an official position of the National Institute of Neurological Disorders and Stroke or any other Federal agency. Advice on the treatment or care of an individual patient should be obtained through consultation with a physician who has examined that patient or is familiar with that patient's medical history. | <urn:uuid:11364af7-d843-4dbb-9d7b-2c9b848f3ff9> | CC-MAIN-2022-33 | https://www.ninds.nih.gov/cephalic-disorders-fact-sheet | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571210.98/warc/CC-MAIN-20220810191850-20220810221850-00497.warc.gz | en | 0.932808 | 5,791 | 3.453125 | 3 |
Canton Rescue Kilcummin
August 1940 was also the month in which one of the most courageous and difficult rescues of shipwrecked seamen took place off the coast of Ireland during World War II. The rescue took place off Kilcummin Head, where local fishermen showed skilful seamanship and noble humanity as they risked their lives in treacherous seas, rowing out three miles to tow to safety a lifeboat full of weary sailors in danger of being smashed against the rocks by a north-west wind blowing at almost gale force.
At 5.30am on Friday, August 9, 1940, the unescorted Swedish motor merchant, Canton, was hit by one torpedo from U-30 and sank 70 miles west of Tory Island. The ship was on its way to Liverpool from Calcutta via Freetown, carrying 3,000 tons of pig iron, 2,700 tons of linseed, 1,152 tons of general cargo and 1,034 tons of hessian.
Two lifeboats were launched; the captain and 15 crew were never heard of again, but the second boat with 16 men reached the Mayo coast at about 10am on Sunday morning, August 11, 1940.
By the time they came in sight of the Kilcummin shoreline at 10am on Sunday, August 11, 1940, the 16 seamen (13 Swedes, two Norwegians and one Filipino) were exhausted and cold after two nights being tossed about in a cramped lifeboat in a wild North Atlantic. Dejected, the survivors were in no fit state to row their vessel through the heavy seas, avoiding reefs and skerries, to the small harbour near where General Humbert’s French Forces landed in 1798.
Following in the family tradition, Jim McLoughlin (1909-1992) had been a marine pilot since he was 19, guiding cargo boats into Killala Bay and up through the narrow silt reefs of the Moy Estuary to Ballina Harbour. He owned a 22-foot blue and white skiff he named St. Anne, which he used for coastal fishing along with seven other men, relatives and close friends, including his younger brother, Mike.
Alerted by the coastwatchers at Kilcummin LOP, who had spotted the lifeboat in difficulty about three miles off Kilcummin Head, Jim McLoughlin and his comrades did not hesitate for one minute when they looked out over the bay from the cliffs at Croagh Mor and saw the plight of the men aboard the lifeboat. Within minutes, they had launched St. Anne into the dangerous sea swell that kept all boats harbour-bound that inclement summer Sunday.
The crew of St. Anne on that day comprised James McLoughlin (31), skipper, and Michael McLoughlin (24), who were brothers; John McLoughlin (70), a cousin of the previous two men; John Kelly (33), John Langan (60) and William Knox (40), all of Parke; William Hughes (60), Ballinlena; and Thomas Hughes (40), Kilcummin.
The drama and gallantry of the rescue could well have turned into a terrible tragedy for Kilcummin, but for the seamanship and bravery of the crew of St. Anne. Over 80 years later those of us who are familiar with Kilcummin’s big waves can appreciate the enormous courage shown by the men of St. Anne to even attempt to row beyond the shelter of the wave-lashed harbour.
Rowing out three miles through the treacherous currents and high rolling seas, the Kilcummin men saw that the lifeboat was almost twice the size of the skiff and taking water. A collision would see them all drown. There was no response to calls to take a tow line; the lifeboatmen who were crouching under a tarpaulin were cold, seasick and losing hope of rescue. Bravely, Jim McLoughlin jumped from St. Anne into the lifeboat and as the survivors slowly emerged, he secured the line and took the tiller as the oarsman in St. Anne began pulling for home.
“The haul home was tough going for the rest of them. The much larger lifeboat had 17 people and a lot of water in it and they had to keep ahead enough at times so that she wouldn’t bear down on them. They were cold, wet, and tired. As they approached the little harbour, they could see cars full of people as well as ass-carts and bicycles and those on foot and horseback from all over the region.” 43
As the Canton crew came ashore at Kilcummin, one of the crew, a Filipino, “fell on his knees, blessed himself, and kissed the ground”. 44 Some of his comrades were not as lucky; the bodies of Edwin Andersson, from Malmo, Sweden, and Heikel Sverin, both Canton crewmen, came ashore later in the month in Donegal, and two empty lifeboats from the steamer also washed up.
The survivors were welcomed with kindness and compassion by the people of Kilcummin, who gave them food and beverages. After receiving medical attention from Dr J.J. Igoe, Ballina, and Dr Madden, Dublin, the mariners were conveyed to Ballina by military lorry where they were met by the Shipwrecked and Mariners Royal Benevolent Society and the local branch of the Irish Red Cross Society. Arriving in Ballina, the stricken seafarers were declined entry to the local hospital as “the number of patients was so great at the time that there was no accommodation for them”. However, the matron lent 24 blankets which were returned. 44
The sailors were taken to Enniscrone where they were accommodated in the local school. Later, travelling to Dublin they were accommodated in various hotels including Jury’s, College Green, and Rothesay Hotel, Eden Quay. The crew left Dublin in stages between December 1940 and March 1941 to join the Swedish vessel, SS Mansuria, bound from Liverpool to Petsamo, Finland.
Some weeks after the rescue, Ballina businessman E. M. Boshell, local representative of the Shipwrecked Mariners Society, a member of Ballina Harbour Board and Lloyd’s Agent at Ballina, recommended the fishermen to the Royal National Lifeboat Institution for recognition of their bravery.
It wasn’t until November 1991, when Jim McLoughlin was 82 years old and living in the United States, that his bravery was finally recognised: a Swedish government representative, Anders Sjodin, presented him with a silver goblet engraved with his name and the date of the rescue. 45
The 16 sailors rescued off Kilcummin were Oscar Andreas Johansson, Gothenburg, Sweden (married) 2nd engineer (left Dublin on 28th March 1941 to join the Swedish vessel SS Mansuria which was docked in Liverpool); Per Oscar Johannesson (46), 4th engineer, Gothenburg, Sweden (Left for Grimsby 5th December 1940); Karl Gustaf Siegfried Thorsson, AB Seaman, Karlaham, Sweden (left 14th March 1941); Nils Axel Ekberg, Motorman, Gothenburg, Sweden (left 14th March 1941); Henrik Teodor Henriksson, Carpenter, Helshineborg (left 28th March 1941); Ove Hagbert Olofsson, Motorman, Sweden (left 14th March 1941); Holger Teodor Forsberg, Steward, Sweden (left 28th March 1941); Harold Kristiansen, Norway; Erling Andersen, Norway; Nils Gottard Westberg, 1st Officer, Sweden (left 28th March 1941); Evald Oliver Andersson, AB Seaman, Sweden (left 28th March 1941); Karl Viktor Johansson, AB Seaman, Sweden (left 28th March 1941); Paul Gunar Wihlborg, Second Officer, Sweden (left 14th March 1941); Zosimo Tabudlong, Philippine Islander; Agne Natanaal Kortz, Motorman, Sweden (left 14th March 1941). 45b
On Wednesday morning, September 11, 1940, coastwatchers at Kilcummin found a body dressed in a naval uniform washed ashore. From papers found on the body, it was identified as that of Cadet Geoffrey Charles Butcher, Appletree Cottage, 21 Fairfield Road, Orpington, Kent. He had enlisted on the 17th of June and was aged 19 years. Other papers bore the marks “Imperial Transport”.
He was one of three crew members of MV Upwey Grange whose remains were washed ashore along the Mayo coast after the ship, heading for London from Buenos Aires, with 11 passengers and a cargo of 5,500 tons of frozen and tinned meat was hit by one torpedo from U-37 and sank 184 miles west of Achill Head, on August 8, 1940.
All three lifeboats got away from the ship safely and Captain William Ernest Williams ordered them all to hoist sail and make for Ireland. The wind was favourable for this, blowing strongly from the west, but the sea conditions were poor and the speed with which they had had to abandon their ship rendered them vulnerable to hypothermia. With Williams in the lead, the boats kept company for a few hours, but in the rough sea and swell, the boats were separated. 46
HMS Vanquisher came to the rescue of the first lifeboat. The second boat had made about 180 miles in three days and two nights and was within 50 miles of Achill Head when its occupants were rescued by the Cardiff steam trawler Naniwa; on August 13, forty-eight crew members and eight passengers were landed safely in Cardiff.
We will never know the exact circumstances of what happened to Captain William’s lifeboat in the numbing cold of the turbulent North Atlantic, but we get some insight into the possible fate of the small boat from the account given by rescued First Officer Ellis, referring to the third day of their lifeboat ordeal.
“Things were, by now, very unpleasant; we shipped much water and baled continuously. A very high sea was running with a strong westerly wind; frequent squalls of moderate gale force from NW made a nasty cross sea.” 46
Nothing further was heard of Captain Williams’s boat. He was lost along with 31 crew members, one gunner and three passengers, including Upwey Grange’s Chief Engineer Major Clifford Mackrow (48) and Cadet Geoffrey Charles Butcher, “both of whom were known to have been in Williams’s lifeboat”. 47
Clifford Major Mackrow was washed ashore at Inishkea. He was a married man who lived at 49 Castleview Gardens, Ilford, Essex. He is buried in Kilcommon Church of Ireland Churchyard, Belmullet.
The bodies of two other crewmen from Upwey Grange were found in Mayo; AB George James Walters (47) was washed up at Achill Island and he is buried in Achill Holy Trinity Church of Ireland graveyard; Third Officer Edgar Hugh Mayes (33) from Northampton is buried in Termoncarragh cemetery.
An Irishman William Francis O’Donnell (30), an engineer on the M.V. Upwey Grange, also died and he is commemorated at Tower Hill Memorial in London.
On Sunday, November 3, 1940, HMS Patroclus, an armed merchant cruiser, was 150 miles west of Bloody Foreland when she was torpedoed by the German submarine U-99. Royal Navy Reserve Seaman, Harry Kirkpatrick, was one of 56 of her 319 crew lost in the attack. His body was washed up at Cushlecka, Mulranny, and found by Patrick Moran, Cuskleacka, at about 10am, on December 18, 1940. He was buried in Achill Holy Trinity Church of Ireland Churchyard in Achill Sound. An only son, a wristlet watch found on the body was later forwarded to his mother in Orkney.
New research by myself and Bill Dziadyk, a retired Lieutenant Commander in the Royal Canadian Navy (RCN), has made it possible to identify a number of unknown S.S. Nerissa victims, buried in Mayo in 1941, after their bodies were washed ashore following the sinking of the Canadian troopship, 80 nautical miles off the Donegal coast on April 30 1941.
“Amid the terrible screams and cries of the drowning, Lieutenant Colonel G.C. Smith (Royal Canadian Armored Corps) heard Joseph Lomas cry out for his wife Elizabeth and 3 children until it was clear there would be no answer, ever.” 47b
The last heartbreaking cry of a distraught father realising his wife and three little children had drowned amid the unspeakable terror as S.S. Nerissa sank within four minutes of being struck by torpedoes fired from U-552, 80 nautical miles off the Donegal coast, on April 30 1941.
His desperate cries were the last that were heard of Joseph Lomas (31) as he, along with his wife, Elizabeth (26), and their children, Terence (6), Joan (4), and Margaret (3), drowned in the icy waters of the North Atlantic.
The tragic end to their young lives is symbolic of all the young lives cut short during World War II’s longest battle, which took place off our western shores, just beyond the tranquil horizon, as if in some surreal parallel universe.
When I began researching this article, two years ago, I had no idea that my investigations would discover that the unidentified bodies of two of the Lomas children had washed up on beaches in North Mayo during the early summer of 1941; the final paragraph in their short lives can only now be written.
Because of wartime secrecy, the Gardai did not have access to casualty lists from ships torpedoed off our shores. Thus, the identities of the two children who washed ashore in Mayo in the early summer of 1941 have remained a mystery for over eight decades.
This part of my research took place in late May 2021, and, on reflection, my discovery now seems a rather strange coincidence in light of the synchronicity in time, almost exactly 80 years to the day when the bodies from Nerissa began to wash up along the Mayo coast.
A one-paragraph report in the Western People (May 31 1941), under a sub-heading, entitled “Husband, Wife and Child?”, caught my attention and immediately I knew that there had to be a connection with the Lomas family.
The 14-line article came under a report of the finding of a man in a naval uniform near Ballycastle who can also now be identified following historic investigations carried out by myself, and Bill Dziadyk.
Among the unknowns that can now be identified were brother and sister, Terence (6) and Joan Lomas (4), the youngest victims to be washed ashore along Ireland’s west coast during the Battle of the Atlantic.
Also identified, 81 years after his death, is a heroic young Canadian naval officer who in the final terrifying minutes before Nerissa sunk beneath the waves had helped the two children into a lifeboat.
Sub-Lt. Barnett Harvey
Sub-Lieutenant Barnett Harvey was just 20 years old when he drowned along with 206 other passengers and crew aboard S.S. Nerissa, bound from Halifax, Nova Scotia, to Liverpool. The unescorted steamer sank within four minutes of being hit by two torpedoes fired by the German submarine U-552 at about 10.30pm on April 30 1941.
From Courteney BC, Harvey found his final resting place in Ballycastle graveyard where his unmarked grave has now been identified and efforts are underway to have his burial place suitably marked.
Harvey’s valiant efforts to save the lives of Joan and Terence Lomas were mercilessly dashed when the U-boat commander, Erich Topp, fired a second torpedo into Nerissa which capsized the lifeboat in which the children along with their parents, Joseph and Elizabeth, and sister Margaret, had taken refuge.
Tragically, if the U-boat commander had only fired one torpedo into Nerissa the Lomas family, and many others, would have been saved along with the 84 others who survived the sinking and were rescued by Royal Navy ships which brought them to Derry. But the impact of the second torpedo explosion overturned Lifeboat No. 2 into which the family had scrambled with the help of Sub-Lieutenant Harvey. All the occupants, including Harvey, were thrown into the sea and perished.
Homesick and unable to settle in Canada after escaping the London blitz in 1940, Joseph Lomas, a carpenter, was bringing his wife and young family back to Charlton where Elizabeth’s mother, Ellen, lived. They were only hours away from safety in Liverpool when they perished.
The Lomas family story is one of the great tragedies of the Battle of the Atlantic; their terrible fate is symbolic of today’s refugees and how the innocent suffer most in war.
Like so many victims of the Battle of the Atlantic that were washed ashore, little Joan Lomas had no identification on her body when discovered at Dooyork, a short walk from the beautiful seaside village of Geesala.
Still dressed in the one-piece blue and red pyjama suit in which her mother had put her to bed on that fateful night, she was buried in Geesala cemetery, where the location of her unmarked grave can be identified following my research.
In one of those strange coincidences, Joan Lomas is buried next to the grave of Robert Mackay Sutherland (27) in Geesala cemetery. Sutherland’s mother’s maiden name was Mackay and it was also the maiden name of Elizabeth Lomas, Joan’s mother.
It is hoped to have a suitable headstone erected in remembrance of Joan and her family and their great tragedy.
Joan’s brother Terence (6) was also washed ashore in late May 1941 at scenic Kilgalligan beach, located between Rossport and Carrowteigue, along with an adult female and adult male. Terence was almost certainly buried in the nearby Kilgalligan graveyard, but the exact location of his grave and that of the two adults remains a mystery, although it is likely all three were buried together as, at the time, it was assumed they were a family.
Extensive research in the National Archives of Ireland has shed new light on the fate of the Nerissa victims whose bodies were washed into coves and onto beaches from Donegal to Clare.
Historic documents in Ireland and Canada have made it possible to identify the remains as Barnett Harvey, but the finding of his grave in Ballycastle in recent months revealed a remarkable story involving three Catholic medals found on his body when it was discovered washed up on the rocks at Doonfeeney Upper on May 25, 1941, by local man, Thomas A. Heffron.
The religious medals may not have saved Harvey from death in the North Atlantic, but in a strange twist of fate had he not carried the medals on that fateful voyage his grave would never have been located.
Barnett Harvey was an Anglican, but because of the medals, it was assumed in Ballycastle that he was a Roman Catholic and, therefore, given a Catholic burial in an unmarked plot alongside the graves of the local dead. His burial was recorded in the Ballycastle Parish Register of Interments for May 1941, including an addendum mentioning that it was believed he was a Canadian naval officer.
Without the medals, likely to have been given to him by a Catholic friend as he embarked on a dangerous voyage to war, Barnett Harvey would have been buried in the so-called “Strangers’ Plot”, a corner reserved in cemeteries in times past for the burial of unknown persons and the unbaptised. This is where most of the bodies washed ashore in Ireland during the war were buried. The location of these unmarked graves in most cases is no longer known. Those who could be identified, mainly military and a small number of merchant seamen, were later given Commonwealth War Graves Commission headstones.
In Achill Sound Church of Ireland’s graveyard, there are 13 Commonwealth War graves, all related to the Battle of the Atlantic. One of those headstones remembers “A Master”, whose body was washed ashore at Ashleam, Achill Sound, on June 28 1941. As a result of our investigations, we believe that compelling evidence exists to suggest that this was Gilbert Ratcliffe Watson (57), from Kendal, Westmoreland, England, the Master of S.S. Nerissa.
Our research has also established that the bodies of two of the four adult females on Nerissa came ashore in Mayo. As stated earlier, an adult female was found near the body of Terence Lomas at Kilgalligan. I have been unable to locate documents in relation to these bodies in the National Archives which might have indicated whether or not this was Elizabeth Lomas.
But I believe it is not unreasonable to imagine that in those final terrifying moments a mother would have held onto one of her children; and for any surviving relatives, there is some small consolation in believing that she was buried with her child in the remote Mayo cemetery overlooking the beach where they came ashore.
There is only a remote possibility that the remains of the man washed ashore at Kilgalligan were those of Joseph Lomas as the great majority of the 207 Nerissa casualties were adult males.
The body of a second adult female was washed ashore at Grangehill, Barnatra, (near Belmullet) at about the same time, May 23 1941. There is not enough evidence to make a positive identification, but from what facts are still available I believe it suggests that the body was that of recently married Joy Stuart-French (35), born Vida Joyce Jones, from Warracknabeal, Victoria, Australia. She was the wife of Major Robert Stuart-French (11th Hussars), a native of Cobh, Co. Cork, where his ancestral home was the Marino House Estate, between Rushbrooke and Fota in Cork Harbour. The woman’s remains were buried in Termoncarragh cemetery on the Mullet peninsula where I have been able to identify the location of her unmarked grave.
There were just two stewardesses on Nerissa – Florence Jones (50) and Hilda Lynch (34), both from the Liverpool area. The courage and sacrifice of the two stewardesses on Nerissa should not be forgotten.
In the madness and panic as Nerissa began to sink quickly, the two brave women gave their lifebelts to the two older Lomas children and unflinchingly accepted their fate.
Lifebelts were found near both the bodies of Joan and Terence Lomas when they were found three weeks later at Dooyork and Kilgalligan; almost as if the spirits of the two brave women had brought their little bodies ashore.
In our analysis, we were able to consider the wartime context when the bodies washed ashore, which 80 years earlier, the Gardai did not know because of wartime secrecy.
For those who would like to read Bill Dziadyk’s meticulous presentation of the new facts, please click on this link where you will find a copy of a related Addendum which was published with his book, S.S. Nerissa, the Final Crossing: The Amazing True Story of the Loss of a Canadian Troopship in the North Atlantic, detailing what we have uncovered and the analysis and evidence provided by Bill Dziadyk and myself.
Work continues to try and find surviving relatives of the Nerissa victims buried in Mayo, but so far we have been unsuccessful.
Therefore, all publicity arising out of this story is welcome as it might alert any surviving family members to the new information regarding their relatives.
Buried in Belmullet
S.S. Nerissa was the only transport carrying Canadian troops to be lost during World War II.
The sinking of Nerissa resulted in 207 casualties, including military and civilian; and British and Canadian diplomats. This was the third-largest loss of life for a ship sunk by U-boats in the approaches to Ireland and Britain. The 84 Nerissa survivors were transferred from their lifeboats to HMS Kingcup, which took them to Derry.
In war, some families seem to bear an unbearable price for their service and sacrifice as we have already read concerning the MacHale family from Belmullet.
Buried in Kilcommon Church of Ireland Churchyard, Belmullet, is Archibald Graham Weir (55), Pensbury House, Shaftsbury, Dorset, a Wing Commander Royal Air Force and a victim of the Nerissa sinking. His body was found washed ashore at Corraun Point, Cross beach, near Binghamstown, on July 4, 1941. An Oxford graduate, he was a veteran of World War 1 and, at the time of his death, Wing Commander Weir (with a staff of 11) served as Officer Commanding Royal Air Force personnel bound to and from the UK.
His two sons also died in the war. Flying Officer Archibald Nigel Charles Weir D.F.C., died when his Hurricane crashed into the sea on November 7, 1940, after combat with a Messerschmitt Bf 109 off the Isle of Wight. Nigel’s younger brother, Adrian John Anthony (23) was serving as a Major in the 1st Battalion The Scots Guards when he was killed on February 2, 1944, at Anzio, Italy. He was awarded the Military Cross. 48
When Wing Commander Weir’s body was found washed ashore at Corraun Point, among his possessions was his personal diary; his last entry on the morning of that fateful day was a tender final message to his wife, Mary, looking forward to seeing her within hours.
“Have just come off watch on the bridge from 4-8 am. Marvellous sunrise, squalls, and an upturned lifeboat dead ahead, which gave us a fright until we saw what it was. We are 250 miles from the north of Ireland, and the waters sufficiently dangerous; however, by tomorrow morning all should be well, and one will be able to take a bath without wondering whether the alarm will go into the middle of it. Á bientôt, must go and have breakfast.” 49
In late September after being informed that her husband’s body had been recovered, Mary Weir, widow of Wing Commander A.G. Weir, wrote a letter 49b to Rev. Rodgers, Rector of Belmullet, wondering “…if his signet ring was still there, which is of great sentimental value to us, being made of his parents’ wedding and engagement rings.”
“He just vanished ‘into the blue’, somewhere overseas early in the year, and was almost home again when the disaster happened, and I know nothing at all about his end and very little of his voyagings. Just a few little details pieced together from survivors’ stories.”
Concluding her letter, Mrs Weir mentioned that her “elder son was killed in action flying six months before his father, but I have a son in the Army and two daughters left.”
Tragically, her second son was taken from her, too. Mrs Weir died in 1972. A ring was not listed among the personal items found on Commander Weir’s body.
Canadian Officer, Thomas Elvin Mitchell (20), a Lieutenant in the Carleton and York Regiment, R.C.I.C., was one of 108 Canadian Army personnel to perish on the Nerissa. His remains were washed ashore at Aughadoon, Belmullet, on May 23, 1941, and found by Anthony Dixon, Aughadoon, during an early morning walk along the shore. Son of Thomas and Bessie Irene Mitchell, he came from St. Stephen, New Brunswick, Canada. He is buried in Kilcommon Church of Ireland Cemetery, Belmullet.
S.S. Homeside Mystery
In January 1941, the British cargo ship S.S. Homeside, on a voyage from Pepel, Sierra Leone, for the Tees, in convoy SL-62 with a cargo of iron ore, went missing. The only evidence of her fate came on July 8, 1941, when a body washed ashore at Ashleam, Achill, was identified as that of John Murphy lost when S.S. Homeside went missing, presumed sunk, on January 28th 1941. Thirty-five crew were never heard of again. | <urn:uuid:f02549b4-8f4e-437a-b0c4-6e78199e4539> | CC-MAIN-2022-33 | https://mayo.me/2021/08/30/the-tides-of-war/5/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572212.96/warc/CC-MAIN-20220815205848-20220815235848-00497.warc.gz | en | 0.980429 | 6,044 | 2.53125 | 3 |
[Figure: The classical Carnot heat engine]
In thermodynamics, work performed by a system is the energy transferred by the system to its surroundings, that is fully accounted for solely by macroscopic forces exerted on the system by factors external to it, that is to say, factors in its surroundings. Thermodynamic work is a version of the concept of work in physics.
The external factors may be electromagnetic, gravitational, or pressure/volume or other simply mechanical constraints. Thermodynamic work is defined to be measurable solely from knowledge of such external macroscopic forces. These forces are associated with macroscopic state variables of the system that always occur in conjugate pairs, for example pressure and volume, magnetic flux density and magnetization. In the SI system of measurement, work is measured in joules (symbol: J). The rate at which work is performed is power.
Work, i.e. "weight lifted through a height", was originally defined in 1824 by Sadi Carnot in his famous paper Reflections on the Motive Power of Fire, where he used the term motive power. Specifically, according to Carnot:
- We use here motive power to express the useful effect that a motor is capable of producing. This effect can always be likened to the elevation of a weight to a certain height. It has, as we know, as a measure, the product of the weight multiplied by the height to which it is raised.
In 1845, the English physicist James Joule wrote a paper On the mechanical equivalent of heat for the British Association meeting in Cambridge. In this paper, he reported his best-known experiment, in which the mechanical power released through the action of a "weight falling through a height" was used to turn a paddle-wheel in an insulated barrel of water.
In this experiment, the friction and agitation of the paddle-wheel on the body of water caused heat to be generated which, in turn, increased the temperature of water. Both the temperature change ∆T of the water and the height of the fall ∆h of the weight mg were recorded. Using these values, Joule was able to determine the mechanical equivalent of heat. Joule estimated a mechanical equivalent of heat to be 819 ft•lbf/Btu (4.41 J/cal). The modern day definitions of heat, work, temperature, and energy all have connection to this experiment.
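As a rough numerical illustration of the relation underlying Joule's experiment, equating the work mg∆h released by the falling weight with the heat that warms the water gives the mechanical equivalent directly. The sketch below is only indicative: the masses, height, and temperature rise are assumptions chosen for illustration, not Joule's recorded data.

```python
# Minimal sketch of a Joule-style paddle-wheel estimate.
# All numerical values are illustrative assumptions, not Joule's actual data.

g = 9.81                # gravitational acceleration, m/s^2

# Work released by the falling weight over repeated drops
m_weight = 26.0         # kg, mass of the falling weight (assumed)
h_total = 36.3          # m, cumulative height fallen (assumed)
work_joules = m_weight * g * h_total        # mechanical work, in J

# Heat generated in the insulated barrel of water by the paddle-wheel friction
m_water_grams = 6000.0  # g, mass of water (assumed)
delta_T = 0.35          # deg C, measured temperature rise (assumed)
heat_calories = m_water_grams * delta_T     # heat in calories (1 cal warms 1 g of water by 1 deg C)

mechanical_equivalent = work_joules / heat_calories   # J per calorie
print(f"Mechanical equivalent of heat ~ {mechanical_equivalent:.2f} J/cal")
# With these assumed numbers the result is about 4.4 J/cal, close to the
# 4.41 J/cal (819 ft-lbf/Btu) quoted above; the modern value is about 4.19 J/cal.
```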
Mechanical thermodynamic work is performed by actions such as compression, shaft work, stirring, and rubbing. In the simplest cases, for example, there is work of change of volume against a resisting pressure, and work without change of volume, known as isochoric work. An example of isochoric work is when an outside agency, in the surroundings of the system, drives a frictional action on the surface of the system. In this case the dissipation is usually not confined to the system, and the quantity of energy so transferred as work must be estimated through the overall change of state of the system as measured by both its mechanically and externally measurable deformation variables (such as its volume) and its corresponding non-deformation variable (such as its pressure). In a process of transfer of energy as work, the change of internal energy of the system is then defined in theory by the amount of adiabatic work that would have been necessary to reach the final state from the initial state, such adiabatic work being measurable only through the externally measurable mechanical or deformation variables of the system, which provide full information about the forces exerted by the surroundings on the system during the process. In the case of some of Joule's measurements, the process was so arranged that heat produced outside the system (in the paddles) by the frictional process was practically entirely transferred into the system during the process, so that the quantity of work done by the surroundings on the system could be calculated as shaft work, an external mechanical variable.
The amount of energy transferred as work is measured through quantities defined externally to the system of interest, and thus belonging to its surroundings. In an important sign convention, work that adds to the internal energy of the system is counted as positive. On the other hand, for historical reasons, an oft-encountered sign convention is to consider work done by the system on its surroundings as positive. Although all real physical processes entail some dissipation of kinetic energy, it is a matter of definition in thermodynamics that the dissipation that results from transfer of energy as work occurs only inside the system. Energy dissipated outside the system, in the process of transfer of energy, is not counted as thermodynamic work, because it is not fully accounted for by macroscopic forces exerted on the system by external factors. Thermodynamic work does not account for any energy transferred between systems as heat or through transfer of matter.
All the various mechanical and non-mechanical forms of work can be converted into each other with no fundamental limitation due to the laws of thermodynamics, so that the energy conversion efficiency can approach 100% in some cases. In particular, all forms of work can be converted into the mechanical work of lifting a weight, which was the original form of thermodynamic work considered by Carnot and Joule (see History section above). Some authors have considered this equivalence to the lifting of a weight as a defining characteristic of work. In contrast, the conversion of heat into work in a heat engine can never exceed the Carnot efficiency, as a consequence of the second law of thermodynamics.
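For reference, the Carnot limit mentioned here is the standard bound on a heat engine drawing heat Q_H from a hot reservoir at absolute temperature T_H and rejecting heat to a cold reservoir at T_C; it is stated below as a reminder rather than derived.

```latex
% Carnot limit on the fraction of heat input that can be converted to work
% (temperatures in kelvin):
\eta_{\text{max}} = \frac{W}{Q_H} = 1 - \frac{T_C}{T_H}
```

Work-to-work conversions, by contrast, are not subject to such a bound in principle, which is the point of the paragraph above.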
For a closed thermodynamic system, the first law of thermodynamics relates changes in the internal energy to two forms of energy transfer, as heat and as work. In theory, heat is properly defined for a process in a closed system (no transfer of matter) by the amount of adiabatic work that would be needed to effect the change occasioned by the process. In practice it is often estimated calorimetrically, through change of temperature of a known quantity of calorimetric material substance; it is of the essence of heat transfer that it is not mediated by the externally defined forces variables that define work. This distinction between work and heat is essential to thermodynamics.
Beyond the conceptual scope of thermodynamics proper, heat is transferred by the microscopic thermal motions of particles and their associated inter-molecular potential energies, or by radiation. There are two forms of macroscopic heat transfer by direct contact between a closed system and its surroundings: conduction, and radiation. There are several forms of dissipative transduction of energy that can occur internally within a system at a microscopic level, such as friction including bulk and shear viscosity, chemical reaction, unconstrained expansion as in Joule expansion and in diffusion, and phase change; these are not transfers of heat between systems.
Convection of internal energy is a form a transport of energy but is in general not, as sometimes mistakenly supposed (a relic of the caloric theory of heat), a form of transfer of energy as heat, because convection is not in itself a microscopic motion of microscopic particles or their intermolecular potential energies, or photons; nor is it a of transfer of energy as work. Nevertheless, if the wall between the system and its surroundings is thick and contains fluid, in the presence of a gravitational field, convective circulation within the wall can be considered as indirectly mediating transfer of energy as heat between the system and its surroundings, though they are not in direct contact.
For an open system, the first law of thermodynamics admits three forms of energy transfer, as work, as heat, and as energy associated with matter that is transferred. The latter cannot be split uniquely into heat and work components.
In thermodynamics, the quantity of work done by a closed system on its surroundings is defined by factors strictly confined to the interface of the surroundings with the system and to the surroundings of the system, for example an extended gravitational field in which the system sits, that is to say, to things external to the system. There are a few especially important kinds of thermodynamic work.
A simple example of one of those important kinds is pressure-volume work. The pressure of concern is that exerted by the system on the surroundings, and the volume of interest is the increment of volume gained by the system from the surroundings. The analysis requires that the pressure exerted by the system on the surroundings is well defined. In the presence of friction, the work done by the system (system-based work) and work received by the surroundings (surroundings-based work) are not equal in magnitude. Most analyses in the following refer to system-based work.
Transfer of energy as work can be varied in a particular way that depends on the strictly mechanical nature of pressure-volume work. The variation consists in letting the coupling between the system and surroundings be through a rigid rod that links pistons of different areas for the system and surroundings. Then for a given amount of work transferred, the exchange of volumes involves different pressures, inversely with the piston areas, for mechanical equilibrium. This cannot be done for the transfer of energy as heat because of its non-mechanical nature.
Another important kind of work is isochoric work, that is to say work that involves no eventual overall change of volume of the system between the initial and the final states of the process. Examples are friction on the surface of the system as in Rumford's experiment; shaft work such as in Joule's experiments; and slow vibrational action on the system that leaves its eventual volume unchanged, but involves friction within the system. Isochoric work for a system in its own state of internal thermodynamic equilibrium is done only by the surroundings on the system, not by the system on the surroundings, so it is surroundings-, not system-based work.
When work is done by a closed system that cannot pass heat in or out because it is adiabatically isolated, the work is referred to as being adiabatic in character.
According to the first law of thermodynamics for a closed system, any net increase in the internal energy U must be fully accounted for, in terms of heat δQ entering the system and the work δW done by the system:
The letter d indicates an exact differential, expressing that internal energy U is a function of the state of the system; changes in U depend only on the original state and the final state, and not upon the path taken. In contrast, the Greek deltas (δ's) in this equation reflect the fact that the heat transfer and the work transfer are not properties of the final state of the system. Given only the initial state and the final state of the system, one can only say what the total change in internal energy was, not how much of the energy went out as heat, and how much as work. This can be summarized by saying that heat and work are not state functions of the system.
The minus sign in front of indicates that a positive amount of work done by the system leads to energy being lost from the system. This is the sign convention for work in many textbooks on physics. This sign convention entails that a non-zero quantity of isochoric work always has a negative sign, because of the second law of thermodynamics. This is the sign convention used in the present article except where otherwise noted.
An alternate sign convention is to consider the work performed on the system by its surroundings as positive. This leads to a change in sign of the work, so that . This is the convention adopted by many modern textbooks.
Pressure-volume work (or PV work) occurs when the volume V of a system changes. PV work is often measured in units of litre-atmospheres where 1L·atm = 101.325J. However, the litre-atmosphere is not a recognised unit in the SI system of units, which measures P in Pascal (Pa), V in m3, and PV in Joule (J), where 1 J = 1 Pa·m3. PV work is an important topic in chemical thermodynamics.
For a process in a closed system occurring slowly enough for accurate definition of the pressure on the inside of the system's wall that moves and transmits force to the surroundings, described as quasi-static, pressure-volume work is represented by the following equation between differentials:
denotes an infinitesimal increment of work done by the system, transferring energy to the surroundings;
denotes the pressure exerted by the system on the moving wall that transmits force to the surroundings, which in the quasistatic limit equals the regular system pressure. In the alternate sign convention, the right hand side has a negative sign.
denotes the infinitesimal increment of the volume of the system.
denotes the work done by the system during the whole of the quasistatic process.
For quasistatic processes, the first law of thermodynamics can then be expressed as
(In the alternate sign convention where W = work done on the system, . However, is unchanged.)
As for all kinds of work, in general PV work is path-dependent and is therefore a thermodynamic process function. In general, the term P dV is not an exact differential. The statement that a process is reversible and adiabatic gives important information about the process, but does not determine the path uniquely, because the path can include several slow goings backward and forward in volume, as long as there is no transfer of energy as heat. The first law of thermodynamics states . For an adiabatic process, and thus the integral amount of work done is equal to minus the change in internal energy and depends only on the initial and final states of the process; it is one and the same for every intermediate path.
If the process took a path other than an adiabatic path, the work would be different. This would only be possible if heat flowed into/out of the system. In a non-adiabatic process, there are indefinitely many paths between the initial and final states.
In another notation, δW is written đW (with a line through the d). This notation indicates that đW is not an exact one-form. The line-through is merely a flag to warn us there is actually no function (0-form) W which is the potential of đW. If there were, indeed, this function W, we should be able to just use Stokes Theorem to evaluate this putative function, the potential of đW, at the boundary of the path, that is, the initial and final points, and therefore the work would be a state function. This impossibility is consistent with the fact that it does not make sense to refer to the work on a point in the PV diagram; work presupposes a path.
Other mechanical forms of work
There are several ways of doing work, each in some way related to a force acting through a distance. In basic mechanics,the work done by a constant force F on a body displaced a distance s in the direction of the force is given by
If the force is not constant, the work done is obtained by integrating the differential amount of work,
Energy transmission with a rotating shaft is very common in engineering practice. Often the torque T applied to the shaft is constant which means that the force F applied is constant. For a specified constant torque, the work done during n revolutions is determined as follows: A force F acting through a moment arm r generates a torque T
This force acts through a distance s, which is related to the radius r by
The shaft work is then determined from:
The power transmitted through the shaft is the shaft work done per unit time, which is expressed as
When a force is applied on a spring, and the length of the spring changes by a differential amount dx, the work done is
For linear elastic springs, the displacement x is proportional to the force applied
where K is the spring constant and has the unit of N/m. The displacement x is measured from the undisturbed position of the spring (that is, X=0 when F=0). Substituting the two equations
where x1 and x2 are the initial and the final displacement of the spring respectively, measured from the undisturbed position of the spring.
Work done on elastic solid bars
Solids are often modeled as linear springs because under the action of a force they contract or elongate, and when the force is lifted, they return to their original lengths, like a spring. This is true as long as the force is in the elastic range, that is, not large enough to cause permanent or plastic deformation. Therefore, the equations given for a linear spring can also be used for elastic solid bars. Alternately, we can determine the work associated with the expansion or contraction of an elastic solid bar by replacing the pressure P by its counterpart in solids, normal stress σ=F/A in the work expansion
where A is the cross sectional area of the bar.
Work associated with the stretching of liquid film
Consider a liquid film such as a soap film suspended on a wire frame. Some force is required to stretch this film by the movable portion of the wire frame. This force is used to overcome the microscopic forces between molecules at the liquid-air interface. These microscopic forces are perpendicular to any line in the surface and the force generated by these forces per unit length is called the surface tension σ whose unit is N/m. Therefore, the work associated with the stretching of a film is called surface tension work, and is determined from
where dA=2b dx is the change in the surface area of the film. The factor 2 is due to the fact that the film has two surfaces in contact with air. The force acting on the moveable wire as a result of surface tension effects is F=2b σ, where σ is the surface tension force per unit length.
Free energy and exergy
The amount of useful work which may be extracted from a thermodynamic system is determined by the second law of thermodynamics. Under many practical situations this can be represented by the thermodynamic availability, or Exergy, function. Two important cases are: in thermodynamic systems where the temperature and volume are held constant, the measure of useful work attainable is the Helmholtz free energy function; and in systems where the temperature and pressure are held constant, the measure of useful work attainable is the Gibbs free energy.
Non-mechanical forms of work
Non-mechanical work in thermodynamics is work determined by long-range forces penetrating into the system as force fields. The action of such forces can be initiated by events in the surroundings of the system, or by thermodynamic operations on the shielding walls of the system. The long-range forces are forces in the ordinary physical sense of the word, not the so-called 'thermodynamic forces' of non-equilibrium thermodynamic terminology.
The non-mechanical work of long-range forces can have either positive or negative sign, work being done by the system on the surroundings, or vice versa. Work done by long-range forces can be done indefinitely slowly, so as to approach the fictive reversible quasi-static ideal, in which entropy is not created in the system by the process.
In thermodynamics, non-mechanical work is to be contrasted with mechanical work that is done by forces in immediate contact between the system and its surroundings. If the putative 'work' of a process cannot be defined as either long-range work or else as contact work, then sometimes it cannot be described by the thermodynamic formalism as work at all. Nevertheless, the thermodynamic formalism allows that energy can be transferred between an open system and its surroundings by processes for which work is not defined. An example is when the wall between the system and its surrounds is not considered as idealized and vanishingly thin, so that processes can occur within the wall, such as friction affecting the transfer of matter across the wall; in this case, the forces of transfer are neither strictly long-range nor strictly due to contact between the system and its surrounds; the transfer of energy can then be considered as by convection, and assessed in sum just as transfer of internal energy. This is conceptually different from transfer of energy as heat through a thick fluid-filled wall in the presence of a gravitational field, between a closed system and its surroundings; in this case there may convective circulation within the wall but the process may still be considered as transfer of energy as heat between the system and its surroundings; if the whole wall is moved by the application of force from the surroundings, without change of volume of the wall, so as to change the volume of the system, then it is also at the same time transferring energy as work. A chemical reaction within a system can lead to electrical long-range forces and to electric current flow, which transfer energy as work between system and surroundings, though the system's chemical reactions themselves (except for the special limiting case in which in they are driven through devices in the surroundings so as to occur along a line of thermodynamic equilibrium) are always irreversible and do not directly interact with the surroundings of the system.
Non-mechanical work contrasts with pressure-volume work. Pressure-volume work is one of the two mainly considered kinds of mechanical contact work. A force acts on the interfacing wall between system and surroundings. The force is that due to the pressure exerted on the interfacing wall by the material inside the system; in the quasistatic limit, that pressure is an internal state variable of the system, but is properly measured by external devices at the wall. The work is due to change of system volume by expansion or contraction of the system. If the system expands, in the present article it is said to do positive work on the surroundings. If the system contracts, in the present article it is said to do negative work on the surroundings. Pressure-volume work is a kind of contact work, because it occurs through direct material contact with the surrounding wall or matter at the boundary of the system. It is accurately described by changes in state variables of the system, such as the time courses of changes in the pressure and volume of the system. The volume of the system is classified as a "deformation variable", and is properly measured externally to the system, in the surroundings. Pressure-volume work can have either positive or negative sign. Pressure-volume work, performed slowly enough, can be made to approach the fictive quasi-static or reversible ideals.
Non-mechanical work also contrasts with shaft work. Shaft work is the other of the two mainly considered kinds of mechanical contact work. It transfers energy by rotation, but it does not eventually change the shape or volume of the system. Because it does not change the volume of the system it is not measured as pressure-volume work, and it is called isochoric work. Considered solely in terms of the eventual difference between initial and final shapes and volumes of the system, shaft work does not make a change. During the process of shaft work, for example the rotation of a paddle, the shape of the system changes cyclically, but this does not make an eventual change in the shape or volume of the system. Shaft work is a kind of contact work, because it occurs through direct material contact with the surrounding matter at the boundary of the system. A system that is initially in a state of thermodynamic equilibrium cannot initiate any change in its internal energy. In particular, it cannot initiate shaft work. This explains the curious use of the phrase "inanimate material agency" by Kelvin in one of his statements of the second law of thermodynamics. Thermodynamic operations or changes in the surroundings are considered to be able to create elaborate changes such as indefinitely prolonged, varied, or ceased rotation of a driving shaft, while a system that starts in a state of thermodynamic equilibrium is inanimate and cannot spontaneously do that. Thus the sign of shaft work is always negative, work being done on the system by the surroundings. Shaft work can hardly be done indefinitely slowly; consequently it always produces entropy within the system, because it relies on friction or viscosity within the system for its transfer. The foregoing comments about shaft work apply only when one ignores that the system can store angular momentum and its related energy.
Examples of non-mechanical work modes include
- Electrical work – where the force is defined by the surroundings' voltage (the electrical potential) and the generalized displacement is change of spatial distribution of electrical charge
- Magnetic work – where the force is defined by the surroundings' magnetic field strength and the generalized displacement is change of total magnetic dipole moment
- Electrical polarization work – where the force is defined by the surroundings' electric field strength and the generalized displacement is change of the polarization of the medium (the sum of the electric dipole moments of the molecules)
- Gravitational work – where the force is defined by the surroundings' gravitational field and the generalized displacement is change of the spatial distribution of the matter within the system.
- Electrical work
- Chemical reactions
- Microstate (statistical mechanics) - includes Microscopic definition of work
- Guggenheim, E.A. (1985). Thermodynamics. An Advanced Treatment for Chemists and Physicists, seventh edition, North Holland, Amsterdam, ISBN 0444869514.
- Jackson, J.D. (1975). Classical Electrodynamics, second edition, John Wiley and Sons, New York, ISBN 978-0-471-43132-9.
- Konopinski, E.J. (1981). Electromagnetic Fields and Relativistic Particles, McGraw-Hill, New York, ISBN 007035264X.
- North, G.R., Erukhimova, T.L. (2009). Atmospheric Thermodynamics. Elementary Physics and Chemistry, Cambridge University Press, Cambridge (UK), ISBN 9780521899635.
- Kittel, C. Kroemer, H. (1980). Thermal Physics, second edition, W.H. Freeman, San Francisco, ISBN 0716710889.
- Joule, J.P. (1845) "On the Mechanical Equivalent of Heat", Brit. Assoc. Rep., trans. Chemical Sect, p.31, which was read before the British Association at Cambridge, June
- Buchdahl, H.A. (1966). The Concepts of Classical Thermodynamics, Cambridge University Press, London, p. 40.
- Bailyn, M. (1994). A Survey of Thermodynamics, American Institute of Physics Press, New York, ISBN 0-88318-797-3, pp. 35–36.
- F.C.Andrews Thermodynamics: Principles and Applications (Wiley-Interscience 1971), ISBN 0-471-03183-6, p.17-18.
- Silbey, R.J., Alberty, R.A., Bawendi, M.G. (2005). Physical Chemistry, 4th edition, Wiley, Hoboken NJ., ISBN 978-0-471-65802-3, p.31
- K.Denbigh The Principles of Chemical Equilibrium (Cambridge University Press 1st ed. 1955, reprinted 1964), p.14.
- J.Kestin A Course in Thermodynamics (Blaisdell Publishing 1966), p.121.
- M.A.Saad Thermodynamics for Engineers (Prentice-Hall 1966) p.45-46.
- G.J. Van Wylen and R.E. Sonntag, Fundamentals of Classical Thermodynamics, Chapter 4 - Work and heat, (3rd edition)
- Prevost, P. (1791). Mémoire sur l'equilibre du feu. Journal de Physique (Paris), vol 38 pp. 314-322.
- Planck, M. (1914). The Theory of Heat Radiation, second edition translated by M. Masius, P. Blakiston's Son and Co., Philadelphia, 1914.
- Kondepudi, D. (2008). Introduction to Modern Thermodynamics, John Wiley and Sons, Chichester, ISBN 978-0-470-01598-8.
- Rayleigh, J.W.S (1878/1896/1945). The Theory of Sound, volume 2, Dover, New York,
- Schmidt-Rohr, K. (2014). "Expansion Work without the External Pressure, and Thermodynamics in Terms of Quasistatic Irreversible Processes" J. Chem. Educ. 91: 402-409. http://dx.doi.org/10.1021/ed3008704
- Gislason, E. A.; Craig, N. C. (2007). "Cementing the foundations of thermodynamics: Comparison of system-based and surroundings-based definitions of work and heat." J. Chem. Thermodynamics 37, 954-966.
- Tisza, L. (1966). Generalized Thermodynamics, M.I.T. Press, Cambridge MA, p. 37.
- Freedman, Roger A., and Young, Hugh D. (2008). 12th Edition. Chapter 19: First Law of Thermodynamics, page 656. Pearson Addison-Wesley, San Francisco.
- Quantities, Units and Symbols in Physical Chemistry (IUPAC Green Book) See Sec. 2.11 Chemical Thermodynamics, p. 56.
- Planck, M. (1897/1903). Treatise on Thermodynamics, translated by A. Ogg, Longmans, Green & Co., London., p. 43.
- Adkins, C.J. (1968/1983). Equilibrium Thermodynamics, (1st edition 1968), third edition 1983, Cambridge University Press, Cambridge UK, ISBN 0-521-25445-0, pp. 35–36.
- Callen, H. B. (1960/1985), Thermodynamics and an Introduction to Thermostatistics, (first edition 1960), second edition 1985, John Wiley & Sons, New York, ISBN 0–471–86256–8, p. 19.
- Münster, A. (1970), Classical Thermodynamics, translated by E. S. Halberstadt, Wiley–Interscience, London, ISBN 0-471-62430-6, p. 24.
- Borgnakke, C., Sontag, R.E. (2009). Fundamentals of Thermodynamics, seventh edition, Wiley, ISBN 978-0-470-04192-5, p. 94.
- Haase, R. (1971). Survey of Fundamental Laws, chapter 1 of Thermodynamics, pages 1–97 of volume 1, ed. W. Jost, of Physical Chemistry. An Advanced Treatise, ed. H. Eyring, D. Henderson, W. Jost, Academic Press, New York, lcn 73–117081, p. 21.
- Yunus A. Cengel and Michael A. Boles,Thermodynamics: An Engineering Approach 7th Edition, , McGraw-Hill, 2010,ISBN 007-352932-X
- Prigogine, I., Defay, R. (1954). Chemical Thermodynamics, translation by D.H. Everett of the 1950 edition of Thermodynamique Chimique, Longmans, Green & Co., London, p. 43.
- Thomson, W. (March 1851). "On the Dynamical Theory of Heat, with numerical results deduced from Mr Joule's equivalent of a Thermal Unit, and M. Regnault's Observations on Steam". Transactions of the Royal Society of Edinburgh. XX (part II): 261–268, 289–298.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles> Also published in Thomson, W. (December 1852). "On the Dynamical Theory of Heat, with numerical results deduced from Mr Joule's equivalent of a Thermal Unit, and M. Regnault's Observations on Steam". Philos. Mag. 4. IV (22): 8–21. Retrieved 25 June 2012.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
- Münster, A. (1970), Classical Thermodynamics, translated by E.S. Halberstadt, Wiley–Interscience, London, ISBN 0-471-62430-6, p. 45. | <urn:uuid:b77c2604-4248-42b3-88c6-647126f4c801> | CC-MAIN-2022-33 | https://www.infogalactic.com/info/Thermodynamic_work | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573533.87/warc/CC-MAIN-20220818215509-20220819005509-00296.warc.gz | en | 0.927058 | 7,013 | 3.6875 | 4 |
Soconusco is a region in the southwest corner of the state of Chiapas in Mexico along its border with Guatemala. It is a narrow strip of land wedged between the Sierra Madre de Chiapas mountains and the Pacific Ocean. It is the southernmost part of the Chiapas coast extending south from the Ulapa River to the Suchiate River, distinguished by its history and economic production. Abundant moisture and volcanic soil has always made it rich for agriculture, contributing to the flowering of the Mokaya and Olmec cultures, that were based on Theobroma cacao and rubber of Castilla elastica.
In the 19th century, the area was disputed between Mexico and Guatemala until a treaty signed in 1882 fixed the modern border, dividing the area's historical extension with most going to Mexico and a smaller portion east of the Suchiate to Guatemala. In 1890, Porfirio Díaz and Otto von Bismarck collaborated to take advantage of southern Mexico's agricultural potential by sending 450 German families to Soconusco near Tapachula in the southern state of Chiapas. Extensive coffee cultivation quickly made Soconusco one of the most successful German colonies, and between 1895 and 1900, 11.5 million kg of coffee had been harvested. Fincas (estates) were erected in the Chiapaneco jungle and given German names such as Hamburgo, Bremen, Lübeck, Argovia, Bismarck, Prussia, and Hanover.
This area has experienced a boom-and-bust economy with well-studied migration patterns of agricultural workers. After exporting cacao to central Mexico for thousands of years, the first modern crop for export was coffee. Since then other crops such as tropical fruits, flowers and more have been introduced. The most recent addition is the rambutan, a southeast Asian fruit.
Soconusco is geographically isolated from the political and economic center of Mexico, and it is relatively little known among the rest of the Mexican population. Geographically, it is part of the Chiapas coast, but it has had a distinct cultural and living identity from the rest of Chiapas since Mesoamerican times and remains so to this day.
Soconusco lies on the border between Mexico and Central America, but it has had connections with what is now central Mexico since the Mesoamerican period, primarily because of trade routes into Central America and its production of cacao, achiote, and other products. The name is derived from 3 words in Nahuatl Xococ (Sour) + Nochtli (Prickly pear cactus) + có (Place) “Xoconochco” means (Place of sour cactus) as noted in the Mendoza Codex. The Mayan name for the area was Zaklohpakab. The area was originally defined as far south as the Tilapa River in what is now Guatemala, but when the final border between Mexico and Guatemala was set in 1882, the Suchiate became the southern boundary.
The earliest population of Soconusco region were the coastal Chantuto peoples, going back to 5500 BC. This was the oldest Mesoamerican culture discovered to date.
In what is now the municipality of Mazatán, another culture arose. The culture is called Mokaya (people of the corn in Mixe-Zoque) and it is dated to about 4,000 years ago when cacao and ballcourts appear.
It is thought that migrations out of this area east gave rise to the Olmec civilization. A later, but also important Mesoamerican culture, was centered on the site of Izapa, considered the most important on the Chiapas coast. It dates to about 1500 BC and is classified as Mixe-Zoque but it is also considered to be the link between the older Olmec civilization and the later Mayan ones.
The site was important for about 1000 years as a civil and religious structure. While most of its ruins now consist of earthen mounds, its importance lies in the information which has been gathered from its steles and other sculpted stone works. Izapa is considered to be where the Mesoamerican ceremonial 260-day calendar was developed.
Before the Aztecs, the area was a restless tribute region of Tehuantepec, with the dominant ethnicity being Mame (Modern linguistic corrected to "Mam" in Guatemala because a Spanish word that is vulgar).
Tapachultec language was an old language in the area that became extinct in 1930s.
In 1486, Aztec emperor Ahuitzotl conquered it; the area was then required to send cotton clothing, bird feathers, jaguar skins, and cacao as tributes. However, rebellions against the Aztecs continued with Moctezuma Xocoyotzin sending troops to pacify the area in 1502 and 1505.
The first Spanish arrived in the area in 1522. According to chronicler Bernal Díaz del Castillo the area had a native population of about 15,000 inhabitants although other estimates have put that number as high as 75,000 in the 1520s. Pedro de Alvarado is credited with the conquest of the Soconusco as he headed down into Central America from the Spanish stronghold in southern Veracruz in 1524. The conquest caused a steady depopulation of the area with the disappearance of much of the native population either due to migration out of the area or death from the diseases the Europeans brought with them.
Soconusco was declared a province by the Spanish Crown in 1526 with its original extension down into what is now Guatemala. Its first governor was Pedro de Montejo Sr. as part of Chiapas. This lasted only a short time and then it was governed from Mexico City. In the late 16th century Miguel de Cervantes, future author of Don Quixote, requested from the Spanish king the right to govern Soconusco because of its well-known cacao.
The first evangelists came to the area in 1545, sent by Bartolomé de las Casas. These monks were Dominicans, who divided the area into six parishes. Soconusco was originally under the religious jurisdiction of Tlaxcala before it was moved to that to Guatemala. In 1538, the pope created the bishopric of Chiapas and Soconusco became part of that.
In 1543, Chiapas became governed by the Audencia de los Confines (Real Audiencia of Guatemala, which included much of what is now southern Mexico and Central America, with its capital in Guatemala City. In 1564, the capital of this Audencia was moved to Panama City and governance of Soconusco switched back to Mexico City. However, in 1568, the Captaincy General of Guatemala was established and Soconusco and Chiapas was made part of it. This would remain the case until the end of the Mexican War of Independence in 1821. However, the area maintained important political and economic ties with Mexico City throughout the period. Huehuetán was established as the capital of Soconusco in 1540. It remained so until 1700 when it was moved to Escuintla. In 1794, a hurricane devastated this town, so it was moved again to Tapachula. Mistreatment by encomienda and hacienda owners also caused population loss and various uprisings such as one in 1712 by the Tzendals, which involved 32 villages centered on Cancuc. To counter the mass depopulation of the area, the Spanish Crown created two alcaldes mayores in 1760 in order to give more protection to the native population. However, this proved insufficient and by 1790 the area lost its status as a province and became part of Chiapas.
Soconusco and highland Chiapas were one of the first areas in the Central American region to support independence from Spain. They supported the Three Guarantees of Agustín de Iturbide, but declared their own separate independence from the Spanish Crown in 1821. There was division in the state as to whether ally or unify with Mexico or the new Central American Republic when it split from Mexico in 1823. Those in the highlands preferred union with Mexico but the lowlands, including the Soconusco, preferred Central America.
Soconusco's political status would be undecided for most of the rest of the 19th century. Mexico annexed Chiapas formally in 1824 and made its first formal claim to the Soconusco in 1825. This was not accepted by Guatemala nor the ruling elite in Soconusco. In 1830, Central American troops entered the Soconusco to pursue a political dissident, but troops from highland Chiapas countered their entrance. The first attempt to settle a border in the area occurred in 1831 with several others in the decades that follow but without success, in part due to political instability in Mexico. The Central American Republic dissolved between 1838 and 1840, leaving Guatemala to pursue claims to the Soconusco, which it did by declaring it part of the Quetzaltenango District in 1840. The population here favored inclusion into Guatemala. However, Antonio López de Santa Anna sent troops into Soconusco and the rest of Chiapas to press the population to formally unite with Mexico, which Soconusco did in 1842.
More negotiations were attempted in 1877 and 1879 but a treaty was not signed by Mexico and Guatemala until 1882 to formalize a border. This border split the Soconusco region with the greater part going to Mexico. One reason behind the reaching of the agreement at this time was concerns about the United States taking advantage of the continued instability between Mexico and the Central American states. However, Guatemala still made claims to the Soconusco until 1895.
Since finalization of the border with Guatemala, Soconusco's history has been dominated by the development of its economy. In the last quarter of the 19th century, the migration of foreigners from the United States and Europe was encouraged by the Porfirio Díaz regime with land grants and the construction of a rail line. The goal of the government was to bring in foreign capital to the country to develop its natural resources and modernize it. For the Soconusco region, this meant the development of coffee plantations. In less than 20 years, between 1890 and 1910, the region became the main producer and exporter of coffee for Mexico. However, these coffee plantations needed large numbers of workers. Recruitment of indigenous peoples from other parts of Chiapas, often through deceit and entrapment was widespread. The need for workers also brought in people from Guatemala as well.
The mistreatment of workers here and other parts of Mexico led to the outbreak of the Mexican Revolution in 1910. However, plantation owners initially resisted the effects of the war and the economic changed that rebels demanded. Coffee was more difficult to convert to cash than cattle, so these farms suffered less pillaging by the various factions than in other places. As their markets were abroad, they were better able to resist the economic reforms of Venustiano Carranza as well. However, export agriculture is subject to boom and bust periods and World War I precipitated a drop in demand for coffee, weakening the plantain owners’ position somewhat. Struggles between workers’ groups and plantation owners began in earnest in the 1920s, leading to the establishment of the Partido Socialista Chiapaneco. The amount of land under the control of these owners actually increased during the same period despite some successful attempts at land redistribution.
Mexico's coffee production came under competition from abroad and world coffee prices fell again in the 1930s. Land reform in the area began in earnest during this decade under President Lázaro Cárdenas, with much of the coffee growing land becoming communally-held in “ejidos.” By 1946, about half of all coffee growing land in the area was owned by about one hundred ejidos. However, World War II had coffee prices drop again and growth of coffee production in the area would not increase again until the 1950s, when international financing became available.
Since then, coffee has remained the area's main cash crop although it has been supplemented by the growing of tropical fruit and other crops, keeping the Soconusco an important agricultural center for Mexico. There are still booms and busts in demand, such as a major fall of coffee prices in 1989 (caused by the disappearance of the International Coffee Organization) and the steady decline of cacao production. In 2005, Hurricane Stan caused the region to lose 65% of its banana crop, mostly in Suchiate and Mazatán. This caused a temporary but significant economic crisis. Another significant effect was a decrease in the number of seasonal migrant workers who come to harvest and process crops. Most of these migrants come from Guatemala although this migration has lessened due to economic reasons. The number of Guatemalans migrating legally into Mexico dropped from a high of 79,253 in 1999 to 27,840 in 2007. Most Guatemalan migrants in Mexico are on their way now to the United States. Illegal immigration in the region is still a problem, with most illegals coming in from Central America. Most illegal immigrants work in agriculture and as domestics. Women among these are vulnerable to sexual exploitation, especially those from Honduras and El Salvador.
The Soconusco is divided into fifteen municipalities which are Acacoyagua, Acapetahua, Cacahoatán, Escuintla, Frontera Hidalgo, Huehuetán, Huixtla, Mazatán, Metapa, Pueblo Nuevo Comaltitlán, Ciudad Hidalgo, Tapachula, Tuxtla Chico, Unión Juárez and Tuzantán.
The region's capital and Mexico's main border city for this area is Tapachula. Tapachula was founded by the Spanish as an Indian town in 1590, with the name coming from the local language meaning “place of the conquered.” Most of the city's monumental structures are in Art Deco style from the early 20th century, when the coffee plantations brought wealth to the area. The main monumental building is the old municipal palace, which is next to the San Agustín parish, both of which face Hidalgo Plaza. Other important sites in the city include the Caimanes Zoo, which has over 1000 alligators and the first to specialize in this species in Mexico. It is also the starting point for tourist routes such as the Ruta del Café (Coffee Route) and to nearby ocean areas with beaches and mangroves.
Puerto Chiapas, in the municipality of Tapachula, is one of Mexico's newest port facilities. The port receives various types of ships including cruise liners. It has a beach with dark gray sand, nine km long, a rough surf and a semi humid climate with rains in the summer. Activities for tourists include sports fishing, boating and ATV riding.
Unión Juárez is located on the slopes of the Tacaná Volcano. The town is noted for its surrounding coffee plantations and houses made of wood in a style similar to that of Switzerland, giving it the nickname of “La Suiza Chiapaneca.” It has a relatively cold climate with rains in the summer and is the primarily mountain tourist attraction of the state. Tuxtla Chico is a town surrounded by dense tropical vegetation. It conserves much of its distinct architecture such as the Casa del Portal de Madera (House of the Wood Portal and the Candelaria Church, which is one of the main colonial era structures of the Chiapas coast region. Another important structure is the fountain in the center of the main plaza. Near the town is the Finca Rosario Izapa, which is field research station for agriculture, mostly concentrated on studies on cacao and tropical fruits.
Cacahoatán is noted for its colorful wooden structures on the border from plantations dedicated to tropical fruit and those dedicated to coffee production in the higher altitudes. Some of these plantations, such as La Palmira offered guided tours.
Huehuetán is a town founded during the colonial period as the capital of the Soconusco. Its main church is from this time, today known as the San Pedro Parish, constructed in an austere fashion of adobe.
Tuzantán has conserved much of its indigenous identity, populated by ethnic Tuzantecos and Mochos, which are the westernmost Mayan peoples. Their main religious festival is dedicated to the Archangel Michael celebrated in September.
Huixtla is the only town on the Chiapas coast which conserves its old train station. The town is surrounded by fields of wild reeds and “La Piedra,” a 100-meter-high (330 ft) granite formation, is used to symbolize the area.
Acacoyagua was a settlement populated by Japanese immigrants. Nearby is the El Chicol waterfall, which forms deep pools of fresh water surrounded by dense vegetation.
Geography and environment
Soconusco is a strip of land wedged between the Pacific Ocean and the Sierra Madre de Chiapas, formed by sediment flowing from the mountains, similar to the rest of the Chiapas coast. However, this area is distinguished because of its history and its economic production. It is the southern section of this state's coast, with its northern border at the Ulapa River. Its southern border used to be the Tilapa River in what is now Guatemala, but when the final border between Mexico and this country was set in 1882, the new southern boundary marker became the Suchiate River. The region has a territory of 5,827 km².
The climate and ecology of the region varies with elevation. The coastal lowlands are seasonally dry. They are part of the Central American dry forests ecoregion, characterized by open woodlands and thorn scrub of dry-season deciduous trees and shrubs. At higher elevations the mountains intercept winds from the Pacific, creating clouds, fog, and orographic precipitation. The cooler and humid mountain climate supports the tropical evergreen montane moist forests of the Sierra Madre de Chiapas moist forests ecoregion. Higher-elevation areas of the mountains are part the Central American pine-oak forests ecoregion.
The entire Chiapas coast runs for over 200 km as a series of beaches uninterrupted except for estuaries and lagoons created by the various small rivers that run from the Sierra Madre de Chiapas to the Pacific Ocean. These areas are known for their abundance of mangroves and water lilies as well as aquatic birds.
The most important estuary area is the La Encrucijada Ecological Reserve which extends over 144,868 hectares in the municipalities of Pijijiapan, Mapastepec, Acapetahua, Huixtla, Villa Comaltitlán and Mazatán, consisting of lagoons and estuaries that interconnect along the Pacific Ocean. It has the tallest mangroves on the Pacific coast of the Americas, and the La Concepción turtle sanctuary which works to protect several species of marine turtles. These mangroves support a variety of wildlife including oysters, iguanas, alligators, and ocelots. The reserve includes the Hueyate Marsh formed by the Huixtla and Despoblado rivers. The main vegetation here is reeds and palm trees with a large number of turtles.
To one side of the reserve is the Chantuto Archaeological Site, one of the first human settlements in the Chiapas area. The site includes large mounds of discarded shells. Visitors require permits from the Reserve's administration.
Between La Encrucijada and the Guatemalan border, there are a series of lesser known beaches. Barra Cahoacán is located 33 km from the city of Tapachula on the highway to Puerto Chiapas. It contains about thirty km of beaches and a semi humid climate with rains in the summer. Barra San José is a beach area bordered on one with by the estuary of the Huehuetán River. This area has cabins and areas with fish farming, mostly raising shrimp. Tourist attractions include jet-ski, water skiing, boating and horseback riding. It has a semi humid climate with rains in the summer, located in the Mazatán municipality. The Barra Zacapulco is a tourist center with large extension of estuaries with mangroves as it is within the La Encrucijada Reserve. It has a humid climate with rains year round in the municipality of Acapetahua. Playa El Gancho is a beach in the Suchiate municipality open to the ocean with an estuary with mangroves. It has a humid climate with rains from May to October. It has sports fishing, boating and camping. Barra de San Simón is near the town of Mazatán. In addition to wide beaches with estuaries, birds and exuberant vegetation, it is home to a sanctuary dedicated to the Virgin of the Conception. The waters of the open ocean are rough but the estuaries have little to no wave action. The large expanses of mangroves here are part of the La Encrucijada Reserve in the Las Palmas zone. Playa Linda is in the Tapachula municipality. It has beaches of fine grayish sand and a rough surf and a semi humid climate with rains in the summer. Activities here include sports fishing, boating and camping. Playa San Benito has beaches with dark grey sand and a rough surf. The beach is lined with restaurants in palapas serving seafood. Some of these have swimming pools. Costa Maya or Playa Maya is in the Tapachula municipality in the Las Gaviotas community. It has open beach and estuaries. Playa Grande are expanses of near-virgin beaches with no constructions on them. It is a recommended place to see the natural flora and fauna of the area.
The main nature reserve in the mountain areas of the region is the El Triunfo Biosphere Reserve, near Mapastepec at the north end of the Soconusco. The area is wet with fairly low temperatures and exuberant evergreen tropical vegetation of the cloud forest type. It is home to species such as the quetzal. Visitors are allowed only with permission of the Comisión Nacional de Áreas Naturales Protegidas (CONANP) . At the southern end, the Tacaná Volcano reaches a height of 4,100 meters with all of the climates and vegetation that is found in the Sierra Madres de Chiapas. It forms part of Mexico's border with Guatemala as set by the 1882 Treaty. Tacaná is one of a series of volcanoes that extend through Central America. Despite the fact that it is an active volcano, the area around it is densely populated. At the lower levels the average temperature is around 20 C and at the highest elevation it is about 10C.
Soconusco is distinguished in Mexico as an agricultural center, mostly cash crops destined for export. The agricultural richness of the area and its function as a corridor to Central America helps to counter the negative economic effect of the region's remoteness from economic center of Mexico. The mainstay of the region's economy is agriculture for export, which makes it relatively economically independent from the rest of the country, despite the recent introduction of some industry and tourism. The dependence on the export of cash crops means that the economy is subject to boom-and-bust periods. These produce times of abundance and economic development but also major economic crisis such as the 1989 fall in coffee prices and a hurricane which wiped out 65% of the banana crop in 2005. These cycles, along with normal yearly harvest period have resulted in migrations of seasonal workers in and out of the area. Most of these workers come from Guatemala for harvests but bust periods can send parts of the local population out of area to find work.
The Soconusco is one of the most fertile areas of the country for agriculture with plentiful rain and soil enriched by volcanic ash. Large scale cultivation in the area began in the Mesoamerican period with the growing of cacao. Today, crops include coffee, bananas, papaya, mango, kiwi, passion fruit, carambola and African palm . Over half of the area's working population is dedicated to agriculture and livestock. Coffee, bananas and mangos represent over eighty percent of the areas agricultural production. Thirty seven percent of the area's arable land is dedicated to coffee and another 25% to corn accounting for 62% of all cropland. The rest is mostly dedicated to mangos, cacao and sesame seed. The most exported crops are coffee, mangos, papayas and bananas with over 90% going to the United States. One new tropical fruit being grown in the area is the rambutan (Nephelium lappaceum L.) native to Malaysia and Indonesia but become increasingly popular in Central America and Mexico. One important grower is the Rancho San Alberto in the Cacahoatán municipality which pioneered the growing of this fruit in the country in the mid 20th century. Overall it is estimated that there are 50,000 producing trees over 500 hectares. Growing sectors of the agriculture include organic produce and African palm. The production of ornamental plants began in the early 2000s in the coffee plantations as did “agrotourism” or the opening of plantations to visitors with accommodations and sometimes spa facilities.
Mexico's coffee production has been eclipsed by other growers in the world but it remains an important product. The various major coffee farms are linked by the Ruta del Café or Coffee Route, up in the high slopes and deep valleys of the Sierra near Tapachula. The introduction of this crop by European immigrants not only had an effect on the economy but also the local architecture. As coffee plants need the shade of higher trees, this has conserved much of the original vegetation of the area although the cultivation has had a negative effect on natural water supplies north of Tapachula. Among these plantations is the rainiest place in Mexico, the Finca Covadonga plantation, which receives about 5,000 mm of rain each year. This amount is surpassed only by places in Hawaii and India. The Santo Domingo coffee plantation is noted for its location on the slopes of the Tacaná Volcano. It maintains the look it had during the early 20th century. Visitors can see the main house, housing for workers, chapel, hospital and the installations dedicated to the processing of coffee beans. The main house was constructed in the 1920s by the original owner, who was German.
Chiapas is the main producer of bananas in Mexico. Rancho Guadalupe is a banana plantation open to the public for tours. It is located near Puerto Chiapas.
Tourism is not the major industry for the region although the state promotes it along with the rest of the Chiapas coast. In addition to the Ruta del Café, there are two others based on the area's natural scenery. Much of the Chiapas’ mangrove areas are linked in a tourist route called the “Ruta del Manglar.” The center is the La Encruciajada Reserve, which was created as part of the North American Waterbird Conservation Plan signed in 1986 by Mexico, the U.S. and Canada. Another feature of the route is the Caimanes y Crocodriles de Chiapas sanctuary, located outside of Tapachula. The mangroves are home to various reptiles such as alligators, the iguana and the ocelot. The Barra de Zacapulco Tourism Center features a marine turtle sanctuary which monitors the population of various species. La Encrucijada also include the Huayate Marsh formed by the Huixtla and Despoblado Rivers. The Volcano Route is centered on the Tacaná Volcano. It was the object of veneration for the Izapa civilization. Its name is from the Mame language and means “house of fire.”
Tapachula is known for its Chinese food, especially Cantonese. More indigenous dishes include tamales, pozol, sopa de chipilín and other dishes shared with the rest of Chiapas. These dishes are often prepared with beef and pork and prepared with ingredients such as peaches, apples, bananas, quince, chayote, and carrots. Chanfaina is cooked ground beef liver with seasonings. Seafood dishes include various fish dishes and stews along with those made of river snails and turtles. The coast area of the region has a cuisine dominated by seafood, tropical fruit, cacao and coffee. It is influenced by that of the Isthmus of Tehuantepec which can be seen in dishes such as “pollo juchi,” a chicken dish.
Traditional handcrafts include leather good for horsemanship made in Tapachula which includes saddles and saddlebags. Ceramics are made in San Felipe Tizapa and wood items in Unión Juárez. Items related to fishing such as nets are made along the entire coast.
- Perez de los Reyes, Marco Antonio (January 1980). "El Soconusco y su Mexicanidad (Breves Consideraciones)" [Soconusco and its Mexicanness (Brief Considerations)] (PDF). Jurídicas (in Spanish). Mexico: UNAM. 12 (12). Retrieved January 27, 2012.
- Santacruz de León, Eugenio Eliseo; Elba Pérez Villalba (May–August 2009). "Atraso económico, migración y remesas: el caso del Soconusco, Chiapas, México Convergencia: Revista de Ciencias Sociales" [Economic backwardness, migration and remittances] (PDF). Convergenica (in Spanish). State of Mexico: UAEM. 50: 57–77. ISSN 1405-1435. Archived from the original (PDF) on January 4, 2014. Retrieved January 27, 2012.
- "Ruta Costa – Soconusco" [Coast-Soconusco Route] (in Spanish). Mexico: Secretaría de Turismo Chiapas. Retrieved January 27, 2012.
- "Z.R. de Rosario Izapa" [Rosario Izapa Reserve Area] (in Spanish). Mexico: Secretaría de Turismo Chiapas. Retrieved January 27, 2012.
- Daniela, Spenser. "La Economía Cafetalera en Chiapas y los Finqueros Alemanes (1890-1950)" [The coffee plantation economy in Chiapas and German plantation owners (1890-1950)] (PDF). Diccionario Temático CIESAS (in Spanish). Mexico City: CIESAS. Archived from the original (PDF) on 2013-09-27.
- "En la región del Soconusco, Chiapas, la desigualdad es absolutamente femenina" [In the Soconusco, Chiapas region, inequality is absolutely feminine]. Milenio (in Spanish). Mexico City. July 18, 2011. Retrieved January 27, 2012.
- "Ruta Costa Soconusco" [Coast Soconusco Route] (in Spanish). Mexico: State of Chiapas. Retrieved January 27, 2012.
- "Playa Chiapas" (in Spanish). Mexico: Secretaría de Turismo Chiapas. Retrieved January 27, 2012.
- "Unión Juárez" (in Spanish). Mexico: Secretaría de Turismo Chiapas. Retrieved January 27, 2012.
- Corrales, Lenin & Bouroncle, Claudia & Zamora Pereira, Juan. (2015). An overview of forest biomes and ecoregions of Central America.
- "Reserva la Encrucijada" [La Encrucijada Reserve] (in Spanish). Mexico: Secretaría de Turismo Chiapas. Retrieved January 27, 2012.
- "Barra Cahoacán" (in Spanish). Mexico: Secretaría de Turismo Chiapas. Retrieved January 27, 2012.
- "Barra de San José" (in Spanish). Mexico: Secretaría de Turismo Chiapas. Retrieved January 27, 2012.
- "Barra Zacapulco" (in Spanish). Mexico: Secretaría de Turismo Chiapas. Retrieved January 27, 2012.
- "Playa El Gancho" (in Spanish). Mexico: Secretaría de Turismo Chiapas. Retrieved January 27, 2012.
- "Playa Linda" (in Spanish). Mexico: Secretaría de Turismo Chiapas. Retrieved January 27, 2012.
- "Reserva El Triunfo" [El Triunfo Reserve] (in Spanish). Mexico: Secretaría de Turismo Chiapas. Retrieved January 27, 2012.
- "Volcán Tacaná" (in Spanish). Mexico: Secretaría de Turismo Chiapas. Retrieved January 27, 2012.
- Perez Romero, Alfonso; H. Alfred Jürgen Pohlan (December 2004). "Prácticas de cosecha y poscosecha del rambután en Soconusco, Chiapas, México" [Harvest and post harvest practices with rambután in Soconusco, Chiapas, Mexico]. LEISA Revista de Agroecología (in Spanish). Mexico. 20 (3). Retrieved January 27, 2012.
- "Tapachula" (in Spanish). Chiapas: La Region Editorial Cibernética de México Chiapas. 2003. Archived from the original on August 20, 2012. Retrieved January 27, 2012. | <urn:uuid:f670ac04-c2c0-445b-bd47-b55aa49bce16> | CC-MAIN-2022-33 | https://en.wikipedia.org/wiki/Soconusco | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572833.78/warc/CC-MAIN-20220817001643-20220817031643-00296.warc.gz | en | 0.945476 | 7,427 | 3.421875 | 3 |
INTRODUCTION AND PURPOSE
The National Cancer Institute estimates that 15,780 children and adolescents in the United States aged 0 to 19 years will be diagnosed with cancer each year.1 With advances in treatment over the past 5 decades, the 5-year relative survival rate for children with cancers of the central nervous system has reached 73.1%, resulting in an increased focus on the child's posttreatment status and quality of life.2 Pediatric cancer and its treatment (eg, chemotherapy, radiation therapy, and surgery) may result in a wide range of negative sequelae, including psychosocial dysfunction, peripheral neuropathy, cognitive limitations, and decreases in strength, coordination, and balance. One potential neurological impairment in children treated for a brain tumor is hemiparesis,3 prevalent in up to 21% of survivors.4
Constraint-induced movement therapy (CIMT) is a rehabilitation intervention that improves function in extremities affected by neurological injury. CIMT is based on research demonstrating that cortical signaling from the damaged cortex is absent or minimal immediately following a neurological insult (stroke, brain injury, etc), resulting in limb movements that are difficult and inefficient.5 Attempts to move the affected extremity perpetuate the initial insult because of learned nonuse, or in very young children, because of developmental disregard. Learned “nonuse” describes a pattern of disuse of the hemiplegic arm after a neurological injury, in which repeated failures when the child attempts to use the affected limb result in the child relying on the nonaffected limb for function. Nonuse causes contraction of the cortical representation of the hand, further limiting use of the affected extremity.6,7 Developmental disregard occurs in very young children with hemiplegia when neurological injury interferes with development of neural pathways necessary for controlled volitional movement and proprioceptive feedback. Motor skills are delayed or fail to develop because the child is unaware of the existence of the affected extremity.8,9
Fortunately, research demonstrates that with repetitive practice, cortical reorganization and improved limb function are possible among individuals with neurological injury.5 CIMT in pediatric populations constrains the unaffected arm in a splint or cast to encourage use of the affected extremity. In addition, the high-dose therapy engages the affected extremity in repetitive practice of developmentally appropriate movement strategies and employs the behavioral technique of shaping to achieve motor goals during play.8,9 Neuroimaging from CIMT studies in adults and children with hemiplegia indicates that facilitated cortical reorganization overcomes learned nonuse or developmental disregard, leading to improved function of the limb.10,11 CIMT yields short-term and long-term improvements in upper extremity function that are superior to those of standard therapies in children with cerebral palsy8,12–15 and can improve upper extremity function in children with brachial plexus injury,16 hemispherectomy,17 and acquired brain injury.18,19 Although a few children with brain tumors have participated in CIMT trials,20 the effectiveness of CIMT among children with neurological injury related to their history of cancer or its treatment has not been systematically evaluated.
The purpose of this pilot study was to investigate the feasibility, and the effect on quality of life, of a 3-week CIMT program in children with brain tumors and upper extremity hemiplegia and to describe the change in amount and quality of extremity use that resulted from participation in the CIMT program.
Participants were a convenience sample of English-speaking children aged 2 to 12 years who had been diagnosed with a brain tumor, had hemiplegia, and had completed cancer therapy. Study exclusion criteria were as follows: uncontrolled seizures; significant pain (at least 5/10 on the FACES or Numeric Pain Scale); 30 degrees or less of active shoulder flexion or abduction; or inability to initiate elbow flexion or extension, or movement of the wrist or digits, in the affected extremity. Institutional Review Board approval was obtained for the study. Written informed consent was obtained from parents or guardians, and assent was obtained per institutional policy.
The intervention included 15 three-hour therapy sessions in the rehabilitation services clinic (5 days per week for 3 weeks), during which participants were engaged in activities to improve strength, motor skills, range-of-motion, and functional use of the affected upper extremity. The 3-week program was scheduled at the convenience of the participant. The intervention was provided by 1 clinician who was trained through the University of Alabama at Birmingham (UAB) Constraint Induced (CI) Therapy Pediatric Training Program. Play and age-appropriate functional activities such as self-feeding and dressing were used to employ the core CIMT principles of “shaping” and repetitive task practice to motivate the participants. Shaping is a behavioral technique in which small, successive approximations or progressive increases in the difficulty level of tasks are used to achieve a motor or behavioral goal. (See Examples of Shaping Activities, Supplemental Digital Content 1, available at https://links.lww.com/PPT/A121.)
During the program, the unaffected upper extremity was restrained in a long arm removable cast to overcome the tendency to use the more functional arm. Every other day, the cast was removed and the skin assessed for redness, irritation, and breakdown. The cast remained in place daily, restraining the unaffected extremity until 2 days prior to program completion, at which time bimanual upper extremity training occurred.
After completing the 3-week program, children and their parents/caregivers were instructed in an individualized home program that included activities and exercises to facilitate maintenance of gained skills. The month following program completion, weekly phone conversations were conducted with the parents/caregivers. The phone assessment asked specific questions regarding the child's progress and provided recommendations for transferring skills learned from the program into their real-world environment.
Feasibility was defined as successful completion of 12 of the 15 sessions during the 3-week intensive intervention program, as well as completion of preintervention, postintervention, and 3-month follow-up assessments. (See Schedule of Assessments, Supplemental Digital Content 2, available at https://links.lww.com/PPT/A122.)
The Pediatric Grading System for Severity of Motor Deficit (Pediatric Model)21 was used to assess each child's range of motion and severity of motor deficit of the more impaired upper extremity. Range of motion is graded on a 4-point scale, with Grade 2 indicating mild to moderate limitation; Grade 3, moderate limitation; Grade 4, moderately severe limitation; and Grade 5, severe or very severe limitation (see the Appendix, Supplemental Digital Content 3, https://links.lww.com/PPT/A123.) The overall grade used to characterize a child's deficit is dictated by the joint (shoulder, elbow, wrist, fingers, and thumb) at which the deficit is greatest.
The Pediatric Motor Activity Log (PMAL) was used to assess the frequency and quality of use of the affected extremity in the real-world environment. The parent/caregiver used a 5-point Likert scale to rate “how often” and “how well” their child used the affected extremity during 22 arm/hand functional activities.22 The Motor Activity Log, from which the PMAL was adapted, is a valid and reliable tool with high internal consistency and high interrater and test/retest reliability. The minimal detectable change for the PMAL is 0.42.
The Inventory of New Motor Activities and Programs23 (INMAP), a modified version of the Emerging Behaviors Scale,24 was used to record 32 motor patterns and functional activities that emerged during the intervention. Examples of patterns or activities include reaching, release of objects, pincer grasp, crawling, and using feeding or writing utensils. New motor patterns were recorded during the program if verified by at least 2 sources, including a therapist, parent/caregiver, or video recording.
The Pediatric Arm Function Test25 (PAFT) was administered to assess the frequency and quality of affected upper extremity function. The assessment consists of 26 unilateral and bilateral upper extremity tasks completed by the child and videotaped for later scoring (Functional Ability score). The percentage of items completed spontaneously with the affected upper extremity is also recorded. The PAFT has high internal consistency (Cronbach α = 0.96), and test/retest reliability was adequate (intraclass correlation coefficient = 0.74).21
The Pediatric Grading System for Severity of Motor Deficit (Pediatric Model), the PMAL, the INMAP, and the PAFT were chosen because they are a part of the UAB Pediatric CI Therapy Program.
Health-related quality-of-life (HRQOL) was assessed by using the Pediatric Quality of Life Inventory SF-15 and the Pediatric Quality of Life Inventory (PedsQL) Acute Version.26,27 Parents completed the PedsQL SF-15 before the intervention and at the 3-month follow-up visit and the PedsQL acute version at the end of weeks 1, 2, and 3. Items were rated from 0 to 4, with a score of 0 indicating “never a problem” and 4 indicating “always a problem.” Scale scores were computed as the sum of the item scores divided by the number of items completed, reflecting a summary measure of HRQOL. Those with scores 1 standard deviation (SD) below established population means were classified as having poor HRQOL.
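As a minimal, illustrative sketch of the scoring rule described above (not the study's actual scoring code), a scale score can be computed as the mean of the completed items; the normative mean and SD used for the poor-HRQOL flag are placeholders, not the published values used by the authors.

```python
from typing import List, Optional

def pedsql_scale_score(items: List[Optional[int]]) -> Optional[float]:
    """Scale score = sum of the item scores / number of items completed.

    Items are rated 0 ("never a problem") to 4 ("always a problem");
    None marks an item that was not completed.
    """
    completed = [score for score in items if score is not None]
    if not completed:
        return None  # undefined if no items were answered
    return sum(completed) / len(completed)

def poor_hrqol(score: float, norm_mean: float, norm_sd: float) -> bool:
    """Flag scores at least 1 SD below an established population mean.

    norm_mean and norm_sd are placeholders and must be on the same scale
    (raw or transformed) and in the same direction as `score`.
    """
    return score <= norm_mean - norm_sd

# Example: 15 items with two left blank
responses = [0, 1, 2, 0, None, 1, 3, 0, 2, 1, None, 0, 1, 2, 0]
print(pedsql_scale_score(responses))  # mean of the 13 completed items
```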
The Feasibility Questionnaire was completed by the child's parent at the end of the 3-week intervention period to assess the feasibility of participating in and completing the program. It asked parents to rate levels of difficulty related to adhering to the cast-wearing schedule and participating in routine daily and home activities and to rate the child's frustration level during the program. (See Feasibility Questionnaire, Supplemental Digital Content 4, available at https://links.lww.com/PPT/A124.)
Descriptive statistics were used to characterize the study participants and results of the Feasibility Questionnaire. Mean values (with SD) were calculated preintervention, postintervention, and at the 3-month follow-up for the PMAL, PAFT, INMAP, and the PedsQL and compared between periods by using generalized estimating equations to account for within-person correlation.28 The data analysis was performed by using SAS version 9.3 (SAS Institute, Cary, North Carolina).
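The study's models were fit in SAS; purely as an illustrative analog (not the authors' code), a comparable repeated-measures comparison could be set up with generalized estimating equations in Python's statsmodels, assuming a long-format table with hypothetical column names such as `subject_id`, `period`, and `score`.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant per assessment period.
# Column names and values are illustrative assumptions, not study data.
df = pd.DataFrame({
    "subject_id": [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "period": ["pre", "post", "follow_up"] * 4,
    "score": [1.2, 2.6, 2.4, 0.8, 2.1, 2.0, 1.5, 3.0, 2.8, 1.0, 1.9, 1.7],
})

# A GEE with an exchangeable working correlation accounts for within-person
# correlation across the repeated assessments, analogous to the SAS analysis.
model = smf.gee(
    "score ~ C(period, Treatment(reference='pre'))",
    groups="subject_id",
    data=df,
    family=sm.families.Gaussian(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()
print(result.summary())  # coefficients give post-vs-pre and follow-up-vs-pre contrasts
```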
TABLE 1 - Participant Characteristics
|Diagnosis||Treatment Completed and Functional Deficits|
|Juvenile pilocytic astrocytoma, suprasellar hypothalamus||Tumor resection, chemotherapy. Seizure activity, left hemiplegia, optic atrophy, and related visual acuity and field deficits|
|Choroid plexus carcinoma, right hemisphere||Tumor resection, chemotherapy. Left hemiplegia, left homonymous hemianopia|
|Multiple recurrent juvenile pilocytic astrocytoma, temporal lobe||Tumor resection. Right hemiplegia, right visual field deficit|
|High-grade glioma, left frontoparietal||Tumor resection, chemotherapy, radiation therapy. Right hemiplegia, right incomplete homonymous hemianopia. Dysarthria|
|High-grade infantile glioneuronal tumor, frontoparietal||Tumor resection resulting in intracranial hemorrhage, shunt placement, seizure activity. Right hemiplegia, right visual field cut|
|Anaplastic astrocytoma, thalamus||Tumor resection, shunt placement, chemotherapy, seizure activity. Right hemiplegia, visual deficits|
|Atypical teratoid rhabdoid tumor, midbrain and thalamus||Tumor resection, radiation therapy, chemotherapy. Left hemiplegia, visual deficits (left cranial nerve VI impairment)|
|Juvenile pilocytic astrocytoma, medullary||Tumor resection. Right hemiplegia, visual deficits (mild nystagmus, strabismus, Horner's syndrome)|
|Juvenile pilocytic astrocytoma, thalamus||Tumor resection. Left hemiplegia. Optic atrophy and left hemianopia|
|Group means (SD)||Age at CIMT, 7.3 (3.6) y; years since diagnosis, 4.2 (3); months since last treatment, 31.3 (12.7); motor severity grade, 4.1 (1.1)|
Abbreviations: CIMT, constraint-induced movement therapy; SD, standard deviation.
TABLE 2 - Scores on Motor Activity Measures at Each Time Point
Columns: Preintervention, Mean (SD); Postintervention, Mean (SD); 3-mo Follow-Up, Mean (SD); P, Preintervention to Postintervention; P, Preintervention to 3-mo Follow-Up; P, Postintervention to 3-mo Follow-Up
|Pediatric Motor Activity Log (parent-reported)||Frequency of use||Quality of use|
|Pediatric Arm Function Test (measured)||Frequency of use||Quality of use|
|Inventory of New Motor Activities and Programs||Activities of daily living|
|Pediatric Quality of Life Inventory||Total score|
Abbreviation: SD, standard deviation.
Ten children were recruited, and 9 (3 boys, 6 girls) consented to participate in the study. All participants completed each phase of the study, including pretesting, 15 intervention sessions, and all follow-up assessments. None of the participants had previously participated in a CIMT or other intensive therapy program. Participant characteristics are shown in Table 1. The mean (SD) age of participants was 7.3 (3.6) years, with a mean (SD) time since diagnosis of 4.2 (3) years. Juvenile pilocytic astrocytoma was the most common brain tumor diagnosis (n = 4), with other participants having a variety of brain tumor types. All participants experienced additional neurological impairments; the most common were visual deficits (n = 9), seizure activity that did not interfere with daily activities (n = 3), and dysarthria (n = 1). All participants were ambulatory. All participants underwent resection of the tumor as a component of treatment. Five also received chemotherapy and 2 received radiation therapy. None of the participants were receiving active cancer therapy at the time of the intervention. However, 1 participant was hospitalized for a shunt infection during the long-term follow-up phase of the study. This hospitalization was lengthy, and her course of care included a period of significant activity restriction due to externalization of her shunt. She was unable to participate in follow-up home activities/exercises to the extent prescribed by the program for a period of approximately 3 weeks but was able to return for her long-term follow-up assessment. Seven participants had a range-of-motion/motor severity score of 4 or greater (mean = 4.1), indicating at least moderately severe limitation of movement (Grade 4 or higher) at the most limited joint.
Table 2 provides the mean pretreatment, posttreatment, and 3-month follow-up scores of the PMAL, PAFT, and INMAP measures and comparisons of scores between periods. After intervention, participants demonstrated significant improvements in the amount and quality of use on clinical measures (PAFT) and parent-reported measures of real-world function (PMAL). These results were maintained at the 3-month follow-up visit, with the exception of the measured frequency of use on the PAFT. All participants gained at least 1 new motor pattern in the affected arm during the program and maintained these skills at the 3-month follow-up visit. The mean (SD) number of new motor patterns gained from pretreatment to the 3-month follow-up visit was 5.6 (3.4) (P < .0001); the maximum was 11. The mean (SD) number of new daily living skills gained and maintained at long-term follow-up assessment was 7.3 (3.6) (P < .0001); the maximum number gained and maintained was 12.
The pretest mean (SD) PedsQL summary score was 65.1 (14.5), with scale scores of 59.4 (17.9) for physical and 66.9 (17.4) for psychosocial function. The percentage of children with an overall PedsQL score at least 1 SD below the expected population mean was 66%. Nearly all (7/9) of the parent-reported child quality-of-life scores improved or remained stable over the study period. There were no significant changes in quality of life from pre- to postassessment or from the postassessment to 3-month follow-up time point. Of the 2 participants whose parent reported a decline in quality of life, 1 was clinically significant, with a change score of −40. This participant's parent provided additional information, indicating that other factors, unrelated to the program influenced their child's score: (1) the need for a new lower extremity orthotic to improve independent mobility and (2) difficulties adjusting to living in a new city and school.
No significant associations were found between the child's age at diagnosis, severity score, sex, and changes on the PedsQL, INMAP, PAFT or PMAL.
One parent reported that participation in the program was easy. Of the other parents, 44% felt that their child's participation in the program was difficult and 44% reported a neutral feeling about program difficulty. Seven of 9 parents reported that their child wore the cast at least 90% of the prescribed time during the intensive phase of the program; 8 of 9 parents reported that enforcing that the child wore the cast as instructed was easy or very easy. One-third of parents rated participating in the P-CIMT program as frustrating, and one-third rated it as not frustrating, with the remaining third reporting a neutral level of frustration. All parents reported satisfaction with the program. Bathing, dressing, and eating were the activities of daily living most frequently reported by parents as being difficult or very difficult for their children (7/9). Detailed results of the Feasibility Questionnaire are provided in Feasibility Questionnaire Results, Supplemental Digital Content 5 (available at https://links.lww.com/PPT/A125).
Applications of evidence-based rehabilitation interventions among children with brain tumors and hemiplegia are limited. This study indicates that children who are brain tumor survivors with hemiplegia can successfully complete a 3-week CIMT program and that such a program has potential to significantly improve the amount and quality of affected upper extremity use in this population.
Although the pediatric literature describing the efficacy of CIMT is primarily focused on children with hemiplegia due to cerebral palsy, one small pilot study that employed CIMT for children with acquired brain injury yielded findings similar to ours. Karman et al18 evaluated outcomes following CIMT among 7 children aged 7 to 17 years: 3 with traumatic head injury, 2 with cerebrovascular events from arteriovenous malformations, and 2 with stroke. Five demonstrated significant improvements in at least 1 measure of upper extremity function. Although this was a 2-week, 6-hour-per-day program and these children tended to be older at the time of participation, participants were similar to those in the present study in terms of time since initial injury and baseline function in the affected limb. However, another CIMT trial in which children with acquired brain injury due to ischemic stroke (N = 8, ages 6-15 years) participated in 2 hours per day of direct treatment over the course of 4 weeks did not find improved sensorimotor function or quality of upper limb movement function.19 On average, the children in this study were older than those in our study, closer to their initial insult, and had more severe impairment in the affected limb at baseline than did the children in our study. More work is needed to determine the effects of age, timing, treatment dosing/duration, and severity of impairment on CIMT efficacy among children with acquired brain injury.
Our study was a nonrandomized feasibility study with a small sample size. Although we were unable to identify patient- or program-specific factors that predicted improvement, this study did demonstrate that children who survive brain tumors with chronic hemiplegia and moderately severe movement impairment can benefit from CIMT intervention. Continued research aimed at identifying potential predictors of CIMT outcomes will assist clinicians in targeting children who will benefit most from the specific therapy. For example, Wolf et al29 reported that, among adult stroke survivors (N = 222), those who were at least 3 months postneurological injury demonstrated improvement with CIMT therapy that was not seen in those in the acute recovery phase (ie, first few weeks after stroke). Others have reported that outcomes are dependent on the severity of the initial impairment, with individuals who initially demonstrated higher functional levels and range of motion abilities improving more than those with greater deficits.30
Although this program was modeled after current evidence that supports a 3-week, 3-hours-per-day program rather than the previously recommended 6-hour-per-day program,31 it remains a time- and resource-demanding intervention for children, families, and therapists. Although it was time-consuming, all 9 participants were able to successfully complete the CIMT program, and although some parents reported participation in the program was difficult, all reported satisfaction with the program at the end of the 3-week intervention. In addition, the children did not report statistically significant declines in HRQOL during the 3-week intervention, indicating that the program was well tolerated. These data are consistent with those of 2 other studies that described perceptions and experiences of parents, therapists, and children who participated in intensive therapy programs, reporting increased stress but perceived benefit including improved motor function and goal attainment.32,33
We found that although there were no significant changes in HRQOL during the CIMT intervention, there was also no significant lasting positive effect on HRQOL. Information describing the effect of CIMT on HRQOL of children is limited. A previous study assessed HRQOL following CIMT in a group of children with cerebral palsy (N = 40; ages, 5-16 years) by using a condition-specific quality-of-life outcome measure (The Cerebral Palsy Quality of Life Assessment for Children) and found significant improvements in reported participation, physical health, psychosocial health, and social well-being, which were maintained at long-term follow-up assessments.34 A randomized controlled trial evaluating the effects of a home-based CIMT program found that although the CIMT group and the control group had significant differences in HRQOL at a 3-month follow-up visit, there were no such differences posttreatment, indicating gains in quality of life are greater over the long term than the short term.35
This intervention was completed in a clinical setting. A recent systematic review of randomized controlled trials of CIMT in children with cerebral palsy concluded that intervention setting was significantly associated with a study's effect size, reporting that effect size was larger in a home-based CIMT program than in a clinic-based one and smallest in a camp-based program.36 Further research is needed to compare the effect of intervention setting on CIMT effectiveness in children who survive brain tumors.
Our findings suggest that a child with hemiplegia as a result of a brain tumor can adhere to and benefit from participation in a CIMT program, including up to 10 years from diagnosis and with other tumor or treatment-related comorbidities. Ongoing investigation into the efficacy of CIMT within the pediatric oncology population, the efficacy of CIMT as compared with other equally intensive interventions, and identification of potential predictors for successful CIMT outcomes are warranted.
1. Ward E, DeSantis C, Robbins A, Kohler B, Jemal A. Childhood and adolescent cancer statistics, 2014. CA Cancer J Clin. 2014;64(2):83–103.
2. Howlader N, Noone AM, Krapcho M, et al, eds. SEER Cancer Statistics Review, 1975-2012. http://seer.cancer.gov/csr/1975_2012/, based on 2014 SEER data submission. Published April 2015. Accessed January 29, 2016.
3. Ness KK, Gurney JG. Adverse late effects of childhood cancer and its treatment on health and performance. Annu Rev Public Health. 2007;28:279–302.
4. Pietilä S, Korpela R, Lenko HL, et al. Neurological outcome of childhood brain tumor survivors. J Neurooncol. 2012;108(1):153–161.
5. Taub EG, Uswatte G, Pidikiti R. Constraint-induced movement therapy: a new family of techniques with broad application to physical rehabilitation—a clinical review. J Rehabil Res Dev. 1999;36(3):237–251.
6. Liepert J, Bauder H, Wolfgang HR, Miltner WH, Taub E, Weiller C. Treatment-induced cortical reorganization after stroke in humans. Stroke. 2000;31(6):1210–1216.
7. Taub E, Uswatte G, Morris DM. Improved motor recovery after stroke and massive cortical reorganization following constraint-induced movement therapy. Phys Med Rehabil Clin N Am. 2003;14(1 suppl):S77–S91.
8. Deluca SC, Echols K, Law CR, Ramey SL. Intensive pediatric constraint-induced therapy for children with cerebral palsy: randomized, controlled, crossover trial. J Child Neurol. 2006;21(11):931–938.
9. Ramey S, DeLuca S. Pediatric CIMT: history and definition. In: Ramey S, Corker-Bold P, DeLuca S, eds. Handbook of Pediatric Constraint-Induced Movement Therapy: A Guide for Occupational Therapy and Health Care Clinicians, Researchers and Educators. Bethesda, MD: AOTA Press; 2013.
10. Levy CE, Nichols DS, Schmalbrock PM, Keller P, Chakeres DW. Functional MRI evidence of cortical reorganization in upper-limb stroke hemiplegia treated with constraint-induced movement therapy. Am J Phys Med Rehabil. 2001;80(1):4–12.
11. Juenger H, Linder-Lucht M, Walther M, Berweck S, Mall V, Staudt M. Cortical neuromodulation by constraint-induced movement therapy in congenital hemiparesis: an FMRI study. Neuropediatrics. 2007;38(3):130–136.
12. Huang HH, Fetters L, Hale J, McBride A. Bound for success: a systematic review of constraint-induced movement therapy in children with cerebral palsy supports improved arm and hand use. Phys Ther. 2009;89(11):1126–1141.
13. Hoare B, Imms C, Carey L, Wasiak J. Constraint-induced movement therapy in the treatment of the upper limb in children with hemiplegic cerebral palsy: a Cochrane systematic review. Clin Rehabil. 2007;21(8):675–685.
14. Eliasson AC, Krumlinde-Sundholm L, Shaw K, Wang C. Effects of constraint-induced movement therapy in young children with hemiplegic cerebral palsy: an adapted model. Dev Med Child Neurol. 2005;47(4):266–275.
15. Taub E, Griffin A, Uswatte G, Gammons K, Nick J, Law CR. Treatment of congenital hemiparesis with pediatric constraint-induced movement therapy. J Child Neurol. 2011;26(9):1163–1173.
16. Buesch FE, Schlaepfer B, de Bruin ED, Wohlrab G, Ammann-Reiffer C, Meyer-Heim A. Constraint-induced movement therapy for children with obstetric brachial plexus palsy: two single-case series. Int J Rehabil Res. 2010;33(2):187–192.
17. de Bode S, Fritz SL, Weir-Haynes K, Mathern GW. Constraint-induced movement therapy for individuals after cerebral hemispherectomy: a case series. Phys Ther. 2009;89(4):361–369.
18. Karman N, Maryles J, Baker RW, Simpser E, Berger-Gross P. Constraint-induced movement therapy for hemiplegic children with acquired brain injuries. J Head Trauma Rehabil. 2003;18(3):259–267.
19. Gordon A, Connelly A, Neville B, et al. Modified constraint-induced movement therapy after childhood stroke. Dev Med Child Neurol. 2007;49(1):23–27.
20. Reidy TG, Naber E, Viguers E, et al. Outcomes of a clinic-based pediatric constraint-induced movement therapy program. Phys Occup Ther Pediatr. 2012;32(4):355–367.
21. Uswatte G, Taub E, Griffin A, Rowe J, Vogtle L, Barman J. Pediatric Arm Function Test: reliability and validity for assessing more-affected arm motor capacity in children with cerebral palsy. Am J Phys Med Rehabil. 2012;91(12):1060–1069.
22. Uswatte G, Taub E, Griffin A, Vogtle L, Rowe J, Barman J. The pediatric motor activity log-revised: assessing real-world arm use in children with cerebral palsy. Rehabil Psychol. 2012;57(2):149–158.
23. CI Therapy Research Group. Inventory of New Motor Activities and Programs Manual. Birmingham, AL: University of Alabama at Birmingham and The Children's Hospital of Alabama; 2005.
24. Taub E, Ramey SL, DeLuca S, Echols K. Efficacy of constraint-induced movement therapy for children with cerebral palsy with asymmetric motor impairment. Pediatrics. 2004;113(2):305–312.
25. CI Therapy Research Group. Pediatric Arm Function Test Manual. Birmingham, AL: University of Alabama at Birmingham and The Children's Hospital of Alabama; 2006.
26. Varni JW, Burwinkle TM, Seid M, Skarr D. The PedsQL 4.0 as a pediatric population health measure: feasibility, reliability, and validity. Ambul Pediatr. 2003;3(6):329–341.
27. Varni JW, Burwinkle TM, Katz ER, Meeske K, Dickinson P. The PedsQL in pediatric cancer: reliability and validity of the Pediatric Quality of Life Inventory Generic Core Scales, Multidimensional Fatigue Scale, and Cancer Module. Cancer. 2002;94(7):2090–2106.
28. Fitzmaurice G, Laird N, Ware J. Applied Longitudinal Analysis. Hoboken, NJ: John Wiley & Sons; 2004:292.
29. Wolf SL, Winstein CJ, Miller JP, et al. Effect of constraint-induced movement therapy on upper extremity function 3 to 9 months after stroke: The EXCITE randomized clinical trial. JAMA. 2006;296(17):2095–2104.
30. Fritz SL, Light KE, Clifford SN, Patterson TS, Behrman AL, Davis SB. Descriptive characteristics as potential predictors of outcomes following constraint-induced movement therapy for people after stroke. Phys Ther. 2006;86(6):825–832.
31. Case-Smith J, DeLuca SC, Stevenson R, Ramey SL. Multicenter randomized controlled trial of pediatric constraint-induced movement therapy: 6-month follow-up. Am J Occup Ther. 2012;66(1):15–23.
32. Gillot AJ, Holder-Walls A, Kurtz JR, Varley NC. Perceptions and experiences of two survivors of stroke who participated in constraint-induced movement therapy home programs. Am J Occup Ther. 2003;57(2):168–176.
33. Christy JB, Saleem N, Turner PH, Wilson J. Parent and therapist perceptions of an intense model of physical therapy. Pediatr Phys Ther. 2010;22(2):207–213.
34. Sakzewski L, Carlon S, Shields N, Ziviani J, Ware RS, Boyd RN. Impact of intensive upper limb rehabilitation on quality of life: a randomized trial in children with unilateral cerebral palsy. Dev Med Child Neurol. 2012;54(5):415–423.
35. Hsin Y, Chen FC, Lin KC, Kang LJ, Chen CL, Chen CY. Efficacy of constraint-induced therapy on functional performance and health-related quality of life for children with cerebral palsy: a randomized controlled trial. J Child Neurol. 2012;27(8):992–999.
36. Chen Y, Pope S, Tyler D, Warren GL. Effectiveness of constraint-induced movement therapy on upper-extremity function in children with cerebral palsy: a systematic review and meta-analysis of randomized controlled trials. Clin Rehabil. 2014;28(10):939–953.
Although two thirds of American men and one half of American women drink alcohol,1 three fourths of drinkers experience no serious consequences from alcohol use.2 Among those who abuse alcohol, many reduce their drinking without formal treatment after personal reflection about negative consequences.3 Physicians can help prevent the serious effects of alcohol-related problems by stimulating such reflection and moving patients toward a healthier lifestyle.4 The purpose of this review is to encourage family physicians to prevent serious consequences of alcohol-related problems by using simple screening and brief intervention strategies.
Rationale for Early Screening
Preventive efforts on the part of family physicians are important because: (1) alcohol-related problems are prevalent in patients who visit family practices; (2) heavy alcohol use contributes to many serious health and social problems; and (3) physicians can successfully influence drinking behaviors. In the United States, the one-year prevalence of alcohol-use disorders, including alcohol abuse and alcohol dependence, is about 7.4 percent in the adult population.5 In patients who visit family practices, the prevalence is higher. One study of 17 primary care practices found a 16.5 percent prevalence of “problem drinkers,”4 and another study found a 19.9 percent prevalence of alcohol-use disorders among male patients.6
Heavy alcohol use can affect nearly every organ system and every aspect of a patient's life. Table 1 lists many direct and indirect effects of alcohol-related problems. Alcohol causes diseases such as cirrhosis of the liver and exacerbates symptoms in existing conditions such as diabetes.1,7,8 In addition, alcohol is implicated in many social and psychologic problems, including family conflict, arrests, job instability, injuries related to violence or accidents, and psychologic symptoms related to depression and anxiety.2,8 These problems take an enormous emotional toll on individuals and families, and are a great financial expense to health care systems and society.
|System/category||Early consequences||Late consequences|
|Liver disease||Elevated liver enzyme levels||Fatty liver, alcoholic hepatitis, cirrhosis|
|Pancreatic disease||Acute pancreatitis, chronic pancreatitis|
|Cardiovascular disease||Hypertension||Cardiomyopathy, arrhythmias, stroke|
|Gastrointestinal problems||Gastritis, gastroesophageal reflux disease, diarrhea, peptic ulcer disease||Esophageal varices, Mallory-Weiss tears|
|Neurologic disorders||Headaches, blackouts, peripheral neuropathy||Alcohol withdrawal syndrome, seizures, Wernicke's encephalopathy, dementia, cerebral atrophy, peripheral neuropathy, cognitive deficits, impaired motor functioning|
|Reproductive system disorders||Fetal alcohol effects, fetal alcohol syndrome||Sexual dysfunction, amenorrhea, anovulation, early menopause, spontaneous abortion|
|Cancers||Neoplasm of the liver, neoplasm of the head and neck, neoplasm of the pancreas, neoplasm of the esophagus|
|Psychiatric comorbidities||Depression, anxiety||Affective disorders, anxiety disorders, antisocial personality|
|Legal problems||Traffic violations, driving while intoxicated, public intoxication||Motor vehicle accidents, violent offenses, fires|
|Employment problems||Tardiness, sick days, inability to concentrate, decreased competence||Accidents, injury, job loss, chronic unemployment|
|Family problems||Family conflict, erratic child discipline, neglect of responsibilities, social isolation||Divorce, spouse abuse, child abuse or neglect, loss of child custody|
|Effects on children||Overresponsibility, acting out, withdrawal, inability to concentrate, school problems, social isolation||Learning disorders, behavior problems, emotional disturbance|
Many of these problems may be avoided by early screening and intervention by family physicians. Several studies of early and brief physician interventions have demonstrated a reduction in alcohol consumption and improvement in alcohol-related problems among patients with drinking problems.9,10 A 40 percent reduction in alcohol consumption in nondependent problem drinkers has been demonstrated following physician advice to reduce drinking.4
Tables 2 and 3 list diagnostic criteria for alcohol abuse and dependence specified by the Diagnostic and Statistical Manual of Mental Disorders, 4th ed. (DSM-IV).11 Alcohol abuse is manifested by recurrent alcohol use despite significant adverse consequences of drinking, such as problems with work, law, health or family life.
The diagnosis of alcohol dependence is based on the compulsion to drink. The dependent drinker devotes substantial time to obtaining alcohol, drinking and recovering, and continues to drink despite adverse social, psychologic or medical consequences. A physiologic dependence on alcohol, marked by tolerance or withdrawal symptoms, may or may not be present. Note that quantity and frequency of drinking are not specified in the criteria for either diagnosis; instead, the key elements of these diagnoses include the compulsion to drink and drinking despite adverse consequences.
Alcohol-use disorders are easy to recognize in patients with longstanding problems, because these persons present to the family physician with diseases such as cirrhosis or pancreatitis (Table 1). Patients in the earlier stages of alcohol-related problems may have few or subtle clinical findings, and the physician may not suspect a high consumption of alcohol. Certain medical complaints, such as headache, depression, chronic abdominal or epigastric pain, fatigue and memory loss, should alert the family physician to consider the possibility of alcohol-related problems (Table 1).
The first signs of heavy drinking may be social problems. The compulsion to drink causes persons to neglect social responsibilities and relationships in favor of drinking. Intoxication may lead to accidents, occasional arrest or job loss. Recovering from drinking can decrease job performance or family involvement. Social problems that indicate alcohol-use disorders include family conflict, separation or divorce, employment difficulties or job loss, arrests and motor vehicle accidents.
The most effective tool for diagnosing alcohol-related problems is a thorough history of the drinking behavior and its consequences. The National Institute on Alcohol Abuse and Alcoholism (NIAAA) has published The Physician's Guide to Helping Patients with Alcohol Problems, which presents a brief model for screening and assessing problems with alcohol.12 NIAAA recommends screening for alcohol-related problems during routine health examinations, before prescribing a medication that interacts with alcohol and in response to the discovery of medical problems that may be related to alcohol use (Table 1).
Screening questions are listed in Table 4. The first four questions are related to alcohol consumption. One drink is defined as 12 g of pure alcohol, which is equal to one 12-oz can of beer, one 5-oz glass of wine or 1.5 oz (one jigger) of hard liquor.7,12 NIAAA also recommends using the CAGE13 questionnaire to screen patients for alcohol use (Table 5). The CAGE questions are widely used in primary care settings and have high sensitivity and specificity for identifying alcohol problems.14 Among patients who screen positive for alcohol-related problems, additional questions should include the family history of alcohol abuse as well as family, legal, employment and health problems related to drinking.
|All patients||Use||Do you drink alcohol, including beer, wine or distilled spirits?|
|Current drinkers||Frequency||On average, on how many days per week do you drink alcohol?|
|Quantity||On a typical day when you drink, how many drinks do you have?|
|Heaviest use||What is the maximum number of drinks you had on any given occasion during the past month?|
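To make the quantity and frequency questions above concrete, the following sketch (illustrative only, not part of the NIAAA guide) converts reported beverages into standard drinks using the equivalences given earlier (one drink, about 12 g of pure alcohol, equals a 12-oz beer, a 5-oz glass of wine, or 1.5 oz of liquor) and totals weekly consumption.

```python
# Ounces of beverage that correspond to one standard drink (about 12 g of
# alcohol), per the definitions given in the text.
OUNCES_PER_DRINK = {"beer": 12.0, "wine": 5.0, "liquor": 1.5}

def standard_drinks(beverage: str, ounces: float) -> float:
    """Convert a quantity of beer, wine, or liquor into standard drinks."""
    return ounces / OUNCES_PER_DRINK[beverage]

def weekly_total(entries) -> float:
    """entries: iterable of (beverage, ounces, days_per_week) tuples."""
    return sum(standard_drinks(bev, oz) * days for bev, oz, days in entries)

# Example: two 12-oz beers on 3 days plus one 5-oz glass of wine on 2 days
# is 6 + 2 = 8 standard drinks per week.
print(weekly_total([("beer", 24, 3), ("wine", 5, 2)]))
```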
Other screening questionnaires are available and may perform better than the CAGE questionnaire. A recent study demonstrated the superiority of the AUDIT instrument in a Veterans Administration population (Table 6).15 The TWEAK and AUDIT questionnaires performed better than the CAGE questionnaire in women (Table 7).16
|The following questions pertain to your use of alcoholic beverages during the past year. A “drink” refers to a can or bottle of beer, a glass of wine, a wine cooler, or one cocktail or shot of hard liquor.|
|1. How often do you have a drink containing alcohol? (Never, 0 points; ≤ monthly, 1 point; 2 to 4 times per month, 2 points; 2 to 3 times per week, 3 points; ≥ 4 times per week, 4 points)|
|2. How many drinks containing alcohol do you have on a typical day when you are drinking? (1 to 2 drinks, 0 points; 3 to 4 drinks, 1 point; 5 to 6 drinks, 2 points; 7 to 9 drinks, 3 points; ≥ 10 drinks, 4 points)|
|3. How often do you have 6 or more drinks on 1 occasion? (Never, 0 points; < monthly, 1 point; monthly, 2 points; weekly, 3 points; daily or almost daily, 4 points)|
|4. How often during the past year have you found that you were not able to stop drinking once you had started? (Scoring same as question No. 3)|
|5. How often during the past year have you failed to do what was normally expected from you because of drinking? (Same as question No. 3)|
|6. How often during the past year have you needed a first drink in the morning to get yourself going after a heavy drinking session? (Same as question No. 3)|
|7. How often during the past year have you had a feeling of guilt or remorse after drinking? (Same as question No. 3)|
|8. How often during the past year have you been unable to remember what happened the night before because you were drinking? (Same as question No. 3)|
|9. Have you or someone else been injured as a result of your drinking? (No, 0 points; yes, but not in the past year, 2 points; yes, during the past year, 4 points)|
|10. Has a relative or friend, or a doctor or other health care worker, been concerned about your drinking or suggested you cut down? (Same as question No. 9)|
|Tolerance: How many drinks can you hold (“hold” version; ≥ 6 drinks indicates tolerance), or how many drinks does it take before you begin to feel the first effects of the alcohol? (“high” version; ≥ 3 indicates tolerance)|
|Worried: Have close friends or relatives worried or complained about your drinking in the past year?|
|Eye openers: Do you sometimes take a drink in the morning when you first get up?|
|Amnesia: Has a friend or family member ever told you about things you said or did while you were drinking that you could not remember?|
|Kut down: Do you sometimes feel the need to cut down on your drinking?|
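As a small illustration of how the AUDIT point allocations listed above can be tallied (a sketch only, not an official scoring implementation), each of the 10 items contributes the points shown and the total is their sum; any interpretive cutoff should be taken from the AUDIT guidelines rather than from this example.

```python
from typing import Sequence

def audit_total(item_points: Sequence[int]) -> int:
    """Sum the points already assigned to the 10 AUDIT items.

    Items 1-8 are scored 0-4; items 9 and 10 are scored 0, 2, or 4,
    per the point allocations shown in the questionnaire above.
    """
    if len(item_points) != 10:
        raise ValueError("AUDIT has exactly 10 items")
    for i, pts in enumerate(item_points, start=1):
        if not 0 <= pts <= 4:
            raise ValueError(f"item {i}: points must be between 0 and 4")
        if i >= 9 and pts not in (0, 2, 4):
            raise ValueError(f"item {i}: points must be 0, 2, or 4")
    return sum(item_points)

# Example: a patient who drinks 2 to 4 times per month (1 point), has 3 to 4
# drinks on a typical drinking day (1 point), and answers "never"/"no" to the rest.
print(audit_total([1, 1, 0, 0, 0, 0, 0, 0, 0, 0]))  # -> 2
```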
In the early stages of alcohol-related problems, the physical examination provides little evidence to suggest excessive drinking. Patients who abuse alcohol may have mildly elevated blood pressure but few other abnormal physical findings. Later, patients may develop significant and obvious signs of alcohol overuse, including gastrointestinal findings such as an enlarged and sometimes tender liver; cutaneous findings such as spider angiomata, varicosities and jaundice; neurologic signs such as tremor, ataxia or neuropathies; and cardiac arrhythmias. When patients arrive at the doctor's office inebriated, one should suspect a longstanding drinking problem.
Certain chemical markers are indicative but not diagnostic of alcohol-use disorders.1,8,17 Among liver function tests, the γ-glutamyl transferase (GGT) level is usually the first to become elevated, followed by the aspartate aminotransferase (AST) level, which is often twice the level of alanine aminotransferase (ALT).
The complete blood cell count may display a number of abnormalities. In cases of end-stage disease, all cell lines are reduced as a direct toxic effect of alcohol on the bone marrow. The prothrombin time (PT) is elevated because of decreased production of clotting factors by the liver. However, in early disease mean corpuscular volume (MCV) may be slightly elevated as a result of folate deficiency and the direct effects of alcohol on red blood cells. Patients with alcoholic gastritis may lose blood through the gastrointestinal tract, causing anemia and the production of smaller red blood cells, resulting in a low MCV. If both processes occur, the MCV will be normal, but the red cell distribution width will be elevated (around 20). Blood loss in the gastrointestinal tract may also cause iron deficiency.
Diagnosis and Classification
An accurate diagnosis of alcohol abuse or dependence requires a thorough medical history. Medical markers such as gastrointestinal problems or elevated liver enzymes are cause for suspicion but are not diagnostic. For example, using a GGT level higher than 40 to detect alcohol problems in a primary care population results in a sensitivity of 44 to 54 percent and a specificity of 80 to 84 percent.17 In contrast, a CAGE questionnaire with three or more positive responses is 100 percent sensitive and 81 percent specific for current alcohol dependence.18
NIAAA categorizes heavy drinkers into three groups: at-risk drinkers, problem drinkers (parallel to the DSM-IV diagnosis of “alcohol abuse”), and alcohol-dependent drinkers (parallel to the DSM-IV diagnosis of “alcohol dependence”). Table 8 describes the NIAAA assessment of alcohol-related problems.12
In the absence of medical, social or psychologic consequences of drinking, men who have more than 14 drinks per week or more than four drinks per occasion are considered “at risk” for developing problems related to drinking. Similarly, women who have more than 11 drinks per week or more than three drinks per occasion are “at risk.” Because some drinkers significantly underreport their alcohol use, physicians should define patients as “at risk” when they have a positive CAGE score or a personal or family history of alcohol-related problems (Table 8).
Patients who have current alcohol-related medical, family, social, employment, legal or emotional problems are considered “problem drinkers” regardless of their drinking patterns or responses to the CAGE questions (Table 8). Typically, these patients score 1 or 2 on the CAGE questionnaire and drink above “at-risk” levels.
|Severity of problem||Criteria|
|At risk||Men: >14 drinks per week, > 4 drinks per occasion|
|Women: >11 drinks per week, > 3 drinks per occasion, or|
|CAGE score of 1 or higher for past year, or|
|Personal or family history of alcohol problems|
|Current problem||CAGE score of 1 or 2 for past year, or|
|Alcohol-related medical problems, or|
|Alcohol-related family, legal or employment problems|
|Alcohol dependent||CAGE score of 3 or 4 for past year, or|
|Compulsion to drink, or|
|Impaired control over drinking, or|
|Relief drinking, or|
|Withdrawal symptoms, or|
|Tolerance to alcohol|
Patients drinking above the “at-risk” level who have CAGE scores of 3 or 4 should be questioned about their drinking compulsions, tolerance to alcohol and withdrawal symptoms (Table 2). Those who display these traits are considered “alcohol dependent.”
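The criteria in Table 8 can be read as a simple decision rule. The sketch below is one possible rendering of that logic for illustration only; it is not a validated clinical tool, the field names are assumptions, and the "below at-risk limits" label is added here for completeness.

```python
from dataclasses import dataclass

@dataclass
class DrinkingAssessment:
    # Field names are illustrative assumptions, not a standardized instrument.
    sex: str                      # "male" or "female"
    drinks_per_week: float
    max_drinks_per_occasion: float
    cage_score: int               # 0-4, for the past year
    personal_or_family_history: bool = False
    alcohol_related_medical_problems: bool = False
    alcohol_related_social_problems: bool = False   # family, legal, or employment
    compulsion_or_impaired_control: bool = False    # includes relief drinking
    withdrawal_or_tolerance: bool = False

def niaaa_severity(a: DrinkingAssessment) -> str:
    """Classify per Table 8, checking from most to least severe category."""
    if a.cage_score >= 3 or a.compulsion_or_impaired_control or a.withdrawal_or_tolerance:
        return "alcohol dependent"
    if (a.cage_score in (1, 2) or a.alcohol_related_medical_problems
            or a.alcohol_related_social_problems):
        return "current problem"
    if a.sex == "male":
        heavy = a.drinks_per_week > 14 or a.max_drinks_per_occasion > 4
    else:
        heavy = a.drinks_per_week > 11 or a.max_drinks_per_occasion > 3
    if heavy or a.cage_score >= 1 or a.personal_or_family_history:
        return "at risk"
    return "below at-risk limits"

# Example: a man drinking 18 drinks per week with a CAGE score of 0 screens as "at risk".
print(niaaa_severity(DrinkingAssessment("male", 18, 3, 0)))
```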
Primary Care Interventions
The physician should direct intervention efforts based on consideration of two important factors: the severity of the alcohol problem and the patient's readiness to change the drinking behavior.
SEVERITY OF THE ALCOHOL PROBLEM
In patients who show evidence of alcohol dependence, the therapeutic end points should be abstinence from alcohol and referral to a specialized alcohol treatment program. Decisions about inpatient or outpatient treatment depend on the patient's likelihood of alcohol withdrawal, resources, employment status, family support system, access to treatment programs and motivation. Patients who resist formal treatment may prefer peer-directed groups, such as those offered by Alcoholics Anonymous, in conjunction with physician counseling and support. Al-Anon groups are available for adult family members of alcohol-dependent individuals. Abstinence is also indicated for non–alcohol-dependent patients who are pregnant, have comorbid medical conditions, take medications that interact with alcohol or have a history of repeated failed attempts to reduce their alcohol consumption.12
In patients who are at risk for developing alcohol-related problems or who have evidence of current problems, the therapeutic end point should be drinking at low-risk limits: for men, no more than two drinks with alcohol per day; for women or older persons (over 65) no more than one drink per day.12
READINESS TO CHANGE
A rare patient will present to the physician with the request for help in giving up alcohol. When persons change lifestyle behaviors such as tobacco or alcohol use, they typically move through stages of change: precontemplation (not ready for change), contemplation (ambivalence about change), preparation (planning for change), action (the act of change) and maintenance (maintaining the new behavior).19 This model of change can be pictured as a continuum, with a person moving back and forth among the stages, depending on the personal day-to-day costs and benefits of that behavior. Relapse is common and does not indicate a “failed” intervention. Contemplation (ambivalence) is the most common stage of change. One study found that 29 percent of hospitalized patients with alcohol disorders were uninterested in changing, 45 percent were ambivalent and 26 percent were ready to change their drinking behavior.20
Some experts consider precontemplation to be a synonym for alcoholic denial, that is, a refusal to acknowledge problems. However, others21 do not find the concept of denial useful when working with patients with alcohol disorders. They note that direct or confrontational counseling strategies are likely to evoke resistance in patients, which, in turn, will be labeled “denial.” Furthermore, their work demonstrates that even patients who do not admit to an alcohol problem can change their behaviors. Personal decisions about lifestyle changes evolve slowly over time, requiring much reflection, with repeated attempts at change and repeated setbacks. Patients will not leap from the precontemplation stage into the action stage after one clinic visit, no matter how insightful or aggressive the practitioner. The goal of each visit should be to help the patient move along the continuum of change toward a reduction in alcohol use.
With the stage-of-change continuum in mind, physicians should tailor interviews according to the patient's stage.20 In clinical settings, a good assessment is itself an intervention, stimulating patients to reflect on their drinking behavior. Well-intentioned advice, a familiar tool among physicians, works best with patients who are preparing for change. A physician who tries direct persuasion with an ambivalent patient risks pushing the patient toward resistance. However, at any stage, urgent persuasion is appropriate in patients requiring immediate change: a pregnant woman who drinks heavily or patients with severe medical, psychologic or social problems related to alcohol use. Even in these circumstances, resistance to direct advice is likely. When giving advice, physicians should avoid prescriptive directions. Instead, physicians can educate patients about the consequences in an objective manner: “Drinking affects the fetus in this way....” This information is most effective when it addresses issues that directly concern the patient.
Rollnick and colleagues18 have developed a menu of brief strategies for the primary care-giver, based on a model of counseling called “motivational interviewing” (Table 9).20 In all patients, the physician should begin by directing the interview toward understanding the drinking behavior and how it fits into patients' lives. Among patients in the precontemplation stage, this assessment is the complete intervention. In the contemplation stage, the physician should explore patients' ambivalence toward change, including reasons to quit and reasons to continue drinking. At this point, patients may be receptive to information about the effects of alcohol. In the later stages, the physician may acquaint patients with helpful community resources such as Alcoholics Anonymous or formal treatment programs, and help them anticipate and prepare for temptations and setbacks.
|Strategies||Stage of change||Description|
|Lifestyle, stresses and alcohol use||Precontemplation||Discuss lifestyle and life stresses|
|“Where does your use of alcohol fit in?”|
|Health and alcohol use||Precontemplation||Ask about health in general|
|“What part does your drinking play in your health?”|
|A typical day||Precontemplation||“Describe a typical day, from beginning to end.|
|How does alcohol fit in?”|
|“Good” things and “less good” things||Contemplation||“What are some good things about your use of alcohol?|
|“What are some less good things?”|
|Providing information||Contemplation||Ask permission to provide information|
|Deliver information in a nonpersonal manner|
|“What do you make of all this?”|
|The future and the present||Contemplation||“How would you like things to be different in the future?”|
|Exploring concerns||Preparation or action||Elicit the patient's reasons for concern about alcohol use|
|List concerns about changing behavior|
|Helping with decision-making||Preparation or action||“Given your concerns about drinking, where does this leave you now?”|
The goal of these strategies is to help patients develop their own rationale for change and to nudge them in the direction of a healthier lifestyle. This nondirective approach removes the element of resistance because the patient does the work: the patient reflects on the ways alcohol fits into his or her life, weighs the personal costs and benefits of drinking, provides the arguments for change and makes the decision to quit drinking. The physician's job is simply to elicit information, encourage patients to reflect and support their movement toward healthy change.
Excessive alcohol use can affect every part of a person's life, causing serious medical problems, family conflict, legal difficulties and job loss. Family physicians, with training in biomedical and psychosocial issues and access to family members, are in a good position to recognize problems related to alcohol use and to assist patients with lifestyle change. NIAAA provides simple guidelines for alcohol screening, based on a thorough drinking history and a sound understanding of the pattern of consequences. Physicians who are sensitive to these issues will find alcohol-use disorders easier to diagnose, and physicians who motivate their patients to reflect on their drinking will encourage recovery.
Volume 19, Number 9—September 2013
Enzootic and Epizootic Rabies Associated with Vampire Bats, Peru
During the past decade, incidence of human infection with rabies virus (RABV) spread by the common vampire bat (Desmodus rotundus) increased considerably in South America, especially in remote areas of the Amazon rainforest, where these bats commonly feed on humans. To better understand the epizootiology of rabies associated with vampire bats, we used complete sequences of the nucleoprotein gene to infer phylogenetic relationships among 157 RABV isolates collected from humans, domestic animals, and wildlife, including bats, in Peru during 2002–2007. This analysis revealed distinct geographic structuring that indicates that RABVs spread gradually and involve different vampire bat subpopulations with different transmission cycles. Three putative new RABV lineages were found in 3 non–vampire bat species that may represent new virus reservoirs. Detection of novel RABV variants and accurate identification of reservoir hosts are critically important for the prevention and control of potential virus transmission, especially to humans.
Rabies virus (RABV; family Rhabdoviridae, genus Lyssavirus) is a bullet-shaped, single-stranded, negative-sense RNA virus with a 12-kb genome that encodes 5 structural proteins: nucleoprotein (N), phosphoprotein, matrix protein, glycoprotein, and polymerase (1). Over the course of its evolutionary history, RABV has established independent transmission cycles in diverse species of mesocarnivores and bats. Rabies disease remains a serious public health concern in several countries of Asia, Africa, and the Americas, where it is estimated that >50,000 fatal infections occur annually (2).
In Latin America, rabies diseases are classified into 2 major epidemiologic forms, urban rabies and sylvatic rabies. For the former, dogs are the main viral reservoir host; for the latter, several species of wild carnivores and bats maintain independent rabies enzootics. Because of the widespread control of urban rabies through vaccination of domestic dogs, the common vampire bat (Desmodus rotundus) has emerged as the principal RABV reservoir host along the species’ natural range from Mexico to South America (3,4). The transmission and maintenance of RABV in natural populations of D. rotundus bats remains poorly understood, particularly within ongoing epizootics and enzootics occurring in different regions of the Americas (5,6). Active programs for the control of vampire bat–associated rabies in Latin America rely primarily on reduction of vampire bat populations by culling (7,8). Nonetheless, cross-species transmission to humans and domestic animals persists, even in areas where culling occurs regularly.
In Peru and other countries within the Amazon rainforest region, RABV transmitted by vampire bats has acquired greater epidemiologic importance because of the more frequent detection of human rabies outbreaks. This increase may reflect enhanced laboratory-based surveillance; increased awareness among public health stakeholders; or ecologic changes that promote greater contact between bats and humans, such as depletion of vampire bats’ natural prey community through hunting or habitat fragmentation. During 2002–2007, a total of 293 (77%) of the rabies cases diagnosed by the Instituto Nacional de Salud in Peru were associated with vampire bat RABV variants; the remaining 87 (23%) were attributed to RABV variants associated with dogs. In communities where vampire bats commonly feed on humans, the frequency of outbreaks depends on the transmission dynamics within the local vampire bat populations (9,10). Unfortunately, recent outbreaks in native communities of the Amazon region have been poorly characterized because of cultural constraints and local beliefs that have precluded investigators from obtaining diagnostic specimens (11).
Molecular epidemiology has been extensively used to determine RABV reservoir hosts in a given region or country, define the geographic distribution of the disease associated with those hosts, infer the temporal and spatial spread of the disease, identify spillover infections to nonreservoir species, describe novel RABV variants, and detect putative host shifts (12). The spatiotemporal epidemiology and genetic diversity of vampire bat–associated rabies in Peru have not been explored; a laboratory-based investigation conducted in 1999 addressed the comprehensive characterization of RABV in only 2 humans (11). Given the increasing importance of vampire bat–associated rabies in the Peruvian Amazon, comprehensive surveys of virus diversity and elucidation of geographic boundaries are needed to clarify the frequency and duration of rabies outbreaks. The goals of our study were to 1) determine the genetic diversity and geographic distribution of RABV infection associated with vampire bats; 2) clarify disease dissemination trends among affected areas; 3) detect the origins of spillover infections to other mammals; and 4) identify novel RABV lineages.
During 2002–2007, decentralized units of the Ministry of Health of Peru collected 157 brain samples from multiple species and geographic regions of Peru (Technical Appendix Table 1). Samples were selected on the basis of identification of vampire bat or any other bat-associated rabies virus variant by using a panel of 8 monoclonal antibodies, as described (12). The specimens included samples from 98 cows, 26 bats, 12 humans, 9 horses, 5 goats, 2 dogs, 2 donkeys, 1 kinkajou, 1 pig, and 1 sheep. Most samples (n = 118) originated from the departments of Apurimac, Ayacucho, Cusco, Madre de Dios, and Puno, located in the southern region of the country, which is made up of inter-Andean valleys and Amazon rainforest. Twenty-six samples were from the departments of San Martin, Amazonas, Cajamarca, and Lambayeque, located in the northern region, which comprises the Andean mountains and Amazonian forests. The remaining 13 samples were from the departments of Pasco, Huanuco, and Ucayali in the central Amazon. All samples were submitted to the reference laboratory of the Instituto Nacional de Salud for RABV confirmation by fluorescent antibody testing (13).
PCR and Sequencing
Total RNA was extracted from each sample after a single passage in mouse brains by using TRIzol (Invitrogen, Carlsbad, CA, USA), according to the manufacturer’s specifications. Amplification of the complete N gene was achieved by reverse transcription PCR through 2 overlapping reactions by use of 3 published primers (Lys001, 550F, and 304) and a modified version of primer 1066degB (14–16). The primer sequences were as follows: Lys001, 5′-ACGCTTAACGAMAAA-3′; 1066degB, 5′-TCYCTGAAGAATCTTCTYTC-3′; 550F, 5′-ATGTGYGCTAAYTGGAGYAC-3′; and 304, 5′-TTGACGAAGATCTTGCTCAT-3′ (14–16). PCR products were visualized on 1.5% agarose gels, and amplicons of the expected size were purified by using ExoSAP-IT (USB Products Affymetrix, Inc., Cleveland, OH, USA). Cycle sequencing reactions were conducted by using the BigDye Terminator v1.1 Cycle Sequencing Kit (Applied Biosystems, Foster City, CA, USA), according to the manufacturer’s instructions. Products were analyzed on an ABI 3730 DNA analyzer (Applied Biosystems, Grand Island, NY, USA). Chromatograms were edited by using BioEdit (17), and sequences were assembled by using the fixed RABV strain SAD B19 (GenBank accession no. M31046) as a template (18). Multiple sequence alignments were generated by using ClustalW (19).
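As an illustration of how a summary statistic such as the pairwise nucleotide identity reported below can be computed from an alignment of this kind, the following Python sketch uses Biopython. It is not part of the study’s workflow, and the input file name is hypothetical.

```python
# Illustrative sketch only (not from the original study): pairwise nucleotide
# identity across an N-gene alignment, using Biopython.
# "n_gene_alignment.fasta" is a hypothetical ClustalW-style alignment
# exported in FASTA format.
from itertools import combinations
from Bio import AlignIO

alignment = AlignIO.read("n_gene_alignment.fasta", "fasta")

def pairwise_identity(seq1, seq2):
    """Percent identity over columns where neither sequence has a gap."""
    pairs = [(a, b) for a, b in zip(str(seq1), str(seq2)) if a != "-" and b != "-"]
    matches = sum(1 for a, b in pairs if a.upper() == b.upper())
    return 100.0 * matches / len(pairs)

identities = [pairwise_identity(rec1.seq, rec2.seq)
              for rec1, rec2 in combinations(alignment, 2)]
print(f"minimum {min(identities):.1f}%, maximum {max(identities):.1f}%, "
      f"mean {sum(identities) / len(identities):.1f}%")
```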
For phylogenetic reconstructions, we retrieved complete and partial RABV sequences from GenBank that represented historical and ongoing rabies epizootics in the Americas (Technical Appendix Table 2). Other Lyssavirus species, such as European bat lyssavirus (EBLV) 1 (U22845) and EBLV-2 (U22846), were included as outgroups (20). Phylogenetic reconstructions using complete N gene sequences of the 157 isolates from Peru and 83 from GenBank were generated by using the neighbor-joining (NJ) method in MEGA 4.0 (21), assuming the maximum composite likelihood nucleotide substitution model. The statistical significance of branch partitions was assessed with 1,000 bootstrap replicates. We also estimated a time-scaled phylogenetic tree for the dataset comprising RABVs associated with D. rotundus (154 isolates from Peru and 58 from GenBank) by using BEAST version 1.7 (22), which uses a Bayesian coalescent framework to estimate evolutionary parameters from many possible genealogies through Markov chain Monte Carlo sampling. Our analysis used the Bayesian skyline model of population growth as a flexible demographic prior and the relaxed lognormal molecular clock to allow for rate variation among branches of the tree. Substitution models for coding positions 1+2 (CP12) and CP3 were unlinked, and substitution models in each coding position were selected by Akaike Information Criterion in jModeltest (23). The general time reversible model, including invariant sites and Γ distributed site heterogeneity, was applied to CP12, and time reversible model + Γ was applied to CP3. Four replicate Markov chain Monte Carlo analyses were run for 60 million generations each and combined for final estimates and construction of the maximum-clade credibility tree. Convergence across runs, appropriate burn-in periods, and effective sample sizes >200 were assessed by using Tracer (http://beast.bio.ed.ac.uk/Tracer).
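For readers who want to reproduce a comparable distance-based tree outside of MEGA, the sketch below shows a neighbor-joining reconstruction with Biopython. This is a simplified stand-in rather than the study’s pipeline: Biopython does not provide the maximum composite likelihood distance, so an identity-based distance is substituted, bootstrap analysis and outgroup rooting are omitted, and the file names are hypothetical.

```python
# Illustrative neighbor-joining sketch using Biopython (not the MEGA 4.0
# maximum composite likelihood workflow used in the study). The identity-based
# distance is a simplified stand-in, and "n_gene_alignment.fasta" is a
# hypothetical input file.
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("n_gene_alignment.fasta", "fasta")

# Pairwise distance matrix (1 minus the fraction of identical sites)
distance_matrix = DistanceCalculator("identity").get_distance(alignment)

# Neighbor-joining reconstruction from the distance matrix
nj_tree = DistanceTreeConstructor().nj(distance_matrix)

# Midpoint rooting for readability (the study rooted with EBLV outgroups),
# then save in Newick format and print a quick ASCII preview
nj_tree.root_at_midpoint()
Phylo.write(nj_tree, "nj_tree.nwk", "newick")
Phylo.draw_ascii(nj_tree)
```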
Phylogeny of RABV Isolates
Complete N gene sequences (1,350 nt, excluding the stop codon) were obtained from 157 specimens from humans, domesticated animals, and wildlife from 12 of the 24 departments of Peru (Technical Appendix Table 1). Pairwise similarity ranged from 85.9% to 100%, with an average pairwise identity of 97.3%. The NJ phylogenetic analysis demonstrated 2 major RABV clusters, 1 associated with D. rotundus bats and 1 associated with insectivorous bats. The D. rotundus cluster was subsequently subdivided into 4 lineages, I–IV, each with a distinctive geographic distribution within Peru; the RABVs associated with insectivorous bats segregated into 3 independent RABV lineages not previously reported in Peru (Figure 1).
Sequences within lineage I showed a widespread spatiotemporal distribution. Isolates were obtained from the departments of Amazonas, San Martin, Cajamarca, Huanuco, Ucayali, Pasco, Ayacucho, Cusco, and Madre de Dios. Inclusion of the reference sequences from GenBank revealed that lineage I had an extended spatiotemporal distribution over northern regions of South America, encompassing Ecuador and Colombia, during 1997–2007 (Figure 1) (24). Conversely, isolates in lineage II were detected predominantly during a human rabies outbreak in the Madre de Dios and Puno departments in 2007. This lineage also grouped with an isolate from a sample found in the Cusco department in 2003 (GenBank accession no. JX648444) and with several RABV sequences reported in Brazil and Uruguay during 2004–2008 (24). These findings indicate that lineage II has been circulating at a larger geographic scale, perhaps reflecting virus dispersion across the Amazon region and southern South America (Figure 1). Two isolates (GenBank accession nos. JX648544 and JX648543) grouped into lineage III as an independent cluster unrelated to any previously described RABV (Figure 1). These samples were collected in 2006 from the Pozuzo district, which is located on the eastern side of the Pasco department in the central Peruvian Amazon.
Lineage IV was the most frequently identified lineage among the isolates collected in Peru, encompassing 98 of the 157 isolates obtained during 2002–2007. These results indicate this lineage’s high prevalence in cattle in the Andes. Its geographic distribution comprised the valleys of Ayacucho and Apurimac, located at 1,200–3,500 m above sea level, and extended into Cusco and north into San Martin, Lambayeque, and northern Colombia. Although this lineage was predominantly collected from livestock, it was also obtained from vampire bats (n = 24).
Evolution of Vampire Bat–Associated RABV in Peru
By applying a Bayesian coalescent analysis to 212 serially sampled partial N sequences (1,275-bp), we inferred the time scale of RABV evolution in lineages associated with vampire bats. Consistent with previous estimates, the median rate of nucleotide substitution of vampire bat–associated RABV was 9.76 × 10−4 substitutions per site per year (95% highest posterior density [HPD] 6.81 × 10−4 to 1.3 × 10−3). These results would place the most recent common ancestor (MRCA) of contemporary vampire bat–associated RABVs as occurring in Peru in 1933 (95% HPD 1889–1962) (25). The maximum clade credibility tree (Figure 2) demonstrated similar topology to the NJ tree (Figure 1) when broader datasets were used, with vampire bat–associated RABVs differentiated into 4 phylogenetic lineages (Figure 2). As in the NJ trees, a deep division at the MRCA of vampire bat–associated RABVs separated lineages I and II from lineages III and IV (posterior probability [PP] 1.0). Major lineages appear to have been circulating for similar periods in Peru, each originating 33–44 years ago (when including the stem branch leading to current viral diversity), with extensive overlap of the 95% HPDs of the time since the MRCA for each lineage. Each RABV lineage in Peru except lineage III shared common ancestors with viruses circulating in other Latin American countries, indicating multiple viral dispersion events into or out of Peru; however, overlap of the 95% HPDs on the age of samples from Peru compared with those from other countries limited direct inference on the directionality of movement between countries. Within lineage I, samples from Ecuador and Colombia were interspersed with contemporary samples from Peru, which suggests a relatively recent spatial spread among countries. In addition, historical introductions of a similar RABV were indicated by strong posterior support (PP 0.99) for an MRCA between lineage I and samples from Colombia, Trinidad, and French Guyana in about 1973. Isolates related to lineage II were detected in Brazil and Uruguay; however, strong spatiotemporal clustering apparently separated distinct epizootics in Brazil in 2004 and Uruguay and Brazil in 2007–2008 from the human outbreak in southern Peru in 2007. A sample from a cow collected in 2003 in Peru was ancestral to samples from the 2007 human outbreak in Peru (PP 1), rather than grouping with the more contemporaneous viruses circulating in Brazil in 2004, indicating that this virus may have circulated in Peru for >4 years before the 2007 outbreak.
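As a rough consistency check (ours, not part of the published analysis), the nucleotide divergence expected between two contemporary isolates under a strict molecular clock is approximately twice the substitution rate multiplied by the time back to their common ancestor. Using the median rate and MRCA date reported above, and taking 2007 as the latest sampling year:

```latex
% Back-of-the-envelope expectation under a strict clock (illustrative only)
d \approx 2\mu t
  = 2 \times \left(9.76 \times 10^{-4}\ \tfrac{\text{substitutions}}{\text{site}\cdot\text{year}}\right)
    \times (2007 - 1933)\ \text{years}
  \approx 0.14\ \text{substitutions per site}
```

This figure, roughly 14% divergence, is of the same order as the largest pairwise distances observed in the dataset; the published estimates themselves rely on a relaxed clock and codon-partitioned substitution models rather than this simplification.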
As in the NJ tree, lineage III was most closely related to lineage IV (PP 0.99) but was highly divergent, sharing an MRCA in 1963 (95% HPD 1940–1979). The large genetic distance from other lineages indicates that these sporadic cases were not recently introduced from other RABV lineages circulating elsewhere in Peru but rather were part of a previously unknown vampire bat–associated rabies enzootic. No samples from other countries clustered with the lineage IV samples from Peru, suggesting that this virus has been maintained independently as a widespread rabies focus that covers the inter-Andean valleys (Ayacucho, Apurimac) and the rainforest of northern Peru (San Martin). The closest relatives of lineage IV were viruses collected in Colombia during 1994–2008, which diverged from the samples from Peru around 1972 (95% HPD 1957–1984), consistent with the enzootic maintenance of rabies over long periods within Peru.
Novel RABV Associated with Other Wildlife in Peru
We identified 3 novel RABV variants in wildlife other than vampire bats in the southeastern region of Peru. The first variant (GenBank accession no. JX648546) was isolated from a kinkajou (Potos flavus) in Madre de Dios in the Amazon rainforest in southern Peru. This variant was not closely related to any previously described RABV but grouped within the larger diversity of bat-associated RABVs in the Americas (Figure 1). A second RABV variant (GenBank accession no. JX648545) was isolated from a small big-eared brown bat (Histiotus montanus) in Puno in southern Peru. This variant was related to RABVs found in bats of the genera Histiotus, Nyctinomops, and Tadarida from Chile and Brazil but appears to be an independent lineage; its branch length and average pairwise divergence of 5% separate it from its closest relatives. A third variant (GenBank accession no. JX648547) was found during 2008 in Paucartambo, Cusco, which is located in an inter-Andean valley at 2,900 m altitude. This sample clustered with sequences from unidentified bats from Brazil (GenBank accession nos. AB297651 and AB297656). Unfortunately, the bat from which this sample was obtained was not available for taxonomic identification.
Rabies epidemiology has experienced dramatic changes in Latin America during the past 4 decades because of the implementation of highly effective strategies for prevention and control of infection in dogs and the procurement of adequate postexposure prophylaxis for humans. In 2003, human rabies cases transmitted by bats outnumbered cases transmitted by dogs in Latin America (4), and that trend has continued. The increasing detection of RABV infection in humans in the Peruvian Amazon and the persistence of vampire bat–transmitted RABV infection in livestock highlight the need to clarify the diversity of RABV lineages circulating in Peru and the spatiotemporal dynamics of RABVs associated with vampire bats. We completed phylogenetic analysis of bat-associated RABVs collected in Peru, using samples collected from rabies-endemic areas in the Andes, during sporadic human outbreaks in the Amazon, and from previously unsurveyed wildlife host species. Our study revealed that at least 4 phylogenetic lineages of RABV are circulating in vampire bat populations in Peru; these lineages appeared to display distinctive spatiotemporal dynamics across their geographic ranges. Three of the lineages had wide geographic distributions in Peru and recent and historical relationships linked to rabies outbreaks occurring in other parts of South America (24,26–32). Dissemination of vampire bat–associated RABV appears to be gradual rather than involving long-distance dispersal events, as might be expected given the absence of long-distance migration and the small home range of the reservoir species (33,34). Spatiotemporal analysis of lineage I, II, and IV RABVs showed that their broad distribution ranges were covered over periods of no less than 3–4 decades. The specific movement of vampire bat–associated RABVs is difficult to assess, but the phylogenetic and evolutionary analyses we conducted indicate that lineages I and IV spread from north to south, whereas lineage II spread from south to north. Lineage III had a restricted distribution in central Peru, which suggests it was part of a long-term vampire rabies enzootic that disappeared from Peru around 2006. Hence, in contrast to lineages I and IV, the local dynamics for lineage III were epizootic rather than enzootic. Understanding the factors linked to the limited geographic distribution and apparent extinction of lineage III is important for preparing improved prevention and control practices.
Vampire bats are not a migratory species and usually inhabit places below 1,800 m altitude. Nonetheless, they may occasionally move relatively long distances and inhabit higher altitudes in response to limited food or roost availability. Movement encouraged by food supplementation may be illustrated by the distribution dynamics observed for lineage IV, which currently is found mainly along the inter-Andean valleys, an important cattle-raising area in Peru with an average altitude >2,000 m (35). Our data suggest that the incursion of lineage IV into the inter-Andean valleys is relatively recent (30–40 years ago) and probably occurred from northern lower lands, consistent with the likely ancestors of this lineage coming from Colombia and Ecuador (Figure 2). Because of the detrimental economic effects of vampire bat–associated rabies on the livestock industry in this region, in 2010, the government of Peru initiated intense control and prevention measures that included culling vampire bats. However, the frequency of rabies cases in livestock has been unaffected (6).
Our study showed that different RABV lineages may overlap temporally and geographically, which indicates that, within a rabies-enzootic region, convergence or cocirculation of >1 RABV lineage may occur, perhaps in association with the maintenance of independent rabies foci by distinct vampire bat metapopulations. This observation could affect effective planning of prevention and control strategies because 1 focal point might be vulnerable to rabies reintroduction from adjacent foci, a process that could explain the persistence of the disease. Studies of the population structure, gene flow, and dispersal of vampire bats within Peru and throughout South America are necessary to corroborate these observations on the dissemination dynamics of rabies associated with this species.
Although it was not the intent of this study to assess the role of species other than vampire bats in rabies transmission and maintenance, we incidentally identified 3 potentially novel RABV lineages in non–vampire bat hosts. This finding underscores the potential emergence of novel RABV reservoirs in the country and the need for enhanced surveillance for lyssaviruses in potential wild animal reservoirs. In Peru, the surveillance system for the detection and monitoring of human rabies cases associated with bats and other wild animals is passive; that is, cases are recorded only as they are reported. Operationally, the system is less than ideal because, even though most clinical cases of rabies in humans may be recorded, few are laboratory confirmed; consequently, the RABV variants associated with them are not typed. Rabies associated with insectivorous bats is commonly encountered in countries such as the United States, where 1 or 2 cases of rabies occur in humans each year (36). A bat rabies surveillance system such as the one in place in the United States, which tests >20,000 bats and confirms ≈1,400 infections each year, relies heavily on submissions of sick or dead bats to rabies diagnostic facilities by the general public (36). This public participation in the process has been augmented by active educational programs that emphasize the potential risk for rabies transmission from bats to humans, pets, and livestock. Human rabies associated with insectivorous bats has been reported in other countries in Latin America, such as Chile and Mexico (37,38), but the role of these bats in rabies transmission to humans is largely unknown in Peru. Therefore, these transmission cycles should be better understood, and better programs for the taxonomic identification of rabid bats should be implemented.
We also identified RABV in a kinkajou; this strain was not closely related to any known RABV. We could not determine whether this animal represented a single spillover infection from a previously unknown bat reservoir or an emerging host shift with ongoing transmission within kinkajous. Kinkajous are in the same taxonomic family (Procyonidae) as raccoons (Procyon lotor) (39), which are a well-established rabies reservoir in North America. This relationship suggests that kinkajous could serve as an emerging RABV reservoir if the traits that enable the establishment of RABV reservoirs are conserved along the phylogeny of procyonids. Serologic surveys and enhanced surveillance would be useful for further exploring this possibility.
In conclusion, our study demonstrates the presence of diverse RABV lineages associated with vampire bats and several other species in Peru. Although our research was limited by the restrictions of passive surveillance data, RABV lineages in vampire bats appear to show distinct spatiotemporal patterns, with 2 lineages that were abundant and widely distributed throughout the study period and 2 others that occurred more sporadically, consistent with enzootic and epizootic dynamics. Further discrimination of transmission cycles and their drivers will be crucial for prediction of the frequency of outbreaks in humans and domestic animals and, ultimately, for the design of informed strategies for rabies control in this region.
Mr Condori is a guest researcher at the Centers for Disease Control and Prevention and has 10 years of experience in rabies diagnosis and molecular typing. He has a special interest in molecular epidemiology and ecology of rabies.
We thank Charles Rupprecht and Sergio Recuenco for their comments and suggestions on the manuscript, Cecilia Otero and Rebecca Alvarado and the American Fellows program for sponsoring R.E.C-C.’s visit to the Centers for Disease Control and Prevention, and the rabies team of the Instituto Nacional de Salud (Ricardo Lopez, Albina Diaz, Margarita Fernandez and Alejandro Arenas). We also thank all colleagues working in the national network of public health laboratories and personnel of the zoonosis program of Direcciones Regionales de Salud of the Ministry of Health in Peru for their valuable work in rabies surveillance.
This study was sponsored by the Centers for Disease Control and Prevention, Atlanta GA, USA; Instituto Nacional de Salud in Lima, Peru; and the American Fellows program, Partners of the Americas of the USA government.
- Wunner WH, Larson JK, Dietzschold B, Smith CL. The molecular biology of rabies viruses. Rev Infect Dis. 1988;10(Suppl 4):S771–84.
- World Health Organization. WHO Expert Consultation on Rabies. World Health Organ Tech Rep Ser. 2005;931:1–88.
- Kobayashi Y, Sato G, Mochizuki N, Hirano S, Itou T, Carvalho AA, et al. Molecular and geographic analyses of vampire bat–transmitted cattle rabies in central Brazil. BMC Vet Res. 2008;4:44.
- Schneider MC, Romijn PC, Uieda W, Tamayo H, da Silva DF, Belotto A, et al. Rabies transmitted by vampire bats to humans: an emerging zoonotic disease in Latin America? Rev Panam Salud Publica. 2009;25:260–9.
- Delpietro HA, Russo RG. Ecological and epidemiologic aspects of the attacks by vampire bats and paralytic rabies in Argentina and analysis of the proposals carried out for their control. Rev Sci Tech. 1996;15:971–84.
- Streicker DG, Recuenco S, Valderrama W, Gomez Benavides J, Vargas I, Pacheco V, et al. Ecological and anthropogenic drivers of rabies exposure in vampire bats: implications for transmission and control. Proc Biol Sci. 2012;279:3384–92.
- Sétien AA, Brochier B, Tordo N, De Paz O, Desmettre P, Peharpre D, et al. Experimental rabies infection and oral vaccination in vampire bats (Desmodus rotundus). Vaccine. 1998;16:1122–6.
- Arellano-Sota C. Biology, ecology, and control of the vampire bat. Rev Infect Dis. 1988;10(Suppl 4):S615–9.
- Schneider MC, Santos-Burgoa C, Aron J, Munoz B, Ruiz-Velazco S, Uieda W. Potential force of infection of human rabies transmitted by vampire bats in the Amazonian region of Brazil. Am J Trop Med Hyg. 1996;55:680–4.
- Gilbert AT, Petersen BW, Recuenco S, Niezgoda M, Gómez J, Laguna-Torres VA, et al. Evidence of rabies virus exposure among humans in the Peruvian Amazon. Am J Trop Med Hyg. 2012;87:206–15.
- Warner CK, Zaki SR, Shieh WJ, Whitfield SG, Smith JS, Orciari LA, et al. Laboratory investigation of human deaths from vampire bat rabies in Peru. Am J Trop Med Hyg. 1999;60:502–7.
- Velasco-Villa A, Messenger SL, Orciari LA, Niezgoda M, Blanton JD, Fukagawa C, et al. New rabies virus variant in Mexican immigrant. Emerg Infect Dis. 2008;14:1906–8.
- Dean DJ, Abelseth MK, Atanasiu P. The fluorescent antibody test. In: Meslin FX, Kaplan MM, Koprowski H, editors. Laboratory techniques in rabies. 4th ed. Geneva: World Health Organization; 1996. p. 88–95.
- Trimarchi CV, Smith JS. Diagnostic evaluation. In: Jackson AC, Wunner WH, editors. Rabies. 1st ed. San Diego (CA): Academic Press; 2002. p. 308–44.
- Markotter W, Kuzmin I, Rupprecht CE, Randles J, Sabeta CT, Wandeler AI, et al. Isolation of Lagos bat virus from water mongoose. Emerg Infect Dis. 2006;12:1913–8.
- Orciari LA, Rupprecht CE. Rabies virus. In: Murray PR, Jorgensen JH, Baron EJ, Landry ML, Pfaller MA, editors. Manual of clinical microbiology. Washington (DC): ASM Press; 2007. p. 1470–7.
- Hall TA. BioEdit: a user-friendly biological sequence alignment editor and analysis program for Windows 95/98/NT. Nucleic Acids Symp Ser. 1999;41:95–8.
- Conzelmann KK, Cox JH, Schneider LG, Thiel HJ. Molecular cloning and complete nucleotide sequence of the attenuated rabies virus SAD B19. Virology. 1990;175:485–99.
- Larkin MA, Blackshields G, Brown NP, Chenna R, McGettigan PA, McWilliam H, et al. ClustalW and ClustalX version 2.0. Bioinformatics. 2007;23:2947–8.
- Kissi B, Tordo N, Bourhy H. Genetic polymorphism in the rabies virus nucleoprotein gene. Virology. 1995;209:526–37.
- Tamura K, Dudley J, Nei M, Kumar S. MEGA4: Molecular Evolutionary Genetics Analysis (MEGA) software version 4.0. Mol Biol Evol. 2007;24:1596–9.
- Drummond AJ, Suchard MA, Xie D, Rambaut A. Bayesian phylogenetics with BEAUti and the BEAST 1.7. Mol Biol Evol. 2012;29:1969–73.
- Posada D. jModelTest: Phylogenetic model averaging. Mol Biol Evol. 2008;25:1253–6.
- Castilho JG, Carnieli P Jr, Durymanova EA, Fahl Wde O, Oliveira Rde N, Macedo CI, et al. Human rabies transmitted by vampire bats: antigenic and genetic characterization of rabies virus isolates from the Amazon region (Brazil and Ecuador). Virus Res. 2010;153:100–5.
- Streicker DG, Lemey P, Velasco-Villa A, Rupprecht CE. Rates of viral evolution are linked to host geography in bat rabies. PLoS Pathog. 2012;8:e1002720.
- Ito M, Itou T, Shoji Y, Sakai T, Ito FH, Arai YT, et al. Discrimination between dog-related and vampire bat–related rabies viruses in Brazil by strain-specific reverse transcriptase–polymerase chain reaction and restriction fragment length polymorphism analysis. J Clin Virol. 2003;26:317–30.
- Macedo CI, Carnieli Junior P, Fahl Wde O, Lima JY, Oliveira Rde N, Achkar SM, et al. Genetic characterization of rabies virus isolated from bovines and equines between 2007 and 2008, in the states of São Paulo and Minas Gerais. Rev Soc Bras Med Trop. 2010;43:116–20.
- Carnieli P Jr, Castilho JG, Fahl Wde O, Veras NM, Timenetsky Mdo C. Genetic characterization of rabies virus isolated from cattle between 1997 and 2002 in an epizootic area in the state of São Paulo, Brazil. Virus Res. 2009;144:215–24.
- Mochizuki N, Kobayashi Y, Sato G, Hirano S, Itou T, Ito FH, et al. Determination and molecular analysis of the complete genome sequence of two wild-type rabies viruses isolated from a haematophagous bat and a frugivorous bat in Brazil. J Vet Med Sci. 2011;73:759–66.
- Campos AC, Melo FL, Romano CM, Araujo DB, Cunha EM, Sacramento DR, et al. One-step protocol for amplification of near full-length cDNA of the rabies virus genome. J Virol Methods. 2011;174:1–6.
- Nadin-Davis SA, Huang W, Armstrong J, Casey GA, Bahloul C, Tordo N, et al. Antigenic and genetic divergence of rabies viruses from bat species indigenous to Canada. Virus Res. 2001;74:139–56.
- Delmas O, Holmes EC, Talbi C, Larrous F, Dacheux L, Bouchier C, et al. Genomic diversity and evolution of the lyssaviruses. PLoS ONE. 2008;3:e2057.
- Trajano E. Movements of cave bats in southeastern Brazil, with emphasis on the population ecology of the common vampire bat, Desmodus rotundus (Chiroptera). Biotropica. 1996;28:121–9.
- Crespo JA, Vanella JM, Blood BD, De Carlo JM. Observaciones ecológicas del vampiro (Desmodus rotundus rotundus) (Geoffroy) en el norte de Córdoba. Revista del Museo Argentino de Ciencias Naturales Bernardino Rivadavia. 1961;6:131–60.
- Windsor RS. Relating national veterinary services to the country's livestock industry: case studies from four countries—Great Britain, Botswana, Perú, and Vietnam. Ann N Y Acad Sci. 2002;969:39–47.
- Blanton JD, Palmer D, Dyer J, Rupprecht CE. Rabies surveillance in the United States during 2010. J Am Vet Med Assoc. 2011;239:773–83.
- Favi M, de Mattos CA, Yung V, Chala E, Lopez LR, de Mattos CC. First case of human rabies in Chile caused by an insectivorous bat virus variant. Emerg Infect Dis. 2002;8:79–81.
- Velasco-Villa A, Orciari LA, Juarez-Islas V, Gomez-Sierra M, Padilla-Medina I, Flisser A, et al. Molecular diversity of rabies viruses associated with bats in Mexico and other countries of the Americas. J Clin Microbiol. 2006;44:1697–710.
- Bininda-Emonds ORP, Cardillo M, Jones KE, MacPhee RDE, Beck RMD, Grenyer R, et al. The delayed rise of present-day mammals. Nature. 2007;446:507–12.
The clinical microsystem puts medical error and harm reduction into the broader context of safety and quality of care by providing a framework to assess and evaluate the structure, process, and outcomes of care. Eight characteristics of clinical microsystems emerged from a qualitative analysis of interviews with representatives from 43 microsystems across North America. These characteristics were used to develop a tool for assessing the function of microsystems. Further research is needed to assess microsystem performance, outcomes, and safety, and how to replicate “best practices” in other settings.
- clinical microsystems
- quality improvement
- patient safety
- practice based research
Health care is provided to patients by caregivers who work in complex organisational arrangements, but the overwhelming majority of their daily work takes place within “clinical microsystems”. The basic concept of clinical microsystems—small organised groups of providers and staff caring for a defined population of patients—is not new. One can envisage microsystems existing in every healthcare setting—primary care clinics, neonatal intensive care units, renal dialysis units, diabetes care clinics, etc. However, people often lack awareness of the elements and the dynamics of the small systems in which they work. Microsystems are often not recognised as functioning units by the larger organisations that provide the organisational context for their work. Research has been important in identifying the extent and general causal pathways of errors in health care. Additional research is needed to develop and test better ways to prevent errors and improve patient safety at the microsystem level of healthcare delivery—where patients and providers meet at the front lines of patient care.
The IOM report “To err is human: building a safer health system” estimated that 44 000–98 000 people die each year from medical errors.1 Even the lower estimate is higher than the annual mortality from motor vehicle accidents (43 458), breast cancer (42 297), or AIDS (16 516), thus making medical errors the eighth leading cause of death in the United States.
Although errors in medication,2 surgery,3 and diagnosis are the easiest to detect, medical errors may result more frequently from the organisation of healthcare delivery. For example, Leape and colleagues4 discovered that failures at the system level were the real culprits in over 75% of adverse drug events. Reason et al5 suggested that some systems are more vulnerable and therefore more likely to experience adverse events. Certain organisational pathologies contribute to what Reason refers to as “vulnerable system syndrome”—blaming front line individuals, denying the existence of systemic error provoking weaknesses, and the blind pursuit of the wrong type of performance measures (for example, financial and production indicators).
The recommendations contained in the IOM report1 emerged from a four-tiered strategy (box 1), the fourth of which is the ultimate target of all the recommendations. That fourth tier is also the objective of this paper, which gives an overview of the concept of clinical microsystems and offers an assessment tool for those wishing to initiate improvements in the safety of care for patients and populations in microsystems. This tool, which was developed from the results of a cross-case analysis of 43 microsystems, can be used to help form an “awareness” of the microsystem and its functioning. The clinical examples provided throughout the paper are based on our experience in the United States.6 By working at the level of the microsystem, it is possible to develop generalisable methods for error reduction that can be applied across macro-organisation settings.
Box 1 IOM recommendations
Establish a national focus on patient safety by creating a centre for patient safety within the Agency for Healthcare Research and Quality.
Identify and learn from errors by establishing nationwide mandatory and voluntary reporting systems.
Raise standards and expectations for improvement in safety through the actions of oversight organisations, group purchasers, and professional groups.
Create safety systems inside healthcare organisations through the implementation of safe practices at the delivery level.
INTRODUCTION TO CLINICAL MICROSYSTEMS
The “organisation” has been the conventional level of analysis for management of diverse types of healthcare personnel. Some attention has focused on the design of work units within the organisation, such as medical staff,7 surgical staff,8 nursing staff, support groups, interdisciplinary teams,9 and the applicability of these work units to specific areas of care such as aging, long term care, renal therapy, and oncology. In general, however, research at the level of the microsystem within the organisation has received limited attention. Social policy has also focused at the organisational and individual provider levels, thus overlooking how the structures and strategies of the microsystem affect patient outcomes as well as the performance of the microsystem itself.
Research in managing safety has focused on the culture and structure of the organisation.10,11 Perrow10 advanced the theory that accidents are inevitable in complex, tightly coupled systems such as chemical plants and nuclear power plants. These accidents occur irrespective of the skill of the designers and operators, hence they are “normal” and are difficult to prevent. He further argues that, as systems get more complex, the system becomes opaque to its users and therefore people forget to be afraid of potential adverse occurrences. Organisational models view human error more as a consequence than a cause, and stress the need for proactive measures of “safety and health” with constant reform of the system’s processes. Both Perrow and Vaughan emphasise the structural and organisational dimensions of organisational processes, making the case for assessing the operations of an organisation, an approach which we extrapolate to the microsystem. Finally, organisational flexibility means possessing a culture capable of adapting to changing demands. High reliability organisations (HROs)12 are an example of highly complex, technology-sensitive organisations that must operate to a failure-free standard. Examples include naval aircraft carriers and air traffic control. These organisations carry out demanding activities with a very low error rate and an almost complete absence of catastrophic failure over many years.
The microsystem concept is based on an understanding of systems theory13–15 coupled with James Brian Quinn's theory of a smallest replicable unit.16, 17 Nelson and colleagues18 have described the essential elements of a microsystem as (a) a core team of healthcare professionals; (b) the defined population they care for; (c) an information environment to support the work of caregivers and patients; and (d) support staff, equipment, and a work environment. A focus on microsystems is a way to provide (1) greater standardisation of common activities and customisation of care to individual patients, (2) greater use and analysis of information to support daily work, (3) consistent measured improvement in performance, (4) extensive cooperation and teamwork across disciplines and specialties within the microsystem, and (5) an opportunity for spread of best practices across microsystems within their larger organisations.18
MAKING THE LINK BETWEEN SAFETY AND THE MICROSYSTEM
Initiating the improvement of the safety of care for patients and populations in clinical microsystems involves increasing the work unit's “awareness” of its functioning as a microsystem and a “mindfulness” of its reliability. We usually think of awareness and mindfulness as things to which individuals aspire. These reflective states are an invitation to consider the clinical microsystem to be composed of individuals who function together as systems, capable of reflecting on their work. Awareness of one's own work unit as a system is a matter of identity and is connected to purpose. Learning to increase the safety and reliability of organisations can be addressed in many ways.1, 19–22 Weick and Sutcliffe offer the idea that HROs have become so by their “mindfulness.”23 By mindfulness they mean that these organisations are:
Preoccupied with failure: they “treat any lapse as a symptom that something is wrong with the system, something that could have severe consequences if separate small errors happen to coincide at one awful moment.”
Reluctant to simplify interpretations: they “take deliberate steps to create more complete and nuanced pictures. They simplify less and see more. Knowing that the world they face is complex, unstable, unknowable, and unpredictable, they position themselves to see as much as possible.”
Sensitive to operations: they recognise that “unexpected events usually originate in what James Reason called “latent failures”. These “loopholes in the system's defences, barriers and safeguards . . . consist of imperfections in . . . supervision, reporting of defects, engineered safety procedures, safety training, briefings, certification, and hazard identification. Normal operations may reveal these lessons, but [they] are visible only if they are attentive to the front line, where the real work gets done.”
Committed to resilience: they “develop capabilities to detect, contain, and bounce back from those inevitable errors that are part of an indeterminate world . . .. [they are not error-free, but errors don't disable them] . . . it is a combination of keeping errors small and of improvising workarounds that keep the system functioning.”
Deferent to expertise: they encourage decisions to be made at the front line and migrate authority to the people with the most expertise, regardless of rank.
According to Weick and Sutcliffe, becoming more mindful means practising more of these behaviours. Mindfulness implies “a radical presentness” and a connection to the actual requirements of the current situation, along with a chronic sense of unease that something catastrophic might occur at any moment. This sense is inculcated in all members of the unit, from the leaders to the most junior people on the team.
The relationship between mindfulness and the microsystem requires further clarification. The focus on microsystems invokes consideration of team performance and the relationship of individuals within teams. The idea of high reliability organisations suggests that team and individual performance depends on the development of certain organisational norms. Such cultural attributes are commonly seen as properties of larger systems than teams. Is it possible for mindful microsystems to exist in dysfunctional organisations? In considering this possible relationship between a “mindful” microsystem and a dysfunctional organisation, it is important to recognise the importance of the larger system to the success or failure of the microsystem, as reported by an interviewee at a geriatric unit when asked about how the larger system has supported the efforts of the microsystem:
“The administration has continued to support the geriatric unit by providing both staffing and general resources. Getting a `yes' for a request from the administration depends on how they feel about you and your department. On the converse, rarely do units exist in a vacuum. So, where there is a larger structure, there are always potential negatives.”
Furthermore, a focus at the microsystem level changes the role of senior leadership—indeed, this is not a minor detail. The Health Care Advisory Board reported that a common ingredient in successful organisations is a “tight, loose, tight” deployment strategy.24,25 What might this mean for creating a microsystem striving to provide safer care? It would mean that senior leaders would mandate that each microsystem should have a “tight” alignment of its mission, vision, and strategies with the organisation’s mission, vision, and strategies. But it would also mean that senior leadership gives each microsystem the flexibility needed to achieve its mission. Finally, it would mean that senior leaders hold each microsystem accountable for achieving its strategic mission to provide safer care.
LEARNING FROM CLINICAL MICROSYSTEMS
We have worked with several microsystems seeking to improve their care for patients. Some of them seemed to have a clear sense of their identity as a system and, when they explored change for the improvement of their functioning, they were able to incorporate the change and make it a regular part of their identity as a system. Others—lacking a similar sense—pursued change just as diligently, but seemed to have difficulty incorporating that change into their “system”. As we have begun to tease out the characteristics of the apparently better functioning small systems, certain elements or characteristics have emerged.
As part of a study funded by the Robert Wood Johnson Foundation,6 interviews were conducted with representatives from 43 microsystems and eight characteristics present across multiple microsystems were identified (box 2). The methods used for this study are discussed in detail elsewhere.6, 26
Box 2 Characteristics of effective microsystems
- Integration of information
- Measurement
Interdependence of the care team
Supportiveness of the larger system
Constancy of purpose
Connection to the community
Investment in improvement
Alignment of role and training
Each of the dimensions can be thought of on a continuum that represents the presence of the characteristic in the microsystem. Table 1 summarises the characteristics and provides an operational definition for each of them. Increased awareness of the small front line work unit as a microsystem means recognising the characteristics that contribute to their identity (the elements described in table 1) and being mindful of the reliability of these characteristics. A more detailed description of each of the characteristics is given below with verbatim comments from the interviews.
Integration of information
Universal among high performing microsystems is the integration of information. Microsystems vary in how well information is integrated into their daily work and in the role that technology plays in facilitating that integration. An illustrative comment from a microsystem operating in an “information free environment” follows:
“If you aren't going to have the same nurse working with the patient then you have to have better communication. Patients get the best care when you have health care workers who communicate very well and collaborate very well. One of the biggest problems I see is physicians not talking to each other. Also, so many nurses work part-time, varying shifts. We struggle with getting them to communicate. It's hard to get them to put equal emphasis on communicating, documenting, teaching and the physical tasks that need to be done before the end of the shift. You don't get the same negative feedback from your coworkers if you aren't teaching the patient as you do if you leave some of the physical tasks undone at the end of the shift. A nurse will prioritize and get every thing done before the end of the shift, but they don't look at the patient's care plan and do the teaching that needs to be done before discharge.”
Deming taught that knowledge is built on theory, not information.27 According to Deming, information is static whereas knowledge has a temporal spread. Put simply, with knowledge a theory can be developed that explains what happened in the past and predicts what will happen in the future. It is the integration of the information that allows us to create knowledge. Technology can be instrumental in facilitating the integration of information within the microsystem.
“Sharing information with patients is the biggest safeguard against medical error. The electronic medical record (EMR) does drug-drug interaction alerts. When the patient leaves the office, he/she gets a printout of their medication list. Once in a while a patient will call later and say, `I was looking over the list, and I am not taking x anymore, but Dr So and So has put me on y.' It takes all of us. Another safeguard is that the system we use forces me to consider all the possibilities. For example, if a patient comes in with headaches and vomiting, it has a structured sequence that makes you consider the causes, including cerebral hemorrhage.”
Measurement
Effective microsystems measure what they do and recognise that the measures at the macrosystem level are not always helpful at the microsystem level. Part of the work of the microsystem becomes the development of a set of measures that are appropriate for the goals of the microsystem. As one interviewee concluded:
“At the local level I don't get the measures that I need and the measures at the regional level aren't at the level I need.”
It may be that this recognition is important in developing a microsystem that routinely measures processes and outcomes, feeds data back to providers, and makes changes based on data.
“We can track process length through our real time `flight simulator' system. By touching the screen, we instantly know such things as arrival to bed, bed to nurse, arrival to doctor aggregated cycle times.”
Interdependence of care team
Key players—the providers and staff who work together on a daily basis—are a fundamental element of the microsystem. However, the interdependence of these key players tends to vary across microsystems. Microsystems with a high degree of interdependence are mindful of the importance of the multidisciplinary team approach to care, whereas those with a lower degree of interdependence are characterised by providers and staff working as individuals with no clear way of sharing information or communicating.
“We developed multidisciplinary rounds—everyone involved in caring for the patient. The major value is having everyone communicate directly with one another. Each person knows they may be asked about the patients and has to be prepared.”
“Often physicians have difficulty working with non-physician providers, giving them the control. Some physicians don't do well sharing responsibility for patient care like this.”
Supportiveness of the larger system
The larger organisation may be either helpful or “toxic” to the efforts of the microsystem.
“The hospital system has shown great effort in helping us out with patient restraint protocols. Restraint management has been an area where they have excelled and this has made the ER a safe place to work. They are also helping us out in quality end-of-life issues and how cultural differences of people necessitate individualized care.”
“It is a mixed message. The organisation talks about team care but then subverts their vision—they put in a centralized phone system with a nurse in charge of scheduling appointments. Well, she has no way of knowing whether Drs X and Y are on the same team. If a patient of Dr X cannot go to Dr X because he is on vacation, the nurse may send the patient to Dr Z though Dr Y is on Dr X's team. So instead of the patient going to Dr Y, they go to Dr Z.”
Constancy of purpose
An important characteristic of a microsystem is that the aim, or what Deming would refer to as “the constancy of purpose”,27 is consistent with the aim of the larger system and guides the work of the microsystem. Where constancy of purpose is high, the aim is apparent to the microsystem, and it is also communicated across the boundaries of the microsystem.
“The thing that distinguished those places that are achieving excellence is the organizational culture. Our culture was `of course babies [in the NICU] get infections, they are not well to begin with'. But those other sites saw an infection as a failure, not entitlement. All the way to the bedside the unit knew that infection was a failure. The philosophy has to permeate the organization.”
In contrast, lack of a clear consistent aim may be destructive to the microsystem and, ultimately, to patient care.
“There are various ways that health care workers let patients know that we are busy—don't tell us that you are having a problem because we don't have time to deal with that. For a lot of nurses the reason for being a nurse was to relieve pain and suffering. But then we send the message that we don't have time to help you.”
Connection to community
Connection to community represents a symbiotic relationship between the microsystem and the community that extends well beyond the clinical care of a defined set of patients.
“The neonatology group has a commitment of being a resource to the region. We have a commitment to the health of a population. This is crucial to our success. As a resource, we provide education and review the quality of care for the whole region.”
Investment in improvement
An investment in improvement comes in the form of resources such as time, money, and training, but above all it involves creating a philosophy of improvement within the microsystem. This characteristic overlaps with “supportiveness of the larger organisation” and suggests an obvious way in which the larger organisation can support the work of the microsystem.
“In a given week we are spending about 100 person-hours on teams. People are being paid to spend their time doing this, not just during their lunch hour. Someone said, `You have to assume you'll be around here 5 years from now. Do you want to be doing things the same way?' Most of us don't. This requires a new attitude that results in understanding that industries must invest in change in these microsystems. You have to tolerate pulling people off-line to work. This is a radically new way of thinking in medicine, which traditionally views any sort of meeting as a waste of time. Traditionally, the view is that the only useful time is spent seeing patients. I think that unless you spend time considering how to deliver care better, much of that time seeing patients is wasted.”
Alignment of role and training
Alignment of role and training suggests that there is a deliberate effort within the multidisciplinary team to match the team member's education, training, and licensure with their role. While several interviewees indicated that this leads to increased staff satisfaction and lower turnover, some are uncomfortable working in what they consider to be an “expanded” role. As one interviewee said: “casualties move on to other parts of the hospital”.
“The system can be an advocate. It can be a reminder that a mammogram needs to be done, that there is a system in place to make sure it happens, that things go well. A system can empower the medical assistant to insist that a patient be seen, even if it means clashing with a provider.”
A clinical microsystem is a small organised group of clinicians and staff working together with a shared clinical purpose to provide care for a defined set of patients.
The clinical purpose defines the essential parts of the microsystem. Use of information is key to its ability to function; information technology facilitates collecting, assessing, and sharing of information.
Microsystems are usually part of a larger organisation and are embedded in a legal, financial, social, and regulatory environment.
Answers to the following questions are needed to define the microsystem:
what is the aim or purpose?
who is the small population of people who benefit from this aim?
who do you work with daily (administratively, technically, and/or professionally)?
what information and information technology is part of the daily work?
Senior leaders of the larger organisation should:
look for ways in which the macro-organisation connects to and facilitates the work of the microsystem;
support the needs of the microsystem;
facilitate the coordination among microsystems.
IMPROVING THE QUALITY AND SAFETY OF CARE IN THE MICROSYSTEM
The eight characteristics discussed above were used to create a self-assessment tool (shown in Appendix on page 50) for individuals to assess the functioning of their microsystem and to identify potential areas to focus improvements. We have observed that use of the tool is successful in facilitating discussions around ideas for individual microsystems trying to foster further development of their system and/or a given characteristic.
Several limitations apply to the use of this assessment tool in its present form. Each represents an opportunity for further empirical testing and research. Firstly, we recognise that increasing the strength of an attribute or characteristic does not necessarily increase the overall functioning of the unit as a system. We make the assumption that, as efforts are made to increase the expression of a characteristic in a microsystem, efforts are concurrently being undertaken to integrate the newly expressed element into the functioning of the enhanced unit as a better functioning system. Secondly, we recognise the need for further testing, development, and validation of the assessment tool. However, we caution people about waiting for the “perfect” method or tool if there are tools available that are useful as you try to improve awareness of the functioning of your microsystem. Finally, we make the assumption that a better functioning microsystem provides safer care and achieves better outcomes for its patients.
The concept of microsystems and the assessment tools to assess and evaluate characteristics of a microsystem can make a great contribution to the future study and management of patient safety. We believe that most health care today is sought, created, delivered, and purchased at the level of the clinical microsystem. It is there that real gains in the quality, value, and safety of care can occur. Furthermore, we believe that efforts to increase awareness and mindfulness at the level of the clinical microsystem can contribute to the safety of patient care. Combining organisational characteristics with an analysis of the characteristics of an individual microsystem offers a powerful way to visualise the link between structure, process, and outcomes and to make practical what is theoretically attractive.28
The implications of the microsystem framework for the delivery of care are much broader than just for a given microsystem and the people working within it. There is a need for ongoing research into microsystems, how to assess their functioning, performance, outcomes, and safety, and how to replicate “best practices” in other settings. Clinical leaders can find new energies for common efforts to study and improve their work for patients as they gather around the focus of the actual unit of daily practice—crossing disciplinary and specialty boundaries—using the language of processes and systems, rather than the more conventional role or discipline-bound conversations that often seem to limit change and improvement. If the microsystem is a new frontier in organisational and health services management research, further research is needed to understand the contributions of practice based research in improving the delivery of safer care. | <urn:uuid:958bd710-2ac2-48e1-bb31-0925c9defabb> | CC-MAIN-2022-33 | https://qualitysafety.bmj.com/content/11/1/45?ijkey=aee6d8d478fc8011e86c6207d5e055c40978f066&keytype2=tf_ipsecsha | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571210.98/warc/CC-MAIN-20220810191850-20220810221850-00494.warc.gz | en | 0.946124 | 5,494 | 2.96875 | 3 |
This summary outlines the main findings of the newly updated country profile on internal displacement in Nepal. The profile was prepared by the Global IDP Project of the Norwegian Refugee Council, which monitors and analyses internal displacement in over 50 countries worldwide. The full country profile is available from the Project's Database or upon request by e-mail (firstname.lastname@example.org).
Nearly six months after King Gyanendra assumed direct power and declared a state of emergency in February 2005, Nepal is faced with both a deep crisis of governance and a renewed spate of fighting and violence all across the country. The suspension of all civil liberties in the wake of the royal coup and the purely militaristic strategy chosen to deal with the Maoist insurgency have led to an intensification of the armed conflict and a sharp deterioration of an already dramatic human rights situation. Significant population displacements have taken place in the context of an increasingly polarised Nepalese society now on the brink of a humanitarian crisis. Since the conflict started in the mid-1990s, hundreds of thousands of people have been uprooted across the country. Landowners, teachers, and other government employees have been specifically targeted by the rebels and have fled their homes. Poorer sections of the population have also been affected and have fled forced recruitment into Maoist forces, retaliation by security forces or the more general effects of war. Most of them have flocked to the main urban centres, in particular to the capital, Kathmandu. Many more have swollen the migration flows to India.
No reliable figures exist on the current number of people internally displaced due to the conflict, but the most realistic estimates put it at between 100,000 and 200,000. Some estimates of the total number of displaced, including refugees in India, since the fighting began in 1996 go as high as two million, though these are impossible to verify. Virtually all of Nepal's 75 districts are affected by the fighting which has claimed close to 11,000 lives in the past nine years. The government has to a large extent ignored its obligation to protect internally displaced persons (IDPs), particularly those uprooted by its own security forces. The international community has been slow in acknowledging the seriousness of both the human rights and the displacement crisis, although there are now more positive signs that UN agencies and international NGOs, long present in Nepal providing development-oriented assistance, are ready to play a more active role in monitoring human rights abuses and to switch to humanitarian assistance for the most vulnerable among the displaced. The international community, and in particular the main suppliers of Nepal's military equipment, now have a responsibility to bring both parties back to the negotiating table. Only a breakthrough in the peace process and a full restoration of the democratic institutions will create conditions conducive to the return of the displaced.
An autocratic monarchic government has been in place in Nepal since 1962. Despite the re-instatement of a multi-party democracy in 1990 and a new constitution, which followed three decades of panchayat (non-party) system of government, the new political order continued to be dominated by the same elite who were not perceived as genuinely interested in improving the lives and livelihoods of the rural poor. It maintained a centralised system and largely failed to address the systemic inequality of Nepalese society, which politically and socially excludes an important proportion of the population on the basis of their ethnic and caste identity.
It was against this backdrop that in 1996 the "People's war" was launched by the Maoists with the aim of overthrowing the constitutional monarchy and establishing a new democratic socialist republic. The insurgency started in the districts of the mid-western region when Maoists began targeting the police, the main landowners, members of other political parties, teachers and local government officials. Using guerrilla tactics and virtually unchallenged by the government during the first five years, the Maoists gradually gained ground in other districts of the country.
It was not until the deployment of the army and the declaration of a state of emergency in late 2001 that the conflict escalated. In 2001, Prince Gyanendra was crowned as king after most of the royal family was killed during a shooting incident in the palace. A year later he suspended the elected Parliament, installed a prime minister of his choosing and indefinitely postponed elections. Since then, the King has effectively assumed full executive powers with the support of the army.
The January 2003 ceasefire signed by the government and the Maoists raised cautious hopes that the civil war might have come to an end after seven years. Although fighting subsided during the period of the truce, the situation on the ground reportedly changed little. In August 2003, the Maoists withdrew from the peace talks when the government refused to agree to the formation of a constituent assembly to draft a new constitution. The collapse of the ceasefire marked the resumption of fighting in most parts of the country and sent the country into a spiralling human rights crisis of unprecedented proportions.
On 1 February 2005, the king dismissed the government and declared a state of emergency giving him absolute power and effectively suspending all civil liberties. Media censorship was imposed and scores of political leaders, human rights activists and journalists were arrested. Under pressure from the international community to restore fundamental civil and political liberties, the king lifted the emergency rule on 29 April 2005. Many restrictions remained in place, however, including on freedom of movement, freedom of assembly and political activism (AI, 15 June 2005, p. 4).
In the wake of the coup, fighting and displacement has intensified significantly and human rights violations have been on the increase. In the absence of a parliament - dissolved in 2002 - or of a representative and elected government, civil society has currently no say in the conflict and Nepal is sliding dangerously towards an even more militarised and polarised society, with both sides paying lip service to the respect of human rights.
Many uprooted by conflict and human rights abuses
Tens of thousand of people have been displaced in Nepal due to the military activity of both the Maoist rebels and the government forces, and the more general effects of war. Particularly after November 2001, when security deteriorated markedly in rural areas, many people started fleeing to urban district centres, large cities like Kathmandu, Biratnagar and Nepalgunj, and across the border to India. All 75 districts of Nepal are now to varying degrees affected by the fighting, with the rebels more or less controlling the rural areas and the government's presence mainly restricted to district headquarters and urban centres.
In a desperate effort to regain some control of the rural areas, the government has since November 2003 encouraged the creation of "village defence committees" in various districts of the country (ICG, 17 February 2004). Often created by local landlords with the tacit support of the army, these militias are adding to the level of violence and constitute an inflammatory development in the conflict. Shortly after the royal takeover, these militias reportedly started to receive more active support from the army, including guns and training (Times Online, 8 June 2005). In February 2005, in Kapilvastu district an anti-Maoist rampage resulted in the burning of 600 houses, the slaughter of 30 "Maoists" and the displacement of between 20,000 and 30,000 people to the Indian border (Bell, Thomas, 12 March 2005; BBC, 14 March 2005; Kathmandu Post, 19 March 2005).
A large portion of those fleeing the conflict in its initial phase were from relatively well-off strata of the population: landlords, party workers, security personnel, teachers and Village Development Committee chairmen (INSEC, April 2004, p.112). Perceived as enemies of the "People's war" and symbols of the corrupt state, these people are specifically targeted by the Maoists. Since the February 2005 coup, the rebels appear to have stepped up their targeting of the families of army personnel, in particular in the mid-western region districts. Some 1,200 security forces family members have reportedly been forced to flee their homes in this region (Kathmandu Post, 15 May 2005).
Young people - both men and women - have also fled forced recruitment by the Maoist forces and in many areas constitute the bulk of the displaced (SAFHR, March 2005, p. 9). They are particularly vulnerable as they have little choice other than to join the Maoists - although sometimes only temporarily to attend political meetings - or leave their villages. Those who chose to remain are also likely to become targets of the security forces (Mercy Corps, October 2003, p.69). The escalation of the conflict in the past year has led the Maoists to intensify their recruitment campaign. With so many young adults having already fled their homes in rural areas in previous years, the insurgents reportedly force ever younger recruits to join them (CSM, 28 July 2005). This is forcing an increasing number of children to flee their villages. The UN estimates that between 10,000 and 15,000 children will flee their villages in 2005 (IRIN, 4 July 2005). An estimated 40,000 children have been displaced by the conflict since 1996 (Xinhua, 12 June 2005)
But the Maoists are not the only ones to blame. Indeed, civilians have also fled the actions of Nepalese government security forces in their operations against the Maoists. Many villagers have been displaced following food blockades, torture and killings by security forces. Civilians have been targeted on suspicion of supporting the Maoists and often tortured by the army and police (AI, 19 December 2002, pp.7-8). Between August 2003 and May 2005, the army claimed to have killed 4,000 "Maoists". This category reportedly included civilians suspected of having provided shelter, food or money to the rebels, whether under coercion or not (AI, 15 June 2005, p.4). Displacement caused by security forces tends to be less visible. Fear of being identified as rebel sympathisers and the absence of government assistance has discouraged many from registering as IDPs (Martinez, Esperanza, July 2002, pp.8-9). Moreover, the government's restrictions on independent reporting have also masked the extent of the problem (Watchlist, 26 January 2005, p. 9; APHRN, 14 January 2002).
People flee their villages for fear of being caught in the crossfire but also as a result of the indirect consequences of the fighting. The conflict has in many areas led to the breakdown of education, closure of businesses, weakening of local economies and interruption of public services. Food insecurity and lack of employment opportunities have traditionally forced able-bodied males of the mid-western region into seasonal migration to urban areas or to India. Insecurity and blockades have further reduced the availability of food and exacerbated a long-standing rural exodus trend. Many young people end up in India where wages are slightly higher than in Nepal and where they do not face security threats (SAFHR, March 2005, p. 36).
Estimates of displaced since 1996 as high as two million
In the absence of any registration of IDPs and of any systematic monitoring of population movements by national authorities or international organisations, it is difficult to provide any accurate estimates on the total number of people displaced since the conflict started in 1996, or for that matter on the number of people currently displaced. This problem is further compounded by the hidden nature of displacement in Nepal, where people forced from their homes either merge into social networks of friends and families or mingle with urban migrants en route to district centres or to the capital. Many also travel abroad, mainly to India, in search of safety and employment opportunities.
An IDP study conducted in early 2003 by a group of NGOs and UN agencies concluded that a reasonable working figure on the total number of people displaced, directly or indirectly, by the conflict was between 100,000 and 150,000 (GTZ et al., March 2003, p.8). Since then, the intensification of the conflict has thrown many more into displacement. INSEC, a Nepalese human rights NGO, recorded the displacement of some 50,000 people between 2002 and 2004 (INSEC, April 2005).
However, anecdotal evidence and other studies suggest the figures could be much higher. Between 2003 and 2004, estimates from various sources put the number of displaced at between 200,000 in urban areas only (OneWorld, 29 July 2003; Nepalnews, 18 September 2003) and 400,000 (CSWC, 1 February 2004, pp.8-9).
When considering the scope of displacement in Nepal, one has to keep in mind that all figures are highly speculative estimates which are impossible to verify. In addition, the problem is to accurately estimate how many have fled as a consequence of the conflict and how many are "regular" urban or economic migrants. Based on available data, it is estimated that at least 200,000 people are currently internally displaced directly or indirectly by the conflict.
This figure does not include those who have fled abroad, mainly to India, a traditional recipient of Nepal seasonal workforce. The open border between the two countries, the lack of monitoring and the mingling with more traditional economic migrants make it difficult to estimate the numbers of people who have crossed the border because of the conflict. Since 2001, the usual flow of migrants is, however, reported to have increased significantly with sometimes reports of tens of thousands crossing the border each month (ICG, 10 April 2003, p.2; WFP, personal communication, September 2003). In September 2004, the Asian Development Bank (ADB) suggested that between 300,000 and 400,000 rural families, or between 1.8 and 2.3 million people had been displaced by the conflict since 1996 (ADB, September 2004, p.2., and Appendix 3, p.78). It is widely acknowledged that the vast majority has left Nepal for India (IDD, 2 June 2005, p.1).
The lack of data on both displacement within Nepal and displacement to India make a strong case for further studies on this issue.
A humanitarian disaster looming
The socio-economic impact of nine years of conflict on one of the poorest countries in the world has been devastating. A mountainous topography, an inefficient agricultural economy and high population growth combine to make Nepal a chronically food insecure country (Lamade, Philip, August 2003). More than 40 per cent of the population, estimated at 23 million people, live below the poverty line. The incidence of poverty in the rural areas is almost double that in urban areas. The midwestern and farwestern regions, where the most intense fighting has taken place are also the poorest, with poverty rates approaching 75 per cent (ADB, September 2004, p.5).
In March 2005, the UN, international donors and aid agencies in Nepal publicly called on both parties to respect human rights and warned that the conflict, and in particular restrictions imposed on the movements of supplies and vehicles, was leaving many civilians without access to humanitarian and medical assistance. The statement concluded that the actions of both the security forces and the Maoists were "pushing Nepal towards the abyss of a humanitarian crisis" (BBC, 18 March 2005).
When fleeing their homes, the displaced either move to neighboring districts where they have friends or families, or look for safety and assistance near the district headquarters. Most arrive exhausted after having to travel days on a difficult terrain with little food and the few belongings they have managed to take with them. Although sometimes assisted by local organisations, most of the displaced, such as those living in Rajena camp in Nepalgunj near the Indian border, live in inadequate conditions, lacking water supply, shelter, access to health and livelihood opportunities (IRIN, 25 April 2005). In Dailekh district headquarters, 2,000 IDPs fleeing their homes in November 2004 were forced to live in poor hygienic conditions in a public building (Kathmandu Post, 30 November 2004). Villagers who had to flee the anti-Maoists mobs in Kapilvastu district and whose houses and properties had been burnt or looted have reportedly gathered in a makeshift camp and resorted to begging to survive (Kathmandu Post, 12 June 2005).
Following a mission to Nepal in April 2005, the UN interagency Internal Displacement Division reported that there was a significant need for enhanced basic services, including health and education, in particular in areas around urban centres where IDPs tend to settle among the urban poor (IDD, 2 June 2005, p.2).
Difficult living conditions for IDPs in urban areas
Living conditions are difficult for many IDPs in urban areas. The sudden population flood into the cities combined with the growing migration trends to urban areas in the last decade has led to a surge in the number of urban poor and placed a strain on the municipalities' capacity to deliver basic services such as water supplies, sanitation and waste management, as well as health and education (RUPP 2004). According to a study on urban poverty, displacement due to the conflict is increasing the concentration of poor in urban settlements, with many of the displaced turning into urban poor (Kathmandu Post, 20 April 2005).
Indeed, several studies have showed that the lack of income-generating activities was the major problem facing the displaced (INSEC, 2004, p. 117). Many IDPs are peasants and are unprepared to make a living in urban areas. When they find employment, these are generally poorly paid. This is partly because their own arrival has driven down wages in jobs that require low or minimal capital investment. These jobs are physically demanding and insecure (GTZ et al., March 2003, pp. 11-12). Along with poor economic migrants, displaced people work in factories, stone quarries or do small trading that generate low returns. In March 2005, a survey showed that 70 per cent of the displaced surveyed in urban areas did not earn enough to feed their family and that many had to survive on loans (SAFHR, March 2005, p. 15).
Displaced children often face particularly difficult conditions. Many young children have moved to urban or semi-urban areas, unhygienic conditions and hostile environments, where their families can ill-afford to send them to school. An estimated 5,000 children live on the streets of the main cities of the country, denied an education and exposed to a variety of threats, including sexual exploitation and forms of child labour (Watchlist, January 2005, p.30; OneWorld, 14 July 2003). A study of the impact of the conflict on children, released in June 2005 by the International Labour Organisation and Child Workers in Nepal Concern Center (CWIN), estimated that a total of 40,000 children had been displaced since 1996 (Xinhua, 12 June 2005). Many displaced children have witnessed violence and destruction, and are traumatised.
Many of the wealthier IDPs have been able to find shelter in cities and expect to return to their homes when conditions improve. A large majority of this IDP group sought refuge in district centres and main cities. Some have reportedly been able to buy land or build new houses (EC & RNN April 2003, p.79). Most of these well-off IDPs are not thought to experience major problems in their daily survival (Nepalnews, 6 May 2005).
Assistance: insufficient and discriminatory
Since the beginning of the conflict, the government has to a large extent ignored its obligation to protect and assist IDPs. Its response can be described as inadequate, discriminatory and largely insufficient.
Although the government established several compensation and resettlement funds for victims of the conflict, most dried up after a relatively short time. Also, government assistance has only been provided to people displaced by the Maoists. Authorities have not encouraged people displaced by government security forces to come forward with their problems, and people remained reluctant to register as displaced for fear of retaliation or being suspected of being rebel sympathisers (Martinez, Esperanza, July 2002, pp.8-9).
In 2003 and 2004, the government allocated 50 million rupees ($667,000) for the rehabilitation of IDPs or rather to "provide immediate compensation and relief to the victims" (Ministry of Finance, 16 July 2004, p.13). It was not clear if people displaced by government forces were intended to benefit from this fund.
In October 2004, under pressure from IDP associations, the government of Nepal made public a 15-point relief package for victims of the Maoist rebellion, which included monthly allowances for displaced people. The allowance was reportedly limited to IDPs above the age of 60 who had lost the family bread-winner and to children whose parents had been displaced by the Maoists (Government of Nepal, 13 August 2004). Again, those displaced by the security forces were excluded from the assistance scheme.
Since the royal takeover, the government has sent signals that it was willing to do more to help and assist its displaced population. Following the visit in April 2005 of the UN Secretary-General's Representative on the Human Rights of IDPs, Walter Kälin, who described the IDPs in Nepal as "largely overlooked and neglected", the government promised to develop a new IDP policy (UN, 22 April 2005). In May, the Minister of Finance publicly acknowledged the gravity of the displacement crisis and urged donors to help the government provide assistance to the IDPs, described as "the first and foremost victims of terrorism" (The Rising Nepal, 6 May 2005).
It remains to be seen if these promises will be fulfilled at a time when the government appears to be accountable to no one but itself and does not seem even willing to assist those it considers as the only legitimate IDPs - those forced from their homes by the Maoists. In April, the government pledged that it would respond quickly and efficiently to the needs of those displaced by the Maoists (Kathmandu Post, 6 April 2005). However, two months later no assistance had been forthcoming. Instead, the police brutally ended a peaceful demonstration of displaced people asking for food and shelter. Some 150 IDPs were detained on the charge of shouting anti-government slogans (IRIN, 7 June 2005).
International aid slowly shifting in response to needs of IDPs
In the obvious absence of an appropriate response from the government, one could have expected the large international aid community already present in Nepal to react swiftly to fill the assistance gap left by the national authorities. However, it is only recently that the seriousness of the IDP problem seems to have been acknowledged by the international community, which appears now willing to take a more proactive role and accept more responsibility for the displaced.
Many UN agencies and international NGOs have been in Nepal for numerous years providing development-oriented assistance, but almost none provide humanitarian relief or target their assistance at IDPs. Instead, most agencies have preferred to assist conflict-affected areas mainly through already existing development programmes. In order to avoid creating pull factors, likely to further depopulate rural areas, the agencies have been careful to avoid providing assistance directly to the displaced in their area of displacement. Instead, the strategy has been to maintain basic services in the communities of origin. However, since the intensification of the conflict in 2001, many aid programmes have been hampered or stopped by poor security conditions in rural areas. In 2004, many organisations had to suspend their activities due to an intensification of the fighting and restrictions imposed by both sides (Nepalnews, June 2004; OCHA/IDP Unit, June 2004, p.3). Faced with this reality and the deterioration of the conflict and human rights situation, more agencies seem now ready to shift their focus from development to humanitarian aid.
In April 2005, the UN's Internal Displacement Division (IDD) noted a change in the UN agencies attitude and greater willingness to address the humanitarian and protection needs of the displaced. In addition to the updating of contingency plans, taking into account the new situation, UN agencies have established a Crisis Management Group to improve inter-agency coordination (IDD, 2 June 2005, p.3). To strengthen the capacity of the UN to respond to the needs of the displaced, a Humanitarian Affairs Officer as well as an IDP Advisor have during the past year assisted the UN Resident/Humanitarian Coordinator, responsible at the field level for the strategic coordination of protection and assistance to IDPs. The IDD mission further encouraged all agencies to step up their activities towards meeting the needs of the displaced, pointing out that many agencies were still too development-focused and entrenched in a "business as usual" attitude. Donors were also strongly encouraged to support the shift from development to humanitarian action (IDD, 2 June 2005, pp.3-6).
The protection concerns of the displaced and the civilians in general have remained largely unaddressed so far. The government of Nepal accepted in April the setting up of a human rights monitoring operation by the UN Office of the High Commissioner for Human Rights. The mission will monitor and report on human rights abuses as well as provide advisory services to the government (BBC, 11 April 2005). Although the government was clearly reluctant to see the UN monitor more closely its war against the Maoists and only accepted under pressure during the last session of the Commission on Human Rights, this is nevertheless a positive step towards increasing scrutiny of human rights abuses and making both the government and the insurgents accountable for their actions.
Clearly, more efforts are still needed by both the government and the aid community to effectively address the needs of the displaced.
The government, which has the primary responsibility to assist its displaced citizens, has to establish a non-discriminatory and comprehensive IDP policy, for which the UN Guiding Principles on Internal Displacement can serve as a valuable guiding tool. Both people forced to flee their homes due to Maoist abuses as well as those who have fled actions by the security forces need to be recognized as Internally Displaced People and assisted to cope with their predicament.
The international community needs to agree on an IDP strategy and a clear action plan for meeting the protection and assistance needs of the displaced. Recently, a Consolidated Appeal Process (CAP) workshop took place in Nepal. The CAP, which will be launched in early September, will help agencies establish a common understanding of the humanitarian priorities and hopefully lead the way to a coordinated and improved assistance to IDPs.
The international community also has an important mediating role to play by bringing both parties back to the negotiating table. Only a breakthrough in the peace process and a restoration of the democratic process will create conditions conducive to the return of the displaced.
The full Country Profile includes all references to the sources and documents used. | <urn:uuid:7fadc9d0-ea6d-452a-af24-d246175f4ea3> | CC-MAIN-2022-33 | https://reliefweb.int/report/nepal/nepal-displacement-crisis-worsens-wake-royal-coup | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570879.1/warc/CC-MAIN-20220808213349-20220809003349-00498.warc.gz | en | 0.966915 | 5,414 | 2.984375 | 3 |
Are you confident that the surgical mask available for your use provides appropriate barrier protection for the task you or your colleagues are about to perform? How many types of surgical masks do you have in your facility? Do you know how to differentiate their barrier protection? Does it matter whether you choose an ASTM Level 1 versus an ASTM Level 3 surgical mask for a procedure where high amounts of splash and spray are anticipated? These are a few of the questions you should consider when pondering the question: Is Your Surgical Mask Protecting You? To answer these questions, healthcare personnel should be aware of the relevant standards, guidelines, and professional recommendations that address surgical masks. Through this knowledge, they are better equipped to make appropriate mask selections; thereby reducing exposure to blood, bodily fluids, and other potentially infectious material (OPIM).
Standards and Guidelines Related to Surgical Masks
Several organizations have published standards and guidelines pertaining to the performance, classification, and use of surgical masks. These organizations include the United States Food and Drug Administration (FDA) and ASTM (formerly known as the American Society for Testing Materials) International. Guidelines related to the use and selection of surgical masks include those published by the Association of periOperative Registered Nurses (AORN). Those guidelines will be addressed in a later section.
Food and Drug Administration (FDA)
The FDA is responsible for protecting the public health by assuring the safety, effectiveness, quality, and security of medical devices. The FDA has identified surgical masks as surgical apparel under the surgical devices category. Surgical masks have been given a Class II classification and, prior to introduction into interstate commerce, are subject to premarket notification where manufacturers submit a Premarket Notification (510(k)) to the FDA. For premarket notification [510(k)] of surgical masks, the FDA recommends that five material performance characteristics be tested and reported. These performance characteristics are fluid resistance, differential pressure, bacterial filtration efficiency (BFE), particulate filtration efficiency (PFE), and flame spread.1 Tests performed as specified by the ASTM F2100-11 Standard evaluate these performance characteristics. The following is a review of this standard and the related testing methods.
ASTM F2100-11: Standard Specification for Performance of Materials Used in Medical Face Masks.2 For face masks, ASTM F2100-11 provides classifications of medical face mask material performance. It is important to note that this Standard does NOT evaluate face masks for regulatory approval as respirators, nor does it address all aspects of face mask design and performance. For instance, it does not evaluate the effectiveness of face mask designs as related to barrier and breathability properties.
The ASTM F2100-11 standard specifies five tests that evaluate the material’s fluid resistance, differential pressure (breathability), BFE, PFE, and flame spread. While the five required tests remain the same from earlier versions of this standard, the classifications were changed from “Performance Class” (Low, Moderate, and High) to “Performance Level” (1, 2, and 3). And, importantly, a graphic of the Performance Level is required on the labeling of the primary mask packaging.
The five tested face mask characteristics and test methods specified in the ASTM F2100-11 Standard are noted in Table 1.
If the tested material has passed the test criteria, it may be identified as an ASTM Level 1, 2, or 3 level mask, depending on the test results. The following is an overview of these test methods.
- ASTM F1862 Standard Test Method for Resistance of Medical Face Masks to Penetration by Synthetic Blood (Horizontal Projection of Fixed Volume at a Known Velocity).3 This test allows face mask materials to be ranked according to their ability to resist synthetic fluid penetration. For this test, face masks are challenged with synthetic blood at various levels of pressure (80, 120, 160 mm). The higher the pressure withstood, the greater the fluid spray and splash resistance.
- Differential Pressure (Delta-P).1 This is a clinical test that measures and reports mask breathability. The differential pressure is the measured pressure drop across a surgical face mask material. It determines the resistance of a surgical face mask to air flowing through it; the pressure drop also relates to the breathability (i.e., how easily air passes through the mask) as well as the comfort of the surgical mask.
Industry standards measure breathability from the most to the least. In general, the lower the Delta-P score, the more breathable the mask is. For example, a mask with a Delta-P score of 1 to 2 is often described by the wearer as comfortable and very cool, while a mask with a Delta-P score of 5 or above may be considered hot and uncomfortable. Typically, a fluid resistant mask will have a mid-range score; whereas a respirator will have a differential pressure of 4, 5, or higher, and a mask with no fluid resistance will have a lower score on the comfort scale.
- ASTM F2101: Standard Test Method for Evaluating the Bacterial Filtration Efficiency (BFE) of Medical Face Mask Materials, Using a Biological Aerosol of Staphylococcus aureus.4 An important measure of a material’s ability to provide an adequate barrier to the transfer of microorganisms is its BFE (i.e., the ability of a material to prevent the passage of aerosolized bacteria). The ASTM F2101 is the standard test method used to measure the BFE of medical face mask materials as an item of protective clothing; although not required, it can also be performed on other barrier fabrics.
The ASTM F2101 test is used to determine the amount of infective agent that is retained by the surgical face mask, which is directly related to the amount of bacteria released through the mask into the air. It is important to note that this test method does not define acceptable levels of BFE, but establishes a basis for comparison of different medical face mask materials. Therefore, when looking at the results for this test method, it is necessary to understand the specific condition under which testing is conducted. The maximum BFE that can be determined by this method is 99.9%. A higher BFE percentage indicates a better protection level for the patient against infective agents from the healthcare staff and better protection for the wearer.
To put this in perspective, Table 2 highlights a hypothetical situation where barrier fabrics with BFE percentages ranging from 50 to 99 are challenged with 2,200 bacteria. Note in the case of fabric with the BFE of 50%, 1,100 or half of the bacteria penetrate. While, in the case of fabric with 99% BFE, only 22 bacteria penetrate the fabric.
- ASTM F2299 Standard Test Method for Determining the Initial Efficiency of Materials Used in Medical Face Masks to Penetration by Particulates Using Latex Sphere.5
This test method measures the submicron particulate filtration efficiency (PFE) and the effectiveness of a material to filter aerosolized particles. It measures the initial filtration efficiency of materials used in medical face masks by sampling representative volumes of the upstream and downstream latex aerosol concentrations in a controlled airflow chamber. Specific test techniques are provided for both manufacturers and users to evaluate materials when exposed to aerosol particle sizes between 0.1 and 5.0 μm. As with the BFE test, the results are expressed in the percentage that does not pass through the fabric at a given aerosol flow rate.
- 16 CFR (Code of Federal Regulations) Part 1610: Standard for the Flammability of Clothing Textiles. The material to be tested is held in a special apparatus at an angle of 45°. A standardized flame is applied to the surface near the lower end of the material for 1 second. When tested as described in section 1610.6, the material is classified as Class 1, Normal flammability, when the burn time is 3.5 seconds or more. 6,7 The fifth test evaluates the flammability of the mask material.
Not All Surgical Masks are Created Equal
The term surgical mask refers to an FDA-cleared laser, isolation, or medical procedure mask, with or without a face shield, worn to protect surgical patients and healthcare team members from transfer of microorganisms and body fluids. Surgical masks are intended to protect the wearer’s face from large droplets (>5 micrometer in size) & splashes of blood and other body fluids. Surgical and high-filtration surgical laser masks DO NOT provide the degree of protection to be considered respiratory personal protective equipment (PPE).8
There are differences in surgical mask performance levels, construction, and fit.
Levels of Performance for Surgical Masks
As summarized in Table 3 to achieve a Level 1 rating, the mask material must pass the fluid resistance test at 80 mm Hg, a differential pressure with less than 4 mm H2O, a BFE and PFE at 95% or greater, and a Class 1 flame spread. To achieve a Level 2 rating, the mask material must pass the fluid resistance test at 120 mm Hg, a differential pressure with less than 5 mm H2O , the BFE and PFE at 98% or greater, and a Class 1 flame spread. To achieve a Level 3 – THE HIGHEST rating – the only test result that has changed from the Level 2 requirements is the fluid resistance. A mask material that is rated at level 3 must pass the fluid resistance challenge at 160 mm Hg or higher.2
Studies have shown that the filtration efficiency for surgical masks is highly variable depending on the type of mask and manufacturer.8,9 Therefore, it is important for end users to check the mask and mask packaging for information on factors that include whether the mask is FDA cleared and whether it has achieved a performance level (i.e., level 1, level 2, or level 3) as specified by the ASTM F2100-11 Standard.
Construction and Fit of Surgical Masks
Masks vary in design features (e.g., tie-on, ear-loop, attached face shield) and style (e.g., duck bill, flat pleated, cone shaped, pouch). Material composition (i.e., type of fabric, metals, and elastic materials) as well as size and dimensions vary as well.1 As noted by OSHA, surgical masks are not designed or certified to prevent the inhalation of small airborne contaminants. Nor are they designed to seal tightly against the user’s face. During inhalation, much of the potentially contaminated air can pass through gaps between the face and the surgical mask and not be pulled through the filter material of the mask. Their ability to filter small particles varies significantly based upon the type of material used to make the surgical mask. Only surgical masks that are cleared by the U.S. food and Drug Administration to be legally marketed in the United States have been tested for their ability to resist blood and body fluids.9
The fit of a face mask is fundamental to overall performance and barrier protection. As examples, a face mask that is too small or too big will fail to provide the wearer with an adequate seal around the nose and mouth. It is important to note that even when surgical masks with the most efficient filter media (i.e., high-filtration surgical laser masks) are properly worn, the fit factor obtained is quite low.8,10 This highlights the importance of appropriate mask size and best facial fit to maximize the benefits of wearing a surgical mask. In addition, healthcare personnel should refer to their professional organization(s) for guidance. As an example, on their Clinical FAQ website, AORN has posted the following guidance on masks with ear loops. “Masks with ear loops may not … provide a secure facial fit that prevents venting at the sides of the mask. The surgical mask should cover the mouth and nose and be secured in a manner that prevents venting at the sides of the mask. A mask that conforms to the perioperative team member’s face decreases the risk of the perioperative team member transmitting nasopharyngeal and respiratory microorganisms to the patient or the sterile field.”11
Strategies for Appropriate Selection of Surgical Masks
Strategies for appropriate selection of surgical masks include using professional guidelines and resources. Recommendations related to surgical masks include guidance from the Health Care Infection Control Practices Advisory Committee (i.e., the federal advisory committee that provides advice and guidance to the Centers for Disease Control and Prevention)12 and the Occupational Safety and Health Administration (OSHA). The OSHA mandate states that when health care workers are performing activities that generate splashes, sprays, aerosols of blood or OPIM, they must wear appropriate facial protection to protect them from exposure. 13
As outlined below, additional guidance for selection and use of surgical masks can be obtained from the Association of periOperative Registered Nurses (AORN) guidelines and applying knowledge of the ASTM F2100-11 Standard Specification for Performance of Materials Used in Medical Face Masks.
Association of periOperative Registered Nurses (AORN)
This professional association provides and regularly updates several recommendations related to the performance characteristics and comfort of surgical PPE that can be found in the AORN guidelines as summarized below.
Guideline for Product Selection14
This guideline provides recommendations for evaluating and purchasing medical devices and other products used for patient care including the following.
A mechanism to select products should be developed; this includes establishing a multidisciplinary product evaluation and selection committee.
- The multidisciplinary committee should develop a process that guides product selection.
- Consistent requirements for each product being evaluated should be determined. Examples of product-specific requirements include:
- compatibility with existing methods of disposal and reprocessing
- procedure-related requirements (e.g., resistance to penetration by blood and other body fluids)
- end-user preferences and requirements (e.g., for a surgical gown: comfort, the amount of protection from blood and body fluids, size, free from toxic ingredients or allergens)
- patient-related requirements (e.g., presence of infectious diseases)
compliance with federal, state, local regulatory agencies, and with standard-setting bodies
- A product-specific evaluation tool should be developed with the use of unique, product-specific criteria, including:
- safety, performance, quality
- efficiency, ease of use
- effect on quality patient care and clinical outcomes
- evidence-based efficacy
- analysis of the financial impact
- sterilization/reprocessing parameters (including degree of difficulty)
- environmental impact
- quality of the manufacturer’s instructions
Guidelines for Prevention of Transmissible Infections 15,16
These guidelines provide guidance in implementing standard and transmission-based precautions (i.e., contact, droplet, and airborne precautions) to reduce the risk for infection. Further guidance is as follows.
- Masks provide protection for the wearer’s mucous membranes of the nose and mouth, which are susceptible to infections pathogens.
- Masks are also used as a component of standard and droplet precautions to prevent contact with respiratory secretions or blood and body fluid sprays.
- Surgical masks that are tested and evaluated for fluid resistance, bacterial filtration efficiency, differential pressure, as well as flammability, are appropriate components of PPE in the perioperative practice setting.
Guideline for Surgical Attire17,18,19
According to the AORN Guideline for Surgical Attire, all personnel who enter the restricted areas should wear a surgical mask when open sterile supplies and equipment are present. A surgical mask should be worn to protect both the healthcare team member as well as the patient from the transfer of microorganisms. The surgical mask protects health care workers from droplets that are larger than 5 micrometers in size. A single surgical mask is worn to protect the health care worker from contact with infectious material from the patient (eg, sprays or spatters of blood or body fluids, respiratory secretions) and to protect the patient from exposure to infectious microorganisms carried in the health care worker’s mouth or nose. A surgical mask protects the nose and mouth of personnel from inadvertent splashes or splatters of blood and other body fluids. Other best practices that improve barrier effectiveness and reduce the potential for contamination of surgical masks include the following.
The mask should completely cover the nose and mouth.
- The mask should be secured in a manner that prevents venting (i.e., securely tied in the back of the headand also behind the neck). Infectious agents can reach the wearer’s nose and mouth if leaks at the face-mask seal are present.
- A fresh, clean surgical mask should be worn for every procedure.
- A mask that becomes wet or soiled should be replaced. When a mask becomes wet, its filtering capacityis compromised; the microbial barrier efficacy of surgical masks with a bacterial filtration efficiency of95% decreases after four hours.
- Masks should not be worn dangling or hanging down from the neck.
- Masks should be discarded after each procedure.
- The mask should be removed carefully by handling only the mask ties.
- Hand hygiene should be performed after the mask is removed.
Guideline for Energy Devices20
In regards to protecting perioperative personnel from surgical smoke generated during surgical procedures, this AORN Guideline recommends respiratory protection (e.g., a fit-tested surgical N95 respirator) as a secondary protection against residual surgical smoke. It is important to note that general room ventilation and smoke evacuation (i.e., local exhaust ventilation) are the first lines of protection against surgical smoke inhalation, not high-filtration masks.
It should also be noted that both surgical and high-filtration surgical laser masks do not provide enough protection to be considered respiratory PPE.8 Therefore, the potential for exposure to airborne contaminants and infectious agents, including those in surgical smoke21, dictates the use of respiratory PPE, such as a surgical N95 particulate respirator. These respirators are designed to protect the user from both droplet and airborne particles and greatly decrease a large size range of particles from entering the wearer’s breathing zone. All health care workers who must use a respirator to control exposure to hazardous agents in the workplace must be trained on the proper use of the respirator and also pass a fit test before it can be used in the workplace.
ASTM F2100-11 Standard Specification for Performance of Materials Used in Medical Face Masks
Healthcare personnel should use their knowledge from ASTM F2100-11 when selecting masks, giving
consideration to whether the patient is under standard precautions or extended precautions, the type of
procedure to be performed, and the anticipated amount of splash or spray.
Table 4 outlines examples of considerations in mask selection based on barrier levels. Note, this table is intended as a tool to provide general guidelines as a starting point for decision-making and is not a substitute for professional judgment and experience.
During invasive procedures, both the patient and members of the health care team are at risk for exposure to infectious agents through pathogen penetration of surgical masks. Therefore, selection of surgical masks with appropriate barrier protection for the task at hand is essential. This selection process is challenging as surgical masks differ in barrier performance levels, construction and fit. Organizations that have published standards and guidelines pertaining to the performance, classification, selection, and use of surgical masks include the FDA, ASTM International, and AORN. Health carepersonnel should be knowledgeable of information provided by these organizations and use this knowledge to enable them to make appropriate surgical mask selections.
Food and Drug Administration. Guidance for industry and FDA staff: surgical masks – premarket notification [510(k)] submissions; guidance for industry and FDA. Document issued on March 5, 2004 and a correction posted on July 14, 2004. . Accessed 6.14.17.
ASTM International. ASTM F2100 – 11. Standard Specification for Performance of Materials Used in Medical Face Masks West Conshohocken, PA; ASTM International; 2011.
ASTM International. ASTM F1862M-13. Standard Test Method for Resistance of Medical Face Masks to Penetration by Synthetic Blood (Horizontal Projection of Fixed Volume at a Known Velocity). West Conshohocken, PA; ASTM International; 2013.
ASTM International. ASTM F2101-07. Standard Test Method for Evaluating the Bacterial Filtration
Efficiency (BFE) of Medical Face Mask Materials, Using a Biological Aerosol of Staphylococcus aureus.
West Conshohocken, PA; ASTM International; 2007.
ASTM F2299 / F2299M-03(2010), Standard Test Method for Determining the Initial Efficiency of Materials Used in Medical Face Masks to Penetration by Particulates Using Latex Spheres, ASTM International, West Conshohocken, PA, 2010.
Federal Register Consumer Product Safety Commission. CFR Part 1610: Standard for the flammability of clothing textiles; proposed rule. Part II Consumer Product Safety Commission. Tuesday, February 27, 2007. https://www.cpsc.gov/PageFiles/95108/clothingflammstd.pdf. Accessed June 14, 2017.
Federal Register Consumer Product Safety Commission. CFR Part 1610.6. Requirements for classifying textiles. In 16 CFR Part 1610 Standard for the flammability of clothing textiles; proposed rule. https://www.law.cornell.edu/cfr/text/16/1610.6. Accessed June 14, 2017.
Benson SM, Novak DA, Ogg MJ. Proper use of surgical N95 respirators and surgical masks in the OR. AORN J. 2013;97(4):458-467.
OSHA. (2009). OSHA FactSheet – Respiratory Infection Control: Respirators Versus Surgical Masks. Accessed June 14, 2017.
Oberg, T. and L. M. Brosseau (2008). “Surgical mask filter and fit performance.” Am J Infect Control 36(4): 276-282.
Association of periOperative Registered Nurses. November 6, 2014. Are Masks with Ear Loops Acceptable in the periOperative Environment? Clinical FAQs: Surgical Attire. Accessed June 17, 2017.
Siegel, J. D., E. Rhinehart, et al. 2007. Guideline for Isolation Precautions: Preventing Transmission of Infectious Agents in Health Care Settings. Am J Infect Control 35(10 Suppl 2): S65-164.
OSHA. Occupational Safety and Health Standards, Toxic and Hazardous Substances, Bloodborne Pathogens, United States Department of Labor. 1910.1030. https://www.osha.gov/pls/oshaweb/owadisp.show_document?p_table=standards&p_id=10051. Accessed June 3, 2017.
Association of periOperative Registered Nurses. 2017. Guideline for Product Selection. AORN Guidelines for Perioperative Practice, AORN, Inc.: 183-190.
Association of periOperative Registered Nurses. 2017. Ambulatory Supplement: Guideline for Prevention of Transmissible Infections. AORN Guidelines for Perioperative Practice, AORN, Inc.: 540-542.
Association of periOperative Registered Nurses. 2017. Guideline for Prevention of Transmissible Infections. AORN Guidelines for Perioperative Practice, AORN, Inc.: 507-539.
Association of periOperative Registered Nurses. 2017. Guideline for Surgical Attire. AORN Guidelines for Perioperative Practice. AORN, Inc.; 105-127.
Braswell, M. L. and L. Spruce (2012). “Implementing AORN recommended practices for surgical attire.” AORN J 95(1): 122-137; quiz 138-140.
Cowperthwaite, L. and R. L. Holm (2015). “Guideline implementation: Surgical attire.” AORN J 101(2): 188-194; quiz 195-187.
Association of periOperative Registered Nurses. 2017. Guideline for Safe Use of Energy-Generating Devices. AORN Guidelines for Perioperative Practice, AORN, Inc.: 129-156.
Gao, S., R. H. Koehler, et al. (2016). “Performance of Facepiece Respirators and Surgical Masks against Surgical Smoke: Simulated Workplace Protection Factor Study.” Ann Occup Hyg 60(5): 608-618. | <urn:uuid:f62d9e36-6504-42f4-871e-96d173088961> | CC-MAIN-2022-33 | https://www.halyardhealth.co.uk/articles/face-mask/is-your-surgical-mask-protecting-you/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570741.21/warc/CC-MAIN-20220808001418-20220808031418-00097.warc.gz | en | 0.907765 | 5,255 | 2.65625 | 3 |
Grapes: (Vitis vinifera, Vitaceae)
Range and Habitat
Vitis vinifera means “the vine that bears wine” and belongs to the Vitaceae family. Grapes are perennial climbers with coiled tendrils and large leaves. The vines bear clusters of flowers that mature into small, round, juicy berries that can be either green (“white”) or red.1 There are seeded and seedless varieties; where present, the seeds are edible and packed with nutrition. The juice, pulp, skin, and seed of the grape can be used for various preparations.2
Grapes grow on vines that must be trained along a fence, wall, or arbor.3 The fruit does not continue to ripen after harvesting; therefore, it is important to pick well-coloured, plump berries that are wrinkle-free and still firmly attached to the vine. Grapes are best stored in the refrigerator, since freezing will decrease their flavor.1,4 Pesticide use is common in vineyards, so careful washing is recommended for conventionally grown grapes.
One of the leading commercial fruit crops in terms of tonnage produced, grapes are cultivated in temperate regions all over the world. The top producers are Italy, France, Spain, the United States, Mexico, and Chile.1,5 Worldwide grape production averages about 60 million metric tons annually, 5.2 million of which are grown in the United States.
Phytochemicals and Constituents
Antioxidants are enzymes and nutrients that prevent oxidation: they neutralize the highly reactive ions or molecules known as free radicals, either by donating electrons or by modulating the enzymes that metabolize free radicals. Free radicals are produced naturally through metabolism as part of normal physiological functions (e.g., as a defence mechanism against pathogens), but when produced in excess they can adversely alter lipids, proteins, and DNA and trigger a number of human diseases. Grapes and grape products are good sources of beneficial antioxidant compounds.
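As a simplified, generic illustration of the electron- and hydrogen-donating step described above (a textbook scheme for phenolic antioxidants in general, not a reaction specific to any single grape compound), a phenolic antioxidant (ArOH) can quench a reactive peroxyl radical (ROO•) by donating a hydrogen atom:

ArOH + ROO• → ArO• + ROOH

The resulting phenoxyl radical (ArO•) is stabilized by the aromatic ring and is far less reactive than the original peroxyl radical, which is one reason polyphenol-rich foods such as grapes are regarded as good radical scavengers.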
Grapes contain phytochemicals called polyphenols. Polyphenols are the most abundant source of dietary antioxidants and are associated with numerous health benefits.2,6 The phenolic compounds are concentrated mainly in the skin of the berry rather than in the flesh, and their content tends to increase as the fruit ripens. Grapes contain polyphenols from the classes of flavonoids, stilbenes, and phenolic acids: red grapes and red wine are rich in flavonoids such as anthocyanins and catechins, stilbenes such as resveratrol, and phenolic acids such as caffeic acid and coumaric acid. Red grapes have higher concentrations of these phenolic compounds than white grapes. Different grape varieties contain varying amounts of phenolic compounds.
Anthocyanins are flavonoids that occur naturally in the plant kingdom and give many plants their red, purple, or blue pigmentation. Vitis vinifera may contain up to 17 anthocyanin pigments, which are found in the skins.2,7 Grapes also contain other flavonoids, including catechins, epicatechins, and proanthocyanidins. Attempts to study the benefits of individual phytochemicals in humans have proven difficult, since these compounds are complex and often interact with one another to increase their overall benefits.
Much phytochemical research relies on animal models.8 Animal studies have shown that anthocyanins protect against oxidative stress, which is an early stage in the development of many chronic conditions, such as cardiovascular disease, diabetes, and cancer.
Grape seeds are a particularly rich source of proanthocyanidins, a class of nutrients belonging to the flavonoid family. Proanthocyanidins, also known as condensed tannins, are polymers (naturally occurring large molecules) built from flavan-3-ol monomers. The term oligomeric proanthocyanidin (OPC), which is also commonly used to describe these compounds, is not well defined and remains a subject of debate within the scientific community.
Grape seed extract is available as a nutritional supplement. Partially purified proanthocyanidins have been used in phytomedicinal preparations in Europe for their purported activity in decreasing the fragility and permeability of the blood vessels outside the heart and brain.9
Grapes have a high stilbene, specifically resveratrol, content. Resveratrol, which is found in the skin and seeds of red grapes as well as in red wine, is produced as the plant’s defence mechanism against environmental stressors.1,2,10,11 Resveratrol first gained attention as a possible explanation for the “French Paradox” — the observation that French people tend to have a low incidence of heart disease despite having a typically high-fat diet.1 The antioxidant activity of grapes is strongly correlated with the amount of resveratrol found in the grape.10 Studies have found resveratrol to be anti-carcinogenic, anti-inflammatory, and cardio-protective in animal models.11 However, in a human study in which healthy adults consumed resveratrol, it was determined that the compound was readily absorbed, but it metabolized quickly, leaving only trace amounts.12
In addition to their high resveratrol content, grapes are also an excellent source of vitamin K and provide moderate amounts of potassium, vitamin C, and B vitamins.
Historical and Commercial Uses
Grapes have been consumed since prehistoric times and were one of the earliest domesticated fruit crops.1,13 According to ancient Mediterranean culture, the “vine sprang from the blood of humans who had fought against the gods.”14 But according to archaeological evidence, domestication took place about 5,000 years ago somewhere between the Caspian and Black Seas, and spread south to modern-day Syria, Iraq, Jordan, and Egypt before moving towards Europe.5,13 After the collapse of the Roman Empire in the 5th century, when Christianity became dominant, wine was associated with the Church and the monasteries soon perfected the process of making wine.1
About 300 years ago, Spanish explorers introduced the grape to what is now the United States, and California’s temperate climate proved to be an ideal place for grape cultivation.1
The grape is, famously, the most common ingredient in wine-making. A naturally-occurring symbiotic yeast grows on the grapes, making them easier to ferment and well-suited to the wine-making process.4 Popular wine cultivars of V. vinifera include Cabernet Sauvignon, Merlot, Chardonnay, Sauvignon Blanc, Vermentino, and Viognier.10
Wine often has been used as a medium for herbal remedies, due to the solvent nature of the alcohol. Both the Chinese and Western traditions made use of medicated wines (though ancient recipes in China, which date to the Shang Dynasty [ca 1600-1046 BCE], would have been made with rice [Oryza sativa, Poaceae] wine rather than grape wine).15 Many aperitifs and liqueurs originally were digestive aids made with wine and fortified with herbs such as wormwood (Artemisia absinthium, Asteraceae) and anise seed (Pimpinella anisum, Apiaceae).16 Medicated wines are less potent and usually require a higher dosage than tinctures made with higher-proof alcohol.
Grapes are generally sweet and are used as table grapes, juice, jam, jelly, or for wine-making.13,17 About 99% of the world’s wine comes from V. vinifera.14 Grapes can also be dried in the vineyard and turned into raisins. To accomplish this, ripe grapes are plucked from the vine and placed on paper trays for two to four weeks. Afterwards, they are sent to the processing plant to be cleaned, packaged, and shipped.5
Grapes have been the subject of numerous studies focused on many of their bioactive compounds, including flavonoids, stilbenes, and phenolic acids. Researchers have observed antioxidant, anti-tumor, immune modulatory, anti-diabetic, anti-atherogenic, anti-infectious, and neuro-protective properties of the fruit.11 Research suggests that grape product consumption could possibly benefit those with cancer, diabetes, and cardiovascular disease, which are among the leading causes of death worldwide.18 However, more human studies are needed to support any of these purported benefits.
An in vitro study showed that antioxidants from a variety of grape product extracts performed as well as or better than BHT, tocopherol, and trolox in radical scavenging activity, metal chelating activity, and inhibition of lipid peroxidation.7 Water and ethanol seed extracts had the highest amount of phenolic compounds of any of the extracts used in the study.
Grape seed extract (GSE), which has a growing body of study behind it, has gained attention for its possible use in lowering blood pressure and reducing the risk of heart disease, especially in pre-hypertensive populations.19 Unlike grape skins, where only red grapes contain anthocyanins, seeds from both white and red grape contain beneficial compounds. A standardized GSE made from white wine grapes recently was studied for its effects on gastrointestinal inflammation.20 While most studies focus on GSE and cardiovascular health, the preliminary results were promising enough to warrant a future human trial.
Polyphenols have been found to protect the body from inflammation, which is common in people with heart disease.11 In a recent meta-analysis, the acute effects of polyphenols on the endothelium (inner lining of the blood vessels) were investigated. The analysis found that blood vessel function significantly improved in healthy adults in the initial two hours after consuming grape polyphenols.21 Another analysis found that the polyphenol content in every part of the grape — fruit, skin, and seed — had cardioprotective effects.22 In animal, in vitro, and limited human trials, grapes showed beneficial actions against oxidative stress, atherosclerosis (plaque build-up in arteries), high blood pressure, and ventricular arrhythmia (irregular heartbeat).
Cancer Chemopreventive Effects
Although the causes of and treatments for cancer are complex and multifaceted, studies have been done on the antioxidant activity of polyphenols and their cancer chemopreventive effects. These antioxidants demonstrate the ability to protect the body from cancer-causing substances and to prevent tumour cell growth by protecting DNA and regulating natural cell death.8,11,23
In a randomized, double-blind controlled clinical study, healthy overweight/obese first degree relatives to type 2 diabetic patients were given grape polyphenols to counteract a high-fructose diet. After nine weeks of supplementation, grape polyphenols protected against fructose-induced insulin resistance.24 In another study, diabetic patients who consumed a dealcoholized Muscadine grape wine had reduced fasting insulin levels and increased insulin resistance.25
Macronutrient Profile: (Per 150 g [approx. 1 cup] grapes)
1.1 g protein
27.3 g carbohydrate
0.2 g fat
Secondary Metabolites: (Per 150 g [approx. 1 cup] grapes)
Excellent source of:
Vitamin K: 22 mcg (27.5% DV)
Good source of:
Potassium: 288 mg (8.2% DV)
Vitamin C: 4.8 mg (8% DV)
Thiamin: 0.1 mg (6.7% DV)
Riboflavin: 0.1 mg (5.9% DV)
Dietary Fiber: 1.4 g (5.6% DV)
Manganese: 0.1 mg (5.5% DV)
Vitamin B6: 0.1 mg (5% DV)
Phosphorus: 30 mg (3% DV)
Magnesium: 11 mg (2.8% DV)
Iron: 0.5 mg (2.8% DV)
Vitamin A: 100 IU (2% DV)
Niacin: 0.3 mg (1.5% DV)
Vitamin E: 0.3 mg (1.5% DV)
DV = Daily Value as established by the US Food and Drug Administration, based on a 2,000-calorie diet.
HerbalEGram: Volume 12, Issue 12, December 2015
By Hannah Baumana and Mindy Greenb
a HerbalGram Assistant Editor
b ABC Dietetics Intern (UT, 2014)
1. 1. Murray MT, Pizzorno J, Pizzorno L. The Encyclopedia of Healing Foods. New York, NY: Atria Books; 2005.
2. 22. Yang J, Xiao YY. Grape phytochemcials and associated health benefits. Critical Reviews in Food Science and Nutrition. 2013;53:1202-1225.
3. 3. Damrosch B. Grapes. In: The Garden Primer: Second Edition. New York, NY: Workman Publishing; 2008:353-359.
4. 4. Wood R. The New Whole Foods Encyclopedia: A Comprehensive Resource for Healthy Eating. New York, NY: Penguin Books; 1999.
5. 5. Ensminger AH, Ensminger ME, Konlande JE. The Concise Encyclopedia of Foods & Nutrition. Boca Raton, FL: CRC Press; 1995.
6. 6. Tiwari B, Brunton NP, Brennan CS. Handbook of Plant Food Phytochemicals. London, UK: John Wiley & Sons, Ltd; 2013.
7. 7. Keser S, Celik S, and Turkoglu S. Total phenolic contents and free-radical scavenging activities of grape (Vitis vinifera L.) and grape products. International Journal of Food Sciences and Nutrition. 2013;64(2):210-216.
8. 8. Lila MA. Anthocyanins and human health: An in vitro investigative approach. J Biomed Biotechnol. 2004;2004(5):306-313.
9. 9. Yamakoshi J, Saito M, Kataoka S, Kikuchi M. Safety evaluation of proanthocyanidin-rich extract from grape seeds. Food and Chemical Toxicology. 2002;40:599-607.
10. 10. Burin VM, Ferreira-Lima NE, Panceri CP, Bordignon-Luiz MT. Bioactive compounds and antioxidant activity of Vitis vinifera and Vitis labrusca grapes: Evaluation of different extraction methods. Microchemical Journal. 2014;114:155-163.
11. 11. Yadav M, Jain S, Bhardwaj A, et al. Biological and medicinal properties of grapes and their bioactive constituents: An update. Journal of Medinical Food. 2009;12(3):473-484.
12. 12. Walle T, Hsieh F, DeLegge MH, Oatis JE, Walle K. High absorption but very low bioavailability of oral resveratrol in humans. Drug Metabolism and Disposition. 2004;32(12):1377-1382.
13. 13. Myles S, Boyko AR, Owens CL, et al. Genetic structure and domestication history of the grape. Proceedings of the National Academy of Science of the United States of America. 2011;108(9):3530-3535.
14. 14. McGovern PE. Ancient Wine: The Search for the Origins of Viniculture. Princeton, NJ: Princeton University Press; 2007.
15. 15. Chan K, Cheung L. Interactions Between Chinese Herbal Medicinal Products and Orthodox Drugs. Boca Raton, FL: CRC Press; 2000.
16. 16. Hoffmann D. The Herbal Handbook: A User’s Guide to Medical Herbalism. Rochester, VT: Inner Traditions; 1998.
17. 17. Onstad D. Whole Foods Companion: A Guide for Adventurous Cooks, Curious Shoppers, and Lovers of Natural Foods. White River Junction, VT: Chelsea Green Publishing; 2004.
18. 18. The top 10 causes of death – fact sheet no. 310. World Health Organization website. May 2014. Available here. Accessed November 23, 2015.
19. 19. Park E, Edirisinghe I, Choy YY, Waterhouse A, Burton-Freeman B. Effects of grape seed extract beverage on blood pressure and metabolic indices in individuals with pre-hypertension: a randomised, double-blinded, two-arm, parallel, placebo-controlled trial. Br J Nutr. 2015;16:1-13.
20.20. Starling S. White wine extract shows gastro benefits in vitro. Clinicals planned for 2016. NutraIngredients-USA website. November 12, 2015. Available here. Accessed November 19, 2015.
21.21. Li SH, Tian HB, Zhao HJ, Chen LH, Cui LQ. The acute effects of grape polyphenols supplementation on endothelial function in adults. PLOS ONE. 2013;8(7):e69818.
22. 22. Leifert WR, Abeywardena MY. Cardioprotective actions of grape polyphenols. Nutrition Research. 2008;28:729-737.
23. 23. Waffo-Téguo P, Hawthorne ME, Cuendet M, et al. Potential cancer-chemopreventive activities of wine stilbenoids and flavans extracted from grape (Vitis vinifera) cell cultures. Nutrition and Cancer. 2001;40(2):173-179.
24. 24. Hokayem M. Grape polyphenols prevent fructose-induced oxidative stress and insulin resistance in first-degree relatives of type 2 diabetic patients. Diabetes Care. 2013;36:1455-1461.
25. 25. Banini AE, Boyd LG, Allen JG, Allen HG, Sauls DL. Muscadine grape products intake, diet and blood constituents of non-diabetic and type 2 diabetic subjects. Nutrition. 2006;22:1137-45.
26. 26. Basic Report: 09132, Grapes, red or green (European type, such as Thompson seedless), raw. United States Department of Agriculture, Agricultural Research Service. Available here. Accessed November 19, 2015.
Anger is a normal emotion that everyone feels from time to time.VIEW MORE
Excessive facial hair is a touchy subject with many women; those who suffer from this condition have a low self-esteemVIEW MORE
Maca (Lepidum meyenii, Brassicaceae), a root vegetable grown in the Andean region of Peru, is widely used for its nutritional and therapeutic properties. Maca is said to improve male and female reproductive activity in diverse ways, from increasing arousal and reducing symptoms of menopause to boosting sperm quality,VIEW MORE | <urn:uuid:365821ae-1a0b-4953-ab62-71b62833041b> | CC-MAIN-2022-33 | https://www.naturalremediesandcures.co.uk/319,research,grapes.html | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571234.82/warc/CC-MAIN-20220811042804-20220811072804-00095.warc.gz | en | 0.908571 | 4,175 | 3.390625 | 3 |
The rate law or rate equation for a chemical reaction is an equation that links the initial or forward reaction rate with the concentrations or pressures of the reactants and constant parameters (normally rate coefficients and partial reaction orders). For many reactions, the initial rate is given by a power law such as v0 = k[A]^x [B]^y,
where [A] and [B] express the concentration of the species A and B, usually in moles per liter (molarity, M). The exponents x and y are the partial orders of reaction for A and B and the overall reaction order is the sum of the exponents. These are often positive integers, but they may also be zero, fractional, or negative. The constant k is the reaction rate constant or rate coefficient of the reaction. Its value may depend on conditions such as temperature, ionic strength, surface area of an adsorbent, or light irradiation. If the reaction goes to completion, the rate equation for the reaction rate applies throughout the course of the reaction.
The rate equation of a reaction with an assumed multi-step mechanism can often be derived theoretically using quasi-steady state assumptions from the underlying elementary reactions, and compared with the experimental rate equation as a test of the assumed mechanism. The equation may involve a fractional order, and may depend on the concentration of an intermediate species.
A reaction can also have an undefined reaction order with respect to a reactant if the rate is not simply proportional to some power of the concentration of that reactant; for example, one cannot talk about reaction order in the rate equation for a bimolecular reaction between adsorbed molecules, which follows a Langmuir-Hinshelwood form such as v0 = k K1 K2 [A][B] / (1 + K1[A] + K2[B])^2.
Consider, for example, the reaction A + 2 B → 3 C. The prefactors −1, −2 and 3 (with negative signs for reactants because they are consumed) are known as stoichiometric coefficients. One molecule of A combines with two of B to form 3 of C, so if we use the symbol [X] for the number of moles of chemical X, the rate of reaction can be defined as v = −d[A]/dt = −(1/2) d[B]/dt = (1/3) d[C]/dt, or in general v = (1/νi) d[Xi]/dt,
where νi is the stoichiometric coefficient for chemical Xi, with a negative sign for a reactant.
The initial reaction rate v0 has some functional dependence on the concentrations of the reactants, v0 = f([A], [B], …),
and this dependence is known as the rate equation or rate law. This law generally cannot be deduced from the chemical equation and must be determined by experiment.
A common form for the rate equation is a power law: v0 = k[A]^x [B]^y …
The constant k is called the rate constant. The exponents, which can be fractional, are called partial orders of reaction and their sum is the overall order of reaction.
In a dilute solution, an elementary reaction (one having a single step with a single transition state) is empirically found to obey the law of mass action. This predicts that the rate depends only on the concentrations of the reactants, raised to the powers of their stoichiometric coefficients.
This can be used to estimate the order of reaction of each reactant. For example, the initial rate can be measured in a series of experiments at different initial concentrations of reactant A with all other concentrations [B], [C], … kept constant, so that v0 = k′[A]^x, where the effective constant k′ = k[B]^y[C]^z… absorbs the concentrations held constant.
The slope of a graph of log v0 as a function of log [A] then corresponds to the order x with respect to reactant A.
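As a worked illustration of this log-log procedure (the numbers are invented for the example and do not come from the article): if doubling [A]0, with all other initial concentrations fixed, is observed to quadruple the initial rate, then
$$x = \frac{\log(v_{0,2}/v_{0,1})}{\log([\mathrm{A}]_{0,2}/[\mathrm{A}]_{0,1})} = \frac{\log 4}{\log 2} = 2,$$
that is, the reaction is second order in A.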
However, this method is not always reliable because
measurement of the initial rate requires accurate determination of small changes in concentration in short times (compared to the reaction half-life) and is sensitive to errors, and
the rate equation will not be completely determined if the rate also depends on substances not present at the beginning of the reaction, such as intermediates or products.
The tentative rate equation determined by the method of initial rates is therefore normally verified by comparing the concentrations measured over a longer time (several half-lives) with the integrated form of the rate equation; this assumes that the reaction goes to completion.
For example, the integrated rate law for a first-order reaction is ln[A] = ln[A]0 − kt,
where [A] is the concentration at time t and [A]0 is the initial concentration at zero time. The first-order rate law is confirmed if ln[A] is in fact a linear function of time. In this case the rate constant is equal to the slope with sign reversed.
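A brief numerical sketch of this check (values invented for illustration only): if the plot of ln[A] against t is linear with slope −0.0231 min−1, then
$$k = 0.0231\ \mathrm{min^{-1}}, \qquad t_{1/2} = \frac{\ln 2}{k} \approx 30\ \mathrm{min}.$$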
Method of flooding
The partial order with respect to a given reactant can be evaluated by the method of flooding (or of isolation) of Ostwald. In this method, the concentration of one reactant is measured with all other reactants in large excess so that their concentration remains essentially constant. For a reaction a·A + b·B → c·C with rate law v0 = k[A]^x [B]^y, the partial order x with respect to A is determined using a large excess of B. In this case v0 = k′[A]^x with k′ = k[B]0^y,
and x may be determined by the integral method. The order y with respect to B under the same conditions (with B in excess) is determined by a series of similar experiments with a range of initial concentration [B]0 so that the variation of k' can be measured.
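A short sketch of how y is then extracted (standard treatment, using the notation above): because the excess concentration is absorbed into the pseudo constant,
$$k' = k[\mathrm{B}]_0^{\,y} \;\;\Rightarrow\;\; \log k' = \log k + y \log [\mathrm{B}]_0,$$
so y is the slope of a plot of log k′ against log [B]0.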
For zero-order reactions, the reaction rate is independent of the concentration of a reactant, so that changing its concentration has no effect on the rate of the reaction. Thus, the concentration changes linearly with time. This may occur when there is a bottleneck which limits the number of reactant molecules that can react at the same time, for example if the reaction requires contact with an enzyme or a catalytic surface.
Many enzyme-catalyzed reactions are zero order, provided that the reactant concentration is much greater than the enzyme concentration which controls the rate, so that the enzyme is saturated. For example, the biological oxidation of ethanol to acetaldehyde by the enzyme liver alcohol dehydrogenase (LADH) is zero order in ethanol.
Similarly reactions with heterogeneous catalysis can be zero order if the catalytic surface is saturated. For example, the decomposition of phosphine (PH3) on a hot tungsten surface at high pressure is zero order in phosphine, which decomposes at a constant rate.
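For reference, a sketch of the standard zero-order expressions (valid only while reactant remains, since [A] cannot fall below zero):
$$-\frac{d[\mathrm{A}]}{dt} = k, \qquad [\mathrm{A}] = [\mathrm{A}]_0 - kt, \qquad t_{1/2} = \frac{[\mathrm{A}]_0}{2k}.$$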
A first order reaction depends on the concentration of only one reactant (a unimolecular reaction). Other reactants can be present, but their concentration has no effect on the rate. The rate law for a first order reaction is v0 = k[A], or equivalently −d[A]/dt = k[A].
Although not affecting the above math, the majority of first order reactions proceed via intermolecular collisions. Such collisions, which contribute the energy to the reactant, are necessarily second order. The rate of these collisions is, however, masked by the fact that the rate determining step remains the unimolecular breakdown of the energized reactant.
The half-life is independent of the starting concentration and is given by t1/2 = ln(2)/k.
In organic chemistry, the class of SN1 (nucleophilic substitution unimolecular) reactions consists of first-order reactions. For example, in the reaction of aryldiazonium ions with nucleophiles in aqueous solution, ArN2+ + X− → ArX + N2, the rate equation is v0 = k[ArN2+], where Ar indicates an aryl group.
A reaction is said to be second order when the overall order is two. The rate of a second-order reaction may be proportional to one concentration squared, v0 = k[A]^2, or (more commonly) to the product of two concentrations, v0 = k[A][B]. As an example of the first type, the reaction NO2 + CO → NO + CO2 is second-order in the reactant NO2 and zero order in the reactant CO. The observed rate is given by v0 = k[NO2]^2, and is independent of the concentration of CO.
For the rate proportional to a single concentration squared, the time dependence of the concentration is given by 1/[A] = 1/[A]0 + kt.
The time dependence for a rate proportional to two unequal concentrations (for A + B → products with 1:1 stoichiometry) is ln([B]/[B]0) − ln([A]/[A]0) = ([B]0 − [A]0) kt;
if the concentrations are equal, they satisfy the previous equation.
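A brief addition for comparison with the first-order case (standard result for the single-reactant second-order law): the half-life now depends on the initial concentration, and each successive half-life is twice as long as the previous one,
$$t_{1/2} = \frac{1}{k[\mathrm{A}]_0}.$$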
An example of the second type is the alkaline hydrolysis of ethyl acetate, CH3COOC2H5 + OH− → CH3COO− + C2H5OH. This reaction is first-order in each reactant and second-order overall: v0 = k[CH3COOC2H5][OH−].
If the same hydrolysis reaction is catalyzed by imidazole, the rate equation becomes v0 = k[imidazole][CH3COOC2H5]. The rate is first-order in one reactant (ethyl acetate), and also first-order in imidazole, which as a catalyst does not appear in the overall chemical equation.
If the concentration of a reactant remains constant (because it is a catalyst, or because it is in great excess with respect to the other reactants), its concentration can be included in the rate constant, leading to a pseudo–first-order (or occasionally pseudo–second-order) rate equation. For a typical second-order reaction with rate equation v0 = k[A][B], if the concentration of reactant B is constant then v0 = k[A][B] = k′[A], where the pseudo–first-order rate constant k′ = k[B]. The second-order rate equation has been reduced to a pseudo–first-order rate equation, which makes the treatment to obtain an integrated rate equation much easier.
One way to obtain a pseudo-first order reaction is to use a large excess of one reactant (say, [B]≫[A]) so that, as the reaction progresses, only a small fraction of the reactant in excess (B) is consumed, and its concentration can be considered to stay constant. For example, the hydrolysis of esters by dilute mineral acids follows pseudo-first order kinetics, where the concentration of water is constant because it is present in large excess:
CH3COOCH3 + H2O → CH3COOH + CH3OH
The hydrolysis of sucrose (C12H22O11) in acid solution is often cited as a first-order reaction with rate v0 = k[C12H22O11]. The true rate equation is third-order, v0 = k[C12H22O11][H+][H2O]; however, the concentrations of both the catalyst H+ and the solvent H2O are normally constant, so that the reaction is pseudo–first-order.
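A minimal sketch of how the observed pseudo-first-order constant arises here, with [H+] and [H2O] treated as constant as stated above:
$$v_0 = k[\mathrm{C_{12}H_{22}O_{11}}][\mathrm{H^+}][\mathrm{H_2O}] = k_{\mathrm{obs}}[\mathrm{C_{12}H_{22}O_{11}}], \qquad k_{\mathrm{obs}} = k[\mathrm{H^+}][\mathrm{H_2O}].$$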
Summary for reaction orders 0, 1, 2, and n
Elementary reaction steps with order 3 (called ternary reactions) are rare and unlikely to occur. However, overall reactions composed of several elementary steps can, of course, be of any (including non-integer) order.
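The summary table referred to in the next line did not survive extraction; the following is a standard textbook reconstruction for a single reactant A (the n-th order row assumes n ≠ 1):
Zero order: rate = k; [A] = [A]0 − kt; units of k: M·s−1; t1/2 = [A]0/(2k)
First order: rate = k[A]; ln[A] = ln[A]0 − kt; units of k: s−1; t1/2 = ln(2)/k
Second order: rate = k[A]^2; 1/[A] = 1/[A]0 + kt; units of k: M−1·s−1; t1/2 = 1/(k[A]0)
n-th order: rate = k[A]^n; 1/[A]^(n−1) = 1/[A]0^(n−1) + (n−1)kt; units of k: M^(1−n)·s−1; t1/2 = (2^(n−1) − 1)/[(n−1)k[A]0^(n−1)]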
Here M stands for concentration in molarity (mol · L−1), t for time, and k for the reaction rate constant. The half-life of a first order reaction is often expressed as t1/2 = 0.693/k (as ln(2)≈0.693).
The order of a chain reaction can be rationalized using the steady state approximation for the concentration of reactive intermediates such as free radicals. For the pyrolysis of acetaldehyde, the Rice-Herzfeld mechanism is
CH3CHO → •CH3 + •CHO
•CH3 + CH3CHO → CH3CO• + CH4
CH3CO• → •CH3 + CO
2 •CH3 → C2H6
where • denotes a free radical. To simplify the theory, the reactions of the •CHO to form a second •CH3 are ignored.
In the steady state, the rates of formation and destruction of methyl radicals are equal, so that k1[CH3CHO] = 2 k4[•CH3]^2 (combining the steady-state conditions for •CH3 and CH3CO•),
so that the concentration of methyl radical satisfies [•CH3] = (k1/2k4)^1/2 [CH3CHO]^1/2.
The reaction rate equals the rate of the propagation steps which form the main reaction products CH4 and CO: v0 = k2[•CH3][CH3CHO] = k2 (k1/2k4)^1/2 [CH3CHO]^3/2,
in agreement with the experimental order of 3/2.
More complex rate laws have been described as being mixed order if they approximate to the laws for more than one order at different concentrations of the chemical species involved. For example, a rate law of the form v0 = k1[A] + k2[A]^2 represents concurrent first order and second order reactions (or, more often, concurrent pseudo-first order and second order reactions), and can be described as mixed first and second order. For sufficiently large values of [A] such a reaction will approximate second order kinetics, but for smaller [A] the kinetics will approximate first order (or pseudo-first order). As the reaction progresses, the reaction can change from second order to first order as reactant is consumed.
Another type of mixed-order rate law has a denominator of two or more terms, often because the identity of the rate-determining step depends on the values of the concentrations. An example is the oxidation of an alcohol to a ketone by hexacyanoferrate (III) ion [Fe(CN)63−] with ruthenate (VI) ion (RuO42−) as catalyst. For this reaction, the rate law for the disappearance of hexacyanoferrate (III) has a denominator with two concentration-dependent terms.
This is zero-order with respect to hexacyanoferrate (III) at the onset of the reaction (when its concentration is high and the ruthenium catalyst is quickly regenerated), but changes to first-order when its concentration decreases and the regeneration of catalyst becomes rate-determining.
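A schematic of how a two-term denominator produces this change of order (the constants a and b here are hypothetical lumped parameters introduced only for illustration, not the published rate constants): for a rate law of the form
$$v_0 = \frac{a\,[\mathrm{Fe(CN)_6^{3-}}]}{1 + b\,[\mathrm{Fe(CN)_6^{3-}}]},$$
the rate tends to the constant value a/b when [Fe(CN)63−] is large (zero order in the oxidant) and to a[Fe(CN)63−] when it is small (first order).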
Notable mechanisms with mixed-order rate laws with two-term denominators include:
Michaelis-Menten kinetics for enzyme-catalysis: first-order in substrate (second-order overall) at low substrate concentrations, zero order in substrate (first-order overall) at higher substrate concentrations (see the sketch after this list); and
the Lindemann mechanism for unimolecular reactions: second-order at low pressures, first-order at high pressures.
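For reference, a sketch of the Michaelis-Menten rate law mentioned in the first item above, written in the usual Vmax and KM notation (symbols not defined in the original text), showing the two limiting orders:
$$v_0 = \frac{V_{\max}[\mathrm{S}]}{K_M + [\mathrm{S}]}, \qquad v_0 \approx \frac{V_{\max}}{K_M}[\mathrm{S}] \ \ \text{for } [\mathrm{S}] \ll K_M, \qquad v_0 \approx V_{\max} \ \ \text{for } [\mathrm{S}] \gg K_M.$$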
A reaction rate can have a negative partial order with respect to a substance. For example, the conversion of ozone (O3) to oxygen follows the rate equation v0 = k[O3]^2 [O2]^−1 in an excess of oxygen. This corresponds to second order in ozone and order (−1) with respect to oxygen.
When a partial order is negative, the overall order is usually considered as undefined. In the above example, for instance, the reaction is not described as first order even though the sum of the partial orders is 2 + (−1) = 1, because the rate equation is more complex than that of a simple first-order reaction.
A pair of forward and reverse reactions may occur simultaneously with comparable speeds. For example, A and B react into products P and Q and vice versa (a, b, p, and q are the stoichiometric coefficients): a A + b B ⇌ p P + q Q.
The reaction rate expression for the above reactions (assuming each one is elementary) can be written as: v = k1[A]^a [B]^b − k−1[P]^p [Q]^q,
where: k1 is the rate coefficient for the reaction that consumes A and B; k−1 is the rate coefficient for the backwards reaction, which consumes P and Q and produces A and B.
The constants k1 and k−1 are related to the equilibrium coefficient for the reaction (K) by the following relationship (set v = 0 in balance): K = k1/k−1 = ([P]^p [Q]^q)/([A]^a [B]^b).
Figure (not shown): concentration of A ([A]0 = 0.25 mol/L) and B versus time, reaching equilibrium with k1 = 2 min−1 and k−1 = 1 min−1.
In a simple equilibrium between two species, A ⇌ P,
the reaction starts with an initial concentration of reactant A, [A]0, and an initial concentration of 0 for product P at time t = 0.
The rate of change of A is d[A]/dt = −k1[A] + k−1[P]. The derivative is negative because this is the rate of the reaction going from A to P, and therefore the concentration of A is decreasing. To simplify notation, let x be [A], the concentration of A at time t. Let xe be the concentration of A at equilibrium. Then: −dx/dt = (k1 + k−1)(x − xe).
A plot of the negative natural logarithm of the concentration of A in time minus the concentration at equilibrium versus time t gives a straight line with slope k1 + k−1. By measurement of [A]e and [P]e the values of K and the two reaction rate constants will be known.
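A sketch of the integrated solution behind this plot (standard result for the elementary equilibrium A ⇌ P with the initial conditions stated above, writing x0 = [A]0):
$$x(t) - x_e = (x_0 - x_e)\,e^{-(k_1 + k_{-1})t}, \qquad K = \frac{k_1}{k_{-1}} = \frac{[\mathrm{P}]_e}{[\mathrm{A}]_e}.$$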
Generalization of simple example
If the concentration at the time t = 0 is different from above, the simplifications above are invalid, and a system of differential equations must be solved. However, this system can also be solved exactly to yield the following generalized expressions: with C0 = [A]0 + [P]0, the equilibrium concentration is [A]e = k−1 C0/(k1 + k−1), and [A](t) = [A]e + ([A]0 − [A]e) e^−(k1 + k−1)t, with [P](t) = C0 − [A](t).
When the equilibrium constant is close to unity and the reaction rates are very fast, for instance in conformational analysis of molecules, other methods are required for the determination of rate constants, for example by complete lineshape analysis in NMR spectroscopy.
If the rate constants for the consecutive reactions A → B → C are k1 (for A → B) and k2 (for B → C), then the rate equations are:
For reactant A: d[A]/dt = −k1[A]
For reactant B: d[B]/dt = k1[A] − k2[B]
For product C: d[C]/dt = k2[B]
With the individual concentrations scaled by the total population of reactants to become probabilities, linear systems of differential equations such as these can be formulated as a master equation. The differential equations can be solved analytically, and the integrated rate equations are given below.
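A sketch of the standard closed-form solutions for A → B → C, assuming [B]0 = [C]0 = 0 and k1 ≠ k2:
$$[\mathrm{A}] = [\mathrm{A}]_0\,e^{-k_1 t}, \qquad [\mathrm{B}] = [\mathrm{A}]_0\,\frac{k_1}{k_2 - k_1}\left(e^{-k_1 t} - e^{-k_2 t}\right),$$
$$[\mathrm{C}] = [\mathrm{A}]_0\left[1 + \frac{k_1 e^{-k_2 t} - k_2 e^{-k_1 t}}{k_2 - k_1}\right].$$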
The steady state approximation leads to very similar results in an easier way.
Parallel or competitive reactions
Figure (not shown): time course of two first-order, competitive reactions with differing rate constants.
When a substance reacts simultaneously to give two different products, a parallel or competitive reaction is said to take place.
Two first order reactions
A → B and A → C, with rate constants k1 and k2 and rate equations −d[A]/dt = (k1 + k2)[A], d[B]/dt = k1[A], and d[C]/dt = k2[A].
The integrated rate equations are then [A] = [A]0 e^−(k1 + k2)t, [B] = [A]0 k1/(k1 + k2) (1 − e^−(k1 + k2)t), and [C] = [A]0 k2/(k1 + k2) (1 − e^−(k1 + k2)t), taking [B]0 = [C]0 = 0.
One important relationship in this case is [B]/[C] = k1/k2: the ratio of the two products is fixed by the ratio of the two rate constants.
One first order and one second order reaction
This can be the case when studying a bimolecular reaction and a simultaneous hydrolysis (which can be treated as pseudo order one) takes place: the hydrolysis complicates the study of the reaction kinetics, because some reactant is being "spent" in a parallel reaction. For example, A reacts with R to give our product C, but meanwhile the hydrolysis reaction takes away an amount of A to give B, a byproduct: A + R → C and A → B. The rate equations are: d[C]/dt = k2[A][R] and d[B]/dt = k1[A], where k1 is the pseudo first order constant of the hydrolysis.
The integrated rate equation for the main product [C] can be obtained analytically from these rate equations, and the concentration of the byproduct B can then be expressed in terms of that of C.
The integrated equations were analytically obtained, but during the process it was assumed that [A] remains approximately equal to [A]0. Therefore, the resulting equation for [C] can only be used for low concentrations of [C] compared to [A]0.
Stoichiometric reaction networks
The most general description of a chemical reaction network considers a number N of distinct chemical species reacting via R reactions. The chemical equation of the j-th reaction can then be written in the generic form Σi rij Xi → Σi pij Xi,
with Sij = pij − rij denoting the net extent of molecules of Xi in reaction j. The reaction rate equations can then be written in the general form d[Xi]/dt = Σj Sij fj([X1], …, [XN]).
This is the product of the stoichiometric matrix S and the vector of reaction rate functions fj.
Particular simple solutions exist in equilibrium, where d[Xi]/dt = 0 for all species, for systems composed of merely reversible reactions. In this case the rates of the forward and backward reactions are equal, a principle called detailed balance. Detailed balance is a property of the stoichiometric matrix S alone and does not depend on the particular form of the rate functions fj. All other cases where detailed balance is violated are commonly studied by flux balance analysis, which has been developed to understand metabolic pathways.
General dynamics of unimolecular conversion
For a general unimolecular reaction involving interconversion of N different species, whose concentrations at time t are denoted by X1(t) through XN(t), an analytic form for the time-evolution of the species can be found. Let the rate constant of conversion from species i to species j be denoted as kij, and construct a rate-constant matrix K whose entries are the kij.
Also, let X(t) = (X1(t), …, XN(t)) be the vector of concentrations as a function of time.
Let J be the vector of ones.
Let I be the N × N identity matrix.
Let diag(·) be the function that takes a vector and constructs a diagonal matrix whose on-diagonal entries are those of the vector.
Let L^−1 be the inverse Laplace transform from s to t.
Then the time-evolved state is given by X(t) = L^−1[(sI − A)^−1] X(0) = exp(At) X(0), where A = K^T − diag(K J) is the matrix whose off-diagonal entries Aji = kij describe the gain of species j from species i and whose diagonal entries Aii = −Σj≠i kij describe the total loss from species i,
thus providing the relation between the initial conditions of the system and its state at time t.
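A minimal two-species check of this formalism, added for illustration (the notation k12 and k21 is introduced here, not taken from the original text): for the interconversion X1 ⇌ X2 with forward constant k12 and reverse constant k21, the matrix defined above is
$$A = \begin{pmatrix} -k_{12} & k_{21} \\ k_{12} & -k_{21} \end{pmatrix},$$
whose eigenvalues are 0 and −(k12 + k21), so the populations relax to equilibrium with rate constant k12 + k21 and equilibrium ratio X2/X1 = k12/k21, consistent with the two-species equilibrium treated earlier.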
Walsh, Dylan J.; Lau, Sii Hong; Hyatt, Michael G.; Guironnet, Damien (2017-09-25). "Kinetic Study of Living Ring-Opening Metathesis Polymerization with Third-Generation Grubbs Catalysts". Journal of the American Chemical Society. 139 (39): 13644–13647. doi:10.1021/jacs.7b08010. ISSN 0002-7863. PMID 28944665.
NDRL Radiation Chemistry Data Center. See also: Capellos, Christos; Bielski, Benon H. (1972). Kinetic Systems: Mathematical Description of Chemical Kinetics in Solution. New York: Wiley-Interscience. ISBN 978-0471134503. OCLC 247275.
Mucientes, Antonio E.; de la Peña, María A. (November 2006). "Ruthenium(VI)-Catalyzed Oxidation of Alcohols by Hexacyanoferrate(III): An Example of Mixed Order". Journal of Chemical Education. 83 (11): 1643. doi:10.1021/ed083p1643. ISSN 0021-9584.
Rushton, Gregory T.; Burns, William G.; Lavin, Judi M.; Chong, Yong S.; Pellechia, Perry; Shimizu, Ken D. (September 2007). "Determination of the Rotational Barrier for Kinetically Stable Conformational Isomers via NMR and 2D TLC". Journal of Chemical Education. 84 (9): 1499. doi:10.1021/ed084p1499. ISSN 0021-9584.
Manso, José A.; Pérez-Prior, M. Teresa; García-Santos, M. del Pilar; Calle, Emilio; Casado, Julio (2005). "A Kinetic Approach to the Alkylating Potential of Carcinogenic Lactones". Chemical Research in Toxicology. 18 (7): 1161–1166. CiteSeerX 10.1.1.632.3473. doi:10.1021/tx050031d. PMID 16022509.
Heinrich, Reinhart; Schuster, Stefan (2012). The Regulation of Cellular Systems. Springer Science & Business Media. ISBN 9781461311614.
Chen, Luonan; Wang, Ruiqi; Li, Chunguang; Aihara, Kazuyuki (2010). Modeling Biomolecular Networks in Cells. doi:10.1007/978-1-84996-214-8. ISBN 978-1-84996-213-1.
Szallasi, Z.; Stelling, J.; Periwal, V. (2006). System Modeling in Cell Biology: From Concepts to Nuts and Bolts. Cambridge, MA: MIT Press.
Iglesias, Pablo A.; Ingalls, Brian P. (2010). Control Theory and Systems Biology. MIT Press. ISBN 9780262013345.
Atkins, Peter; de Paula, Julio (2006). "The rates of chemical reactions". Atkins' Physical chemistry (8th ed.). W.H. Freeman. pp. 791–823. ISBN 0-7167-8759-8.
Connors, Kenneth Antonio (1990). Chemical kinetics : the study of reaction rates in solution. John Wiley & Sons. ISBN 9781560810063.
Espenson, James H. (1987). Chemical kinetics and reaction mechanisms (2nd ed.). McGraw Hill. ISBN 9780071139496.
Citation: Simarro PP, Jannin J, Cattand P (2008) Eliminating Human African Trypanosomiasis: Where Do We Stand and What Comes Next? PLoS Med 5(2): e55. https://doi.org/10.1371/journal.pmed.0050055
Published: February 26, 2008
Copyright: © 2008 Simarro et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: The authors received no specific funding for this article.
Competing interests: The authors have declared that no competing interests exist.
Abbreviations: CATT, card agglutination test for trypanosomiasis; HAT, human African trypanosomiasis; PATTEC, Pan African Tsetse and Trypanosomosis Eradication Campaign; SIT, sterile insect technique; WHO, World Health Organization
In the early part of the twentieth century, human African trypanosomiasis (HAT), also known as sleeping sickness, decimated the population in many parts of sub-Saharan Africa. In the 1930s, the colonial administrations, conscious of the negative impact of the disease on its territories, established disease control programmes. Systematic screening, treatment, and follow-up of millions of individuals in the whole continent led to transmission coming to a near halt by the 1960s.
With the advent of independence in most countries where HAT was endemic, the newly independent authorities had other priorities to deal with. The rarity of HAT cases, and a decline in awareness of how the disease could return, led to a lack of interest in disease surveillance. Over time the disease slowly returned, and some thirty years later, flare-ups were observed throughout past endemic areas (Figure 1).
Since 1995, the World Health Organization (WHO) has on many occasions expressed its concern about the rise in HAT cases. The World Health Assembly has passed several resolutions in an attempt to stem this rise. However, social upheavals, wars, and population movements, combined with lack of awareness and shortage of funds, prevented any progress in interrupting transmission, and the disease continued to evolve and spread.
In a 1997 resolution, WHO strongly advocated access to diagnosis and treatment and the reinforcement of surveillance and control activities, concurrently setting up a network to strengthen coordination among all those actively concerned by the problem. As a consequence, the public and private sector granted stronger support to HAT surveillance, control, and research.
Pathology, Clinical Features, and Epidemiology
HAT is a vector-borne parasitic disease that is fatal if left untreated. It is caused by a single-celled protozoa belonging to the Trypanosoma genus. Parasites are transmitted to humans by the bite of a tsetse fly (Glossina genus) that has acquired the infection from human beings or from animals harbouring the human pathogenic parasites (Figure 2).
In T. b. gambiense the cycle is mostly human-to-human (central circle); occasionally transmission may occur from animal to human. In T. b. rhodesiense the animal reservoir plays an important role in the cycle, thus sustaining parasite transmission and human infections.
Tsetse flies, and subsequently sleeping sickness, are usually found in remote sub-Saharan rural areas where health systems are weak or non-existent. For reasons that are so far unexplained, there are many regions where tsetse flies are found but sleeping sickness is not. Sleeping sickness, coupled with nagana, the animal form of African trypanosomiasis, has been a major obstacle to sub-Saharan African rural development and a stumbling block to agricultural production. On the one hand, human infections reduce labour resources, while on the other, the animal disease limits availability of meat and milk and deprives African farmers of draught animal power, substantially minimising crop production. Therefore, both human and animal trypanosomiasis are implicated in the underdevelopment of the African continent, and are considered a major obstacle in the establishment of a flourishing agriculture to provide food security and to lead to sustainable economic growth and healthy populations.
The rural populations that live in regions where transmission occurs and depend on agriculture, fishing, animal husbandry, or hunting are the most exposed to the bite of the tsetse fly and therefore to the disease. Displacement of populations, war, and poverty are important factors leading to increased transmission. The disease develops in areas whose size can range from a village to an entire region. Within a given area, the intensity of the disease can vary from one village to the next.
The human disease takes two forms, depending on the parasite involved. Trypanosoma brucei gambiense is found in west and central Africa. This form represents more than 90% of reported cases of sleeping sickness, and causes a chronic infection. A person can be infected for months or even years without major signs or symptoms of the disease. When symptoms do emerge—such as severe headaches, sustained fever, sleep disturbances, alteration of mental state, and neurological disorders—the patient is often already in an advanced disease stage where the central nervous system is affected. Trypanosoma brucei rhodesiense is found in eastern and southern Africa. This form represents less than 10% of reported cases, and causes an acute infection. The first signs and symptoms—such as chancre, occasional headaches, irregular fevers, pruritus, and the development of adenopathies—are observed after a few weeks or months. Following this first stage, when the parasite has invaded the blood and lymph subsequent to the infective bite of the fly, the disease develops rapidly into a second stage when parasites cross the blood–brain barrier, invading the central nervous system.
Thirty-six sub-Saharan countries are considered endemic for one or the other form of the disease, despite the fact that some of them have reported no cases in the last decade (Figure 3).
Since WHO expressed its concerns in 1995, there have been great improvements in HAT control. In addition to political will at the highest levels, capacities for control and surveillance in endemic countries were strengthened through training and the provision of equipment for screening, diagnosis, and treatment.
The abatement of social upheavals and civil wars in many countries where HAT was endemic facilitated access to diagnosis and treatment, which was then enhanced by the financial and technical support from WHO for outreach activities, and by securing production and free distribution of drugs. Between 1995 and 2006, the total number of new cases reported was reduced by 68%.
With WHO support and commitment, it became possible to reinforce exhaustive screening of the population at risk, using a combination of immunological and parasitological tests. The card agglutination test for trypanosomiasis (CATT) developed in 1978 is widely used for screening the population affected by T. b. gambiense but is not applicable in T. b. rhodesiense areas. CATT-positive results are not sufficiently sensitive and specific to establish a definitive diagnosis, and therefore parasitological tests must be performed to confirm the presence of parasites in seropositive individuals. Such tests consist of microscopic examination of the lymph and blood. They are considered cumbersome and insufficiently sensitive to ascertain absence of infection. Diagnosis is followed by systematic stage determination, which consists of assessing the cerebrospinal fluid for white blood cell increases, elevated protein concentrations, and the presence of parasites, thus requiring a lumbar puncture, an invasive procedure that is not well accepted by patients.
Treatment today relies on four parenteral drugs: suramin for first-stage rhodesiense; pentamidine for first-stage gambiense; melarsoprol for the second stage of both forms of the disease; and eflornithine, which is only effective in the second stage of the gambiense form. The management of patients using any of these drugs is cumbersome and risky, requiring well-trained staff.
Despite success in reducing the number of cases reported, the complexity of the current tools available to control the disease does not allow the full involvement of the health care system, hampering the sustainability of HAT surveillance and control as discussed below.
In early 2006, WHO published an update on the disease situation and ongoing control activities in each of the 36 countries considered endemic for HAT. The two forms of the disease were considered separately, due to their different epidemiological characteristics.
Between 1997 and 2006, the gambiense form (97% of the total cases reported at continental level) responded well to intensive control activities mainly focused on the human reservoir (the animal reservoir was considered to have only a minor impact on the transmission process). The number of people under active surveillance increased, and the number of new cases decreased (Table 1 and Figure 4). However, control activities focusing on the human T. b. rhodesiense reservoir (3% of the total number of cases reported) were found insufficient to control the disease, probably due to the role played by the animal reservoir on transmission. Thus, T. b. rhodesiense showed only a small decrease in the number of cases (Table 2).
Out of 36 countries considered endemic for HAT, 24 are experiencing T. b. gambiense transmission. In these countries, there was a 69% reduction in the number of new cases reported during the period from 1997–2006. In 2006, 11 out of the 24 countries reported no cases; six of them had no control activities and reported no cases over a decade; and the other five implementing control activities reported only sporadic cases during the 1997–2004 period. Together, the cases in these 11 countries represent only 0.1% of the total gambiense cases reported.
Six countries reported an average of less than 100 new cases per year, representing 1.2% of the total gambiense cases. All these countries (except Nigeria) have National Sleeping Sickness Control Programmes and carry out regular control activities. Four countries, with well-established Control Programmes and regular control activities, have reported more than 100 but less than 1,000 new cases per year, representing 8.8% of the total gambiense cases reported. Finally, three countries have reported an average of more than 1,000 new cases per year during the period from 1997–2006; together they represent 89.9% of the total gambiense cases reported (Table 1).
In the 13 countries endemic for rhodesiense there was a 21% reduction in the number of new cases reported during the 1997–2006 period (Table 2). However, only Kenya, Malawi, Tanzania, and Uganda implemented control activities (since Uganda is affected by both forms of the disease, the country appears in Tables 1 and 2). Out of these 13 countries, five reported no cases over a decade and four reported sporadic cases. Together, these four countries reporting only sporadic cases represent 2.5% of the total rhodesiense cases reported during the 1997–2006 period. Two countries reported an average of less than 100 new cases per year, representing 8.7% of the total rhodesiense cases, and two reported more than 100 but less than 1,000 new cases per year, representing 88.8% of the rhodesiense cases reported.
The road to elimination.
To achieve the 1997 World Health Assembly elimination resolution, the WHO HAT Surveillance and Control Programme established a new initiative based on a global alliance bringing together all actors concerned about the disease. In 2003, the World Health Assembly called on member states to sustain the effort to eliminate the disease as a public health problem, which led the WHO programme to intensify its coordinating efforts, bringing together national control programmes, nongovernmental organisations, research institutions, and other concerned United Nations Agencies (under the Programme against African Trypanosomiasis, PAAT), as well as private and public contributors (Sanofi-Aventis, Bayer HealthCare, the Bill & Melinda Gates Foundation, and the Belgium and French Cooperation). With this broad coalition, field activities were scaled up, leading to better knowledge of the disease distribution and a reduction in new cases by 2006, as described above. The current prevalence and incidence figures are believed to reflect the overall situation quite accurately, in contrast with the uncertainties surrounding the figures prior to 1997.
Given that in 2006, 20 out of 36 endemic countries achieved or were close to achieving the target of reporting no new cases, and eight countries reported less than 100 new cases per year, elimination has become a feasible objective in many countries endemic for HAT. With elimination in mind, in May 2007 WHO organised an Informal Consultation on Sustainable Sleeping Sickness Control, during which endemic country representatives debated the current disease landscape and concluded that elimination was possible.
During the July 2000 Organization of African Unity (now the African Union) summit held in Lomé, Togo, the African Heads of State and Government adopted the decision to collectively embark on a Pan African Tsetse and Trypanosomosis Eradication Campaign (PATTEC). This campaign was based on the realisation that (1) solving the tsetse fly and disease problem would be an important contribution to Africa's development, and (2) this could not be done by a single country acting alone. A task force of African experts concluded that such a campaign was not only technically feasible, but economically productive.
Implementation is on its way; six countries have recently received financial support from the African Development Bank and have initiated the first phase of a PATTEC project. In addition, four countries in the Kwando/Zambezi region have begun PATTEC activities, with very encouraging results.
The Next Steps
Integration of activities.
The challenge for the immediate future is to avoid repeating past mistakes, and to achieve cost-effective, sustainable HAT surveillance and control. Sustainability can only be achieved through an integration of activities in a strengthened health system able to face such responsibilities. The current approach should include specialised teams and health care systems, rather than falling back on the former debate between the value of specialised teams or primary health care. In other words, specialised teams and primary health care need to work together synergistically.
But integration is not a simple delegation process. Major responsibilities cannot simply be passed on to the existing health services of remote rural areas inappropriately trained and equipped to handle HAT control. Integration must mean the active participation of a strengthened health system capable of implementing surveillance and control activities, buttressed by specialised HAT national staff. Unfortunately, the existing tools limit the full participation of the health care system staff in controlling the disease. The two main technical bottlenecks are the lack of a sensitive and specific diagnostic test and of a new drug that is cheap, safe, and easy to administer.
New approaches to surveillance and control.
To sustain recent achievements in HAT control and the epidemiological downward trend, it will be necessary to develop a novel approach for surveillance and control adapted to the new requirements. This approach consists of an integration process involving national health care systems. Implementation, however, will require better tools than those presently available for diagnosis and treatment. Such a health systems–based approach may be adequate for areas affected by T. b. gambiense but in areas affected by T. b. rhodesiense disease control cannot rely exclusively on human health services and will have to involve veterinary and entomological services as well.
Developing new diagnostic tools.
Attempts to identify new antigens should result in more specific and sensitive tests for serodiagnosis of the disease, while changes in test format (i.e., the development of non-invasive saliva tests) should result in more user-friendly tests. Much progress has been made in the development of molecular tools. Specific genes for both T. b. gambiense and T. b. rhodesiense have been identified [9–11] for PCR-based detection of infection. Molecular dipstick tests allow easier reading of the PCR result, and the first results using loop-mediated isothermal amplification are encouraging.
Disease bio-markers are being investigated using proteomics, such as surface-enhanced laser desorption/ionisation time-of-flight mass spectrometry (SELDI-ToF-MS). However, these newly developed techniques, claimed to be substantially more sensitive and specific than those available in the field today, often rely on complicated equipment. As a result, the test protocols are not compatible with prevailing conditions at HAT treatment centres in rural Africa. WHO has established a collaboration with the Foundation for Innovative New Diagnostics (http://www.finddiagnostics.org/) to develop new simple diagnostic tools for the control of HAT that meet the requirements of a sustainable elimination approach. The desired characteristics of a new test were defined as being “ready for use”, stable at room temperature, and affordable by national health systems. The new test should provide an uncontroversial diagnosis of both forms of the disease and require minimum training and equipment to allow its execution by any health worker.
Developing new tools for determining stage of disease.
As long as there is no safe and effective drug available to treat both stages of the disease, determining disease stage will remain necessary. Some progress has been made through the development of a point-of-care card agglutination test for immunoglobulin M quantification in cerebrospinal fluid. Although this test appears highly promising in establishing central nervous system involvement, its accuracy and feasibility in the field still need to be ascertained. The study of anti-neurofilament and anti-galactocerebrosides antibodies may open new avenues for staging the disease. Unfortunately, all these techniques continue to require a lumbar puncture. Stage markers in other body fluids such as serum, urine, or saliva could become ideal tests to avoid the invasive procedure of a lumbar puncture, but remain to be identified.
Another possible technique for the diagnosis of central nervous system involvement is the measurement of sleep-onset rapid eye movement by polysomnography, a method that involves assessing the sleep pattern of patients. However, although it is not invasive, polysomnography has not yet been proven to be universally accurate. Obviously, much work still needs to be done to make improved staging tests available to health workers in endemic areas.
Advances in drug development.
Eflornithine was developed over 20 years ago, and was registered for the treatment of gambiense disease in 1990. While the drug is safer than melarsoprol, eflornithine does have side effects: fever, unusual bleeding and weakness, diarrhoea, nausea, stomach pain, and vomiting are common, while rarer side effects such as convulsions, loss of hearing, hair loss, headache, anaemia, leucopenia, and thrombocytopenia have also been observed. The administration of eflornithine, which requires multiple daily infusions, limits its use in the context of rural Africa, despite the determination of some programmes to use it as a first-line drug.
Recently a short-course melarsoprol treatment was developed [20,21]. Unfortunately, it does not provide a safer treatment; however, it has substantially reduced the hospitalisation time of patients and as a consequence the cost of treatment.
With the development of parasite resistance to some of the available drugs, a number of studies have attempted to combine existing drugs to overcome treatment failures [23,24]. A clinical trial, sponsored by Médecins sans Frontières-Holland, the Drugs for Neglected Diseases Initiative, and the UNICEF-UNDP-World Bank-WHO Special Programme for Research and Training in Tropical Diseases, is currently ongoing to test a combination of eflornithine and nifurtimox, the latter being a drug registered for the treatment of American trypanosomiasis (Chagas disease). The aim of the study is not only to improve efficacy and simplify administration, which would contribute to easier field use and reduce cost, but also to find a way to avoid the development of resistant strains to eflornithine. Such combination therapy remains far from ideal, since it continues to require intravenous administration with costly and complicated logistics and skilled staff. Furthermore, this combination will only be effective for T. b. gambiense and would certainly not be safe enough to be used in first-stage patients; thus it will not help in avoiding the risky staging process.
A new oral drug called DB289 is in the final clinical trial phase. Given its oral administration, it should be considered an important step forward. Unfortunately, the drug requires ten days of treatment, twice a day, and is only effective in the first stage. Consequently, DB289 cannot be considered a new drug that would be a major advance in control of HAT.
The major challenge in developing a new drug that can ensure sustainable disease control will be to find a safe and affordable, orally administered drug that is effective against both forms of the disease, in both disease stages, and that does not require any particular skills or care to administer. The ideal regimen should not last more than a few days, thus making it manageable by peripheral health staff in an out-patient context.
Advances in vector control.
Current vector control interventions involve the use of insecticides (through the sequential aerosol spraying technique, insecticide-treated targets, or insecticide-treated animals [26,27]); the use of traps [28,29]; and the sterile insect technique (SIT).
The sequential aerosol technique, which uses extremely low concentrations of insecticide through several consecutive aerial sprayings, can effectively clear large areas of tsetse flies in a relatively short time, but it is expensive and requires substantial economic and infrastructure support. Pour-ons or selective spraying of insecticides onto animals on which tsetse feed are another effective means of vector control.
Odour-baited targets or traps have been used in many countries to effectively suppress tsetse populations. The relatively low cost and simplicity of the traps or targets recommend them for use by local communities, but they are applied on a scale so small that control efforts are bound to be frustrated by re-invasion.
While effective baits have been developed for savannah tsetse, to date no such baits exist for riverine tsetse, which are major vectors of HAT. However, research continues in an attempt to develop effective baits for the latter species.
The SIT, which involves the release of laboratory-reared and sterilised males to compete with wild males so that females inseminated by them produce no offspring, has been effectively used for eradication of tsetse (G. austeni), for example, in Unguja Island in Zanzibar . The cost of SIT is, however, exorbitant. The feasibility of this costly approach in areas where multiple species are present remains doubtful .
The recent availability of genomics of tsetse-symbiotic bacteria is of interest since in the absence of their gut flora, tsetse flies are severely impaired in their longevity and reproduction. Two bacteria have been implicated in modifying vector competence of their host, and a third symbiont can confer mating sterility. However, further research is needed to turn such new knowledge into practical use for disease control.
Despite the considerable progress made in controlling the vector, an ideal methodology easily accessible to the population at risk still does not exist.
While the number of new detected cases of HAT is falling, sleeping sickness could suffer the “punishment of success,” receiving lower priority by public and private health institutions with the consequent risk of losing the capacity to maintain disease control. While waiting for new tools for sleeping sickness control, the greatest challenge for the coming years will be to increase and sustain the current control efforts using existing tools. Effective surveillance and control followed by good reporting will be crucial. Furthermore, advocacy in endemic countries should continue to be maintained in the face of decreasing cases reported; sleeping sickness should retain its high priority with health policy makers and planners. Research must be encouraged to resolve the technical issues preventing the development of a new approach to surveillance and control that could be sustained by countries themselves.
Since elimination of the disease has been considered feasible, WHO will adopt the conclusions of countries where HAT is endemic, who have demonstrated that: (1) the participation of existing health systems is not only desirable but essential for surveillance and control sustainability; (2) the development of new diagnostic tools and drugs is crucial to guarantee the effective participation of existing health structures; and (3) the maintenance of a specialised central structure at national level is necessary to ensure the coordination and overall technical assistance needed. In that context, WHO is ready to take up the challenge and continue to lead countries, supporting and coordinating the work of all the actors involved.
- 1. World Health Organization (1997) Resolution 50.36, 50th World Health Assembly. Geneva: World Health Organization.
- 2. Chappuis F, Loutan L, Simarro P, Lejon V, Büscher P (2005) Options for field diagnosis of human African trypanosomiasis. Clin Microbiol Rev 18: 133–146.
- 3. World Health Organization (2006) Human African trypanosomiasis (sleeping sickness): Epidemiological update. Wkly Epidemiol Rec 8: 71–80. Available: http://www.who.int/wer/2006/wer8108/en/index.html. Accessed 21 January 2008.
- 4. World Health Organization (2002) WHO programme to eliminate sleeping sickness—Building a global alliance. Available: http://whqlibdoc.who.int/hq/2002/WHO_CDS_CSR_EPH_2002.13.pdf. Accessed 21 January 2008.
- 5. Food and Agriculture Organization, Animal Production and Health Division (2008) Programme Against African Trypanosomiasis. Available: http://www.fao.org/PAAT/html/home.htm. Accessed 21 January 2008.
- 6. African Union (2002) Statement of the Commission of the African Union to the AHP/DFID Special Workshop on: “Tsetse Control—The Next Hundred Years”. Available: http://www.africa-union.org/Structure_of_the_Commission/Pattec/Statement%20by%20the%20Commission%20V2.pdf. Accessed 21 January 2008.
- 7. Samarasekera U (2007) Margaret Chan's vision for WHO. Lancet 369: 1915–1916.
- 8. Lejon V, Kwete J, Buscher P (2003) Towards saliva-based screening for sleeping sickness. Trop Med Int Health 8: 585–588.
- 9. Radwanska M, Claes F, Magez S, Magnus E, Perez-Morga D, et al. (2002) Novel primer sequences for polymerase chain reaction-based detection of Trypanosoma brucei gambiense. Am J Trop Med Hyg 67: 289–295.
- 10. Radwanska M, Chamekh M, Vanhamme L, Claes F, Magez S, et al. (2002) The serum resistance-associated gene as a diagnostic tool for the detection of Trypanosoma brucei rhodesiense. Am J Trop Med Hyg 67: 684–690.
- 11. Welburn SC, Picozzi K, Fèvre EM, Coleman PG, Odiit M, et al. (2001) Identification of human-infective trypanosomes in animal reservoir of sleeping sickness in Uganda by means of serum-resistance-associated (SRA) gene. Lancet 358: 2017–2019.
- 12. Deborggraeve S, Claes F, Laurent T, Mertens P, Leclipteux T, et al. (2006) Molecular dipstick test for diagnosis of sleeping sickness. J Clin Microbiol 44: 2884–2889.
- 13. Kuboki N, Inoue N, Sakurai T, Di Cello F, Grab DJ, et al. (2003) Loop-mediated isothermal amplification for detection of African trypanosomes. J Clin Microbiol 41: 5517–5524.
- 14. Papadopoulos MC, Abel PM, Agranoff D, Stich A, Tarelli E, et al. (2004) A novel and accurate test for human African trypanosomiasis. Lancet 363: 1358–1363.
- 15. Lejon V, Legros D, Richer M, Ruiz JA, Jamonneau V, et al. (2002) IgM quantification in the cerebrospinal fluid of sleeping sickness patients by a latex card agglutination test. Trop Med Int Health 7: 685–692.
- 16. Courtioux B, Bisser S, M'belesso P, Ngoungou E, Girard M, et al. (2005) Dot enzyme-linked immunosorbent assay for more reliable staging of patients with human African trypanosomiasis. J Clin Microbiol 43: 4789–4795.
- 17. Buguet A, Bisser S, Josenando T, Chapotot F, Cespuglio R (2005) Sleep structure: A new diagnostic tool for stage determination in sleeping sickness. Acta Trop 93: 107–117.
- 18. Chappuis F, Udayraj N, Stietenroth K, Meussen A, Bovier PA (2005) Eflornithine is safer than melarsoprol for the treatment of second-stage Trypanosoma brucei gambiense human African trypanosomiasis. Clin Infect Dis 41: 748–751.
- 19. Burri C, Brun R (2003) Eflornithine for the treatment of human African trypanosomiasis. Parasitol Res 90(Suppl 1): S49–S52.
- 20. Burri C, Nkunku S, Merolle A, Smith T, Blum J, et al. (2000) Efficacy of new, concise schedule for melarsoprol in treatment of sleeping sickness caused by Trypanosoma brucei gambiense: A randomised trial. Lancet 355: 1419–1425.
- 21. Schmid C, Nkunku S, Merolle A, Vounatsou P, Burri C (2004) Efficacy of 10-day melarsoprol schedule 2 years after treatment for late-stage gambiense sleeping sickness. Lancet 364: 789–790.
- 22. Kennedy PGE (2004) Human African trypanosomiasis of the CNS: Current issues and challenges. J Clin Invest 113: 496–504.
- 23. Priotto G, Fogg C, Balasegaram M, Erphas O, Louga A, et al. (2006) Three drug combinations for late-stage Trypanosoma brucei gambiense sleeping sickness A randomized clinical trial in Uganda. PLoS Clin Trial 1: e39.
- 24. Checchi F, Piola P, Ayikoru H, Thomas F, Legros D, et al. (2007) Nifurtimox plus eflornithine for late-stage sleeping sickness in Uganda: A case series. PLoS Negl Trop Dis 1: e64.
- 25. Vale GA, Lovemore DF, Flint S, Cockbill GF (1988) Odour-baited targets to control tsetse flies, Glossina spp. (Diptera: Glossinidae), in Zimbabwe. Bull Entomol Res 78: 31–49.
- 26. Bauer B, Amsler-Delafosse S, Clausen P-H, Kabore I, Petrich-Bauer J (1995) Successful application of deltamethrin pour-on to cattle in a campaign against tsetse flies (Glossina spp) in the pastoral zone of Samorogouan, Burkina Faso. Trop Med Parasitol 46: 183–189.
- 27. Hargrove JW, Silas O, Msalilwa JSI, Fox B (2000) Insecticide-treated cattle for tsetse control: The power and the problems. Med Vet Entomol 14: 123–130.
- 28. Brigtwell R, Dransfield RD, Kyorku C (1991) Development of a low-cost tsetse trap and odour baits for Glossina pallidipes and G. longipennis in Kenya. Med Vet Entomol 5: 153–164.
- 29. Hassanali A, McDowell PG, Owaga MA, Saini RK (1986) Identification of tsetse attractants from excretory products of a wild host animal, Syncerus caffer. Insect Sci Applic 7: 5–9.
- 30. Vreysen MJB, Saleh KM, Ali MY, Abdulla AM, Zhu Z-R, et al. (2000) Glossina austeni (Diptera: Glossinidae) eradicated on the Island of Unguja, Zanzibar, using the sterile insect technique. J Econ Entomol 93: 123–135.
- 31. Enserink M (2007) Welcome to Ethiopia's fly factory. Science 317: 310–313.
- 32. Aksoy S, Berriman M, Hall N, Hattori M, Hide W, et al. (2005) A case for a Glossina genome project. Trend Parasitol 21: 107–111. | <urn:uuid:e85071b3-ef9d-4790-b335-76c27257445a> | CC-MAIN-2022-33 | https://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0050055 | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571222.74/warc/CC-MAIN-20220810222056-20220811012056-00697.warc.gz | en | 0.908241 | 7,140 | 2.8125 | 3 |
Understanding Rankings and Related Concepts in DEtools[rifsimp]
Properties of a Ranking
A ranking is essentially an ordering of all indeterminates in a system. To introduce rankings, we must first introduce some related concepts.
This term describes any unknown function that is present in the input system. As an example, consider a system involving f(x,y) and its derivatives, g(x,y) and its derivatives, and exp(x). In a calculation, you may want to view f(x,y) as the solving variable, and g(x,y) as a parameter. Even in this case, these are both considered to be dependent variables. Because exp(x) is a known function, it is not considered to be a dependent variable.
For a problem containing f(x,y) and its derivatives, g(x,y) and its derivatives, and exp(x), x and y are the independent variables.
Any unknown not having a dependency, and not occurring in a dependency, is considered to be a constant. This is true even if it appears in a known function. For example, in the equation a*f(x,y)+sin(c)*g(x,y), both a and c are considered to be constants.
Note: The distinction between independent variables and constants is vital, since mistaking one for the other will not give a valid result for a system. For information on the specification of independent variables, please see rifsimp[options].
An indeterminate can be any constant, dependent variable, or derivative of a dependent variable. This does not include any known functions or independent variables. This is exactly the group of items that a ranking is defined for.
With these definitions, a more precise definition of ranking for a system is now possible:
A ranking is a strict ordering of all indeterminates appearing in a system in the course of a calculation. Note that it is necessary to define a ranking that works for more than just the indeterminates appearing in the initial system, because higher derivatives may appear in the course of the algorithm. For example, if the initial system contains only second order derivatives in the dependent variables, the specified ranking may need to be able to rank much higher order derivatives, as these may appear in a run of the algorithm.
The leading indeterminate of an equation is defined as the indeterminate in that equation that is of the highest rank (maximal with respect to the ranking).
The concept of a leading indeterminate is important for understanding how the rifsimp algorithm works, because any equation in which the leading indeterminate appears linearly is solved with respect to that indeterminate.
Rankings have a number of properties, some of which are required for the proper performance and termination of the algorithm, others of which may be helpful in tackling specific systems.
In any of the descriptions below, v1 and v2 are indeterminates that may depend on some of x1, x2, x3, ...
Preservation of ranking under differentiation
Given a ranking v1 > v2, this ranking also holds after equal differentiation of both v1 and v2 with respect to any independent variable, for all v1, v2 where v1 and v2 are indeterminates.
Note: You must restrict yourself to non-vanishing indeterminates (for example, for h(x) > g(x,y), differentiation with respect to y makes h(x) vanish, so the rule does not apply).
This property is required for the proper running of rifsimp. Once an equation is solved for the leading indeterminate, any differentiation of that equation (assuming that the leading indeterminate does not vanish) is also in solved form with respect to its new leading indeterminate (the derivative of the prior leading indeterminate).
Given a ranking >, it must be true that diff(v1,x1) > v1 for all indeterminates v1 and all independent variables x1, as long as diff(v1,x1) is nonzero.
This property is required for the proper running of rifsimp, because it prevents any infinite chain of differential substitutions from occurring.
As an example, consider the solved form of the equation u[t]-u[tt] = 0 under a non-positive ranking u[t] = u[tt]. Differential elimination of the derivative u[xt] with respect to this equation will give u[xtt], then u[xttt], then u[xtttt], and so on. It will never terminate.
Let dord() give the total differential order of an indeterminate with respect to all independent variables. Then for v1 and v2 derivatives of the same dependent variable, given a ranking >, then dord(v1) larger than dord(v2) implies v1 > v2.
For rifsimp to run correctly, a ranking does not have to be total degree. In some cases this does allow rifsimp to run faster, however. For those familiar with Groebner basis theory, a total degree ranking is similar to a total degree Groebner basis ordering, because calculations usually terminate more quickly than they would with a lexicographic ordering.
A ranking is said to be orderly if, for any indeterminate, no infinite sets of indeterminates that are lower in the ordering exist.
As an example of a ranking that is not orderly, we consider a ranking of f(x), g(x), and of all derivatives. If we choose to solve a system using rifsimp for f(x) in terms of g(x) (by specification of only f(x) in the solving variables), then this is not an orderly ranking, because g(x) and all of its derivatives are of lower rank than f(x) and any of its derivatives.
For rifsimp to run correctly, using the defaults for nonlinear equations, a ranking is not required to be orderly (see rifsimp[nonlinear].)
The Default ranking
The rifsimp default ranking is an orderly, total-degree ranking that is both positive and preserved under differentiation. Note that if the solving variables are specified, we may no longer be using the default ranking (since specification of solving variables alters the ranking. See "Specification of a ranking" below.
On input, rifsimp assumes that all dependent variables present in the input system are solving variables, and that all constants are to be viewed as parameters. The set of dependent variables is then sorted alphabetically, along with the set of constants. In contrast, the set of independent variables is ordered based on order of appearance in the dependency lists of the dependent variables.
A description of the sorting method for independent variables can be found in the "Specification of a ranking" section. Note that this sort is used to break ties for derivatives of equal differential order (under some circumstances).
Under the above restriction, the default ranking is defined by the following algorithm, which returns true if v1 is greater than v2 with respect to the default ordering (and false if v1 is less than v2).
# Criterion 1: Solving variables
If v1 is a solving variable, and v2 is not, then
If v2 is a solving variable, and v1 is not, then
# Criterion 2: Total Degree
If dord(v1) is larger than dord(v2), then
If dord(v2) is larger than dord(v1), then
# Criterion 3: Independent Variable Differential Order
Loop i := each independent variable in sorted order
If diff. order of v1 in i is larger than order of v2 in i, then
If diff. order of v2 in i is larger than order of v1 in i, then
# Criterion 4: Dependent Variable
If dependent var for v1 occurs before v2, then
If dependent var for v2 occurs before v1, then
The following examples are for a system containing f(x,y,z), g(x,y), and h(x,z), and derivatives with the constants a and b. The system is recognized with the following (already sorted):
So, by default, f, g, and h are considered solving variables, and a and b parameters.
When the criteria are considered in order, any pair of distinct indeterminates are ranked by exactly one of them (as the ranking process then stops and returns the result). Of course to reach, say, criterion 3, criteria 1 and 2 must not differentiate the inputs. The following is a list of examples of each criterion in action:
Any step of the ranking process can be viewed as separating all possible indeterminates into a sorted chain of equivalence classes. When considering all criterion simultaneously, the size of each equivalence class must be exactly one. Sometimes viewing the ranking from the point of view of equivalence classes helps visualize how ranking is performed. As an example, we illustrate the equivalence classes for the prior example for each criterion considered independently of the other criteria:
Criterion 1: Rank solving variables higher than parameters
Criterion 2: Rank by differential order
Criterion 3: Rank by differential degree in each independent variable in turn
Criterion 4: Rank by dependent variable or constant name
So the process is equivalent to determining in which equivalence class each indeterminate falls, and checking if this criterion distinguishes the two input indeterminates.
Specification of a ranking
Three options can be used to control the ranking in rifsimp. Two of these perform simple modifications to the default ranking, while the third allows complete control of the ranking.
Specification of solve variables
As mentioned in the "Default ranking" section, specification of solving variables in a manner or order different from the default order changes the ranking. The solving variables can be specified in two ways:
1. Simple List
When the vars parameter is entered as a simple list, it affects the ranking in the following ways:
Criterion 1 of the default ranking is changed to add an additional class of indeterminates, which is specified by the solving variables. Specifically, any indeterminate mentioned in vars is ranked higher than any indeterminate not in vars. Any dependent variables not mentioned in vars are still ranked higher than any constants not mentioned in vars.
As an example, suppose an input system contained f(x,y,z), g(x,y), h(x,z), a, b, and c. If vars had been specified as [f(x,y,z),b,h(x,y)], then f, h, any f or h derivatives, and the constant c would be ranked higher than g and any g derivatives, which would be ranked higher than the constants a and b. Using equivalence classes, criterion 1 becomes
where the new equivalence class is on the right.
Criterion 4 of the default ranking is changed to reflect the same order as the specified vars, so when criterion 4 is reached, f is ranked higher than b, which in turn is ranked higher than g, which in turn is ranked higher than h, and so on.
Using equivalence classes, we have the following:
This is an unusual ranking (since it allows for c to be solved in terms of h, g, and derivatives of g), but it was chosen to highlight the flexibility of rankings.
2. Nested List
This is just a variation of the simple list specification that allows multiple equivalence classes to be specified. It is activated by the presence of a list in the vars input. When this specification is used, every entry in the vars list is interpreted as an equivalence class (even if it is not a list itself). This is best illustrated by an example. Use the same system as the prior example. If vars is specified as [f(x,y,z),[g(x,y),c]], we notice that the second entry is a list, so the equivalence classes for criterion 1 become
An equivalence class has been added for f and its derivatives, then one for g and c, and g derivatives, then the two that are created by default.
We can interpret this as "solve for f in terms of all other indeterminates; if the expression does not contain an f, then solve for g or c in terms of all other indeterminates, and so on".
Criterion 4 is changed to reflect the order in which the entries of vars appear in their equivalence classes:
This option allows for specification of the independent variables, but modifies the ordering as well. Put simply, it specifies the order in which criterion 3 looks at the dependent variables.
First, recall how the default ordering works. To begin with, the set of independent variables is sorted alphabetically. Then, the set of independent variables is sorted based on their occurrence in the dependent variable lists, where the dependent variable lists are considered in the same order in which they are ranked. If the order of a dependency list disagrees with the order of another dependency list, only the one of higher rank one is used.
As an example, consider a system containing f(x,y), g(y,x). In this case the independent variables are sorted in the order [x,y] if f is ranked higher than g, but in the order [y,x] if the reverse is true.
For a system containing f(x,y), g(x,z), the independent variables are sorted [x,y,z], because ties are broken alphabetically.
The specification of indep=[...] enforces the order specified in the list, so if the input contains f(x,y,z) and h(x,y,z) and we specify [z,x,y], then the independent variables are ordered as specified.
These two ways of controlling the ordering of a system are sufficient for most purposes, but you can also fully specify the exact ordering to be used for a system.
Advanced specification of a ranking
This method requires a bit of description first:
Say that we have a system with n independent variables (we call these x1,...,xn), and m dependent variables (we call these V1,...,Vm). For each derivative, we can then construct a list of n+m elements, where the first n elements are the differentiations of the derivative with respect to the corresponding independent variable, and the remaining m elements are all zero, except for the one that corresponds to the dependent variable of the derivative.
Say that we have a system with independent variables indep = [x,y,z], and dependent variables [f,g,h,s]. Then the vector for g[xxz] can be constructed as [2,0,1, 0,1,0,0]. This vector then contains all the information required to identify the original derivative. From the last four items in the list, we see that our dependent variable is g (since the 1 corresponds to the placement of g in the dependent variable list). We can also see from the first three elements of the list that it is differentiated twice with respect to x, 0 times with respect to y, and once with respect to z (where we are matching up the first three elements of the list to the corresponding independent variables).
With the same system, we may obtain:
Now we have specified a way to turn each derivative into a list of integer values. Using this, we now can create a new list called a criterion, which must be of the same length and must be specified with integer values. The dot product of the derivative list and the criterion is called the weight of that derivative with respect to the criterion.
So for the above example, if we specified the criterion list to be [1,0,0, 0,0,0,0], then g[xxz] would have weight 2, f[xyz] would have weight 1, h[xxxxx] would have weight 5, and so on.
Now when two derivatives are being compared with respect to one of these list criteria, the ranking would be determined by their respective weights. So, for example, f[xyz] < g[xxz] with respect to [1,0,0, 0,0,0,0], because weightfxyz=1 is less than weightgxxz=2 with respect to [1,0,0, 0,0,0,0].
The new ranking can then be constructed as a list of these criteria which, during a comparison, are calculated and compared in order. The construction of a ranking in this way is called a Janet ranking.
As an example, we can construct the default ranking as a criterion list for the example system as:
ranking = [
# This corresponds to criterion 1
# This corresponds to criterion 2
# These three lines are criterion 3
# This corresponds to criterion 4
So if we compared f[xyz] to f[xyy], the weights for the first entry would be 1 and 1, for the second entry 3 and 3, for the third entry 1 and 1, and for the fourth entry 1 and 2, at which point it is recognized that f[xyz] < f[xyy].
Specification of the ranking to rifsimp
The ranking is specified on the command line to rifsimp as ranking = list of criteria, where the criterion list is as described above. We recommend that you specify the dependent variables and independent variables so that the order is known and the ranking behaves as expected.
In the event that the input ranking does not fully specify a ranking (two different indeterminates are not ranked differently by the input ranking), the default ranking is then used (see examples). If the system contains constants, and any of the entries of the input ranking do not have corresponding entries for these constants, then the entries are padded with zeros.
For examples we will take as input single equations or a system of decoupled equations and observe their solved form in the output. They will be solved for their leading indeterminate.
By default, the above will be solved for the g derivative, as f and g have equal weight (criterion 1). The equation is differential order 2, so this narrows it down to the three second order derivatives (criterion 2), but x derivatives are of greater weight than y derivatives (criterion 3), so the equation will be solved for g[xx]:
So how can we solve for f instead? The obvious way is to give f more weight than g by declaring it as the solving variable by using vars (alter criterion 1):
What if we wanted to solve for the y derivative of f? Well, we could then also weight y derivatives greater using indep:
Good, but what if we want to solve for the t derivative of f. This is an unusual example because we are solving for a lower order derivative in terms of higher order derivatives. We could specify a new ranking that weights t derivatives higher than everything else:
With the above ranking, f is always greater than g, and t derivatives are always greater than x or y derivatives of any order. We have to declare the order of occurrence in the command line arguments so that we can match the independent and dependent variables to the sample_rank table. (These are typed in above for visualization.)
Note: We did not specify a full ranking, but instead specified as much as we required, then let the default ranking take over.
Note that a ranking like the one above is natural for certain classes of equations. As an example, consider the heat equation u[t] = u[xx]+u[yy]+u[zz] where the form of the solved equation is only physically meaningful when solving for the time derivatives in terms of the space derivatives, even when the space derivatives are of higher differential order.
As a final example, we construct a strange ranking that weights t derivatives twice as heavily as x derivatives. This is done for the following:
Download Help Document
What kind of issue would you like to report? (Optional) | <urn:uuid:d28ddff9-f6ee-4f53-8d10-7d3b31c2a9a6> | CC-MAIN-2022-33 | https://fr.maplesoft.com/support/help/errors/view.aspx?path=DEtools%2Frifsimp%2Franking | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571745.28/warc/CC-MAIN-20220812170436-20220812200436-00495.warc.gz | en | 0.880388 | 5,816 | 2.84375 | 3 |
The Traitors in the Officer Corps of the German Armed Forces
By Wilfried Heink-
Following World War I, Germany's army was demoralized, reduced to groups of free lance mercenaries. Discipline, the core of any army, especially in the German/Prussian army, was no longer. This breakdown had already started in the last month of WWI, when German troops who had been exposed to the Bolshevik virus while on the eastern front, were transferred west after the treaty of Brest-Litovsk. Soldier counsels (Soviets) were formed and Officers orders questioned or ignored ("The Kings Depart…”, by Richard M. Watt, pp.142ff). Strikes broke out in Germany which affected the war effort, the Kaiser, the Commander in Chief, forced to abdicate, in short, the Officers felt that they were stabbed in the back (Dolchstoss). Under the Versailles Diktat, Germany's armed forces were reduced to 100 000 lightly armed forces. The Officer core, a proud clan, was devastated.
Many of the Officers, among them Ludwig Beck later to be appointed chief of staff, supported the NSdAP and Hitler, who promised to do away with Versailles (the communists also promised this, but the Officers, many of them aristocrats, could never site with them). Hitler, when appointed Chancellor in 1933, offered in a speech of May 17,1933 to disarm completely if all other states would do the same (H. Härtle, Die Kriegsschuld der Sieger, p.64, pp.102ff). He continued by saying that if the others are not willing to at least comply with conditions set out in Art. 8 of the Versailles Treaty and reduce their forces, Germany would be forced to rearm. Art. 8 was ignored by England and France, they kept arming themselves, as did Russia, thus forcing Hitler to abandon the Versailles Treaty since all other European sates ignored it. Accordingly, he broke a treaty that had never been adhered to by Britain of France. Hitler did what any other responsible states man would have done: finding himself surrounded by countries armed to the teeth, he ordered re-armament.
So far so good, no Officer who’s job depends on a strong army could disagree. But, there were conflicts, namely differences of opinion between Hitler and the army hierarchy. National Socialism as an ideology was a movement, a socialist movement, a peasant movement. Hitler, after realizing that other countries were engaged in an arms race, knew he needed a strong Germany able to defend itself, as well as a Germany able to sustain itself in case of war. The British blockade served as a constant reminder. The core of the officer caste never accepted NS ideology, they looked down at the party as peasant upstarts (W. von Oven, Finale Furioso, p.175), but for them anything was better than the Weimar chaos and they also hoped to be able to influence Hitler. Stalin solved the same problem by butchering or incarcerating scores of Czarist Officers, replacing them with the party faithful. Hitler never even considered this, as the National Socialists were proud of their largely bloodless revolution, as compared to those that had taken place in France and Russia. He may, however, had done well by doing some house cleaning. When von Hindenburg died, in August 1934, “the little corporal”, as Hitler was called by them, became Commander-in-Chief of the armed forces, something the Officers, used to monarchist traditions, never believed possible.
Beck was against the Anschluss, the unification of Germany with Austria, he could not understand that this was part of National Socialist ideology. When all went well, he accepted it, but when the Czechoslovak crisis loomed, Ludwig Beck, on August 18,1938, resigned as chief of staff, to be replaced by Franz Halder. Beck cited the so-called Hoßbach-Protokoll (a topic to be discussed) as evidence of Hitlers intentions, the reason for his resignation, even though there is some proof that Hoßbach wrote what he did on the suggestion of Beck (A. v. Ribbentrop, Verschwörung gegen den Frieden, p.56). It is hard to tell when Beck started his treacherous activities, but Annelies von Ribbentrop provides evidence that in the spring of 1937 he already had contact with Goerdeler, one of the main conspirators (A. v. Ribbentrop, Die Kriegsschuld des Widerstandes, p.36). Beck tried hard to have Halder join the conspirators, however, no solid evidence exists to show that he was successful. That he was successful with other Officers is evident. In 1943 he told Wilhelm Leuschner, who was to be vice-chancellor to Goerdeler following the successful coup, that there were enough confidants in the command structures in the east that it will be possible to regulate the activities of the armed forces (Ibid, p.398).
Not much information available on outright traitor activities in the Poland war of 1939. That is not to say that Goerdeler, and others, including foreign office (AA) ministers, did not try to sabotage what had been decided by the government. A. v. Ribbentrop, as well as other authors, provide details. The officers also remained relatively quiet during the successful French campaign which Hitler helped to plan; “the little corporal was just lucky”, or so Officers assured themselves. But when Barbarossa dragged on and the initial successes turned into disaster, this situation changed. The battle for Moscow in December 1941 was perhaps the turning point (R. Sorge, Soviet’s spy in Japan, told Stalin that the Japanese would not open a front against Russia in the east, which allowed Stalin to move troops stationed there to Moscow). The Germans were exhausted, the supply lines destroyed by partisans, winter had set in and when attacked by the fresh Russian troops from the east, the Germans were beaten back. This is when “the little corporal” lost his aura of invincibility, and the jackals moved in.
Evidence of collaboration between German officers and Russians is scarce. In a meeting between the American General R. G. Tindall and von Moltke in Cairo in November 1943, the latter told the American that two groups of opponents exist in Germany. One of them, consisting of mostly military people, was trying to establish a good relationship with Russia, the other was sympathetic to the west (Valentin Falin, “Zweite Front”, p.395). There is little known about the activities of that first group, but Falin writes that Stauffenberg was not convinced that Germanys salvation, as he perceived it, lay exclusively in the west (Zweite Front, p. 429).
Here is what Allan W. Dulles wrote about Stauffenberg in “Germany’s Underground”, pp.170/71:
“At approximately the same time Count von Stauffenberg was acquiring influence among the conspirators. He had gathered around himself several younger army officers and civilians who were attracted by his forceful personality and by his determination to act. Among them was Count Fritz von der Schulenburg, a reformed Nazi and cousin of the Ambassador. Schulenburg's great energy and administrative ability and his position as second in command of the Berlin police had made him an important member of the inner circle of those pressing for early action even before Stauffenberg appeared on the scene. Through his contacts with Trott, Yorck and others of their friends, he had brought the Kreisau Circle closer to the group of military conspirators.
Stauffenberg recognized the over-all leadership of Beck and Goerdeler, but had no sympathy with them politically. He was one of those who were attracted by the resurgence of the East, and believed liberalism to be decadent and the adjective "Western" a synonym for "bourgeois." Gisevius told me Stauffenberg toyed with the idea of trying for a revolution of workers, peasants and soldiers. He hoped the Red Army would support a Communist Germany organized along Russian lines. His views were shared by other conspirators, particularly by certain of the younger men of the Kreisau Circle, including the Haeften brothers and Trott. In the case of some it was a matter of ideology, in other cases it was a question of policy. Some had reached the conclusion that nothing constructive could be worked out with the West. Soviet propaganda had influenced others.
The Free Germany Committee, although only a tool of psychological warfare, impressed many Germans. Germans captured by the Russians on the eastern front were sent back to Germany to spread the Communist gospel. "Free Germany" committees began to form in secret on the eastern front, and to a limited extent in Germany. While British and American planes ruined one German city after another, and London and Washington talked only of unconditional surrender, the Free Germany Committee broadcast on the Moscow radio:
“The Soviet Union does not identify the German people with Hitler. . . . Our new Germany will be sovereign and independent and free of control from other nations. . . . Our new Germany will place Hitler and his supporters, his ministers and representatives and helpers before the judgment of the people, but it will not take revenge on the seduced and misguided, if, in the hour of decision, they side with the people. . . . Our aim is: A free Germany. A strong powerful democratic state, which has nothing in common with the incompetence of the Weimar regime. A democracy which will suppress every attempt of a renewed conspiracy against the liberties of the people or the peace of Europe. . . . For people and fatherland. Against Hitler's war. For immediate peace. For the salvation of the German people.”
The Russians kept up this propaganda to the end, and when we reached Berlin in May of 1945, the city was already placarded with Soviet propaganda, including these words of Stalin: "Hitlers come and go, but the German people, the German state, remain."[…]”
There is some evidence that Henning von Tresckow (a traitor), chief of staff of army group center (Heeresgruppe Mitte) since November 1943, was responsible for the early collapse of that group because of his traitorous activities (Friedrich Georg, “Verrat in der Normandie”, p.315).
The Stalingrad catastrophe, starting in the middle of November 1942, was the next big setback for the Wehrmacht and here we have the first signs of, if not sabotage, then insubordination, of intentionally ignoring the little corporals orders. Hitler had studied maps found in a Russian archive about the civil war following the October revolution in which the “Whites” tried to defeat the Bolsheviks. Stalin with the Red Army had crossed the river Don between Zarizyn (later Stalingrad) and Rostov and consequently defeated Denikin of the “Whites”. Hitler feared that this maneuver would be repeated and ordered Halder to station heavy artillery as well as anti-tank cannon behind the Hungarians guarding this section. He also “wished” (quotation marks in the original) to have the 22nd tank division brought into position. Halder followed those orders weeks later, partially, and ignored the tank order completely. Neither Gehlen, head of German Armies East (the jury is still out on whether he was a traitor) nor anyone else recognized this danger, Hitler did. In the middle of November the Red Army broke through at precisely that area and with overwhelming men- and material power succeeded in surrounding the 6th army. Zhukov, the hero of Stalingrad (Suvorov differs) later admitted that Gehlen had helped him (Werner Maser, “Fälschung, Dichtung und Wahrheit über Hitler und Stalin”, pp. 281-284). Goebbels held that the Generals wanted defeats, not to loose the war, but to loose battles to make the Germans realize that Hitler was a bad leader and thus prepare them for the coup (Finale Furioso, pp. 176ff). This was because the Officers turned traitors faced a seemingly insurmountable obstacle, namely the fact that the German people still stood firmly behind Hitler.
Now to the west, but first a brief observation on the games played by Churchill and Roosevelt, according to Falin in Zweite Front (Second Front). I have to admit that Falin, even though he tries to convince us that Stalin did everything to preserve peace, and thereby discredits himself, provides a new perspective in regards to the conflicts of interest in the Anti-Hitler coalition of Winston and Franklin Delano (The full title of his book: Zweite Front, Die Interessenkonflikte in der Anti-Hitler-Koalition [Second Front: The conflicts of interest in the anti-Hitler coalition]). He shows that Churchill was quite contend to let the Russians and Germans wear each other out, kill each other, regardless of the fact that assistance had been promised to the Russians by opening a second front in the west. Roosevelt, who in my opinion was a communist sympathizer, made an efforts to establish that second front, but Churchill sabotaged it according to Falin, and he makes a fairly convincing case. Friedrich George, in his “Verrat in der Normandie”, is of the opinion, and backs this up with evidence, that “D-Day” happened when it did because of the German advances towards an atomic bomb, not to help Stalin.
The English and Americans used the German traitors, be they military men or other officials, to their advantage but never promised them anything. This co-operation went so far, by April 1944, as to have the traitors viewed by Dulles et al as agents of the western powers (Zweite Front, p.429). The west-leaning military brass wanted to make peace with the western allies to concentrate their efforts on the eastern front (and here a conflict of interest must have existed between that group and the Russian sympathizers). They offered, and did their best, to allow the Normandie invasion to succeed, and all they wanted in return was a guaranty to be allowed to continue the fight in the east , whereas other details were to be worked out later.
A partial list of the traitors in the military (Ibid, p.423): Field Marshall’s Rommel and Witzleben, the military commander in occupied France General Heinrich von Stülpnagel, Paris commandant General Boineburg-Lengsfeld, commander of the troops in Belgium and Northern-France Alexander von Falkenhausen, generals Tresckow, Hammerstein, Thomas, Wagner, Olbricht (Field Marshall Rundstedt refused to join, but remained silent). To this list we must add: General Hans Speidel, Rommel’s Chief of Staff (F. George does not believe that Rommel was part of it, but his case is weak), Admiral Wilhelm Franz Canaris, head of military intelligence, Hans Oster, number two in the Abwehr and Walther Friedrich Schellenberg, SS intelligence officer and later head of intelligence. The number one of this group: Ludwig Beck. There are many more, including the afore mentioned Claus von Stauffenberg, but this will do to show that high ranking officers were part of the treachery.
The question has to be asked: How were they able to operate, quite openly, without getting caught? Louis Kilzer provides a partial answer in his “Hitler’s Traitor” when he writes that officials, whose job it was to uncover this sort of thing, were part of the conspiracy. This is true, but differences of ideology as well as distain for “the little corporal” also played a big part. Did Hitler not realize what was happening? Was he so removed from reality, living in the Wolfsschanze, with information possibly fed to him by conspirators? Or did he know and realized that at this stage he was powerless to do anything about it? We will perhaps never know.
The efforts of the traitors were not limited to contacting the enemy and sabotaging Hitlers orders. There were 42 attempts on Hitler’s life, according to Felix Kellerhoff in an article in “Die Welt” of March 3, 2009. The number originates with Will Berthold, Kellerhoff believes it to be much too high, but provides ten examples. Following an example of one of those attempts, mentioned by Kellerhoff, I copied this from “Hitler’s Traitor”, by L. Kilzer, p.168/69:
But such was the case with the conspiracy against Hitler that, following this great victory (Manstein in the east in 1943. Wilf), the participants finally decided to assassinate the Führer. Former chief of staff Ludwig Beck was the motivating force. He had earlier declined assassination on moral principles but now had resolved this internal conflict. Carl Goerdeler, former Leipzig mayor and a major leader of the civilian resistance, made the same moral choice (Gisevius, Hans Bernd, “To the Bitter End”, 1989, p.468). But it would be men and armies in the field that would have to carry out the coup. The ever-traitorous Hans Oster, number-two man in the Abwehr, prevailed upon Field Marshal Günther von Kluge to provide the services of assassination host. Goerdeler had also worked Kluge, commander of Army Group Center, in December, and Kluge's continued acquiescence in the Operation seemed assured.
As originally planned, the assassination was to take place on March 13.(1943) after Hitler flew into the army group on Kluge's invitation. Lieutenant Colonel Freiherr von Boeselager and other officers in the 23d Cavalry Regiment were to shoot Hitler (Clark, Alan, “Barbarossa: The Russian German Conflict, 1941-1945, 1965, p.308). Oster and Olbricht (chief of the Heeresamt) were to orchestrate simultaneous takeovers in Berlin, Munich, and Vienna.
But once Hitler and his SS entourage were on the ground Kluge got cold feet. Because of Manstein, Hitler once again was seen as a victor. Kluge thought that the German people would not accept the coup, stressing that "we ought to wait until unfavorable military developments made the elimination of Hitler a evident necessity"(Ibid).
The conspirators were not deterred. Perhaps they couldn’t shoot down Hitler while Kluge was nearby, but they could certainly bomb him into oblivion when he left on his plane. General Erwin Lahousen agreed to supply the means: small blocks of trotetramethanium. General Henning von Tresckow would deliver the device in the form of two bottles of brandy.
When Hitler's departure approached, Treschkow walked over to a colonel standing beside Hitler's plane and asked if he would be so kind as to take the brandy back to a friend at Rastenburg. The colonel said “of course," and the two bombs, each separate fused, were loaded on board (Ibid, p.309).
As usual, Hitler’s would be assassins came up short. The fuses failed, and Hitler arrived back home knowing nothing of the mortal danger that had traveled with him. To the relief of the conspirators, no one discovered the bottles, which were delivered to their staged designee, who was part of the plot[…]”
In November 1942, Allan Dulles set up shop in Bern, to co-ordinate the actions of the agent networks of Germany and the US (Zweite Front, p.336). Aside from that, connections via the Vatican, Sweden, Switzerland, Turkey, Spain, Portugal and South-America remained intact (Ibid). Canaris met Donovan from the OSS, they maintained personal contact for three years (Ibid, p.391/92), Menzies from M-6 was also in on it (Ibid). Helmuth von Moltke traveled to Turkey in 1943, as Canaris’s emissary to meet an American contact (Ibid, p.394), etc., etc. All sorts of information was exchanged and plans discussed. In the view of the allies, Hitler had to be eliminated and they would not have minded if he was replaced by Himmler (Ibid, p.420). Himmler, as contact person, is mentioned often.
The German High Command knew exactly when the allies were going to land on “D-Day”, the ‘where’ became apparent when the armada approached Normandie (Normandie, p.25). German intelligence was, in spite of Canaris and Oster, up to their task and had informed headquarters of all allied movements, including the direction the invasion fleet was taken. Some historians claim that German meteorologists did not predict the clearing of the skies in the night of June 6, but they are wrong (Ibid, p.55). Hitler decided on Normandie on March 4, ignoring the many allied subterfuges, landing at Calais one of them. General Speidel, however, ignored all of this information on June 6, telling commanders that this was only a deception, and the real invasion will happen in Calais (closest point to England), this despite the fact that intelligence had told headquarters about the size of the armada approaching. Rommel was home in Germany, celebrating his wife’s 50th birthday (Ibid, p.229), other field commanders were also on leave. The allies landed and although they met with some resistance, it was not co-ordinated. It was as if the Germans, who were offering stiff resistance against overwhelming odds on the eastern front, had forgotten how to fight a war. Orders were given, countermanded, divisions send to the wrong areas, etc., etc. (Zweite Front, p.424). F. Georg provides some 300 pages of details in “Verrat in der Normandie”. The invasion succeeded but the traitors did not achieve what they set out to do, or did they?
The authors of the books I have used as source all frequently state that archives are still locked and documents inaccessible. It is therefore impossible to know the full extent of what really happened, but many agree that the traitor issue has not been adequately addressed. However, those who wish that the issue be addressed and cleared are forgetting the elephant in the living room, "The Holocaust". For, when this issue is dealt with, it will become apparent that this crime, if it really was committed, could not have been concealed from the brass of the armed forces.
The conspirators tried hard to get rid of Hitler, but were at the end afraid of the reaction from the German people, they knew that Hitler was very popular right to the end. Why not then, to discredit Hitler, provide evidence of the alleged mass murder of Jews? Something like that could not have been kept secret with some 400 000 participating (“Der Spiegel”, 10.3.08). Stauffenberg, when talking to other traitors, spoke of the so-called commissar order, the starving of Russian POW’s and the forced labor program as crimes committed by Hitler (Zweite Front, p.422), but not one word was uttered about what has become known as “The Holocaust”. In 1944, the conspirators planned to inform the German people about the hopeless state German forces were in and also about the crimes committed by Hitler, stating them as reason for his removal, but “The Holocaust” is not mentioned (Zweite Front, p.428). There are many more examples. In the end, the people were not informed of anything. Why not? What were those non-specified crimes that were mentioned? Why not refer to the single most horrific alleged crime, “The Holocaust”, if it then happened? Himmler, the arch villain, or so we are ordered to believe, was used as contact person, even though the allies were allegedly informed about “The Holocaust”! At Nürnberg, not one of the high ranking officers knew anything about “The Holocaust”. This is being interpreted as self defense, but when looking at how many of those Officers were traitors who were looking for an excuse to justify the removal of Hitler to the German people, this excuse falls flat on its face.
If “The Holocaust” really happened, German army brass would have known about it and would have used this knowledge to topple Hitler. The German people would not have supported Hitler any longer if such a horrendous crime had become known. But since this was not done, and since details about "The Holocaust" only emerged following the war, we can surmise that "The Holocaust" never happened.
Additional information about this document
|Title:||The Traitors in the Officer Corps of the German Armed Forces|
|First posted on CODOH:||Aug. 24, 2009, 12:11 p.m.| | <urn:uuid:7f5328e0-14c0-4ae0-babb-892ac0b46ce0> | CC-MAIN-2022-33 | https://codoh.com/library/document/the-traitors-in-the-officer-corps-of-the-german/en/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570692.22/warc/CC-MAIN-20220807181008-20220807211008-00698.warc.gz | en | 0.975491 | 5,451 | 2.921875 | 3 |
The Sarma (New Transmission Period) traditions of Tibetan Buddhism – Kagyu, Sakya and Gelug – divide the tantras into four classes:
- Kriya tantra (bya-rgyud) – ritual deity practice
- Charya tantra (spyod-rgyud) – behavioral deity practice
- Yoga tantra (rnal-'byor rgyud)– integrated deity practice
- Anuttarayoga (bla-med rnal-'byor rgyud) – peerlessly integrated (highest yoga) deity practice.
The Nyingma (Old Transmission Period) divides tantra into six classes – the same first three as the Sarma traditions, but in place of anuttarayoga, has mahayoga, anuyoga and atiyoga (dzogchen).
Distinctive Features of the Four Classes of Tantra
A standard way of explaining the distinction among the four classes is in terms of the analogy of an increasing level of bliss awareness used to focus on voidness (emptiness):
- Kriya tantra – the bliss of partners looking at each other
- Charya tantra – the bliss of smiling at each other
- Yoga tantra – the bliss of hugging each other
- Anuttarayoga tantra – the bliss of being in union.
But I’ve never seen anything that actually describes that as part of the practice. That seems to be more like an analogy.
Another standard way of describing the differences is in terms of the emphasis each places in its external practices:
- Kriya tantra – external practices
- Charya tantra – external and internal practices equally
- Yoga tantra – internal practices
- Anuttarayoga tantra – special internal practices.
But that doesn’t give us a very clear picture of the differences either. So let’s look more deeply:
- Kriya tantra has a great deal of emphasis on ritual cleanliness. And so there’s emphasis on being vegetarian and not eating onion or garlic (these so-called "dark foods"). There’s ritual washing, and external purification is made on different parts of the body with certain types of mudras. There are special ways of gaining shamatha – a stilled and settled state of mind – by focusing not just on the visualizations, but also on the sound of the mantra without actually reciting it, just hearing it resounding in your heart. Each Buddha-figure (yidam, "tantric deity") of course has its own special features. So there are various healing practices with White Tara to heal imbalances of the elements, and similar types of practices with Medicine Buddha and with Amitayus, a long-life deity. Avalokiteshvara (Chenrezig) practice helps to strengthen compassion, Manjushri is for clarity of mind and understanding, Vajrapani for powerful abilities, and so on. Please bear in mind that these Buddha-figures have many forms and can be used in several classes of tantra, not just one.
- Charya tantra is probably the least commonly practiced of the four classes. It will have practices quite similar to kriya. I’m not so familiar with this class of tantra practice, but from what I understand, there are extensive practices doing visualizations both with yourself as the Buddha-figure and with a Buddha-figure in front of you. The most commonly practiced Buddha-figure in charya tantra is the Abhisambodhi form of Vairochana.
- In yoga tantra, there’s a Buddha-figure called Samvid (Kun-rig). Yoga tantra places a great deal of emphasis on mudras – hand gestures – which are very elaborate. I’m not really sure what the internal practices are, but the system is explained in terms of four levels of applying mudras. The bardo rituals for the deceased that are done in the Gelug tradition come from these yoga tantra practices.
It was mostly these three classes of tantra that went to China and from there to Japan, Korea and Vietnam. Although we do find translations of the Guhyasamaja Tantra and Hevajra Tantra in the Chinese canon, it doesn’t seem as though their practice was carried on in these countries.
- Anuttarayoga tantra is the only tantra class that works with the subtle energy systems of the body – the chakras, the channels, the winds – and the only class of tantra that accesses and deals with the clear light level of mind, which is the mind's subtlest level.
Buddha-figures, Mandalas and Mantras
In all four classes of tantra you visualize yourself as a Buddha-figure. All four classes also have mandalas (dkyil-’khor), which are the palaces in which the Buddha-figure lives and the environment around it. All four also have multiple figures inside the mandala. As far as I know, it’s only in anuttarayoga tantra, however, that you have actual couples as Buddha-figures. I may be incorrect – because I certainly don’t know all systems – but I think in the tantras of the first three classes, there are just single figures, although there may be many of them in a mandala. All of the Buddha-figures in the mandalas have mantras you recite.
In all four classes you make extensive offerings as part of the practices. The first three classes have the outer set of offerings (phyi’i mchod-pa). It’s only anuttarayoga tantra that has the inner (nang-mchod), secret (gsang-mchod), and offering of reality (de-kho-na-nyid mchod-pa, thusness offering). The inner offering is of the five meats and five nectars which are transformed into nectars; the secret offering is of blissful awareness; and the offering of reality is of the simultaneous cognition of the two truths. As far as I know, it’s only in the anuttarayoga class that you have the offering of tsog (tshogs), which has the transformation of meat and alcohol as you would have in the inner offering.
All four classes have retreats that you do with a certain number of mantras. And all of them will then have a fire puja (sbyin-sreg) to conclude the retreat. This is a very elaborate ritual in which, with a special visualization, you offer many different substances into a fire. Some of these fire pujas also require reciting a large number of mantras during the fire puja itself and making a large number of offerings into the fire. Like, for instance, for the White Tara long-life retreat, which is kriya tantra, you need to say the mantra of Tara a million times, and then during the fire puja, you offer 10,000 pairs of stalks of a special grass and recite 10,000 mantras. It all has to be done in one sitting – you can’t get up, you have to finish it – plus the whole fire ritual as well. So we shouldn’t think that kriya tantra practice is easier than anuttarayoga. It’s certainly not.
In each of these classes of tantra, there is an empowerment – a so-called "initiation" – a wang (dbang) in Tibetan. This is done usually with some type of mandala. It could either be one that is painted or drawn or a three-dimensional mandala. It’s only in the highest class of tantra, I believe, that some of the systems, for instance Chakrasamvara, have body mandalas from which the empowerment can be made. In such empowerments, the various parts of the guru’s body are visualized as the various parts of the building and the various figures inside the mandala. During the empowerment, the guru visualizes himself and you visualize the guru's body as the mandala, and the empowerment is given from the body mandala.
Empowerments have many parts, each also called an empowerment. Each class of tantra has progressively more parts:
- Kriya tantra – the first two parts of the vajra disciple empowerment (rdo-rje slob-ma'i dbang), which is the first part of the vase empowerment (bum-dbang): namely, the water empowerment (chu-dbang) and the crown empowerment (cod-pan-gyi dbang)
- Charya tantra – all five parts of the vajra disciple empowerment: namely, in addition to the water and crown empowerments, the vajra empowerment (rdo-rje dbang), the bell empowerment (dril-bu dbang) and the name empowerment (ming-dbang)
- Yoga tantra – the complete vase empowerment: namely, both the five vajra disciple empowerments and the vajra master empowerment (rdo-rje slob-dpon-gyi dbang)
- Anuttarayoga tantra – in addition to the complete vase empowerment, the secret empowerment (gsang-dbang), the discriminating deep awareness empowerment (shes-rab ye-shes dbang) and the fourth empowerment (bzhi'i dbang).
Subsequent Permissions (Jenangs)
Then to strengthen an empowerment, there is a jenang (rjes-snang), which means a “subsequent permission.” All the various Buddha-figure systems in each of the four tantra classes have an associated subsequent permission. They are conferred on the basis of a torma (gtor-ma), a ritual cake, which itself is generated as the Buddha-figure. Often in the West, the various teachers will give just this jenang – it’s much shorter – but some people think that that is the empowerment, and they use the word initiation loosely for both the empowerment and the subsequent permission, but these two are quite different rituals.
As part of both empowerments and subsequent permissions, all four classes of tantra include the taking of bodhisattva vows. But only yoga tantra and anuttarayoga tantra have tantric vows.
All four classes of tantra have self-initiations (bdag-’jug). After you’ve done the retreat and fire puja, then you can do the self-initiation by yourself, which is extremely long and complicated to do. It involves
- Self-generation – generating oneself as the mandala and Buddha-figures inside
- Front-generation – generating in front of you the mandala from which you will receive the empowerment
- Generation of the vase – generating the mandala within the vase with which, as an instrument, the empowerment is given
- The empowerment ritual itself.
Self-initiation is done by yourself. There is no teacher. That’s why it’s called "self-initiation." If you are a serious tantric practitioner, you would perform them periodically to renew your vows by yourself. You also have to perform a self-initiation immediately before conferring an empowerment on others. So when you ask a teacher to give an empowerment, you should be aware that that means hours of ritual practice on the morning of the day they give the empowerment. It’s emphasized very much that we try to perform a self-initiation right before we die so that we die with pure vows.
When you take an empowerment, you usually have the practice commitment to do the practice, called a sadhana (sgrub-thabs), every single day for the rest of your life. If you miss a day and you haven’t done the retreat and fire puja, then to make that up you have to recite 100,000 Vajrasattva mantras, the 100-syllable mantra. But if you have done the retreat and fire puja, then to make up that transgression you can do the self-initiation. You don’t need to do the Vajrasattva mantra recitation practice.
Although you might get the impression that tantric practitioners are only doing anuttarayoga tantra practice, that’s not the case. Everybody does some kriya practice. But kriya tantra has a whole long path with many parts and complicated practices, and perhaps not too many practitioners do the more advanced, complicated practices. But that’s the case with anuttarayoga tantra practices as well; very few go beyond the generation stage, which entails sadhana and mantra practice.
Preliminary Practices in the Gelug Tradition
All four classes of tantra require as the foundation or basis a very firm development of the lam-rim points. Tsongkhapa puts the emphasis in terms of the three principal pathways of mind – renunciation or the determination to be free, bodhichitta, and the understanding of voidness. Those are absolute prerequisites. Then there are the so-called extraordinary preliminary practices of 100,000 prostrations and Vajrasattva mantra, etc. Gelugpa has a very extensive presentation of nine of these and not just the more commonly practiced four (prostration, mandala offering, Vajrasattva mantra recitation and guru-yoga). The nine are 100,000 repetitions of:
- Prostration, usually done while reciting the names of the 35 so-called "confession Buddhas"
- Mandala offering
- Refuge and bodhichitta, usually done together with mandala offerings, while reciting a verse that covers both. So all three are done together – mandala offering, refuge and bodhichitta – which in some other traditions might be done separately
- Vajrasattva mantra recitation
- Guru yoga, which in the Gelug tradition is usually the four-line verse of Migtsema (dMigs-brtse-ma), the Tsongkhapa verse. There’s also a five-line and a nine-line variant, but it’s usually the four-line version.
- Damtsig Dorje (Dam-tshig rdo-rje, Skt. Samayavajra) mantra recitation, which is for purifying any transgression of a close bond with your teachers
- Zache Kadro (Za-byed mkha’-’gro), which is another type of fire puja for burning off obstacles.
- Making and offering tsa-tsa clay votive tablets
- Making water bowl offerings.
So we shouldn’t think that the Gelug tradition doesn’t have these preliminary practices. It has a lot of them, more than you find in many other traditions. But usually they’re not done as one whole event, where you take a period of time out just to do these preliminaries; rather, you do each one when it fits into your study and practice schedule. So you might have a break in your studies, and then you do your prostrations, or something like that. And although in theory one is supposed to do all of these preliminaries before receiving an empowerment, it is very rare in any of the Tibetan traditions that that is followed strictly. Most Tibetans will have received some sort of empowerment much earlier in their Dharma career.
What Is a Damtsig or Samaya?
Could you explain what damtsig or samaya means?
The Tibetan word “damtsig” (dam-tshig), samaya in Sanskrit, means “close bond.” Sometimes people translate it as “holy word,” or “promise,” or things like that: that’s very misleading if one looks at the larger context of all its usages. It’s a close bond, a close connection. It’s used in many different contexts. One is the close bond with a Buddha-figure, as in yidam – "yi" meaning "mind" and "dam" is short for "damtsig," so "damtsig" or "samaya" for the mind – by visualizing ourselves, imagining ourselves in that form.
Then it’s very important to have a very pure damtsig or samaya with our spiritual master – a close bond with the spiritual master, which is sort of like a heart-to-heart connection that you feel very strongly and don’t want to sully by lying, being deceitful, cheating, or pretending that you’ve been doing your practice when you haven’t really – these sorts of things would mess up that close bond. You want to keep it, because it’s really something sacred. The word “dam” in damtsig has the connotation of “being sacred,” so it’s really something very sacred, very special, and you want to keep it that way. So that’s a sacred close bond, a close connection.
And then there are the various vows – there’s a difference between a vow and a samaya. A vow is to restrain from a certain action, either a naturally destructive action or something that is proscribed for certain purposes, like eating in the evening for ordained people. It’s not that such things are negative actions, but they are things you want to refrain from, restrain yourself from, because they would be detrimental – like eating at night if you want to meditate at night and have a clear mind at night and in the morning. Eating makes your mind heavy, so you refrain from that. That’s a vow – to restrain or refrain from something – whereas a damtsig is a close bond: what you do, rather than what you refrain from. There are nineteen “close-bonding practices,” as I call them, nineteen samayas with the five Buddha-families in the highest class of tantra. Buddha-families refer to different aspects of Buddha-nature. Take, for example, the Ratnasambhava family, the jewel family, which deals with the Buddha-nature factor of equalizing awareness – being able to see the equality of everyone, to put them all together in terms of “Everybody wants to be happy and nobody wants to be unhappy. Everybody is equally void in terms of how they exist...” – seeing the pattern of how everything fits together in one equalizing way. To make a close bond with that, one practices four types of generosity, giving to others equally – material things, Dharma, love, and protection from fear. Those are damtsigs, close-bonding practices, to bond you closely to that Buddha-nature factor of equalizing awareness so that we develop it more. That’s the meaning of damtsig.
Breaking Practice Commitments and Samaya
If we’re an old person and we’ve broken our samaya because we’re sick and can’t do the practice, we may die at any moment, and so we have no opportunity to purify our transgression of our samaya. And if there is no lama who is close to us at that time, nobody can help us. So that’s a very dangerous situation. Or, if because of sickness we just can’t do the practice, that could be the cause of a transgression of samaya.
Well, that is true. It depends here what we mean by samaya (dam-tshig). We need to be careful not to confuse samaya with a practice commitment. A practice commitment is to do a sadhana recitation, to do a certain number of mantras every day, or it may be to do the retreat. A retreat in a Tibetan context certainly is not referring to a weekend residential course. That is not a retreat. A retreat means doing 100,000 – or often many, many more than just 100,000 – repetitions of a mantra, which, by the way, is not the main emphasis of the retreat; that’s just a measure of the length of the retreat. The emphasis in the retreat is the sadhana ritual and developing single-minded concentration, and when you get tired doing that, then you do the mantra. But in any case, all of these are practice commitments. And although there are long versions for a sadhana, when one is familiar with the long version you can practice a more abbreviated form, particularly if you discuss that with your teacher.
Breaking Practice Commitments
When you do a retreat, for example, during the retreat you must never break the continuity of the retreat. So you can’t miss a day. And for that reason, the advice is always given that on the first night of the retreat – you usually start retreats at night – you set at that time the number of mantras that is going to be the absolute minimum for each day. And so the advice is to say only three mantras that first night, because if you’re sick you can usually manage to do three mantras.
With the Vajrayogini empowerment, there you make a commitment – which you say to yourself (you don’t have to say it to anybody else) – of how many mantras you’re going to do every day. Some people are over-enthusiastic and make a commitment to do not even just one mala (one hundred mantras), but they might even say two or three hundred, and then they’re in big, big trouble if they get sick. So my teacher always recommended to just say that you’re going to do three a day – three repetitions, not three malas. And if you want to do three malas or three hundred malas a day, you are most welcome to do that. But if you’re sick, then three is enough.
In terms of a practice commitment, if you are totally sick – let’s say you are in a coma, or something like that – obviously you haven’t broken your practice commitment, because you can’t say it. I mean, it’s not that fanatic, that: “You’re going to go to hell because you’re in a coma.” There are always exceptions.
But when we talk about samaya, samaya means a “close bond” literally. And the most important one is the close bond with the teacher and not to reveal the private teachings to those who are unripe. So as part of an empowerment ritual, you actually promise to keep the practice private (which is what keeping secrecy about them means) and to keep a vajra and bell – not that you have to keep one in your pocket all the time – which represent voidness and blissful awareness. The close bond with the teacher means that you’re going to always respect the teacher, not despise the teacher, or get angry and yell at the teacher, say the teacher is stupid and no good, and so on. The close bond is to always be respectful. There’s a whole set of protocols of how you regard the tantric teacher. That’s the most important samaya.
By the way, you have to always keep in mind the advice that’s given in the Fifty Stanzas on the Guru. It’s said that one should study this text before receiving an empowerment and the teacher should teach this before giving an empowerment. It’s not done so frequently, but that’s the proper protocol. And although it says some things that are a little bit unusual – not to step on the shadow of the guru, etc. – what is most relevant in this text is that if the teacher asks you to do something unreasonable or which you’re not able to do, or if the teacher acts strangely, then you politely ask the teacher about it. You don’t hate the teacher and say they’re stupid or horrible, but you politely ask the teacher, “Could you kindly explain to me why you’re acting that way? It's not the way that is described in the texts,” or “You asked me to do this, and I’m not able to do it. This is really impossible for me. Could you explain why you asked me to do this?” Or you simply say, “I’m sorry, I can’t do this,” but you’re polite.
The Kalachakra Tantra says that if it really gets difficult with the teacher and you didn’t examine the teacher well enough before you received the empowerment and you find the teacher really is not qualified, then just keep a polite distance, but without despising the teacher. Just keep a distance. So even if one dies or is very sick, then the dying or the getting sick is not going to be the cause of breaking that close bond with the teacher, that samaya. It’s your attitude that breaks it.
Now, the samaya, or close bond, not to reveal the hidden or secret teachings to those who are unripe, that’s not so easy to understand. If one took it in a very literal way, it would mean never teaching any of the tantra material to those who haven’t received an empowerment. That’s based on the assumption that everybody who is given an empowerment has been examined very, very well by the teacher and is qualified – the teacher has found that that student is qualified – and then the teacher gives the empowerment. But this is hardly ever done nowadays. So just because somebody has attended an empowerment doesn’t mean at all that they are a qualified person for tantra practice or even that they’re interested in it (they just went because it was given). So who is ripe, who is unripe? That’s very difficult for us to know.
Secondly, almost everything is publicly available now, in any case. Nothing is really secret anymore. And so, as His Holiness the Dalai Lama jokes, there are some teachings which say they should not be written down or printed, and you find not only printed versions of these teachings that have been published, but people even put at the beginning of it: “This is not to be printed or published,” which is of course extremely silly. So His Holiness says that if the information is available anyway, then it is better that it be correct information rather than misleading information.
So I think that it’s difficult really to understand how we would put this close bond into practice. I think that one guideline for it – one that at least I try to follow, though it’s difficult if you make a book and put something on the internet – in terms of personal interaction is a guideline from one of the secondary tantric vows, the vow not to spend more than a week among the shravakas, the so-called listeners. The point of that is not that they are a Theravada or another type of Hinayana practitioner. That’s not the point. The point is that if they’re someone who would discourage you from working toward enlightenment on the Mahayana path of tantra, saying, “This is stupid,” and telling you, “Well, just work for liberation,” then if you spent a lot of time with them, you would get discouraged from your tantric practice.
So by extension from this, the way to practice that I find helpful in terms of this samaya is to emphasize another way of translating the word secret (gsang). Secret can mean either hidden or it can mean private. And so what is at least a guideline that I try to follow is don’t publicize your tantric practice to those who would make fun of it or who wouldn’t understand. Keep it to yourself, and only discuss it with others who are also tantric practitioners. Because if you tell others who are not into tantra, they might make fun of you, they might discourage you, might tell you this is crazy. (It’s the same thing if you have thangkas, Tibetan paintings, of various figures in union or naked, and so on, and just anybody who walks into your house can see it. They might ask some very difficult questions or get a very wrong idea, especially if it’s children.) So you keep that private. Either you have a meditation room or your own room so that just anybody who walks into the house doesn’t see it.
Then there’s the samaya of keeping the vajra and bell. Although of course it’s very nice to have those ritual instruments, the main emphasis is to remember what they represent. The bell represents the discriminating awareness of voidness; and the vajra, the blissful awareness with which you understand voidness.
Dying with Transgressions
If we die with various transgressions and so on, then if we have time and we have the conscious awareness, of course the self-initiation is best. If not, then we need to apply the four opponent forces:
- Admit that what we’ve done was mistaken and regret it
- Give the strong resolve or promise that in the future and in future lives, we won’t repeat it
- Reaffirm our basis, which is safe direction (refuge) and bodhichitta
- And then apply opponent forces, like Vajrasattva mantra practice.
But as His Holiness the Dalai Lama has explained, although in anuttarayoga tantra we do practices that are similar to death, bardo, and rebirth – which, by the way, are unique to anuttarayoga tantra (you don’t have them in the three lower classes) – nevertheless when we’re actually dying, for most people it’s not very practical to try to do these elaborate visualizations that we have been practicing in the sadhanas. They’re too complicated, too difficult, and it might just put you into a state of stress (that: “Oh, I can’t get it exactly right!”). So whatever practice you’ve done earlier in your life, the force of that will carry on. But when you’re actually dying, the best thought is to keep bodhichitta – “May I continue to work toward enlightenment to benefit all beings” – which would then include having a precious human rebirth, meeting with the teachers, having all the opportunities to be able to continue on the path. This is a much more stable state of mind within which to die.
And obviously if we die in our sleep or unconscious or we die suddenly, then whatever thoughts and state of mind we were in before that will have a big effect on our future lives. Also very important is what has been the most dominant, frequent state of mind that we’ve had during our life. Actually, one of the main meditations done in lam-rim on the three worse states of rebirth – taking them seriously in terms of what our future lifetimes might be – is to review at the end of the day how many times and how much during the day we had a constructive, positive state of mind and how many times we had a negative state of mind. How many times did I have thoughts of compassion towards others? How many times did I have anger or lust or jealousy or negative thoughts about others? And for most of us, every day we will find that we have built up far more causes for a worse rebirth than for a better one. That’s actually a very effective meditation. So that’s why it’s important to try to have our most frequent thought, the one we’re most accustomed to, be constructive. And that’s very difficult because we’re far more familiar, through countless lifetimes, with having quite a negative mind.
When you’re driving in traffic, how many thoughts of love and compassion do you have for the people in the other cars? And how many nasty thoughts do you have about them and about the traffic? That gives us a good indication of where we’re going after we die.
Positivism is an empiricist philosophical theory that holds that all genuine knowledge is either true by definition or positive—meaning a posteriori facts derived by reason and logic from sensory experience. Other ways of knowing, such as theology, metaphysics, intuition, or introspection are rejected or considered meaningless.
Although the positivist approach has been a recurrent theme in the history of western thought, modern positivism was first articulated in the early 19th century by Auguste Comte. His school of sociological positivism holds that society, like the physical world, operates according to general laws. After Comte, positivist schools arose in logic, psychology, economics, historiography, and other fields of thought. Generally, positivists attempted to introduce scientific methods to their respective fields. Since the turn of the 20th century, positivism has declined under criticism from antipositivists and critical theorists, among others, for its alleged scientism, reductionism, overgeneralizations, and methodological limitations.
The English noun positivism was re-imported in the 19th century from the French word positivisme, derived from positif in its philosophical sense of 'imposed on the mind by experience'. The corresponding adjective (Latin: positīvus) has been used in a similar sense to discuss law (positive law compared to natural law) since the time of Chaucer.
Kieran Egan argues that positivism can be traced to the philosophy side of what Plato described as the quarrel between philosophy and poetry, later reformulated by Wilhelm Dilthey as a quarrel between the natural sciences (German: Naturwissenschaften) and the humanities (Geisteswissenschaft).
In the early nineteenth century, massive advances in the natural sciences encouraged philosophers to apply scientific methods to other fields. Thinkers such as Henri de Saint-Simon, Pierre-Simon Laplace and Auguste Comte believed the scientific method, the circular dependence of theory and observation, must replace metaphysics in the history of thought.
Auguste Comte (1798–1857) first described the epistemological perspective of positivism in The Course in Positive Philosophy, a series of texts published between 1830 and 1842. These texts were followed by the 1844 work, A General View of Positivism (published in French 1848, English in 1865). The first three volumes of the Course dealt chiefly with the physical sciences already in existence (mathematics, astronomy, physics, chemistry, biology), whereas the latter two emphasized the inevitable coming of social science. Observing the circular dependence of theory and observation in science, and classifying the sciences in this way, Comte may be regarded as the first philosopher of science in the modern sense of the term. For him, the physical sciences had necessarily to arrive first, before humanity could adequately channel its efforts into the most challenging and complex "Queen science" of human society itself. His View of Positivism therefore set out to define the empirical goals of sociological method.
"The most important thing to determine was the natural order in which the sciences stand—not how they can be made to stand, but how they must stand, irrespective of the wishes of any one. ... This Comte accomplished by taking as the criterion of the position of each the degree of what he called "positivity," which is simply the degree to which the phenomena can be exactly determined. This, as may be readily seen, is also a measure of their relative complexity, since the exactness of a science is in inverse proportion to its complexity. The degree of exactness or positivity is, moreover, that to which it can be subjected to mathematical demonstration, and therefore mathematics, which is not itself a concrete science, is the general gauge by which the position of every science is to be determined. Generalizing thus, Comte found that there were five great groups of phenomena of equal classificatory value but of successively decreasing positivity. To these he gave the names astronomy, physics, chemistry, biology, and sociology."
Comte offered an account of social evolution, proposing that society undergoes three phases in its quest for the truth according to a general "law of three stages". The idea bears some similarity to Marx's belief that human society would progress toward a communist peak (see dialectical materialism). This is perhaps unsurprising as both were profoundly influenced by the early Utopian socialist Henri de Saint-Simon, who was at one time Comte's mentor. Comte intended to develop a secular-scientific ideology in the wake of European secularisation.
Comte's stages were (1) the theological, (2) the metaphysical, and (3) the positive. The theological phase of man was based on whole-hearted belief in all things with reference to God. God, Comte says, had reigned supreme over human existence pre-Enlightenment. Humanity's place in society was governed by its association with the divine presences and with the church. The theological phase deals with humankind's accepting the doctrines of the church (or place of worship) rather than relying on its rational powers to explore basic questions about existence. It dealt with the restrictions put in place by the religious organization at the time and the total acceptance of any "fact" adduced for society to believe.
Comte describes the metaphysical phase of humanity as the time since the Enlightenment, a time steeped in logical rationalism, to the time right after the French Revolution. This second phase states that the universal rights of humanity are most important. The central idea is that humanity is invested with certain rights that must be respected. In this phase, democracies and dictators rose and fell in attempts to maintain the innate rights of humanity.
The final stage of the trilogy of Comte's universal law is the scientific, or positive, stage. The central idea of this phase is that individual rights are more important than the rule of any one person. Comte stated that the idea of humanity's ability to govern itself makes this stage inherently different from the rest. There is no higher power governing the masses and the intrigue of any one person can achieve anything based on that individual's free will. The third principle is most important in the positive stage. Comte calls these three phases the universal rule in relation to society and its development. Neither the second nor the third phase can be reached without the completion and understanding of the preceding stage. All stages must be completed in progress.
Comte believed that the appreciation of the past and the ability to build on it towards the future was key in transitioning from the theological and metaphysical phases. The idea of progress was central to Comte's new science, sociology. Sociology would "lead to the historical consideration of every science" because "the history of one science, including pure political history, would make no sense unless it was attached to the study of the general progress of all of humanity". As Comte would say: "from science comes prediction; from prediction comes action." It is a philosophy of human intellectual development that culminated in science. The irony of this series of phases is that though Comte attempted to prove that human development has to go through these three stages, it seems that the positivist stage is far from becoming a realization. This is due to two truths: The positivist phase requires having a complete understanding of the universe and world around us and requires that society should never know if it is in this positivist phase. Anthony Giddens argues that since humanity constantly uses science to discover and research new things, humanity never progresses beyond the second metaphysical phase.
Comte's fame today owes in part to Emile Littré, who founded The Positivist Review in 1867. As an approach to the philosophy of history, positivism was appropriated by historians such as Hippolyte Taine. Many of Comte's writings were translated into English by the Whig writer, Harriet Martineau, regarded by some as the first female sociologist. Debates continue to rage as to how much Comte appropriated from the work of his mentor, Saint-Simon. He was nevertheless influential: Brazilian thinkers turned to Comte's ideas about training a scientific elite in order to flourish in the industrialization process. Brazil's national motto, Ordem e Progresso ("Order and Progress") was taken from the positivism motto, "Love as principle, order as the basis, progress as the goal", which was also influential in Poland.
In later life, Comte developed a 'religion of humanity' for positivist societies in order to fulfil the cohesive function once held by traditional worship. In 1849, he proposed a calendar reform called the 'positivist calendar'. For close associate John Stuart Mill, it was possible to distinguish between a "good Comte" (the author of the Course in Positive Philosophy) and a "bad Comte" (the author of the secular-religious system). The system was unsuccessful, but, together with the publication of Darwin's On the Origin of Species, it influenced the proliferation of various secular humanist organizations in the 19th century, especially through the work of secularists such as George Holyoake and Richard Congreve. Although Comte's English followers, including George Eliot and Harriet Martineau, for the most part rejected the full gloomy panoply of his system, they liked the idea of a religion of humanity and his injunction to "vivre pour autrui" ("live for others", from which comes the word "altruism").
The early sociology of Herbert Spencer came about broadly as a reaction to Comte; writing after various developments in evolutionary biology, Spencer attempted (in vain) to reformulate the discipline in what we might now describe as socially Darwinistic terms.
Within a few years, other scientific and philosophical thinkers began creating their own definitions for positivism. These included Émile Zola, Emile Hennequin, Wilhelm Scherer, and Dimitri Pisarev. Fabien Magnin was the first working-class adherent to Comte's ideas, and became the leader of a movement known as "Proletarian Positivism". Comte appointed Magnin as his successor as president of the Positive Society in the event of Comte's death. Magnin filled this role from 1857 to 1880, when he resigned. Magnin was in touch with the English positivists Richard Congreve and Edward Spencer Beesly. He established the Cercle des prolétaires positivistes in 1863 which was affiliated to the First International. Eugène Sémérie was a psychiatrist who was also involved in the Positivist movement, setting up a positivist club in Paris after the foundation of the French Third Republic in 1870. He wrote: "Positivism is not only a philosophical doctrine, it is also a political party which claims to reconcile order—the necessary basis for all social activity—with Progress, which is its goal."
The modern academic discipline of sociology began with the work of Émile Durkheim (1858–1917). While Durkheim rejected much of the details of Comte's philosophy, he retained and refined its method, maintaining that the social sciences are a logical continuation of the natural ones into the realm of human activity, and insisting that they may retain the same objectivity, rationalism, and approach to causality. Durkheim set up the first European department of sociology at the University of Bordeaux in 1895, publishing his Rules of the Sociological Method (1895). In this text he argued: "[o]ur main goal is to extend scientific rationalism to human conduct... What has been called our positivism is but a consequence of this rationalism."
Durkheim's seminal monograph, Suicide (1897), a case study of suicide rates amongst Catholic and Protestant populations, distinguished sociological analysis from psychology or philosophy. By carefully examining suicide statistics in different police districts, he attempted to demonstrate that Catholic communities have a lower suicide rate than Protestants, something he attributed to social (as opposed to individual or psychological) causes. He developed the notion of objective sui generis "social facts" to delineate a unique empirical object for the science of sociology to study. Through such studies, he posited, sociology would be able to determine whether a given society is 'healthy' or 'pathological', and seek social reform to negate organic breakdown or "social anomie". Durkheim described sociology as the "science of institutions, their genesis and their functioning".
David Ashley and David M. Orenstein have alleged, in a consumer textbook published by Pearson Education, that accounts of Durkheim's positivism are possibly exaggerated and oversimplified; Comte was the only major sociological thinker to postulate that the social realm may be subject to scientific analysis in exactly the same way as natural science, whereas Durkheim saw a far greater need for a distinctly sociological scientific methodology. His lifework was fundamental in the establishment of practical social research as we know it today—techniques which continue beyond sociology and form the methodological basis of other social sciences, such as political science, as well of market research and other fields.
In historiography, historical or documentary positivism is the belief that historians should pursue the objective truth of the past by allowing historical sources to "speak for themselves", without additional interpretation. In the words of the French historian Fustel de Coulanges, as a positivist, "It is not I who am speaking, but history itself". The heavy emphasis placed by historical positivists on documentary sources led to the development of methods of source criticism, which seek to expunge bias and uncover original sources in their pristine state.
The origin of the historical positivist school is particularly associated with the 19th-century German historian Leopold von Ranke, who argued that the historian should seek to describe historical truth "wie es eigentlich gewesen ist" ("as it actually was")—though subsequent historians of the concept, such as Georg Iggers, have argued that its development owed more to Ranke's followers than Ranke himself.
Historical positivism was critiqued in the 20th century by historians and philosophers of history from various schools of thought, including Ernst Kantorowicz in Weimar Germany—who argued that "positivism ... faces the danger of becoming Romantic when it maintains that it is possible to find the Blue Flower of truth without preconceptions"—and Raymond Aron and Michel Foucault in postwar France, who both posited that interpretations are always ultimately multiple and there is no final objective truth to recover. In his posthumously published 1946 The Idea of History, the English historian R. G. Collingwood criticized historical positivism for conflating scientific facts with historical facts, which are always inferred and cannot be confirmed by repetition, and argued that its focus on the "collection of facts" had given historians "unprecedented mastery over small-scale problems", but "unprecedented weakness in dealing with large-scale problems".
Historicist arguments against positivist approaches in historiography include that history differs from sciences like physics and ethology in subject matter and method; that much of what history studies is nonquantifiable, and therefore to quantify is to lose in precision; and that experimental methods and mathematical models do not generally apply to history, so that it is not possible to formulate general (quasi-absolute) laws in history.
In psychology the positivist movement was influential in the development of operationalism. The 1927 philosophy of science book The Logic of Modern Physics in particular, which was originally intended for physicists, coined the term operational definition, which went on to dominate psychological method for the whole century.
In economics, practicing researchers tend to emulate the methodological assumptions of classical positivism, but only in a de facto fashion: the majority of economists do not explicitly concern themselves with matters of epistemology. Economic thinker Friedrich Hayek (see "Law, Legislation and Liberty") rejected positivism in the social sciences as hopelessly limited in comparison to evolved and divided knowledge. For example, much (positivist) legislation falls short in contrast to pre-literate or incompletely defined common or evolved law.
In jurisprudence, "legal positivism" essentially refers to the rejection of natural law; thus its common meaning with philosophical positivism is somewhat attenuated and in recent generations generally emphasizes the authority of human political structures as opposed to a "scientific" view of law.
Logical positivism (later and more accurately called logical empiricism) is a school of philosophy that combines empiricism, the idea that observational evidence is indispensable for knowledge of the world, with a version of rationalism, the idea that our knowledge includes a component that is not derived from observation.
Logical positivism grew from the discussions of a group called the "First Vienna Circle", which gathered at the Café Central before World War I. After the war Hans Hahn, a member of that early group, helped bring Moritz Schlick to Vienna. Schlick's Vienna Circle, along with Hans Reichenbach's Berlin Circle, propagated the new doctrines more widely in the 1920s and early 1930s.
It was Otto Neurath's advocacy that made the movement self-conscious and more widely known. A 1929 pamphlet written by Neurath, Hahn, and Rudolf Carnap summarized the doctrines of the Vienna Circle at that time. These included the opposition to all metaphysics, especially ontology and synthetic a priori propositions; the rejection of metaphysics not as wrong but as meaningless (i.e., not empirically verifiable); a criterion of meaning based on Ludwig Wittgenstein's early work (which he himself later set out to refute); the idea that all knowledge should be codifiable in a single standard language of science; and above all the project of "rational reconstruction," in which ordinary-language concepts were gradually to be replaced by more precise equivalents in that standard language. However, the project is widely considered to have failed.
After moving to the United States, Carnap proposed a replacement for the earlier doctrines in his Logical Syntax of Language. This change of direction, and the somewhat differing beliefs of Reichenbach and others, led to a consensus that the English name for the shared doctrinal platform, in its American exile from the late 1930s, should be "logical empiricism." While the logical positivist movement is now considered dead, it has continued to influence philosophical development.
Historically, positivism has been criticized for its reductionism, i.e., for contending that all "processes are reducible to physiological, physical or chemical events," "social processes are reducible to relationships between and actions of individuals," and that "biological organisms are reducible to physical systems."
The consideration that laws in physics may not be absolute but relative, and, if so, this might be even more true of social sciences, was stated, in different terms, by G. B. Vico in 1725. Vico, in contrast to the positivist movement, asserted the superiority of the science of the human mind (the humanities, in other words), on the grounds that natural sciences tell us nothing about the inward aspects of things.
Wilhelm Dilthey fought strenuously against the assumption that only explanations derived from science are valid. He reprised Vico's argument that scientific explanations do not reach the inner nature of phenomena and it is humanistic knowledge that gives us insight into thoughts, feelings and desires. Dilthey was in part influenced by the historicism of Leopold von Ranke (1795–1886).
The contestation over positivism is reflected both in older debates (see the Positivism dispute) and current ones over the proper role of science in the public sphere. Public sociology—especially as described by Michael Burawoy—argues that sociologists should use empirical evidence to display the problems of society so they might be changed.
At the turn of the 20th century, the first wave of German sociologists formally introduced methodological antipositivism, proposing that research should concentrate on human cultural norms, values, symbols, and social processes viewed from a subjective perspective. Max Weber, one such thinker, argued that while sociology may be loosely described as a 'science' because it is able to identify causal relationships (especially among ideal types), sociologists should seek relationships that are not as "ahistorical, invariant, or generalizable" as those pursued by natural scientists. Weber regarded sociology as the study of social action, using critical analysis and verstehen techniques. The sociologists Georg Simmel, Ferdinand Tönnies, George Herbert Mead, and Charles Cooley were also influential in the development of sociological antipositivism, whilst neo-Kantian philosophy, hermeneutics, and phenomenology facilitated the movement in general.
In the mid-twentieth century, several important philosophers and philosophers of science began to critique the foundations of logical positivism. In his 1934 work The Logic of Scientific Discovery, Karl Popper argued against verificationism. A statement such as "all swans are white" cannot actually be empirically verified, because it is impossible to know empirically whether all swans have been observed. Instead, Popper argued that at best an observation can falsify a statement (for example, observing a black swan would prove that all swans are not white). Popper also held that scientific theories talk about how the world really is (not about phenomena or observations experienced by scientists), and critiqued the Vienna Circle in his Conjectures and Refutations. W. V. O. Quine and Pierre Duhem went even further. The Duhem–Quine thesis states that it is impossible to experimentally test a scientific hypothesis in isolation, because an empirical test of the hypothesis requires one or more background assumptions (also called auxiliary assumptions or auxiliary hypotheses); thus, unambiguous scientific falsifications are also impossible. Thomas Kuhn, in his 1962 book The Structure of Scientific Revolutions, put forward his theory of paradigm shifts. He argued that it is not simply individual theories but whole worldviews that must occasionally shift in response to evidence.
Together, these ideas led to the development of critical rationalism and postpositivism. Postpositivism is not a rejection of the scientific method, but rather a reformation of positivism to meet these critiques. It reintroduces the basic assumptions of positivism: the possibility and desirability of objective truth, and the use of experimental methodology. Postpositivism of this type is described in social science guides to research methods. Postpositivists argue that theories, hypotheses, background knowledge and values of the researcher can influence what is observed. Postpositivists pursue objectivity by recognizing the possible effects of biases. While positivists emphasize quantitative methods, postpositivists consider both quantitative and qualitative methods to be valid approaches.
In the early 1960s, the positivism dispute arose between the critical theorists (see below) and the critical rationalists over the correct solution to the value judgment dispute (Werturteilsstreit). While both sides accepted that sociology cannot avoid a value judgement that inevitably influences subsequent conclusions, the critical theorists accused the critical rationalists of being positivists; specifically, of asserting that empirical questions can be severed from their metaphysical heritage and refusing to ask questions that cannot be answered with scientific methods. This contributed to what Karl Popper termed the "Popper Legend", a misconception among critics and admirers of Popper that he was, or identified himself as, a positivist.
Although Karl Marx's theory of historical materialism drew upon positivism, the Marxist tradition would also go on to influence the development of antipositivist critical theory. Critical theorist Jürgen Habermas critiqued pure instrumental rationality (in its relation to the cultural "rationalisation" of the modern West) as a form of scientism, or science "as ideology". He argued that positivism may be espoused by "technocrats" who believe in the inevitability of social progress through science and technology. New movements, such as critical realism, have emerged in order to reconcile postpositivist aims with various so-called 'postmodern' perspectives on the social acquisition of knowledge.
Max Horkheimer criticized the classic formulation of positivism on two grounds. First, he claimed that it falsely represented human social action. The first criticism argued that positivism systematically failed to appreciate the extent to which the so-called social facts it yielded did not exist 'out there', in the objective world, but were themselves a product of socially and historically mediated human consciousness. Positivism ignored the role of the 'observer' in the constitution of social reality and thereby failed to consider the historical and social conditions affecting the representation of social ideas. Positivism falsely represented the object of study by reifying social reality as existing objectively and independently of the labour that actually produced those conditions. Secondly, he argued, representation of social reality produced by positivism was inherently and artificially conservative, helping to support the status quo, rather than challenging it. This character may also explain the popularity of positivism in certain political circles. Horkheimer argued, in contrast, that critical theory possessed a reflexive element lacking in the positivistic traditional theory.
Some scholars today hold the beliefs critiqued in Horkheimer's work, but since the time of his writing critiques of positivism, especially from philosophy of science, have led to the development of postpositivism. This philosophy greatly relaxes the epistemological commitments of logical positivism and no longer claims a separation between the knower and the known. Rather than dismissing the scientific project outright, postpositivists seek to transform and amend it, though the exact extent of their affinity for science varies vastly. For example, some postpositivists accept the critique that observation is always value-laden, but argue that the best values to adopt for sociological observation are those of science: skepticism, rigor, and modesty. Just as some critical theorists see their position as a moral commitment to egalitarian values, these postpositivists see their methods as driven by a moral commitment to these scientific values. Such scholars may see themselves as either positivists or antipositivists.
During the later twentieth century, positivism began to fall out of favor with scientists as well. Later in his career, German theoretical physicist Werner Heisenberg, Nobel laureate for his pioneering work in quantum mechanics, distanced himself from positivism:
The positivists have a simple solution: the world must be divided into that which we can say clearly and the rest, which we had better pass over in silence. But can any one conceive of a more pointless philosophy, seeing that what we can say clearly amounts to next to nothing? If we omitted all that is unclear we would probably be left with completely uninteresting and trivial tautologies.
In the early 1970s, urbanists of the quantitative school like David Harvey started to question the positivist approach itself, saying that the arsenal of scientific theories and methods developed so far in their camp was "incapable of saying anything of depth and profundity" on the real problems of contemporary cities.
According to the Catholic Encyclopedia, positivism has also come under fire on religious and philosophical grounds; its critics hold that truth begins in sense experience, but does not end there. Positivism fails to prove that there are not abstract ideas, laws, and principles, beyond particular observable facts and relationships and necessary principles, or that we cannot know them. Nor does it prove that material and corporeal things constitute the whole order of existing beings, and that our knowledge is limited to them. According to positivism, our abstract concepts or general ideas are mere collective representations of the experimental order—for example, the idea of "man" is a kind of blended image of all the men observed in our experience. This runs contrary to a Platonic or Christian ideal, where an idea can be abstracted from any concrete determination, and may be applied identically to an indefinite number of objects of the same class. From the idea's perspective, Platonism is more precise. Defining an idea as a sum of collective images is imprecise and more or less confused, and becomes more so as the collection represented increases. An idea defined explicitly always remains clear.
Other new movements, such as critical realism, have emerged in opposition to positivism. Critical realism seeks to reconcile the overarching aims of social science with postmodern critiques. Experientialism, which arose with second generation cognitive science, asserts that knowledge begins and ends with experience itself. In other words, it rejects the positivist assertion that a portion of human knowledge is a priori.
Echoes of the "positivist" and "antipositivist" debate persist today, though this conflict is hard to define. Authors writing in different epistemological perspectives do not phrase their disagreements in the same terms and rarely actually speak directly to each other. To complicate the issues further, few practising scholars explicitly state their epistemological commitments, and their epistemological position thus has to be guessed from other sources such as choice of methodology or theory. However, no perfect correspondence between these categories exists, and many scholars critiqued as "positivists" are actually postpositivists. One scholar has described this debate in terms of the social construction of the "other", with each side defining the other by what it is not rather than what it is, and then proceeding to attribute far greater homogeneity to their opponents than actually exists. Thus, it is better to understand this not as a debate but as two different arguments: the "antipositivist" articulation of a social meta-theory which includes a philosophical critique of scientism, and "positivist" development of a scientific research methodology for sociology with accompanying critiques of the reliability and validity of work that they see as violating such standards.
While most social scientists today are not explicit about their epistemological commitments, articles in top American sociology and political science journals generally follow a positivist logic of argument. It can be thus argued that "natural science and social science [research articles] can therefore be regarded with a good deal of confidence as members of the same genre".
In contemporary social science, strong accounts of positivism have long since fallen out of favour. Practitioners of positivism today acknowledge in far greater detail observer bias and structural limitations. Modern positivists generally eschew metaphysical concerns in favour of methodological debates concerning clarity, replicability, reliability and validity. This positivism is generally equated with "quantitative research" and thus carries no explicit theoretical or philosophical commitments. The institutionalization of this kind of sociology is often credited to Paul Lazarsfeld, who pioneered large-scale survey studies and developed statistical techniques for analyzing them. This approach lends itself to what Robert K. Merton called middle-range theory: abstract statements that generalize from segregated hypotheses and empirical regularities rather than starting with an abstract idea of a social whole.
In the original Comtean usage, the term "positivism" roughly meant the use of scientific methods to uncover the laws according to which both physical and human events occur, while "sociology" was the overarching science that would synthesize all such knowledge for the betterment of society. "Positivism is a way of understanding based on science"; people don't rely on the faith in God but instead on the science behind humanity. "Antipositivism" formally dates back to the start of the twentieth century, and is based on the belief that natural and human sciences are ontologically and epistemologically distinct. Neither of these terms is used any longer in this sense. There are no fewer than twelve distinct epistemologies that are referred to as positivism. Many of these approaches do not self-identify as "positivist", some because they themselves arose in opposition to older forms of positivism, and some because the label has over time become a term of abuse by being mistakenly linked with a theoretical empiricism. The extent of antipositivist criticism has also become broad, with many philosophies broadly rejecting the scientifically based social epistemology and other ones only seeking to amend it to reflect 20th century developments in the philosophy of science. However, positivism (understood as the use of scientific methods for studying society) remains the dominant approach to both the research and the theory construction in contemporary sociology, especially in the United States.
The majority of articles published in leading American sociology and political science journals today are positivist (at least to the extent of being quantitative rather than qualitative). This popularity may be because research utilizing positivist quantitative methodologies holds greater prestige in the social sciences than qualitative work; quantitative work is easier to justify, as data can be manipulated to answer any question. Such research is generally perceived as being more scientific and more trustworthy, and thus has a greater impact on policy and public opinion (though such judgments are frequently contested by scholars doing non-positivist work).
The key features of positivism as of the 1950s, as defined in the "received view", are:
Any sound scientific theory, whether of time or of any other concept, should in my opinion be based on the most workable philosophy of science: the positivist approach put forward by Karl Popper and others. According to this way of thinking, a scientific theory is a mathematical model that describes and codifies the observations we make. A good theory will describe a large range of phenomena on the basis of a few simple postulates and will make definite predictions that can be tested. ... If one takes the positivist position, as I do, one cannot say what time actually is. All one can do is describe what has been found to be a very good mathematical model for time and say what predictions it makes.
one of the features of positivism is precisely its postulate that scientific knowledge is the paradigm of valid knowledge, a postulate that indeed is never proved nor intended to be proved.
Positivism is marked by the final recognition that science provides the only valid form of knowledge and that facts are the only possible objects of knowledge; philosophy is thus recognized as essentially no different from science [...] Ethics, politics, social interactions, and all other forms of human life about which knowledge was possible would eventually be drawn into the orbit of science [...] The positivists' program for mapping the inexorable and immutable laws of matter and society seemed to allow no greater role for the contribution of poets than had Plato. [...] What Plato represented as the quarrel between philosophy and poetry is resuscitated in the "two cultures" quarrel of more recent times between the humanities and the sciences.
To conclude, logical positivism was progressive compared with the classical positivism of Ptolemy, Hume, d'Alembert, Comte, Mill, and Mach. It was even more so by comparison with its contemporary rivals—neo-Thomism, neo-Kantianism, intuitionism, dialectical materialism, phenomenology, and existentialism. However, neo-positivism failed dismally to give a faithful account of science, whether natural or social. It failed because it remained anchored to sense-data and to a phenomenalist metaphysics, overrated the power of induction and underrated that of hypothesis, and denounced realism and materialism as metaphysical nonsense. Although it has never been practiced consistently in the advanced natural sciences and has been criticized by many philosophers, notably Popper (1959, 1963), logical positivism remains the tacit philosophy of many scientists. Regrettably, the anti-positivism fashionable in the metatheory of social science is often nothing but an excuse for sloppiness and wild speculation.
The upshot is that the positivists seem caught between insisting on the V.C. [Verifiability Criterion]—but for no defensible reason—or admitting that the V.C. requires a background language, etc., which opens the door to relativism, etc.
Socioeconomic status and exposure to disinfection by-products in drinking water in Spain
Environmental Health volume 10, Article number: 18 (2011)
Disinfection by-products in drinking water are chemical contaminants that have been associated with cancer and other adverse effects. Exposure occurs from consumption of tap water, inhalation and dermal absorption.
We determined the relationship between socioeconomic status and exposure to disinfection by-products in 1271 controls from a multicentric bladder cancer case-control study in Spain. Information on lifetime drinking water sources, swimming pool attendance, showering-bathing practices, and socioeconomic status (education, income) was collected through personal interviews.
The most highly educated subjects consumed less tap water (57%) and more bottled water (33%) than illiterate subjects (69% and 17% respectively, p-value = 0.003). These differences became wider in recent time periods. The time spent bathing or showering was positively correlated with attained educational level (p < 0.001). Swimming pool attendance was more frequent among highly educated subjects compared to the illiterate (odds ratio = 3.4; 95% confidence interval 1.6-7.3).
The most highly educated subjects were less exposed to chlorination by-products through ingestion but more exposed through dermal contact and inhalation in pools and showers/baths. Health risk perceptions and economic capacity may affect patterns of water consumption that can result in differences in exposure to water contaminants.
Environmental inequity has been defined as the disproportionately higher risk of exposure to environmental pollution that some individuals suffer due to their race, age, ethnicity, or lower income. It also refers to the equal treatment that all people should receive from environmental policies, regulations and statutes, independently of their individual or social condition. Environmental inequity implies a wide variety of concepts, such as environmental classism, environmental justice or environmental racism.
Some individuals are more likely to be exposed to pollution due to low socioeconomic status. For instance, in the US, minority communities such as African-American, Hispanic, Asian and Native American communities are more exposed to air, water and soil pollutants released from hazardous waste sites. Also, disadvantaged populations in terms of poverty, age or ethnicity live closer to industrial sites, being exposed to higher levels of airborne pollutants [5–7]. Biological or chemical pollution in drinking water is also a major concern for public health. Disinfection by-products (DBPs), inadvertently produced when drinking water is chlorinated, have been associated with adverse reproductive effects [8, 9] and cancer [10–12]. Trihalomethanes (THM), the most prevalent component of the disinfection by-product mixture, are highly volatile and skin permeable. Consequently, exposure may occur through ingestion of water, inhalation and dermal contact while showering, bathing and swimming in pools. The higher molecular weight compounds in the mixture enter the body primarily through ingestion.
Our study aimed to evaluate the relation between DBP exposures and socioeconomic status (SES), under the hypothesis that the higher social classes would be less exposed to disinfection by-products through ingestion because of enhanced ability to purchase bottled water. Other routes of exposure, such as dermal absorption and inhalation during showering, bathing, and use of swimming pools, were also considered.
Study design and subjects
One thousand two-hundred nineteen (1,219) bladder cancer cases and one thousand two-hundred and seventy-one (1,271) hospital controls were recruited between 1998 and 2001 for the Spanish Bladder Cancer case-control study from 18 hospitals in 5 regions of Spain: Barcelona, Vallès/Bages, Alicante, Tenerife and Asturias. Subjects were 21-80 years old. In the present study, only controls were included in the analysis. Controls were selected from patients admitted to the participating hospitals mainly for minor surgery or trauma. The main diagnoses in hospital admissions were hernias (37%), other abdominal surgery (11%), fractures (23%), other orthopedic conditions (7%), hydrocele (12%), circulatory conditions (4%), dermatological conditions (2%) and ophthalmologic conditions (1%). Eighty-eight percent (88%) of the controls completed a face-to-face computer-assisted personal interview (CAPI) that included socio-demographic information, smoking, occupational history, lifetime residential history, environmental exposures, medication, and family history of cancer. The study was approved by the Ethics Committees of all participating institutes; written informed consent was obtained from all patients.
Chlorination by-products exposure
Lifetime residential history was collected from subjects for each place of residence of longer than one year duration. Information requested included the main type of water consumed at each residence (i.e., public water supply, private well or bottled water), although it did not request information about changes of type of drinking water in the same residence. Other water ingestion information included: average daily water consumption (including water-based fluids such as coffee and tea); average frequency and duration of showers and/or baths; and ever lifetime swimming pool attendance. Data on current and historical levels of THM were collected from water utilities. In addition, a central laboratory measured THM in 113 tap water samples. Stratifying the regions according to the levels of THM measured, Barcelona, Vallès/Bages and Alicante would be included in the high THM levels area (mean 64, SD 27 μg/l) while Asturias and Tenerife have lower THM levels (mean 17, SD 13 μg/l). Lifetime individual exposure indices were calculated, as described elsewhere, merging individual and municipal databases by year and municipality, obtaining individual year-by-year average THM levels. Several individual exposure indices were created: current residential THM level, the level of THM in the residence at the time of interview, as a dichotomous variable (below or above the median, 48 μg/l); average residential THM exposure, i.e. the time-weighted average municipal THM level (μg/l) for all residences over age 15, as a dichotomous variable (below or above the median, 26 μg/l); swimming pool attendance, as a dichotomous variable: attending a swimming pool once or more than once per year contrasted with never (or less than once per year); showers and baths, as a dichotomous variable: time spent in the bath/shower at or below the median (≤7 min/day) or above the median. The reproducibility of the questions about showering, bathing and swimming was evaluated in a subsample of the study subjects, obtaining more than 90% agreement in the answers.
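The time-weighted averaging behind these exposure indices is simple to reproduce. The sketch below is a minimal illustration rather than the study's actual pipeline: the table layout, column names and example values are assumptions, with the years lived at each residence after age 15 used as weights and the result dichotomized at the 26 μg/l median reported above.

```python
# Minimal sketch of a time-weighted residential THM index (not the study's code).
# The table layout, column names and values are assumptions for illustration only.
import pandas as pd

residences = pd.DataFrame({
    "subject_id":         [1, 1, 2, 2],
    "years_at_residence": [10, 25, 30, 5],            # years lived there after age 15
    "municipal_thm_ugl":  [60.0, 40.0, 15.0, 20.0],   # average municipal THM, μg/l
})

def time_weighted_thm(group: pd.DataFrame) -> float:
    """Time-weighted average THM level (μg/l) across all residences after age 15."""
    weights = group["years_at_residence"]
    return (group["municipal_thm_ugl"] * weights).sum() / weights.sum()

avg_thm = (
    residences.groupby("subject_id")
    .apply(time_weighted_thm)
    .rename("avg_residential_thm_ugl")
)

# Dichotomize at the study median of 26 μg/l.
above_median = (avg_thm > 26).rename("above_median_exposure")
print(pd.concat([avg_thm, above_median], axis=1))
```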
Educational level achieved by subjects was classified in 4 categories: illiterate, incomplete primary school, complete primary school (education through 13-14 years of age), and high school (through 17-18 years), or higher education. For some analyses these were grouped in two categories: low education (subjects with primary school or lower) and high education (subjects with higher than primary education). Household income at the time of the interview was recorded as a categorical variable, as was the number of family members living on that income. The variables were then combined into income per person, with 3 categories: low (<300 Euros/month), medium (300-600 Euros/month) and high income (>600 Euros/month). The response rate for this variable was lower (75%) than for the other variables (99%).
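As a rough illustration of the income-per-person derivation described above, the following sketch divides a household income figure by household size and bins the result at the 300 and 600 Euros/month cut-points. It is hypothetical: the study recorded household income categorically, so a numeric value has to be assumed here, and all column names are invented.

```python
# Hypothetical sketch of the income-per-person categorization described above.
# The study recorded income categorically; numeric values are assumed here.
import pandas as pd

subjects = pd.DataFrame({
    "household_income_eur": [900, 1500, 2600],  # assumed monthly household income
    "household_size":       [4, 3, 2],          # family members living on that income
})

income_per_person = subjects["household_income_eur"] / subjects["household_size"]
subjects["income_category"] = pd.cut(
    income_per_person,
    bins=[0, 300, 600, float("inf")],
    labels=["low (<300)", "medium (300-600)", "high (>600)"],
)
print(subjects)
```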
We cross-tabulated social class and type-of-water variables and used the chi-square test. Logistic regression was used to estimate the odds ratios (OR) and 95% confidence intervals (CI) for the type of water consumed by socioeconomic status, adjusting for smoking status (pack-years), age, gender, area (5 groups: Barcelona, Vallès/Bages, Alicante, Tenerife and Asturias) and average municipal THM level for all residences over age 15. The analyses of THM and water source were restricted to subjects with exposure information for at least 70% of the exposure window examined (from 15 years of age until disease or interview). The analysis was performed using Stata, release 8.2 (StataCorp, 2005, College Station, TX, USA).
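The analysis itself was run in Stata 8.2; the sketch below re-expresses the same two steps (a chi-square test on a cross-tabulation, and a logistic regression for bottled-water use adjusted for the listed covariates) in Python purely for illustration, using synthetic data and invented variable names rather than the study dataset.

```python
# Illustrative re-expression of the analysis described above (the original used
# Stata 8.2). The synthetic data and variable names are assumptions, not study data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)
n = 800
controls = pd.DataFrame({
    "education":    rng.choice(["illiterate", "primary", "high_school"], n),
    "water_source": rng.choice(["tap", "bottled", "well"], n, p=[0.6, 0.25, 0.15]),
    "pack_years":   rng.gamma(2.0, 10.0, n),
    "age":          rng.integers(21, 81, n),
    "gender":       rng.choice(["male", "female"], n),
    "area":         rng.choice(["Barcelona", "Alicante", "Asturias"], n),
    "avg_thm":      rng.normal(35, 15, n),
})
controls["bottled_water"] = (controls["water_source"] == "bottled").astype(int)

# Chi-square test on the education-by-water-source cross-tabulation.
table = pd.crosstab(controls["education"], controls["water_source"])
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")

# Logistic regression: odds of bottled-water use by education, adjusted for
# smoking (pack-years), age, gender, area and average residential THM level.
model = smf.logit(
    "bottled_water ~ C(education) + pack_years + age + C(gender) + C(area) + avg_thm",
    data=controls,
).fit(disp=False)

# Odds ratios with 95% confidence intervals.
or_table = np.exp(pd.concat([model.params, model.conf_int()], axis=1))
or_table.columns = ["OR", "2.5%", "97.5%"]
print(or_table.round(2))
```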
Characteristics of study subjects are shown in Table 1. Ever smokers made up 71% of the population, and 48% had less than a primary school education. Educational level achieved differed between regions. The proportion of subjects who had completed high school or higher education was 23% in Barcelona, 20% in Vallès/Bages, 18% in Alicante, 15% in Asturias and 12% in Tenerife.
More than 60% of the population had ever drunk water from a public water supply and 13% had attended swimming pools (Table 2). A public water supply was the main source of drinking water at the time of the interview for 71% of subjects in Barcelona, 66% in Asturias, 60% in Tenerife, 55% in Vallès/Bages, and 46% in Alicante. There were significant differences in the use of bottled water by area of residence and by the level of disinfection by-products. Among subjects living in high THM regions, the percentage that consumed bottled water was between 27 and 41%, whereas 15 to 24% of subjects in low THM level areas drank bottled water.
Bottled water was the main source of drinking water at home at the time of interview among 17% of illiterate subjects and 33% of those with a high school degree or higher education, while the corresponding proportions for public water supply use were 69% and 57% (Table 2). There was also a difference in bottled water consumption at home when considering the longest residence from 11% of illiterate subjects to 21% of highly educated subjects, although we did not observe inverse rates of use of tap water. The most highly educated subjects lived in residences with, on average, higher THM levels (36 μg/l) than illiterate subjects (29 μg/l, Table 2), although no clear pattern of increasing THM levels with educational level was seen. In the multivariate analyses, considering the use of public supply water among the illiterate as the referent, subjects who had attended primary school had a two-fold probability of consumption of bottled drinking water (OR = 2.1; 1.2-3.7) while the more highly educated subjects had over three times the probability of consuming bottled water (OR = 3.3; 1.8-6.0).
The more highly educated subjects tended to take longer baths or showers and attended swimming pools more than persons with a lower educational level (Table 2). Subjects with a high school education had a threefold higher probability (OR = 2.7, 95%CI 1.6-4.6) of taking baths/showers longer than 7 min/day than illiterate subjects, and a 1.6-fold (95%CI 1.0-2.6) higher probability than subjects with primary school education. Use of swimming pools (ever/never) was also more likely in subjects with a high school education or above compared to the illiterate (OR = 3.4, 95%CI 1.6-7.3), while smaller differences were observed for subjects having attended (OR = 1.3, 95%CI 0.5-3.0) or completed (OR = 1.6, 95%CI 0.7-3.5) primary school.
Income was also used as a measure of socio-economic status, showing trends similar to those observed for educational level. Twenty-nine percent (29%) of subjects in the high-income category drank bottled water compared with 18% in the low-income category (p = 0.047). Eight percent of the low-income subjects were swimming pool attendees, in contrast to 21% of those with higher income (p < 0.001). The time spent in the shower and bath per day did not show a statistically significant difference by income (p > 0.05).
An increase in the consumption of bottled water from 1980 to 2000 was observed overall, from 18% in 1980 to 20% in 1990 and 23% in the year 2000. When stratifying by socioeconomic categories as defined by education (Figure 1), among subjects with lower levels of education there were no changes over time in the use of water from public supplies (66% during 1980-1990 and 65% during 2000). A decrease from 63% to 56% was observed for the same time period among subjects with higher education. In the same period there was an increase in bottled water use from 17% to 21% among subjects with a lower educational level and from 26% to 33% among the more highly educated subjects, along with an increase in THM levels that was observed only in the higher education group (Figure 1).
The trend of water consumption at work according to educational level was similar to that observed in the home, along with a decrease in water consumption from public supplies.
The main source of drinking water in the Spanish population is from public water supplies. In our study population, the use of bottled water increased from 18% to 23% over the 20 year period from 1980 to 2000, with the greatest increase occurring where THM levels were the highest. The use of bottled water as the main source of drinking water, the use of swimming pools and the frequency and duration of showers and baths was higher among subjects with higher education and income compared to those with lower levels.
The higher proportion of bottled water use among the higher social classes could be explained by the higher income available to spend on purchase of bottled water, related to taste, perception of quality or perception of risk. However, we did not have information about the reasons that led people to switch from tap or other sources to bottled water. A more general perception of risk related to exposure to disinfection by-products has only recently been publicized in Spain, and changes related to use of bottled water in the 1980s or 1990s are probably not due to health risk perception. This perception, if it existed, would affect the type of drinking water consumed, but not other water uses that could lead to exposure to DBPs through inhalation or dermal absorption.
The association between socio-economic status, environmental exposures and health effects is not always obvious. For instance, allergies and hay fever have an inverse relation with social class, with lower prevalence in lower socioeconomic classes. In the case of DBP, this relation is complex and can vary regionally. In our Spanish study population, subjects with a higher socioeconomic status tended to live in areas with elevated DBP, had a lower exposure from ingestion (because of elevated consumption of bottled water) and a higher exposure from inhalation and dermal absorption (due to the more frequent use of pools and longer showers and baths).
A concern in these analyses is the measurement of social class. We used two measures of socioeconomic status (education and income) and results were similar for both indices, although the percentage of missing data for the income variable was high. The most important and frequently used indicators of socioeconomic status are education, income and occupation. There is continuing debate regarding the best measure, and whether these measures should be used separately or as a composite index. Socioeconomic indices measure different elements in social position and cannot be treated equally as measures of social class. Education has been extensively used as an indicator of social class, as it is stable over time. In our study population, a low percentage of participants reached secondary or higher education (17%), which reflects the percentage in the general population of Spain between 55-64 years old (16%). A potential limitation is that education may not capture changes in social class that occur in adulthood.
In conclusion, in this population the use of bottled water as a source of drinking water, the use of swimming pools, and the frequency and duration of showers and baths was higher among subjects with higher education and income compared to those with lower levels of either. This would result in lower exposure to chlorination by-products through ingestion among subjects of higher socioeconomic status but higher exposure through dermal contact and inhalation. Health risk perceptions and economic capacity may affect patterns of water consumption that, however, may not necessarily result in differences in exposure to water contaminants. Our findings, of course, are specific to the study population of the five regions of Spain, formed mainly by elderly males. The broader implication of this analysis is that the relationship between socioeconomic status and exposure to DBP is complex and is dependent on the social and environmental context of exposed populations.
The participating study centres in Spain were: Institut Municipal d'Investigació Mèdica, Universitat Pompeu Fabra, Barcelona--Coordinating Center (M Kogevinas, N Malats, F X Real, M Sala, G Castaño, M Torà, D Puente, C Villanueva, C Murta, J Fortuny, E López, S Hernández, R Jaramillo); Hospital del Mar, Universitat Autònoma de Barcelona, Barcelona (J Lloreta, S Serrano, L Ferrer, A Gelabert, J Carles, O Bielsa, K Villadiego); Hospital Germans Tries I Pujol, Badalona, Barcelona (L Cecchini, J M Saladié, L Ibarz); Hospital de Sant Boi, Sant Boi, Barcelona (M Céspedes); Centre Hospitalari Parc Taulí, Sabadell, Barcelona (C Serra, D García, J Pujadas, R Hernando, A Cabezuelo, C Abad, A Prera, J Prat); ALTHAIA, Manresa, Barcelona (M Domènech, J Badal, J Malet); Hospital Universitario, La Laguna, Tenerife (R García-Closas, J Rodríguez de Vera, A I Martín); Hospital La Candelaria, Santa Cruz, Tenerife (J Taño, F Cáceres); Hospital General Universitario de Elche, Universidad Miguel Hernández, Elche, Alicante (A Carrato, F García-López, M Ull, A Teruel, E Andrada, A Bustos, A Castillejo, J L Soto); Universidad de Oviedo, Oviedo, Asturias (A Tardón); Hospital San Agustín, Avilés, Asturias (J L Guate, J M Lanzas, J Velasco); Hospital Central Covadonga, Oviedo, Asturias (J M Fernández, J J Rodríguez, A Herrero); Hospital Central General, Oviedo, Asturias (R Abascal, C Manzano, T Miralles); Hospital de Cabueñes, Gijón, Asturias (M Rivas, M Arguelles); Hospital de Jove, Gijón, Asturias (M Díaz, J Sánchez, O González); Hospital de Cruz Roja, Gijón, Asturias (A Mateos, V Frade); Hospital Alvarez-Buylla, Mieres, Asturias (P Muntañola, C Pravia); Hospital Jarrio, Coaña, Asturias (A M Huescar, F Huergo); Hospital Carmen y Severo Ochoa, Cangas, Asturias (J Mosquera).
CAPI: Computer-Assisted Personal Interview
Anderton DL, Anderson AB, Oakes JM, Fraser MR: Environmental equity: the demographics of dumping. Demography. 1994, 31: 229-248. 10.2307/2061884.
Anonymous: Less equal than others. Lancet. 1994, 343: 805-806. 10.1016/S0140-6736(94)92018-4.
Brown P: Race, class, and environmental health: a review and systematization of the literature. Environ Res. 1995, 69: 15-30. 10.1006/enrs.1995.1021.
Faber DR, Krieg EJ: Unequal exposure to ecological hazards: environmental injustices in the Commonwealth of Massachusetts. Environ Health Perspect. 2002, 110 (Suppl 2): 277-288.
Perlin SA, Wong D, Sexton K: Residential proximity to industrial sources of air pollution: interrelationships among race, poverty, and age. J Air Waste Manag Assoc. 2001, 51: 406-421.
Sexton K, Gong H, Bailar JC, Ford JG, Gold DR, Lambert WE, Utell MJ: Air pollution health risks: do class and race matter?. Toxicol Ind Health. 1993, 9: 843-878.
Briggs D, Abellan JJ, Fecht D: Environmental inequity in England: small area associations between socio-economic status and environmental pollution. Soc Sci Med. 2008, 67: 1612-1629. 10.1016/j.socscimed.2008.06.040.
Nieuwenhuijsen MJ, Toledano MB, Eaton NE, Fawell J, Elliott P: Chlorination disinfection byproducts in water and their association with adverse reproductive outcomes: a review. Occup Environ Med. 2000, 57: 73-85. 10.1136/oem.57.2.73.
Grellier J, Bennett J, Patelarou E, Smith RB, Toledano MB, Rushton L, Briggs DJ, Nieuwenhuijsen MJ: Exposure to disinfection by-products, fetal growth, and prematurity: a systematic review and meta-analysis. Epidemiology. 2010, 21: 300-313. 10.1097/EDE.0b013e3181d61ffd.
Villanueva CM, Cantor KP, Grimalt JO, Malats N, Silverman D, Tardon A, Garcia-Closas R, Serra C, Carrato A, Castano-Vinyals G, et al.: Bladder cancer and exposure to water disinfection by-products through ingestion, bathing, showering, and swimming in pools. Am J Epidemiol. 2007, 165: 148-156. 10.1093/aje/kwj364.
Villanueva CM, Cantor KP, Cordier S, Jaakkola JJ, King WD, Lynch CF, Porru S, Kogevinas M: Disinfection byproducts and bladder cancer: a pooled analysis. Epidemiology. 2004, 15: 357-367. 10.1097/01.ede.0000121380.02594.fc.
Rahman MB, Driscoll T, Cowie C, Armstrong BK: Disinfection by-products in drinking water and colorectal cancer: a meta-analysis. Int J Epidemiol. 2010, 39: 733-745. 10.1093/ije/dyp371.
Villanueva CM, Kogevinas M, Grimalt JO: Haloacetic acids and trihalomethanes in finished drinking waters from heterogeneous sources. Water Res. 2003, 37: 953-958. 10.1016/S0043-1354(02)00411-6.
Villanueva CM, Cantor KP, Grimalt JO, Castano-Vinyals G, Malats N, Silverman D, Tardon A, Garcia-Closas R, Serra C, Carrato A, et al.: Assessment of lifetime exposure to trihalomethanes through different routes. Occup Environ Med. 2006, 63: 273-277. 10.1136/oem.2005.023069.
Helmert U, Shea S: Social inequalities and health status in western Germany. Public Health. 1994, 108: 341-356. 10.1016/S0033-3506(05)80070-8.
Liberatos P, Link BG, Kelsey JL: The measurement of social class in epidemiology. Epidemiol Rev. 1988, 10: 87-121.
Montgomery LE, Carter-Pokras O: Health status by social class and/or minority status: implications for environmental equity research. Toxicol Ind Health. 1993, 9: 729-773.
National Statistics Institute. [http://www.ine.es/daco/daco42/sociales09/sociales.htm]
Berkman LF, Macintyre S: The measurement of social class in health studies: old measures and new formulations. IARC Sci Publ. 1997, 51-64.
We thank Mustafa Dosemeci for his important role in organizing the initial stages of this study. We thank Robert C. Saal from Westat, Rockville, MD, Leslie Carroll and Eric Boyd from IMS, Silver Spring, MD, and Paco Fernández, IMIM, Barcelona, for their support in study and data management; Dr. Maria Sala from IMIM, Barcelona, for her work in data collection; physicians, nurses, interviewers (Ana Alfaro, Cristina Villanueva, Cristina Pipó, Joan Montes, Iolanda Velez, Pablo Hernández, Ángeles Pérez, Carmen Benito, Adela Castillejo, Elisa Jover, Natalia Blanco, Avelino Menéndez, Cristina Arias, Begoña Argüelles) and all participants in the study for their efforts during field work. This work was supported by the Intramural Program of the National Institutes of Health, National Cancer Institute (NCI), Division of Cancer Epidemiology and Genetics, NCI-Westat contract no. N02-CP-11015, FIS/Spain 00/0745, G03/174, PI061614, CA34627, and Red Temática de Investigación Cooperativa en Cáncer (RTICC). Cristina M Villanueva has a contract funded by the Instituto de Salud Carlos III, Spanish Ministry of Health and Consumption (CP06/00341).
The authors declare that they have no competing interests.
GC performed the statistical analysis and drafted the manuscript with input from all investigators. KPC participated in the study design, statistical analysis and helped to draft the manuscript. CMV participated in the study design and in the analysis and interpretation of data. AT participated in the study design and acquisition of data. RG participated in the study design and acquisition of data. CS participated in the study design and acquisition of data. AC participated in the study design and acquisition of data. NM participated in the study design and acquisition of data. NR and DS participated in the study design and enrollment of patients. MK participated in the study design, enrollment of patients, statistical analysis and helped to draft the manuscript. All authors read and approved the final manuscript.
Cite this article
Castaño-Vinyals, G., Cantor, K.P., Villanueva, C.M. et al. Socioeconomic status and exposure to disinfection by-products in drinking water in Spain. Environ Health 10, 18 (2011). https://doi.org/10.1186/1476-069X-10-18
- Bottle Water
- Swimming Pool
- Public Supply
- Public Water Supply
- Dermal Contact
A folk costume (also regional costume, national costume, or traditional garment) expresses an identity through costume, which is usually associated with a geographic area or a period of time in history. It can also indicate social, marital or religious status. If the costume is used to represent the culture or identity of a specific ethnic group, it is usually known as ethnic costume (also ethnic dress, ethnic wear, ethnic clothing, traditional ethnic wear or traditional ethnic garment). Such costumes often come in two forms: one for everyday occasions, the other for traditional festivals and formal wear.
Following the outbreak of romantic nationalism, the peasantry of Europe came to serve as models for all that appeared genuine and desirable. Their dress crystallised into so-called "typical" forms, and enthusiasts adopted that attire as part of their symbolism.
In areas where Western dress codes have become usual, traditional garments are often worn at special events or celebrations; particularly those connected with cultural traditions, heritage or pride. International events may cater for non-Western attendees with a compound dress code such as "business suit or national dress".
In modern times, there are instances where traditional garments are required by sumptuary laws. In Bhutan, the traditional Tibetan-style clothing of gho and kera for men, and kira and toego for women, must be worn by all citizens, including those not of Tibetan heritage. In Saudi Arabia, women are also required to wear the abaya in public.
- Burundi – Imvutano
- Comoros – Lesso (female), Kanzu (male)
- Djibouti – Macawiis (male), Koofiyad (male), Dirac (female), Garbasaar (female)
- Eritrea – Kidan Habesha (male), Zuria or Habesha kemis (female)
- Ethiopia – Ethiopian suit or Kidan Habesha (male), Habesha kemis (female)
- Kenya – Kenya is unique among African nations in having no single national costume. Each tribe has its own traditional garments; the Maasai traditional costume, for example, includes the kitenge, the kikoi and Maasai beadwork
- Madagascar – Lamba
- Mauritius – Sega dress
- Rwanda – Mushanana
- Seychelles – Sega dress
- Somalia – Macawiis (male), Koofiyad (male), Dirac (female), Guntiino (female), Garbasaar (female)
- Sudan – Jalabiyyah, Taqiyyah, and Turban (male), Toob, a cotton women's dress (female)
- Tanzania – Kanzu and Kofia (male), Kanga (female)
- Uganda – Kanzu and Kofia (male), Gomesi (female)
- Algeria - Burnous, Caftan, Caftan El-Bey, Gandoura, Haïek, Jellaba, Mlaya, Sarouel
- Bikhmar (Ouargla)
- Blouza (Oran)
- Chemsa (Jijel)
- Fergani (Constantine)
- Gandoura Annabiya (Annaba)
- Ghlila, Karakou, Sarouel Mdawer (Algiers)
- Labsa M'zabia (M'zab)
- Labsa Naïlia (Ouled Naïl)
- Lefa we dlala (Annaba)
- Melhfa Chaouïa (Aures)
- Melhfa Sahraouia (Tindouf)
- Qashabiya (Djelfa et Laghouat), Labsa Kbaylia (Kabylie)
- Binouar (Sétif)
- Labsa Touratia (Hoggar)
- Egypt – Galabeya
- Libya – Jellabiya, Farmla (an embroidered vest), Fouta
- Morocco – Djellaba, Fez hat and Balgha (male), Takchita (female)
- Sahrawi Arab Democratic Republic – Darra'a (male), Melhfa Sahraouia (female)
- Tunisia – Jebba, Chechia, Fouta
- Angola – Pano
- Lesotho – Shweshwe clothing and blankets, Mokorotlo
- Malawi – Chitenje
- Mozambique – Capulana
- Namibia – Herero traditional clothing
- South Africa –
- Zambia – Chitenje
- Zimbabwe – Chitenje
- Benin – Dashiki suit and Aso Oke Hat (male), Buba and wrapper set (female)
- Burkina Faso – Batakari (male), Kaftan (female)
- Cape Verde – Pano de terra
- Côte d'Ivoire – Kente cloth (male), Kente kaba and slit set (female)
- Gambia – Boubou (male), Kaftan (female)
- Ghana – Kente cloth or Ghanaian smock and Kufi (male), Kente kaba and slit set (female), Agbada (male)
- Guinea – Boubou (male), Kaftan (female)
- Guinea-Bissau – Boubou (male), Kaftan (female)
- Liberia – Dashiki suit and Kufi (male), Buba and skirt set (female)
- Mali – Grand boubou and Kufi (male), Kaftan (female)
- Mauritania – Darra'a (male), Melhfa Sahraouia (female)
- Niger – Babban riga, Tagelmust, Alasho (male), Kaftan (female)
- Nigeria – Agbada, Dashiki or Isiagu and Aso Oke Hat (male), Buba and wrapper set (female)
- Senegal – Senegalese kaftan and Kufi (male), Kaftan (female)
- China – Chinese clothing. Each ethnic groups of China have their own traditional costume.
- Japan – Kimono, Junihitoe, Sokutai
- Korea – Hanbok (South Korea)/Chosŏn-ot (North Korea)
- Mongolia – Deel
- Taiwan -
- Afghanistan – Pashtun dress: Afghan cap, turban, Shalwar Kameez (male), Firaq partug, Chador (veil) (female)
- Bangladesh – Sherwani, Kurta and Pyjama (male) and Sari, Lehengha, Shalwar Kameez and Dupatta (female)
- Bhutan – Gho (male) and Kira (female)
- India – Achkan, Shalwar Kameez, Sherwani, Dhoti, Phiran, Churidar, Kurta, Turban,(male) and Sari, Patiala salwar, Lehenga, Choli, Pathin (female)
- Maldives – Dhivehi libaas (women) and Dhivehi mundu (men)
- Nepal – Daura-Suruwal and Dhaka topi, (male) and Gunyou Cholo (female); Traditional Newar, Sunuwar, Rai, Limbu clothing
- Pakistan – Peshawari pagri, Shalwar Kameez, Churidar (male), Shalwar Kameez and Dupatta (female)
- Sri Lanka – Kandyan sari (female)
- Brunei – Baju Melayu, Songkok (male), Baju Kurung, Tudung (female)
- Cambodia – Sampot, Apsara, Sabai, Krama, Chang kben
- East Timor – Tais cloth clothing
- Indonesia – (See: National costume of Indonesia). There are hundreds of types of folk costumes in Indonesia because of the diversity in the island nation. Each ethnic group of Indonesia have their own traditional costume;
- Laos – xout lao, suea pat, pha hang, pha biang, sinh
- Malaysia – Baju Melayu and Songkok (male), Baju Kurung, Baju Kebarung (Kebaya/Kurung hybrid), Tudung (female)
- Myanmar – Longyi, Gaung baung
- Philippines – Barong (male) and Baro't Saya; Maria Clara gown, Terno (female)
- Thailand – Chut thai: Thai female: Thai Chakkri, Thai male: Suea Phraratchathan, Both genders: Chong kraben and Sabai.
- Vietnam – Áo giao lĩnh, Áo dài, Áo tứ thân, Áo bà ba.
- Armenia – Armenian dress, Arkhalig, Chokha
- Azerbaijan – Azerbaijani traditional clothing: Arkhalig, Chokha, Kelaghayi
- Bahrain – Thawb
- Israel – Tembel hat, Biblical sandals, Yemenite Jewish clothes; Jewish religious clothing: Rekel, Bekishe, Tzitzit, Kippah, Tichel.
- Iran – Chador, Turban, Kurdish clothing, minority traditional clothes: Qashqai, Azerbaijani, Gilaki and Turkmen clothing.
- Iraq – Assyrian clothing, Keffiyeh, Hashimi Dress, Bisht, Dishdasha; Kurdish clothing in Iraqi Kurdistan.
- Jordan – Keffiyeh, Bisht, Bedouin clothing
- Lebanon – Tantour, Keffiyeh, Labbade, Taqiyah
- Kurdistan – Sirwal (pants), Kurdish clothing, gold coin belt and necklace for women.
- Kuwait – Thawb
- Oman – Dishdasha
- Palestine – Keffiyeh, Palestinian costumes.
- Qatar – Kandura
- Saudi Arabia – Thawb, Ghutrah, Agal, Bisht, Abaya, Jilbab, Niqab
- Syria – Dishdasha, Sirwal, Taqiyah, Keffiyeh
- Turkey – Fez, Kaftan, Shalvar.
- United Arab Emirates – Kandura, Abaya
- Yemen – Similar to Saudi Arabia, but with the addition of an ornate jambiya and leather bandoliers for the men's costume.
- Belarus – Slutsk sash, namitka (the national type of wimple)
- Georgia – Chokha (Every region has its own specific design of Chokha)
- Ossetia – Chokha
- Russia – Sarafan, Kokoshnik, Kosovorotka, Ushanka, Valenki; (Sami) Gákti, Luhkka for colder weather
- Ukraine – National costumes of Ukraine: Vyshyvanka, Sharovary, Żupan, Ukrainian wreath
- Denmark – Folkedragt
- Estonia – Rahvariided
- Finland – Every region has its own specific design of national costume (kansallispuku, nationaldräkt). These vary widely. Many of them resemble Swedish costumes, but some take influences from Russian costumes as well. For the Sami in Finland, each place has its own Gákti or Luhkka for colder weather.
- Iceland – Þjóðbúningurinn
- Ireland – Aran sweater, Irish walking hat, Grandfather shirt, Leine
- Latvia - Tautastērps
- Lithuania - Tautinis kostiumas
- Norway – Bunad, Sami: Gákti, and for colder weather, Luhkka
- Sweden – Folk dress has varied from region to region, but since 1983 the Sverigedräkten has served as an official national costume in one common version; 18th century: Nationella dräkten; Sami: Gákti, with the Luhkka for colder weather
- United Kingdom:
- England – English country clothing, Morris dance costumes, Flat cap, English clogs
- Northern Ireland: Similar to Ireland.
- Scotland – Highland dress: Kilt or trews, tam o'shanter or Balmoral bonnet, doublet, Aboyne dress, and brogues or ghillies.
- Wales – Traditional Welsh costume
- Albania – Albanian Traditional Clothing
- Andorra – Barretina, espadrilles
- Bulgaria – Every town has its own design of a national costume (nosia), with different types of clothing items traditional for each of the ethnographic regions of the country.
- Croatia – Croatian national costume, Lika cap, Sibenik cap
- Greece – Fustanella, Amalia costume; Ancient Greek clothing: Peplos, Chiton.
- Italy – Italian folk dance costumes; Roman clothing: Toga, Stola
- Malta – Għonnella
- Montenegro – Montenegrin cap
- North Macedonia – Macedonian national costume
- Portugal – Every region has its own specific design of a national costume.
- Romania – Romanian dress
- Serbia – Every region has different design of a national costume. Serbian traditional clothing, Lika cap, Montenegrin cap, Opanci, Šajkača, Šubara
- Slovenia – Gorenjska noša (Upper Carniola)
- Spain – Every autonomous region has its own national costume.
- Belgium – Bleu sårot
- France – Every region has its own specific design of national costume. The most famous French traditional clothing could be the Breton costume or the Alsatian costume; commonly associated French items of clothing are the beret and the Breton shirt.
- Germany – Every state has its own specific design of a national costume. For example, Bavaria's well-known Tracht: Lederhosen and Dirndl.
- Liechtenstein – Tracht, Dirndl
- Netherlands – Dutch cap, Klompen
- Switzerland – Every canton has its own specific design of a national costume.
- Cuba – Guayabera, panama hat (male), guarachera (female)
- Dominican Republic – Chacabana
- Dominica – Madras
- Haiti – Karabela dress (female), Shirt jacket (male)
- Jamaica – Bandanna cloth Quadrille dress (female), Bandanna cloth shirt and white trousers (male), Jamaican Tam
- Puerto Rico – Guayabera, panama hat (male), enaguas (female)
- St. Lucia – Madras
- Trinidad and Tobago – Afro-Trinidadians and Tobagonians - Shirt jacket (male), Booboo (female) Bélé costume (female); Indo-Trinidadian and Tobagonian - Kurta, Dhoti, Sherwani (male), Sari, Choli, Lehenga (female)
- Bermuda – Bermuda shorts
- Canada:
- Alberta - Canadian tuxedoes (denim jacket with denim shirt and blue jeans) and western wear are common at folk events such as the Calgary Stampede. They’re often worn with Calgary White Hats.
- First Nations – button blanket, buckskins, moccasins, Chilkat blanket, Cowichan sweater, war bonnet. The use of the term costume to denote traditional dress may be considered derogatory in First Nations communities. Regalia is the preferred term.
- Lumberjacks - Traditional logging wear includes mackinaw jackets or flannel shirts, with headgear being a tuque or trapper hat; a good example is seen with folk characters like Big Joe Mufferaw.
- Métis – Ceinture fléchée, Capote, Moccasins
- Newfoundland - Traditional mummers dress in masks and baggy clothes in Christmas season celebrations; the Cornish influence has also brought yellow oilskins and sou'westers as typical wear in coastal areas.
- Nunavut and other Inuit communities – Parka, mukluks, amauti
- Quebec and French Canadians – Ceinture fléchée, Capote, tuque
- Mexico – Charro outfit, Sarape, Sombrero (male), Rebozo, China Poblana dress (female); every state has a typical folk dress, for example:
- Chihuahua and Coahuila – cowboy hats, cowboy boots, bandanna
- Oaxaca: Tehuana
- Sonora - Yaqui or Seri clothes; Sonora is unique among Mexican states in not having a representative costume, yet indigenous clothing, especially the Deer dance costume of the Yaqui, is very popular.
- Tamaulipas Cuera tamaulipeca
- Veracruz - Guayabera
- Yucatán – Guayabera (male), Huipil (female)
- United States:
- Alaska – Kuspuks, worn with dark pants and mukluks, as well as parkas are traditional native wear.
- American Southwest, Texas and rural areas in the Midwestern and Western US – Western wear, derived from original Mexican vaquero and American pioneer garb is traditional dress in Texas, the Southwestern US, and many rural communities, including cowboy hats, Western shirts, cowboy boots, jeans, chaps, prairie skirts, and bolo ties.
- American Upper Midwest, Pacific Northwest, the northern portions of the Great Lakes Basin and northern New England (especially Maine) – Due to the cold weather, the garb in rural areas tends to more closely adhere to heavier materials, such as flannel or Buffalo plaid mackinaw jackets, the occasional parka, and trapper hat. A good example is seen in the typical attire of Paul Bunyan, a folk hero popular in areas where logging was a common occupation, as well as lumberjacks working in the area.
- Amish (notably in Pennsylvania, Indiana and Ohio), the Pennsylvania Dutch and some sects of Mormon fundamentalism (especially in Utah) preserve traditional 19th century clothing styles.
- American South – Traditional Southern US wear includes seersucker suits for men, and sun hats and large Southern belle-style dresses for women. Seersucker suits are also commonplace in the District of Columbia on Seersucker Thursday.
- Nantucket – Summer residents of Nantucket will often wear Nantucket Reds.
- Various styles of Native American clothing; for example, traditional pow-wow regalia for Plains Indians: Moccasins, buckskins, glass beads, breech clouts, and war bonnets or roaches. The use of the term costume to denote traditional dress may be considered derogatory in Native American communities. Regalia is the preferred term.
- New York City – According to folklorist Washington Irving, knickerbockers similar to the breeches of the Pilgrims and Founding Fathers were traditionally worn by many wealthy Dutch families in 19th century New York. These short pants remained commonplace among young urban American boys until the mid 20th century.
Australia and New Zealand
- New Zealand
- Argentina – Gaucho costume
- Bolivia – Poncho, Chullo, Andean pollera
- Brazil – Each region has its own traditional costume.
- Bahia – Baiana and Abadá
- Brazilian carnival or Samba costumes for Rio de Janeiro.
- Caipiras (Brazilian country folk) in Sao Paulo, Goiás and other nearby states conserve traditional folk styles of clothing, imitated by participants of festa juninas.
- Gaúcho costumes for Rio Grande Do Sul.
- Indigenous clothes for many states within the Amazônia Legal area
- Northeastern sertão (desert) – Vaqueiro clothing
- Chile – Huaso costume: Chamanto, Chupalla
- Colombia – Sombrero Vueltiao, ruana, white shirt, trousers and alpargatas (male), blouse, Cumbia pollera, Sombrero vueltiao and alpargatas (female); every region has a distinct costume.
- Ecuador – Poncho, Panama hat
- Paraguay – Ao po'i
- Peru – Chullo, Poncho, Andean pollera
- Suriname – Kotomisse, Pangi cloth
- Uruguay – Gaucho costume
- Venezuela – Llanero costume (Liqui liqui and pelo e' guama hat; men), Joropo dress (women)
- "Носиите – Жеравна 2014". Nosia.bg. 2013-06-16. Retrieved 2014-08-27.
- "Български народни носии – България в стари снимки и пощенски картички". Retrobulgaria.com. Retrieved 2014-08-27.
- Condra, Jill, ed. (2013). Encyclopedia of National Dress, Vol. I. Santa Barbara, CA: ABC-CLIO. p. 123.
- Condra, Jill, ed. (2013). Encyclopedia of National Dress, Vol. I. Santa Barbara, CA: ABC-CLIO. p. 123. | <urn:uuid:bf3d8659-0966-4317-b744-3f4727d07f5d> | CC-MAIN-2022-33 | https://zims-en.kiwix.campusafrica.gos.orange.com/wikipedia_en_all_nopic/A/Folk_costume | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572212.96/warc/CC-MAIN-20220815205848-20220815235848-00496.warc.gz | en | 0.812146 | 4,803 | 3.53125 | 4 |
With this article we begin the third and final section of the Hebrew Scriptures, the Writings. In this series we have generally followed the order of the books as found in the current Hebrew Bible, the Tanakh. But we have chosen to discuss the two books of Chronicles first in this section, rather than last, because they trace the origins of Israel and Judah from the beginning to the reestablishment of the Jewish polity following the Babylonian captivity. They also share the genre of history with the preceding books of the Former Prophets: Samuel and Kings.
Organizing the Books
The order of the books of the Hebrew Scriptures, as Christ would have known them, was according to the tripartite division: the Law, the Prophets, the Writings.
The Law (Torah)
The Law includes the five books of the Pentateuch (Genesis, Exodus, Leviticus, Numbers, Deuteronomy).
The Prophets (Nevi’im)
The Prophets comprises the Former Prophets (Joshua, Judges, 1 and 2 Samuel [counted as one book], 1 and 2 Kings [counted as one book]) and the Latter Prophets (Isaiah, Jeremiah, Ezekiel, the Twelve Minor Prophets [counted as one book]).
The Writings (Kethuvim)
The Writings consists of Psalms, Proverbs, Job, the Song of Songs, Ruth, Lamentations, Ecclesiastes, Esther, Daniel, Ezra-Nehemiah (counted as one book), 1 and 2 Chronicles (counted as one book)—24 books in total.
There is precedent for this approach. Two important versions of the Hebrew Bible—the oldest complete manuscript, the Codex Leningrad (1008 CE), and the slightly older but incomplete Aleppo Codex (930 CE)—place both parts of Chronicles as a single book at the beginning of the Writings. Similarly, the Greek version of the Hebrew Scriptures, the Septuagint (300–200 BCE)—the first to separate Chronicles into two books and the model in this for many subsequent versions, including those in Hebrew—positions Chronicles among the historical books, following Kings.
But why a second history of Israel? Surely it’s redundant to simply repeat what is already known.
The word Chronicles is taken from the Greek chronikon, the term the translator Jerome used in describing the book’s contents in the late fourth to fifth century. His title for the book was another Greek word, paralipomenon, based on its name in the Septuagint. It means “things left out,” “left over” or “omitted.” The idea was that Chronicles supplied information that was missing from earlier versions of biblical history. We’ll find that it’s far more than that, and that the chronicler himself left out things.
In Hebrew the title is dibrē hayyāmīm, “the events (or the words) of the days”; that’s to say, “annals” or “a history.” This Hebrew phrase occurs in several other references: “the chronicles of the kings of Israel” (1 Kings 14:19); “the chronicles of the kings of Judah” (verse 29); “the chronicles of the kings of Media and Persia” (Esther 10:2); and “the chronicles of King David” (1 Chronicles 27:24).
Scholars debate the authorship and date of Chronicles. The author is unknown from the text itself, though rabbinic and medieval tradition attributed not only Chronicles but also the books of Ezra and Nehemiah to the postexilic priest Ezra. This idea has been rejected by many scholars, who no longer accept a common authorship. Speculations about the date range from ca. 515 to ca. 150 BCE. But as we’ll see, there are several reasons to believe that the anonymous chronicler was writing toward the end of the Persian period, or even as the Greek period began.
“Chronicles and Ezra-Nehemiah constitute two different works by two different authors. . . . When considered in their totality they represent two varieties of biblical historical writing during the Persian-Hellenistic period.”
The book is structured as follows:
- 1 Chronicles 1 through 9: introduction;
- 1 Chronicles 10 through 2 Chronicles 9: the history of Israel under David and Solomon;
- 2 Chronicles 10 through 36: the history of the kingdom of Judah from the departure of the northern Israelite tribes to Assyria.
The author names or refers to various biblical sources for his personalized version of Israel’s history—among them the five books of Moses, Joshua, the books of Samuel and Kings, Ezra-Nehemiah, and some psalms.
The chronicler’s reconstruction seems to be a deliberate attempt to bring a new perspective for his postexilic times. For example, he pays special attention to David, Solomon and several succeeding righteous kings of Judah, including Asa, Hezekiah and Josiah; to the centrality of Jerusalem, the temple and its rituals; and to the positive response of the people to God’s leadership. These emphases can be understood as his way of encouraging the returnees in the renewal of the entire nation following their liberation from Babylon.
Unlike the book of Kings, which he otherwise follows carefully, the chronicler does not give a synchronized history of the kings of the northern tribes but only of Judah’s monarchs. This is not to say that he excludes northerners from his account entirely. Beyond the history of their separation from the southern kingdom, he highlights them several times as part of Israel of old, when the tribes were undivided.
Establishing the Historical Framework
The nine-chapter introduction begins with Creation and the first human being. Material for the first section (1 Chronicles 1–2:2) is taken from the main genealogical blocks available in Genesis (chapters 5, 10–11, 25, 35–36). The thrust of the introduction is to narrow the focus from humanity in general to the sons of Jacob, renamed Israel, as the line that God had elected. This is achieved by listing only some descendants of each line. For example, excluded from Adam’s are the sons of Cain, Nahor (Abraham’s brother) and Lot.
In addition, material from Genesis is sometimes presented in reverse order so the line of Jacob is preeminent. In Genesis 35–36, for example, Jacob precedes Esau; but the opposite is the case in 1 Chronicles, where Esau’s line is briefly mentioned (1:34–37) ahead of a lengthy outline of his brother’s, thus emphasizing Jacob.
“As the record approaches more closely ‘the sons of Israel’ (2.1), it becomes progressively more detailed, and the main genealogical line receives full treatment in chapters 2–9.”
Throughout Chronicles Jacob is referred to as Israel, with two exceptions (both in 1 Chronicles 16) where the writer is quoting Psalm 105. This again emphasizes that it is the Israelite descendants of Jacob through whom God is working.
Over the next several chapters, the focus narrows to Judah and the family of Israel’s king David. Even though Judah was not the firstborn, his descendants are presented first in the genealogical tables of Jacob’s sons. This emphasis, however, does not rule out the importance of the descendants of the other sons in the chronicler’s mind. Judah is named as the line from which the rulership would come, but the birthright was assigned to the northern tribes of Joseph (see 1 Chronicles 5:2).
The structure of this section of the introduction (2:3–9:2) follows a geographic pathway, beginning with the tribes of Judah and Simeon in the center of the land. David’s genealogy is introduced (2:13–15), followed in chapter 3 by a list of his descendants. Chapter 4 returns to the family of Judah and then reviews Simeon’s line.
Moving eastward across the Jordan, chapter 5 recounts—from south to north—the tribes of Reuben, Gad and the half tribe of Manasseh. Chapter 6 details Levi’s descendants, more or less at the center of the tribal lists—a fitting position, considering their role in serving all the tribes. The next chapter groups the tribes of Issachar, Naphtali and Benjamin. Turning southward, in chapter 8 we come to Manasseh, Ephraim and Benjamin as a central group, with Asher attached to them. Since Benjamin is covered again in this chapter, and since Dan’s name is omitted altogether, some have suggested scribal errors resulting in a corrupted manuscript at this point.
Chapter 9 begins by summarizing the current situation in the chronicler’s time: “So all Israel was recorded by genealogies, and indeed, they were inscribed in the book of the kings of Israel. But Judah was carried away captive to Babylon because of their unfaithfulness. And the first inhabitants [to resettle after captivity] who dwelt in their possessions in their cities were Israelites, priests, Levites, and the Nethinim [temple servants]” (verses 1–2).
The final introductory section covers chapter 9:3–34, beginning with a reminder of those resettled in the capital at the return: “Now in Jerusalem the children of Judah dwelt, and some of the children of Benjamin, and of the children of Ephraim and Manasseh” (verse 3). Here Judah and Jerusalem are the center for the renewal of the children of Israel in the land to which they have returned. The emphasis in Chronicles is on all of Israel being represented in the newly restored order.
Focus on David and Solomon
In the second section of the book (1 Chronicles 10 to 2 Chronicles 9) we have the history of Israel under David and Solomon.
Chapter 10 begins the section that deals with David’s role in detail. With the failure and death of the Benjamite king, Saul, David is anointed to replace him. A feature of the chronicler’s accounts is that he presents God as punishing for sin and rewarding for faithfulness: “So Saul died for his unfaithfulness which he had committed against the Lord, because he did not keep the word of the Lord, and also because he consulted a medium for guidance. But he did not inquire of the Lord; therefore He killed him, and turned the kingdom over to David the son of Jesse” (verses 13–14).
“The writer . . . wants his readers, and us, to understand the blessings which flow from faithful obedience to the Lord.”
Samuel’s account of the rule of Saul’s son Ishbosheth over some of the tribes while David ruled Judah from Hebron (2 Samuel 2–4) is not mentioned by the chronicler. The two records come together, with minor variations, in the account of David’s acceptance by all the tribes: “Then all Israel came together to David at Hebron, saying, ‘Indeed we are your bone and your flesh. . . .’ Therefore all the elders of Israel came to the king at Hebron, and David made a covenant with them at Hebron before the Lord. And they anointed David king over Israel, according to the word of the Lord by Samuel” (1 Chronicles 11:1–3; compare 2 Samuel 5:1–3).
Unique to Chronicles is an emphasis on the term “all Israel.” Though it appears in many instances when quoting biblical sources, it also appears 20 times in passages with no equivalent in the source texts. In the context of David and Solomon, “all Israel” signals that these founders ruled all 12 tribes and should therefore be an example of unity for the returnees from Babylon. For example: “David gathered all Israel together, from Shihor in Egypt to as far as the entrance of Hamath [the largest extent of the land], to bring the ark of God from Kirjath Jearim”; “Then Solomon sat on the throne of the Lord as king instead of David his father, and prospered; and all Israel obeyed him. . . . So the Lord exalted Solomon exceedingly in the sight of all Israel, and bestowed on him such royal majesty as had not been on any king before him in Israel. Thus David the son of Jesse reigned over all Israel” (1 Chronicles 13:5; 29:23, 25–26).
Not included in Chronicles are some of David’s and Solomon’s more egregious sins, such as David’s adultery with Bathsheba and his role in the premeditated death of Uriah (compare 2 Samuel 11 with 1 Chronicles 20). The reason for this is presumably to shield the chronicler’s audience from David’s poor example and to keep them focused on his achievements. Similarly, David’s weakness over the rape of Tamar and the rebellion of Absalom (2 Samuel 13, 15–19) goes unrecorded.
Neither is there any mention of the problems Solomon brought on himself toward the end of his life. The book of Kings catalogues his departures from God’s ways: “King Solomon loved many foreign women, as well as the daughter of Pharaoh: women of the Moabites, Ammonites, Edomites, Sidonians, and Hittites—from the nations of whom the Lord had said to the children of Israel, ‘You shall not intermarry with them, nor they with you. Surely they will turn away your hearts after their gods.’ Solomon clung to these in love. And he had seven hundred wives, princesses, and three hundred concubines; and his wives turned away his heart. For it was so, when Solomon was old, that his wives turned his heart after other gods; and his heart was not loyal to the Lord his God, as was the heart of his father David” (1 Kings 11:1–4). Chronicles says nothing about this. Again, we see that the chronicler maintains David and Solomon as ideal kings.
In the third section of Chronicles (2 Chronicles 10–36), dealing with the breakup of the kingdom into northern and southern parts, the emphasis is on Judah’s kingdom and the positive impact of her righteous kings, and on the fact that the southern kingdom, under Rehoboam, represents “all Israel.” Whereas 1 Kings 12:23 says, “Speak to Rehoboam the son of Solomon, king of Judah, to all the house of Judah and Benjamin, and to the rest of the people,” the writer of Chronicles says, “Speak to Rehoboam the son of Solomon, king of Judah, and to all Israel in Judah and Benjamin” (2 Chronicles 11:3, emphasis added).
The chronicler reminds his audience that members of the other tribes, including the key religious leadership, were strongly represented in the southern kingdom: “From all their territories the priests and the Levites who were in all Israel took their stand with him [Rehoboam]. . . . And after the Levites left, those from all the tribes of Israel, such as set their heart to seek the Lord God of Israel, came to Jerusalem to sacrifice to the Lord God of their fathers. So they strengthened the kingdom of Judah, and made Rehoboam the son of Solomon strong for three years, because they walked in the way of David and Solomon for three years” (verses 13, 16–17). The book of Kings omits these details.
Similarly, the Judean king Asa brought several tribes together: “Then he gathered all Judah and Benjamin, and those who dwelt with them from Ephraim, Manasseh, and Simeon, for they came over to him in great numbers from Israel when they saw that the Lord his God was with him” (15:9). Again, there is no parallel passage in Kings.
During the later reign of Hezekiah, the chronicler tells us that there was much contact among the tribes: “Hezekiah sent to all Israel and Judah, and also wrote letters to Ephraim and Manasseh, that they should come to the house of the Lord at Jerusalem, to keep the Passover to the Lord God of Israel. . . . So they resolved to make a proclamation throughout all Israel, from Beersheba to Dan, that they should come to keep the Passover to the Lord God of Israel at Jerusalem, since they had not done it for a long time in the prescribed manner” (30:1, 5).
“Following the Chronicler’s portrayal of the restoration of Israel’s unity under Hezekiah, he is anxious to emphasise that the later community was representative of all Israel, not just the former southern kingdom alone.”
We also find such references in the account of Judah’s final righteous king, Josiah, who rid the land of idolatry. He “cleansed Judah and Jerusalem. And so he did in the cities of Manasseh, Ephraim, and Simeon, as far as Naphtali and all around, with axes. When he had broken down the altars and the wooden images, had beaten the carved images into powder, and cut down all the incense altars throughout all the land of Israel, he returned to Jerusalem” (34:5–7).
Writing With Purpose
So why write what seems like a second history?
The chronicler’s message was intended to awaken his people to their identity. To do so, he emphasized aspects of their heritage. His summary history centered on the God of the Hebrew Scriptures being the one true God. It emphasized His choosing Israel, their tribal affiliations, the righteous among its kingly line, their unity as a people, the blessings for prayerful obedience, and the organization of their worship. Part of awakening them was to show that they represented all of Israel, given a new start; hence the stress laid on all the tribes under Judah’s leadership, as in the days of David and Solomon and reflected in the acts of other righteous kings of Judah, such as Asa, Hezekiah and Josiah.
Writing to descendants of those who had returned to restore the temple and the land of Israel and to follow the God of Israel, yet who at times had grown lukewarm, the chronicler was filled with zeal for stirring up his people. The postexilic prophets (Haggai, Zechariah and Malachi) had spoken over a period of almost a century, beginning in 520, pointing out the weakness of the returnees’ response to restoration. By the time Malachi gave God’s warnings, around 430, the temple had been rebuilt; but the priesthood was corrupted, the people once again adrift from God.
As evidence that this is a considerably later work than Samuel and Kings, the chronicler mentions several generations of descendants of the king Jehoiachin, whom Nebuchadnezzar exiled in Babylon after the fall of Jerusalem. The chronicler also includes descriptions of a well-developed temple system beyond that of Ezra and Nehemiah’s time—with 24 priestly courses, Levites, singers and gatekeepers—as well as extensive references to ceremony. In addition, he wrote in Late Biblical Hebrew, whereas most of his obvious sources were composed in earlier Hebrew. Taken together, this dates his work to around 350 BCE, making it one of the last books of the Hebrew Scriptures.
Chronicles ends with a reminder of the opportunity they had been given in their new beginning some 200 years earlier: “Thus says Cyrus king of Persia: All the kingdoms of the earth the Lord God of heaven has given me. And He has commanded me to build Him a house at Jerusalem which is in Judah. Who is among you of all His people? May the Lord his God be with him, and let him go up!” (2 Chronicles 36:23).
Next time, we’ll go back to examine the fall of Jerusalem to the Babylonians in what is traditionally thought of as the work of the prophet Jeremiah: the book of Lamentations. | <urn:uuid:fb024974-0dd5-46c6-8683-fd90367dfb52> | CC-MAIN-2022-33 | https://www.vision.org/fr/node/9039 | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570651.49/warc/CC-MAIN-20220807150925-20220807180925-00497.warc.gz | en | 0.962098 | 4,326 | 3.640625 | 4 |
- Open Access
The relationship between students’ use of ICT for social communication and their computer and information literacy
Large-scale Assessments in Education volume 4, Article number: 15 (2016)
This study investigates the relationship between students’ use of information and communication technology (ICT) for social communication and their computer and information literacy (CIL) scores. It also examines whether gender and socioeconomic background moderates this relationship. We utilized student data from IEA’s International Computer and Information Study (ICILS) to build multivariate regression models for answering the research questions, and accounted for the complex sample structure of the data by using weights for all statistical analyses, employing jackknife repeated replication for variance estimation. Students who frequently use the internet for messaging and participation in social networks (i.e., at least once a week) scored on average 44 points higher than those who use ICT for the same purpose only less than once a week or never. The direction of this effect was the same in all 21 participating educational systems, the difference ranging from 19 to 75 points (always statistically significant). We continued the analysis by testing whether the relationship is moderated by gender; as girls use more often ICT for social communication and have higher CIL scores on average. After controlling for the gender effect the CIL scores between the two examined groups decreased only by 2 points on average. Even after including students’ socio-economic background into the model, the difference in CIL between the two groups of interest declined only little—to 32 points on average across all countries. The difference remained to be statistically significant in all countries but one. The results suggest a strong relationship between students’ CIL proficiency level and the frequency of their use of electronic devices for social communication; hence, respective skills needed at schools and later on at the workplace are reflected in their use outside of school and for socializing.
Purpose, significance of research and theoretical frame work
In the last decades we encountered rapid developments in information and communication technologies. The inclusion of the worldwide web into daily life brought new and important implications also for education. Most of the schools and educational systems started providing extensive computer networks for their students and these are increasingly becoming main components of the teaching and learning environment, but so far little is known about the effectiveness and use of these technologies (Fraillon et al. 2014). Conclusions from research carried out in the field are partly contradictory. Many authors who examined computer use and student achievement found they were positively related (e.g., Becker 1994; Hativa 1994; Kozma 1991; Kulik and Kulik 1987; Liao 1992; Osunade 2003; Ryan 1991; Van Dusen and Worthren 1994; James and Lamb 2000; Attewell and Battle 1999; Sivin-Kachala 1998; Weaver 2000; Weller 1996; Wenglinsky 1998). Wen et al. (2002) suggest that there is a positive relationship between the number of computers available at school and students’ science achievement. Alspaugh (1999) reports that computer use has no effect on students’ achievement in reading, mathematics, science or social studies. There is also a number of studies that identified negative relationships between computer use and student achievement (Ravitz et al. 2002; Papanastasiou 2002, 2003). Papanastasiou (2002) who analysed the results of TIMSS, found a negative relationship between computer use and achievement in a number of countries such as Cyprus, Hong Kong and United States of America. According to this study, students who use computers most frequently in the classroom were lowest achievers in TIMSS in 1995. Papanastasiou (2003) and Papanastasiou et al. (2005) found that computer use does not have a positive nor negative effect on students’ science achievement based on PISA results, but the way of computer use affects science achievement.
Most of the international studies focused so far on the relation of ICT use and students’ competencies in reading, science and mathematics. The amount of research dedicated on computer and information literacy is very limited and most studies examine mainly internet access and online use (Olafsson et al. 2014). In the computer and information literacy (CIL) area, the first cross-national study is ICILS (Fraillon et al. 2014). It assesses the extent to which students know about, understand, and are able to use information and communication technology (ICT). The main purpose of ICILS is to determine how well students are prepared for study, work and life in the digital age. With the information age the term “digital natives” was coined for the generation born in the early 1980s, also referred to as the first members of the millennial generation (Prensky 2001). In his article, Prensky claimed that “the arrival and rapid dissemination of digital technology in the last decade of the twentieth century” had changed the way students think and process information, making it difficult for them to excel academically being exposed to outdated teaching methods. However, according to the ICILS results, although students have had an increased amount of exposure to technology, it does not necessarily imply that they are digital natives. In all the participating countries, on average 17 % of the students did not even achieve the lowest level of CIL determined by the study. On average, only 2 % of the students achieved the highest level with a maximum of 5 % in Korea (Fraillon et al. 2014).Footnote 1
This finding raises the question how so called digital natives use twenty first century technology in daily life. It is known from the literature that age plays a significant role in the usage of computers and internet. As shown in Fig. 1 (Zichuhr and Madden 2012), and Fig. 2 (TurkStat 2014) below, there was a steady increase in internet use across all age groups in Turkey and the US. In the beginning of the current century, however, the younger age groups use internet more often compared to the older age groups in both countries.
In most European countries, as shown in Fig. 3, more than 80 % of young people (aged 16–29) used a computer on a daily basis. In all countries, percentages of the daily use of computers among young people is higher than for the whole population (Eurostat 2014).
Further, literature suggests that many children engage in a wide range of online activities. ICT use by students has expanded to Internet, e-mail, chat, programming, graphics, spreadsheet, online shopping, online searching for literature and other educational materials. The students mostly use ICT for general purposes, i.e., communication, word processing, entertainment, etc. rather than for educational means (Mahmood 2009). According to Olafsson et al. (2014), the most common online activities of 9–16 years olds in Europe are: using internet for school work (85 %), playing games (83 %), watching video clips (76 %) and instant messaging (62 %). Communication via the internet is ubiquitous; often schoolwork is accompanied by chatting and texting. A study published by Gokcearslan and Seferoglu (2005) showed that—at that time—Turkish students’ main focus is on playing games instead on learning activities.
The internet use has high rates among young people when it is compared to the whole population in the EU-28 for basic skills such as using a search engine (94 %) or sending an e-mail with attachments (87 %), while more than two-thirds of young people posted messages online (72 %), just over half used the internet for calling people (53 %) and around one-third (32 %) used peer-to-peer file sharing services. The proportion of young people of posting messages online was 34 percentage points higher than the average for the whole population (Eurostat 2014; Fig. 4).
Already in 2003 Prensky reported that young Americans talk more than 10.000 h on the phone and send more than 200.000 e-mails and text messages until the age of 21. A study conducted in the US found that 80 % of online teens use social network sites, Facebook being the most popular, with 93 % of those teens reporting its use (Lenhart 2012). In 2014, according to number of active users, Facebook is the most popular social media platform with 1184 billion users (Digital/Ajanslar 2014). In 2015, Facebook is still most popular social media platform among young people and 71 % of all teens from 13 to 17 use Facebook, 52 % of them use Instagram and 41 % use Snapchat. (Pew Research Center 2015)
“The use of social networks among children research report” focused on the use of social media among 9–16 year olds in Turkey showed that 85 % of students have computers at home, 70 % of all students get online at least once a day and 66 % use social media at least once a day, spending 72 min on average. This shows that most of the time spent on internet is dedicated to social media. The same study shows that 99 % of the children who have a social media account use Facebook. 60 % of the children reported that they don’t study enough because of spending too much time on Facebook, 25 % of them said that they spend less time with their parents and friends (TIB 2011).
The most common online social activities for young people in the EU-28 in 2014 included sending and receiving e-mails (86 %) and participating on social networking sites (82 %)—for example, Facebook or Twitter, by creating a user profile, posting messages or making other contributions—while close to half (47 %) of all young people in the EU-28 uploaded self-created content, such as photos, videos or text to the internet (Eurostat 2014).
Summarizing the literature, the high importance of students’ use of ICT for social communication in their daily life is evident. But does this type of ICT use enhance students’ CIL skills? Or, does it even rather have a negative effect, because less time remains for “worthwhile” computer usage, such as learning activities? This study examines the relationship between students’ use of ICT for social communication and their computer and information literacy and attempts to contribute to a deeper understanding of this relationship.
Methods and data sources
Students’ data of ICILS was used to explore the hypotheses. ICILS gathered data from almost 60,000 Grade 8 (or equivalent) students and 35,000 teachers in more than 3300 schools from 21 countries or education systems within countries. These data were augmented by contextual data collected from school ICT-coordinators, school principals, and the ICILS national research centres.
Students completed a computer-based test of CIL that consisted of questions and tasks presented in four 30-min modules. Each student completed two modules randomly allocated from the set of four so that the total assessment time for each student was 1 h.
After completing the two test modules, students answered (again on computer) a 30-min questionnaire. It included questions relating to students’ background characteristics, their experience and use of computers and ICT to complete a range of different tasks in school and out of school, and their attitudes toward using computers and ICT (Fraillon et al. 2014).
IEA’s IDB Analyzer was utilized for all statistical analyses, including the estimation of percentages, means and regression models. The IDB analyzer takes the complex data structure of ICILS data into account by applying sampling weights and employing jackknife repeated replication for variance estimation. Comparisons between dependent samples were conducted using regression models in order to account for the covariance between the comparative groups.
We first analysed the relationship between students’ CIL score and their use of ICT for social communication. In the ICILS study, the student questionnaire included three questions that require students to rate the frequencies of their use of ICT applications. From these questions four scales were derived. One of them was “Students’ use of ICT for Social Communication” (S_USECOM). The students were asked to identify the frequency with which they were using the internet for various communication and information exchange activities outside of school. The response categories were “never”, “less than once a month”, “at least once a week but not every day” and “every day”. S_USECOM had an average reliability of 0.74 (Fraillon et al. 2015).
The index variable (“S_USECOM”) consists of the following items:
How often do you use the Internet outside of school for each of the following activities?
Posting comments to online profiles or blogs.
Uploading images or videos to an [online profile] or [online community] (for example. Facebook or YouTube).
Using voice chat (for example Skype) to chat with friends or family online.
Communicating with others using messaging or social networks [for example instant messaging or (status updates)].
We could identify indeed a relationship between students’ CIL score and their use of ICT for social communication: in all educational systems participating in ICILS (further for simplicity referred to as “countries”), the CIL score increased along with an increase of students’ scale score in “Use of ICT for social communication”. This relationship was statistically significant in 16 out of 21 countries. However, the relation was weak; the explained variance of the CIL score was less than 10 % in most countries. We continued the analysis by investigating further the relationship between CIL and each of the four variables constructing the scale score for “Use of ICT for social communication”.
Posting comments to online profiles or blogs
There were no consistent patterns for relations between the reported frequencies for this variable in most countries except for Chile, Thailand and Turkey—the countries with relatively low CIL average scores. In these three countries, the CIL score increased along with an increasing frequency of postings.
Uploading images or videos to an [online profile] or [online community] (for example. facebook or youtube)
Interestingly, students with a medium frequency of ICT use for uploading images or videos had an average CIL score of 20 more points than those who reported to either never do that or do it every day. This pattern could be observed in all countries and was statistically significant in all countries but three (Republic of Korea, Turkey, Canada—Newfoundland and Labrador).
Using voice chat (for example Skype) to chat with friends or family online
No clear patterns could be identified for relationships between the CIL scores and frequencies of ICT usage for voice chats.
Communicating with others using messaging or social networks [for example instant messaging or (status updates)]
Apparently this variable had the closest relationship with CIL among the variables constructing the index variable (“S_USECOM”): as shown in Fig. 5, the more frequent students use ICT for communication using messaging or social networks the higher was their CIL score, a finding that generally holds in all countries. Looking at the cross-country average, mean CIL scores of students who never use the internet for communication are as low as 463 points while are as high as 522 points for students who do that on a daily basis (see Table 1).
For further in-depth analysis we decided to simplify the data by collapsing categories, resulting in a dichotomous variable. The split was taken between the response categories where the difference in CIL scores was the greatest. Referring to the patterns visible in Fig. 5, CIL scores of students reporting to use ICT for communication at least once a week or even every day were rather close to each other; also, no large differences in CIL scores occurred for students using ICT for communication less than once a week (or never). Therefore we collapsed the respective categories accordingly. This procedure split the countries’ target populations into two groups of varying proportions, as can be seen in Fig. 6. On average, three-fourth of the students use the Internet for communication more than once a week. This proportion is less in Thailand and Turkey.
Comparing the resulting two groups of students, we found an average difference in CIL scores of 44 points on favor of students using ICT for social communication more frequently. The direction of the effect was the same in all countries and ranged from 19 points difference in Switzerland to as much as 75 points in the Slovak Republic (refer to Table 2, Model 1, coefficients of E-communication). In all countries, the difference was found to be statistically significant. Since these results were rather striking, we wondered if this effect was moderated by other variables. Consequently we set up various multivariate regression models in order to control for such effects.
Gender as moderating variable
It is known from the literature that girls spend on average more time on social network sites and use them more actively than boys (Duggan and Brenner 2013). Lenhart (2012) reported that some 95 % of teenagers use the internet in the US. 42 % of girls who use the internet report to video-chat, while only about a third of boys engage in that activity. Girls are also more active in their texting and mobile communication behaviours (Lenhart et al. 2010). Our own study confirms this finding for all ICILS countries as can be seen in Fig. 7 —except for Turkey. Interestingly, in Turkey (highlighted by the black arrow in Fig. 7) boys report to use the Internet for social communication more often than girls. The differences of the gender group percentages are statistically significant in all countries.
Although gender is a major determinant in CIL scores of ICILS, it did hardly moderate the difference in CIL scores between the two groups presented in Fig. 5. The group differences remained significant in all countries (see Model 2 in Table 2, coefficients of E-communication.
Socio-economic background as moderating variable
In a next step we included the national index of students’ socio-economic background (variable “S_NISB”) into the model, reasoning that the availability of internet access and communication devices may depend on the socio-economic status (SES) of the students.
The “digital divide”—referring to the gap between those who do and those who do not have access to ICT’s (Warschauer 2003)—generally affects individuals who are unemployed or in low-skilled occupations, and who have a low income and/or a low level of education. Students from families with a lower SES tend to be less confident and capable in navigating the Web to find credible information (Adler 2014). Also Adegoke and Osoyoko (2015) support the theory that SES influences students’ access (exposure) to ICT and internet. The findings of Hargittai (2010) suggest that even when controlling for basic Internet access, among a group of young adults, SES is an important predictor of how people are incorporating the Web into their everyday lives. Bozionelos (2004) showed that SES had a direct positive relationship with computer experience and an indirect negative relationship with computer anxiety. The findings are supportive of the digital divide and they imply that information technology may in fact be increasing inequalities among social strata in their access to employment opportunities.
After controlling for both, gender and SES, the difference in CIL between our two groups of interest declined to 32 points on average across all countries. However, the difference remained to be statistically significant in all countries but one (Denmark).
Table 2 presents regression coefficients of all three discussed models; Fig. 8 presents the differences in CIL scores of students using ICT for social communication more vs. less than once a week for all three considered models (coefficient of “E-communication” in Table 2). Evidently, this difference is hardly moderated in any country by gender, while the socio-economic status plays a larger role. In twelve out of twenty countries, after controlling for gender and SES, the examined difference in the CIL score decreases by more than 10 points. Only in Switzerland neither SES nor gender seemed to be associated with the difference in CIL scores between the two groups of interest, i.e., the coefficient of E-communication remains constant across the three models.
Further variables with potential moderating effects
We also investigated the effect of further variables that may have moderated the found relationship and thereby could have affected the presented relationship in significant ways. We identified such variables based on evidence from the literature, evidence from ICILS (Fraillon et al. 2014) or simply by applying common sense. It would exceed the purpose of this paper to present all details of these analyses; however, the following paragraphs give some major findings.
While girls use ICT more often for social communication, boys use it more often for playing games (Rideout and Foehr 2010). This is also evident from ICILS data and is presented as cross-country average in Fig. 9. The patterns are similar for all participating countries. However, there was no general relation between using ICT for playing games and CIL except for Turkey and Thailand, where an increased frequency of gaming was related with increasing CIL scores.
Further, one may argue that the overall use of computers could have a moderating effect on the studied relationship. However, including the respective variable into the regression model proofed to not change much the effect of ICT use for social communication on CIL and also did not enhance the explained variance of the CIL score significantly.
Discussion and conclusions
The arrival and rapid dissemination of digital technology in the last decade of the twentieth century raises the question how so called digital natives use technology in daily life and what relevant skills they need to develop in order to participate effectively in the digital age. From the literature, the high importance of students’ use of ICT for social communication in their daily life is evident. In this paper we tried to answer the question if this type of ICT use enhances students’ CIL skills or if it—on the opposite—perhaps even rather has a negative effect, because less time remains for “worthwhile” computer usage, such as learning activities.
We first analyzed the relationship between students’ CIL score and their use of ICT for social communication. The CIL score increased along with an increase of students’ scale score in “Use of ICT for social communication” in all educational systems participating in ICILS. This relationship was statistically significant in 16 out of 21 countries. However, the relation was weak. We continued the analysis by investigating further the relationship between CIL and each of the four variables constructing the index “Use of ICT for social communication”. We found out that the variable which has the closest relationship with CIL was “Communicating with others using messaging or social networks [for example instant messaging or (status updates)]”, while other variables comprising the index showed different or no patterns related with CIL.
For accommodating further analysis on this variable, we decided to split students’ data into two groups. We collapsed the five original categories of the variable into two categories, reflecting the use of messaging or social networks “at least once a week or even every day” versus “less than once a week (or never)”.
Comparing the resulting two groups of students, we found a large average difference in CIL scores (44 points) favoring students using ICT for social communication more frequently. The direction of the effect was the same in all countries; the difference ranged from 19 points in Switzerland to as much as 75 points in the Slovak Republic. Since these results were rather striking, we examined whether this effect was moderated by other variables such as SES and Gender. We found however that the moderating effect of these variables on the observed relationship was weak or even negligible in all participating countries. In other words, the relation between the use of ICT for communicating with others using messaging or social networks and CIL scores was still high and consistent across countries when controlling for SES and Gender.
This positive and cross-nationally observed relationship was rather unexpected, especially because the relationship between the communication index created by ICILS and the CIL scores was weak. Trying to understand this phenomenon, we considered the nature of messaging and participation in social networks. We see that it actually includes posting comments, uploading and downloading images and videos—hence, these features are no different than the separate items creating the social communication index. In fact the single item basically contains the other index items. Possibly the written communication portion included makes the difference, or the actual widespread of activities involved in messaging/electronic social networking explains the indistinct positive relationship with CIL. In future cycles of ICILS it may be worthwhile to review the index items accordingly.
To explore this phenomenon further, we also should focus on the CIL construct. As Fraillon et al. (2014) pointed out in the ICILS international report, the CIL construct was conceptualized in terms of two strands:
Strand 1; collecting and managing information, focuses on the receptive and organizational elements of information processing and management,
Strand 2; producing and exchanging information, focuses on using computers as productive tools for thinking, creating, and communicating.
When we consider the interactive nature of social media, it can be assumed that they provide students with a medium for collecting and managing information as anticipated in Strand 1 and also for producing and exchanging information as conceptualized in Strand 2. Hence, this item seems truly be related with both strands of the CIL construct, which may be one reason for the close relationship. Lacking of an experimental design, this study cannot make causal inferences on the relation between CIL and e-communication. Therefore we cannot conclude if frequent use of ICT for communication enhances CIL skills, or if in turn students with high CIL use more frequently ICT for social communication.
Future studies should also monitor the use of social networks in education further. Students should not be expected to accomplish high skills in using information and computer technology and at the same time expect them to keep this aspect of their personality outside of their social life. Rather, it is worth to explore the additional learning opportunities arising from electronic tools and media out- but also and especially inside schools. According to findings from Fraillon et al. (2014), there is a need in many countries to equip teachers with the respective knowledge to use ICT (including social communication tools) in their teaching. Utilizing social media for teaching may hold the potential to increase CIL for all students independently from their gender and SES backgrounds; and thereby avoid that students with low CIL or limited access to ICT may increasingly lack opportunities to actively participate in the modern society.
As a matter of fact, nowadays messaging and Facebook or other social networks became a part of students’ daily life. As parents, teachers and educators, our responsibility is to help our children to benefit from social networks educationally.
See Fraillon et al. 2014 for detailed explanations of the determined CIL levels.
Digital/Ajanslar, (2014). http://www.dijitalajanslar.com/internet-ve-sosyal-medya-kullanici-istatistikleri-2014/.
Adegoke, S., & Osoyoko, M. (2015). Socio-economic background and access to internet as correlates of students achievement in agricultural science. International Journal of Evaluation and Research in Education (IJERE), 4(1), 16–21.
Adler, B., (2014). News literacy declines with socioeconomic status. Colombia Journalism Review, http://www.cjr.org/news_literacy/teen_digital_literacy_divide.php.
Alspaugh, J. W. (1999). The relationship between the number of students per computer and educational outcomes. Journal of Educational Computing Research, 21(2), 141–150.
Attewell, P., & Battle, J. (1999). Home computers and school performance. Information Society, 15, 1–10.
Becker, H. J. (1994). Mindless or mindful use of integrated learning systems. International Journal of Educational Research, 21, 65–79.
Bozionelos, N. (2004). Socio-economic background and computer use: the role of computer anxiety and computer experience in their relationship. International Journal of Human Computer Studies, 61(5), 725–746.
Duggan, M., Brenner, J., (2013). The demographics of social media Users—2012. Pew internet and American life project. http://www.pewinternet.org/Reports/2013/Social-media-users.aspx.
Eurostat, (2014). Being young in Europe today-digital world. http://www.ec.europa.eu/eurostat/statistics-explained/index.php/Being_young_in_Europe_today_-_digital_world
Fraillon, J., Ainley, J., Schulz, W., Friedman, T., & Gebhardt, E. (2014). Preparing for life in a digital age: The IEA International Computer and Information Literacy Study international report. Berlin: Springer.
Fraillon, J., Ainley, J., Schulz, W., Friedman, T., & Gebhardt, E. (2015). ICILS 2013 Technical Report.
Gokcearslan, S., & Seferoglu, S. (2005). Öğrencilerin evde bilgisayar kullanımına ilişkin bir çalışma. Pamukkale: Eğitim Bilimleri Kongresi.
Hargittai, E. (2010). Digital Na(t)ives? Variation in internet skills and uses among members of the “Net Generation”. Sociological Inquiry, 80, 92–113. doi:10.1111/j.1475-682X.2009.00317.x.
Hativa, N. (1994). What you design is not what you get (WYDINWYG): Cognitive, affective, and social impacts of learning with ILS—an integration of findings from six-years of qualitative and quantitative studies. International Journal of Educational Research, 21, 81–111.
IEA. (2014). Press Release, Brussels.
James, R., & Lamb, C. (2000). Integrating science, mathematics, and technology in middle school technology-rich environments: A study of implementation and change. School Science and Mathematics, 100, 27–36.
Kozma, R. B. (1991). Learning with media. Review of Educational Research, 61, 179–211.
Kulik, J. A., & Kulik, C. L. C. (1987). Review of recent literature on computer-based instruction. Contemporary Education Review, 12, 222–230.
Lenhart, A., 2012. Teens and video. Pew Internet and American Life Project. http://www.pewinternet.org/2012/05/03/teens-online-video/.
Lenhart, A., Purcell, K., Smith, A., Zickuhr, K., (2010). Social media and mobile Internet use among teens and young adults. Pew Internet and American Life Project. http://www.pewinternet.org/~/media//Files/Reports/2010/PIP_Social_Media_and_Young_Adults_Report_Final_with_toplines.pdf.
Liao, Y. K. (1992). Effects of computer-assisted instruction on cognitive outcomes: A meta-analysis. Journal of Research on Computing and Education, 24, 367–380.
Mahmood, K. (2009). Gender, subject and degree differences in university students’ access, use and attitudes toward information and communication technology (ICT). International Journal of Education and Development using Information and Communication Technology (IJEDICT), 5(3), 206–216.
Olafsson, K., Livingstone, S., Haddon, L. (2014). Children’s use of online technologies in Europe, a review of the European evidence base, http://www.eukidsonline.net
Osunade O., (2003). An Evaluation of the Impact of Internet Browsing on Students’ Academic Performance at the Tertiary Level of Education in Nigeria http://www.rocare.org/smallgrant_nigeria2003.pdf
Papanastasiou, E. (2002). Factors that differentiate mathematics students in Cyprus, Hong Kong, and the USA. Educational Research and Evaluation, 8, 129–146.
Papanastasiou, E. (2003). Science literacy by technology by country: USA, Finland and Mexico. Making sense of it all. Research in Science and Technological Education, 21(2), 129–146.
Papanastasiou, E. C., Zembylas, M., & Vrasidas, C. (2005). An examination of the PISA database to explore the relationship between computer use and science achievement. Educational Research and Evaluation, 11(6), 529–543.
Pew Research Center, (2015). http://www.pewinternet.org/2015/04/09/teens-social-media-technology-2015/pi_2015-04-09_teensandtech_01/
Prensky, M. (2001). Digital natives, digital immigrants, on the horizon. Bradford: MCB University Press.
Ravitz, J., Mergendoller, J., & Rush, W. (2002). Cautionary tales about correlations between student computer use and academic achievement. Paper Presented at Annual Meeting of the American Educational Research Association, New Orleans
Rideout, V.J., Foehr, U.G., Roberts D.F. (2010). Generation M: Media in the lives of 8-to 18-year-olds. Henry J. Kaiser Family Foundation. http://www.files.eric.ed.gov/fulltext/ED527859.pdf
Ryan, A. W. (1991). Meta-analysis of achievement effects of microcomputer applications in elementary schools. Educational Administration Quarterly, 27, 161–184.
Sivin-Kachala, J. (1998). Report on the Effectiveness of Technology in Schools, 1990–1997. Washington, DC: Software Publisher’s Association.
TIB, (2011). Çocukların Sosyal Paylaşım Sitelerini Kullanım Alışkanlıkları Araştırması, http://www.guvenliweb.org.tr/istatistikler/files/Cocuk_sosyal_paylasim_arastirma_raporu.pdf
TurkStat, (2014). Information and Communication Technology (ICT) usage survey in households & individuals. http://www.tuik.gov.tr/PreTabloArama.do
Van Dusen, L. M., & Worthren, B. R. (1994). The impact of integrated learning system implementation on student outcomes: Implications for research and evaluation. International Journal of Educational Research, 21, 13–24.
Warschauer, M. (2003). Dissecting the “digital divide”: A case Study in Egypt. The Information Society: An International Journal, 19(4), 1.
Weaver, G. C. (2000). An examination of the National Educational Longitudinal Study (NELS: 88) Database to probe the correlation between computer use in school and improvement in test scores. Journal of Science Education and Technology, 9, 121–133.
Weller, H. (1996). Assessing the impact of computer-based learning in science. Journal of Research on Computing in Education, 28, 461–486.
Wen, M. L., Barrow, L. H. & Alspaugh, J. (2002). How Does Computer Availability Influence Science Achievement. Paper presented at the Annual Meeting of the National Association for Research in Science Teaching, New Orleans.
Wenglinsky, H. (1998). Does it compute? The relationship between educational technology and student achievement in mathematics. Princeton: Policy Information Center, Educational Testing Service.
Zichuhr, K., Madden, M. (2012). Older adults and internet use. http://www.pewinternet.org/2012/06/06/older-adults-and-internet-use/
MA developed the research questions, conducted the literature research and drafted significant parts of the manuscript. SM developed the research design, conducted data compilation, the statistical analysis and interpretation of results and drafted significant parts of the manuscript. Both authors have given final approval of the manuscript version to be published and agree to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. All authors read and approved the final manuscript.
The authors are thankful to Diego Cortes for his very useful comments while reviewing this paper.
The authors declare that they have no competing interests.
About this article
Cite this article
Alkan, M., Meinck, S. The relationship between students’ use of ICT for social communication and their computer and information literacy. Large-scale Assess Educ 4, 15 (2016). https://doi.org/10.1186/s40536-016-0029-z
- Social Network
- Social Medium
- Social Communication
- Social Network Site
- Instant Messaging | <urn:uuid:47e1d1eb-3d3a-48df-9ed7-64cc61274060> | CC-MAIN-2022-33 | https://largescaleassessmentsineducation.springeropen.com/articles/10.1186/s40536-016-0029-z | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573623.4/warc/CC-MAIN-20220819035957-20220819065957-00695.warc.gz | en | 0.917631 | 7,992 | 2.859375 | 3 |
September 20th, 2013
Malware infects your computer and affects its performance. Or is that a computer virus? Are they the same thing?
Lincoln Spector, of PC World, writes that the difference between malware and virus is ambiguous at best. Technically, a virus is just one form of malware, but that’s not the way the terms are typically used today.
A virus is one form of malware, and so are trojans, worms and rootkits. Malware is any piece of code that infects your computer and performs actions independent of you, the user. To put it simply, it’s something that finds its way onto your computer, whether through a download, upload or some other route, and does things without your knowledge, like monitoring your activity, harvesting data or spamming your address book.
A virus falls into the malware category because it infects your computer and acts on its own. What sets a virus apart is that it attaches itself to an existing file and corrupts it. True viruses are relatively rare today, though, because cyber criminals consider them inefficient compared to other forms of malware.
The reason the terms malware and virus have become interchangeable is that computers and malicious programs existed before “malware” was coined as a term. So whenever anyone spotted one of these malicious programs in the 1980s and ’90s, they called it a virus. That habit has been hard to break, even now that we understand the differences between the various forms of malware.
While your security software is called ‘antivirus’, it likely protects you from a variety of malware. To simplify security, call Geek Rescue at 918-369-4335. We understand malware and viruses and, more importantly, know how to keep you safe and secure.
September 19th, 2013
Regardless of how many safeguards you have in place, your company’s data is never completely secure. Security tools like antivirus software and firewalls are helpful, but they can’t guarantee your safety.
Sam Narisi, of IT Manager Daily, points out that data breaches and cyber attacks create a number of negative results beyond just the loss of data. Employee and system downtime, money lost, damage to a brand’s credibility and compliance failure are all possible when your security is compromised.
One step towards improving security is to understand how your current security infrastructure is being infiltrated. Here are some of the latest hacker tactics.
- Infected USB Devices
Everyone is aware of the dangers online, so most companies focus their security efforts on that front. However, 25-percent of companies victimized by a malware attack say it originated from an individual’s USB device. Cyber criminals send out complimentary USB drives disguised as promotional material for a company, with malware preloaded on them. They also leave infected USB drives sitting in coffee shops, bars, restaurants or on the street. Eventually, someone picks one up and tries to use it.
- Working Outside The Office
An employee working at the office on your secure network is well protected. That same employee may take a laptop or smartphone elsewhere to work, however. Especially on free public WiFi, that device becomes vulnerable. Hackers could gain access to anything stored on it, and then gain access to the company’s network when the employee returns to work.
- Holes in Security Software
Even with antivirus software in place, you’re vulnerable. 40-percent of companies that have experienced a malware attack say the threat slipped through security software that was already in place. That software has a difficult time keeping up with new malware, even when it’s regularly updated. Because hackers have such a deep understanding of how antivirus programs work, they’re able to develop malware that stays undetected.
Having the right tools in place is still a good place to start to avoid a malware infection. Proper training for employees is another necessary precaution. If you still find that your network has been infiltrated, call Geek Rescue at 918-369-4335. We will disable the threat and also keep you better protected for the future.
September 12th, 2013
For users of the web browser Google Chrome, a new malware threat has emerged. This threat looks a lot like Candy Crush and Super Mario.
Eric Johnson, of All Things Digital, describes the “wild west” atmosphere of the Chrome Web Store. Unlike Google Play, the app store for Android mobile devices, the Chrome Web Store is much less regulated.
This lack of regulation has led to a number of knock-off apps. Mostly, these apps are recreations of famous games like Super Mario, Candy Crush Saga, Fruit Ninja, Doodle Jump and Sonic the Hedgehog. These games aren’t licensed by their original creators and many are suspected to contain malware.
It’s not hard to understand why malware gets bundled into these recognizable games. Users see a game they played in their youth, or one they’ve heard is popular now, and want to try it out. A familiar game that appears to be free feels like a no-risk download. In reality, the apps are usually poor quality and infect your computer with malware.
The key to spotting these knock-off, malicious apps is simple. First, understand that Nintendo, Sega and other giant game companies aren’t making officially licensed apps for Chrome. If you’re still unsure, look at the website associated with the app. In the case of a Candy Crush Saga knock-off, the website was listed as candycrushsaga.blogspot.com, which is not associated with King, the game’s developer.
If you’ve added one of these apps, or another app you think contained malware, remove the app from Chrome and then run a full scan with fully updated antivirus software.
For additional security on any of your devices, contact Geek Rescue at 918-369-4335. We offer security solutions to keep you safe from malware, spam email, viruses and more.
September 11th, 2013
Many small business owners believe that they won’t be the target of a cyber attack simply because there are larger companies that present more value to hackers. However, this belief leads to more relaxed security protocols, which makes small businesses an attractive target because of their ease of access.
Susan Solovic posted on the AT&T Small Business blog how to immediately improve your company’s security without having extensive expertise.
As with any account, you need to protect your business by having each employee log in with a secure password. Passwords should be long, mix upper- and lowercase letters with numbers and symbols, and be changed often.
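As a rough illustration of that kind of rule, here’s a minimal Python sketch. The 12-character minimum and the exact character classes are our own assumptions, so adjust them to whatever policy you adopt.

```python
import re
import string

def looks_strong(password: str) -> bool:
    """Rough check of the rules above: long, mixed case, numbers and symbols."""
    return (
        len(password) >= 12
        and re.search(r"[a-z]", password) is not None
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"\d", password) is not None
        and any(ch in string.punctuation for ch in password)
    )

print(looks_strong("summer2013"))         # False: too short, no capitals or symbols
print(looks_strong("Gr33n!Tractor#Sky"))  # True: long, with mixed character types
```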
Signing out when you step away is a basic step that pays big dividends. Don’t make it easy for a criminal to steal your information or infiltrate your network. When you’re not sitting at your computer, sign out or lock the screen (on Windows, pressing the Windows key and L locks it instantly). This erases the possibility that someone in the area could walk by and immediately access valuable data. It’s especially important for mobile devices.
There’s a reason your antivirus software requires regular updates. Hackers are constantly changing tactics and using new techniques. Each update is an attempt to stay ahead of the curve. So, when any of your regularly used applications prompts you to update, do it.
Nothing keeps you 100-percent secure. Even if you’re able to avoid a cyber attack, a natural disaster could still wipe out data. Regularly backing up vital data is important in order to avoid a catastrophe. Should any of your files be lost or corrupted, you’ll have backups to replace them quickly without suffering any downtime.
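As one very simple sketch of what an automated backup can look like, here’s a short Python script. The folder paths are hypothetical placeholders, and a real backup plan should also include an off-site or cloud copy.

```python
import shutil
from datetime import datetime
from pathlib import Path

# Hypothetical locations: point these at your own data and backup drive.
SOURCE = Path(r"C:\CompanyData")
DESTINATION = Path(r"E:\Backups")

def run_backup():
    """Copy everything in SOURCE into a new time-stamped folder under DESTINATION."""
    target = DESTINATION / datetime.now().strftime("backup-%Y-%m-%d-%H%M%S")
    shutil.copytree(SOURCE, target)
    print(f"Backed up {SOURCE} to {target}")

if __name__ == "__main__":
    run_backup()  # schedule this to run nightly, e.g. with Windows Task Scheduler
```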
Each employee and each position at your company is different. Some will require different access to different applications. Think of it like a government security clearance. There are different levels depending on your pay grade. For your business, give employees the access necessary for them to do their job, but no more. This way, if their account is compromised, you won’t be allowing access to your entire network.
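To make the idea concrete, here’s a tiny hypothetical sketch in Python. The role and system names are made up, but the principle is the same: anything not explicitly granted is denied.

```python
# Hypothetical role-to-system map: anything not listed is denied by default.
ROLE_ACCESS = {
    "sales":      {"crm"},
    "accounting": {"invoicing", "payroll"},
    "it_admin":   {"crm", "invoicing", "payroll", "file_server"},
}

def can_access(role: str, system: str) -> bool:
    """Allow access only when a role's list explicitly includes the system."""
    return system in ROLE_ACCESS.get(role, set())

print(can_access("sales", "payroll"))       # False: not needed for the job
print(can_access("accounting", "payroll"))  # True
```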
Keeping your business secure is an important and time consuming job. For help, contact Geek Rescue at 918-369-4335. We offer data storage and back-up, security solutions and more.
September 6th, 2013
Most everyone has heard of a firewall, but few really know what it is and what it does. The first thing you need to know is that you need one.
A firewall is a line of defense that monitors and filters data entering and leaving your network or computer. Andy O’Donnell describes a firewall for About.com as a “network traffic cop”.
It’s simple to understand that there are criminals outside of your network that want to get in and steal your data. Keeping them out is important, just as keeping criminals out of your home is important. A firewall is the first line of defense for keeping the criminals out and your data safe.
The other job of a firewall is making sure that malicious outbound traffic is also blocked. This is a little harder to understand. Outbound data usually refers to what you’re sending out of your own network, so why would you want to limit traffic in that direction? Well, if a malicious program does get onto your computer, it can send data out to its operators and download even more malware. A hacker is far more limited if your firewall blocks that outbound traffic from the infecting malware.
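As a concrete, simplified example of outbound filtering, the Python sketch below wraps the netsh command built into Windows to block a single program from sending traffic out. The program path is hypothetical, the script must be run from an administrator account, and in practice your security software usually manages rules like this for you.

```python
import subprocess

# Add a Windows Firewall rule that blocks one program's outbound traffic.
# The program path is hypothetical, and this must run from an administrator
# account for the rule to be created.
subprocess.run([
    "netsh", "advfirewall", "firewall", "add", "rule",
    "name=BlockSuspiciousApp",
    "dir=out",
    "action=block",
    r"program=C:\Users\Public\suspicious_app.exe",
], check=True)
```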
There are hardware-based firewalls that exist outside your computer, typically a dedicated device you add to your network to boost security. Many people already have a hardware firewall built into their wireless router. To make sure it’s active, you’ll want to check the router’s settings.
There are also software-based firewalls. Most operating systems, like Windows for example, come with a standard firewall that is active by default. There are also a number of antivirus programs that also include software-based firewalls.
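If you want to check for yourself on Windows, here’s a small Python sketch (assuming Python 3.7 or later is installed) that simply asks the built-in Windows Firewall to report its status:

```python
import subprocess

# Ask the built-in Windows Firewall to report the state of each profile
# (Domain, Private and Public). Look for "State" set to "ON" in the output.
result = subprocess.run(
    ["netsh", "advfirewall", "show", "allprofiles"],
    capture_output=True, text=True,
)
print(result.stdout)
```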
If you don’t have an active firewall, your operating system has probably alerted you to that fact. To improve your system’s security, contact Geek Rescue at 918-369-4335. We have a variety of security solutions to keep all of your devices safe.
September 4th, 2013
If you own a computer, or any device really, you’re likely to encounter problems from time to time. But, as Ben Kim of CIO points out, some of the more common problems have easy fixes that you can handle yourself.
Regardless of the problem and before you try anything else, restart your computer. There’s a reason this is cliched advice. For many issues, a restart will put everything right.
Your system will slow down when your hard drive gets too full. If you’ve noticed sluggish performance, try clearing some space. Windows users will also want to use Microsoft’s System Configuration tool to trim down the number of applications that open automatically on start-up. To access it, press Windows-R, type “msconfig” and hit Enter.
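If you’d like a quick read on how full the drive is, here’s a small Python sketch. It assumes your main drive is C: and treats the 10-percent figure as a rough rule of thumb rather than a hard limit.

```python
import shutil

# Check how full the main drive is (C: is assumed; change it if yours differs).
total, used, free = shutil.disk_usage("C:\\")
gb = 1024 ** 3
print(f"{free / gb:.1f} GB free of {total / gb:.1f} GB")

# Rough rule of thumb: performance often suffers once free space drops
# below about 10 percent of the drive.
if free / total < 0.10:
    print("Low on space: clear out old files or move them to another drive.")
```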
If your downloads are taking longer than they should, test your connection speed. You can do this on a number of websites. Resetting your modem and router is also a good idea before contacting your Internet Service Provider.
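For a very rough do-it-yourself check, the Python sketch below simply times how long one small page takes to download. A dedicated speed-test website will give you proper download and upload figures, so treat this only as a sanity check.

```python
import time
import urllib.request

# Very rough connection check: download one small page and time it.
URL = "http://www.example.com"  # any small, reliable page will do

start = time.time()
data = urllib.request.urlopen(URL, timeout=10).read()
elapsed = time.time() - start
print(f"Fetched {len(data)} bytes in {elapsed:.2f} seconds")
```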
If you’re seeing a high number of pop-up ads, you’ll want to make sure you have a pop-up blocker enabled in your browser. If they appear even when you aren’t surfing the internet, you’ve got adware. This usually stems from installing a program that had adware hidden in it. To remove it, try running any security software you may have, or install a dedicated adware-removal tool.
If you’re sitting in range of your wireless router, but you still get a weak signal or constant disconnects, there are a couple of fixes. First, try resetting the modem and router. Then, let Windows troubleshoot the problem for you by right-clicking on the Wi-Fi icon in the taskbar and selecting “Diagnose Problem” or “Troubleshoot Problems”.
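To see how strong your connection actually is, this small Python sketch (Windows 7 or later and Python 3.7 or later assumed) prints the network name and signal strength that Windows reports:

```python
import subprocess

# "netsh wlan show interfaces" reports the connected network and its signal
# strength as a percentage (available on Windows 7 and later).
result = subprocess.run(
    ["netsh", "wlan", "show", "interfaces"],
    capture_output=True, text=True,
)
for line in result.stdout.splitlines():
    if "SSID" in line or "Signal" in line:
        print(line.strip())
```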
We’ve all had our share of printer-related headaches. Check to make sure there’s enough ink, toner and paper and the notification light isn’t blinking. Turn the printer off, then back on. You can even completely unplug the power supply and wait a few seconds before plugging it back in. If you still can’t print, check to see if the “Use Printer Offline” option is enabled. Windows will switch this automatically in some circumstances so make sure to uncheck it.
If these fixes don’t work or you have a more serious issue, call Geek Rescue at 918-369-4335. Our team of techs fix any problem your device may have. Give us a call, or bring your device to one of our convenient locations.
August 30th, 2013
Protecting your security and keeping your privacy online is possible. It takes more of a commitment than just keeping your antivirus software updated, however.
John Okoye, of Techopedia, suggests that your own browsing habits have as much to do with security as your security software. Here are some of the ways you can protect yourself.
- Know How Your Browser Treats Your Data
Do a little research and discover how the internet browser you’re using stores your data. It may be tracking your history and selling it to advertisers without your knowledge. However, many browsers have options to surf privately without saving your history or data.
- Don’t Respond To Spam
Even if you are extremely careful about who you give your email address out to, you’ll still receive your fair share of spam emails. When one appears in your inbox, don’t respond. That includes following the ‘unsubscribe’ link. Once spammers learn that your email address is active, you’ll actually receive more spam than before. Also, be sure to mark the email as spam rather than just deleting it. If you find that more spam emails are making it through your spam filter, consider adding additional filtering rules or changing email providers.
- Be Careful With Social Networks
Social media profiles are a resource for hackers. By learning your birthday, address, phone number and email address, they can intelligently hack into other accounts, or send you phishing scams. Be sure to take advantage of security options to keep your information private and don’t over share. There’s usually no reason to include a phone number on your Facebook page.
Do some research and find an secure email provider. One that protects you from spam and doesn’t save your emails in a log. Your email should also be encrypted to ensure that no one but the intended recipient is reading them. You may also consider having multiple email accounts. That way, when registering for accounts on ecommerce sites or anywhere that you don’t want to have your primary or business email, you can use a secondary account.
These are just some of the ways you can take action to stay safer and more secure online. To beef up the security for your home PC or your business network, call Geek Rescue at 918-369-4335.
August 22nd, 2013
A new spear phishing attack has prompted a public service announcement from from the FBI’s Cyber Division. The attack uses an email made to look like it’s from the National Center for Missing and Exploited Children.
Spear phishing is a targeted attack that attempts to gain access to accounts or data. Their targeted nature usually suggests those responsible are trying to steal something specific from those receiving the email. Put another way, if you receive the email, you have something the hackers want.
This particular attack contains the subject “Search For Missing Children” and has a .zip file attached. This file contains three malicious files included, which are harmful to your computer and could steal or log your information.
Implementing better security is a great step in avoiding these types of attacks, but practicing better internet habits is key. Regardless of who it’s from, you should be wary of any unsolicited email with attachments that arrives in your inbox. Some of these attack emails also contain links that should also be avoided.
If you’ve seen this specific email spear phishing attack, or one similar, you’re urged to report it to the FBI.
To safeguard yourself or your company against these attacks and other malicious attempts to infiltrate your network, contact Geek Rescue at 918-369-4335. We have a variety of security solutions to help you and will educate you on how to stay secure.
August 21st, 2013
The term antivirus gets used a lot, but what does it actually refer to? Do you know what you’re protected against when you install antivirus software?
Unfortunately, antivirus has become a general term for security software. Some protect you against different threats than others.
Alan Henry wrote about the specifics of antivirus and anti-malware protection in his article for Lifehacker.
The term virus specifically doesn’t cover other threats like worms, spyware or adware. Your anitvirus software, however, likely covers some of these other threats.
A virus falls under the category of malware, but that doesn’t mean anti-malware protection keeps you fully secure. Your anti-malware program may not prevent hacks and the loss of data.
It’s all confusing because of the vague language being employed. What you should know is what specifically your chosen security software protects you from. Do the research, ask questions and understand what the software does and, just as important, what it doesn’t do.
Regardless of the money spent and the research done, your security won’t be impregnable. You’ll still be susceptible to some threats. Installing two different security tools helps. One to scan your system continuously and keep out malicious threats. The other to scan from time to time to make sure nothing has gotten through that first line of defense.
Even with two measures in place, you might encounter a problem. That’s why your third security tool should be your own browsing habits. Don’t click on fishy looking links or spam email. Don’t download anything that doesn’t come from a verified, reliable source. Change passwords often and make them strong. These habits keep you away from potential problems and make your security software’s job easier.
Keeping your data secure and your PC clean is a difficult job. To ensure you are fully equipped to handle it, contact Geek Rescue at 918-369-4335. We have the security solutions you need and will advise you on safe surfing.
August 14th, 2013
When your computer is infected with malware, it is usually easy to spot. It may not be that easy to fix.
Malware makes your computer do some strange things. It will seem to working hard at some task even when you’re not doing anything. Windows will open seemingly by themselves. The effects of malware on your system are generally not clandestine.
Once you’ve diagnosed a malware infection, what’s your next step? Matt Egan has some good ideas at PC Advisor.
The malware infecting your computer may use your internet connection against you, so disable that immediately. Unplug any wired connections and turn off your WiFi connection.
Next, assuming you’re using a Windows operating system, boot into Safe Mode. When restarting your machine, hit F8 to use Safe Mode.
This allows you to work freely without doing any more damage to your PC. Safe Mode doesn’t enable many of Windows processes and programs to run and, more importantly, malware doesn’t run either.
While in Safe Mode, you’ll want to scan for malware. If you already have antivirus software installed, that’s great but you’ll need a different program. After all, that software didn’t stop malware from infecting your computer.
Since your first step was to disconnect from the internet, you’ll have two options for installing a new malware scanner. You can either reconnect to the internet and disconnect once you’ve downloaded a new program, or download on a different computer and transfer the software via a USB drive.
Once it’s installed, run the scan and remove any malware it finds. There are some obstacles you may still have to deal with, however.
Some types of malware are capable of killing antivirus programs, even in Safe Mode. If you find the scan doesn’t finish and the program closes on its own, that’s the problem. You’ll need to call in the professionals. Geek Rescue is available to clean your machine and install heartier security provisions.
The scan may also come up empty. If this happens but your PC continues to act funny, you can try a different antivirus scan, or take it to Geek Rescue.
Even with the malware gone, you may have some lingering effects. Your browser may have a toolbar installed on it or your homepage may have changed. Fixing these issues is usually pretty simple, but you’ll also want to change your passwords and log-in details. Malware often harvests this information. Don’t limit the log-in changes to just your bank account and email either. Change any account you log-in to regularly, including social media.
If the issues with your computer persist, call Geek Rescue at 918-369-4335. We’re happy to help with any computer problems and help you to prevent them from happening in the future. | <urn:uuid:a4dcd4e9-b99c-4105-8cab-0b69718c070d> | CC-MAIN-2022-33 | https://www.geekrescue.com/blog/tag/antivirus/page/6/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571234.82/warc/CC-MAIN-20220811042804-20220811072804-00097.warc.gz | en | 0.942925 | 4,532 | 3.21875 | 3 |
A digital object identifier (DOI) is a character string used to uniquely identify an electronic document or other object. Metadata about the object is stored in association with the DOI name and this metadata may include a location, such as a URL, where the object can be found. The DOI for a document is permanent, whereas its location and other metadata may change. Referring to an online document by its DOI provides more stable linking than simply referring to it by its URL, because if its URL changes, the publisher need only update the metadata for the DOI to link to the new URL.
However, unlike URLs, the DOI system is not open to all comers; only organizations that can meet the contractual obligations of the DOI system and that are willing to pay to become a member of the system can assign DOIs. The DOI system is implemented through a federation of registration agencies coordinated by the International DOI Foundation, which developed and controls the system. The DOI system has been developed and implemented in a range of publishing applications since 2000; by late 2009 approximately 43 million DOI names had been assigned by some 4,000 organizations.
A DOI name takes the form of a character string divided into two parts: a prefix and a suffix. The prefix identifies the registrant of the name, and the suffix is chosen by the registrant and identifies the specific object associated with that DOI. Most legal Unicode characters are allowed in these strings, which are interpreted in a case-insensitive manner.
For example, in the DOI name
10.1000/182, the prefix is
10.1000 and the suffix is
182. All DOI names start with "10.", and the characters
1000 in the prefix identify the registrant; in this case the registrant is the International DOI Foundation itself.
182 is the suffix, or item ID, identifying a single object (in this case, the latest version of the DOI Handbook). Citations using DOI names should be printed as doi:10.1000/182. When the citation is a hypertext link, it is recommended to embed the link as a URL by concatenating "http://dx.doi.org/" to the DOI name, omitting its "doi:" prefix; e.g., the DOI name doi:10.1000/182 is linked as http://dx.doi.org/10.1000/182. This URL provides the location of an HTTP proxy server which will redirect web accesses to the correct online location of the linked item.
DOI names can identify creative works (such as texts, images, audio or video items, and software) in both electronic and physical forms, performances, and abstract works such as licenses, parties to a transaction, etc. They can be applied to objects at varying levels of detail: DOI names can identify a journal, an individual issue of a journal, an individual article in the journal, or a single table in that article. The choice of level of detail is left to the assigner, but in the DOI system it must be declared as part of the metadata that is associated to a DOI name, using a data dictionary based on the indecs Content Model.
Major applications of the DOI system currently include:
- persistent citations in scholarly materials (journal articles, books, etc.) through CrossRef, a consortium of around 3,000 publishers;
- scientific data sets through DataCite, a consortium of leading research libraries, technical information providers, and scientific data centers;
- European Union official publications through the EU publications office.
In the Organisation for Economic Co-operation and Development's publication service SourceOECD, each table or graph in an OECD publication is shown with a DOI name that leads to an Excel file of data underlying the tables and graphs. Further development of such services is planned.
A multilingual European DOI registration agency activity, mEDRA, and a Chinese registration agency, Wanfang Data, are active in non-English language markets. Expansion to other sectors is planned by the International DOI Foundation.
Features and benefits
The DOI system was designed to provide a form of persistent identification, in which each DOI name unequivocally and permanently identifies the object to which it is associated. And, it associates metadata with objects, allowing it to provide users with relevant pieces of information about the objects and their relationships. Included as part of this metadata are network actions that allow DOI names to be resolved to web locations where the objects they describe can be found. To achieve its goals, the DOI system combines the Handle System and the indecs Content Model with a social infrastructure.
The Handle System ensures that the DOI name for an object is not based on any changeable attributes of the object such as its physical location or ownership, that the attributes of the object are encoded in its metadata rather than in its DOI name, and that no two objects are assigned the same DOI name. Because DOI names are short character strings, they are human-readable, may be copied and pasted as text, and fit into the URI specification. The DOI name resolution mechanism acts behind the scenes, so that users communicate with it in the same way as with any other web service; it is built on open architectures, incorporates trust mechanisms, and is engineered to operate reliably and flexibly so that it can be adapted to changing demands and new applications of the DOI system. DOI name resolution may be used with OpenURL to select the most appropriate among multiple locations for a given object, according to the location of the user making the request. However, despite this ability, the DOI system has drawn criticism from librarians for directing users to non-free copies of documents that would have been available for no additional fee from alternative locations.
The indecs Content Model is used within the DOI system to associate metadata with objects. A small kernel of common metadata is shared by all DOI names and can be optionally extended with other relevant data, which may be public or restricted. Registrants may update the metadata for their DOI names at any time, such as when publication information changes or when an object moves to a different URL.
The International DOI Foundation (IDF) oversees the integration of these technologies and operation of the system through a technical and social infrastructure. The social infrastructure of a federation of independent registration agencies offering DOI services was modelled on existing successful federated deployments of identifiers such as GS1 and ISBN.
Comparison with other identifier schemes
A DOI name differs from commonly used Internet pointers to material, such as the Uniform Resource Locator (URL), in that it identifies an object as a first-class entity, not simply the place where the object is located. It implements the Uniform Resource Identifier (Uniform Resource Name) concept and adds to it a data model and social infrastructure.
A DOI name also differs from standard identifier registries such as the ISBN, ISRC, etc. The purpose of an identifier registry is to manage a given collection of identifiers, whereas the primary purpose of the DOI system is to make a collection of identifiers actionable and interoperable, where that collection can include identifiers from many other controlled collections.
The DOI system offers persistent, semantically interoperable resolution to related current data, and is best suited to material that will be used in services outside the direct control of the issuing assigner (e.g., public citation, or managing content of value). It uses a managed registry (providing social and technical infrastructure). It does not assume any specific business model for the provision of identifiers or services, and enables other existing services to link to it in defined ways. Several approaches for making identifiers persistent have been proposed. The comparison of persistent identifier approaches is difficult because they are not all doing the same thing. Imprecisely referring to a set of schemes as "identifiers" doesn't mean that they can be compared easily. Other "identifier systems" may be enabling technologies with low barriers to entry, providing an easy to use labeling mechanism that allows anyone to set up a new instance (examples include Persistent Uniform Resource Locator (PURL), URLs, Globally Unique Identifiers (GUIDs), etc.), but may lack some of the functionality of a registry-controlled scheme and will usually lack accompanying metadata in a controlled scheme. The DOI system does not have this approach and should not be compared directly to such identifier schemes. Various applications using such enabling technologies with added features have been devised that meet some of the features offered by the DOI system for specific sectors (e.g., ARK).
A DOI name does not depend on the object's location and, in this way, is similar to a Uniform Resource Name (URN) or PURL but differs from an ordinary URL. URLs are often used as substitute identifiers for documents on the Internet (better characterised as Uniform Resource Identifiers) although the same document at two different locations has two URLs. By contrast, persistent identifiers such as DOI names identify objects as first class entities: two instances of the same object would have the same DOI name.
DOI name resolution is provided through the Handle System, developed by Corporation for National Research Initiatives, and is freely available to any user encountering a DOI name. Resolution redirects the user from a DOI name to one or more pieces of typed data: URLs representing instances of the object, services such as e-mail, or one or more items of metadata. To the Handle System, a DOI name is a handle, and so has a set of values assigned to it and may be thought of as a record that consists of a group of fields. Each handle value must have a data type specified in its "<type>" field, that defines the syntax and semantics of its data.
To resolve a DOI name, it may be input to a DOI resolver (e.g., at www.doi.org) or may be represented as an HTTP string by preceding the DOI name by the string
For example, the DOI name 10.1000/182 can be resolved at the address "http://dx.doi.org/10.1000/182". Web pages or other hypertext documents can include hypertext links in this form. Some browsers allow the direct resolution of a DOI (or other handles) with an add-on, e.g., CNRI Handle Extension for Firefox. The CNRI Handle Extension for Firefox enables the browser to access handle or DOI URIs like hdl:4263537/4000 or doi:10.1000/1 using the native Handle System protocol. It will even replace references to web-to-handle proxy servers with native resolution.
The International DOI Foundation (IDF), a non-profit organisation created in 1998, is the governance body of the DOI system. It safeguards all intellectual property rights relating to the DOI system, manages common operational features, and supports the development and promotion of the DOI system. The IDF ensures that any improvements made to the DOI system (including creation, maintenance, registration, resolution and policymaking of DOI names) are available to any DOI registrant. It also prevents third parties from imposing additional licensing requirements beyond those of the IDF on users of the DOI system.
The IDF is controlled by a Board elected by the members of the Foundation, with an appointed Managing Agent who is responsible for co-ordinating and planning its activities. Membership is open to all organizations with an interest in electronic publishing and related enabling technologies. The IDF holds annual open meetings on the topics of DOI and related issues: the 2010 meeting is provisionally scheduled to be held in Hannover, Germany in mid year.
Registration agencies, appointed by the IDF, provide services to DOI registrants: they allocate DOI prefixes, register DOI names, and provide the necessary infrastructure to allow registrants to declare and maintain metadata and state data. Registration agencies are also expected to actively promote the widespread adoption of the DOI system, to cooperate with the IDF in the development of the DOI system as a whole, and to provide services on behalf of their specific user community. A list of current RAs is maintained by the International DOI Foundation.
Registration agencies generally charge a fee to assign a new DOI name; parts of these fees are used to support the IDF. The DOI system overall, through the IDF, operates on a not-for-profit cost recovery basis.
The DOI system is currently being standardised through the International Organization for Standardization, in its technical committee on identification and description TC46/SC9. The Draft International Standard ISO/DIS 26324, Information and documentation - Digital Object Identifier System met the ISO requirements for approval. The relevant ISO Working Group has now submitted an edited version to ISO for distribution as an FDIS (Final Draft International Standard) ballot. DOI is a registered URI under the infoURI specification (IETF RFC4452), “The "info" URI Scheme for Information Assets with Identifiers in Public Namespaces”. info:doi/ is the infoURI Namespace of Digital Object Identifiers. The DOI syntax is a NISO standard, first standardised in 2000, ANSI/NISO Z39.84-2005 Syntax for the Digital Object Identifier
- Digital identity
- Object identifier
- Universally Unique Identifier (UUID)
- Metadata standards
- Publisher Item Identifier (PII)
- Persistent Uniform Resource Locator
Notes and references
- Witten, Ian H., David Bainbridge and David M. Nichols (2010). How to Build a Digital Library (2nd ed.). Amsterdam; Boston: Morgan Kaufmann. pp. 352–253. ISBN 978-0-12-374857-7.
- Langston, Marc; Tyler, James (2004). "Linking to journal articles in an online teaching environment: The persistent link, DOI, and OpenURL". The Internet and Higher Education. 7 (1): 51–58. doi:10.1016/j.iheduc.2003.11.004.
- "How the 'Digital Object Identifier' works". BusinessWeek. BusinessWeek. 2001-07-23. Retrieved 2010-04-20.
Assuming the publishers do their job of maintaining the databases, these centralized references, unlike current Web links, should never become outdated or broken.
- Lua error in ...ribunto/includes/engines/LuaCommon/lualib/mwInit.lua at line 17: bad argument #1 to 'old_pairs' (table expected, got nil).
- "Welcome to the DOI System". Doi.org. 2010-06-28. Retrieved 2010-08-07.
- "DOI System Factsheet". Doi.org. 2009-12-15. Retrieved 2010-02-02.
- Powell, Andy (June 1998). "Resolving DOI Based URNs Using Squid: An Experimental System at UKOLN". D-Lib Magazine. ISSN 1082-9873.
- "Frequently asked questions about the DOI system: 2. What can be identified by a DOI name?". International DOI Foundation. Updated 17 February 2010. Retrieved 23 April 2010. Cite journal requires
|journal=(help); Check date values in:
- "OECD Publishing White Paper". doi:10.1787/603233448430. Missing or empty
- DeRisi, Susanne; Kennison, Rebecca; Twyman, Nick (2003). "Editorial: The what and whys of DOIs". PLoS Biology. 1 (2): e57. doi:10.1371/journal.pbio.0000057. PMC 261894. PMID 14624257.
- Franklin, Jack (2003). "Open access to scientific and technical information: the state of the art". In Grüttemeier, Herbert; Mahon, Barry (eds.). Open access to scientific and technical information: state of the art and future trends. IOS Press. p. 74. ISBN 9781586033774.
- "DOI System and Internet Identifier Specifications". Doi.org. 2010-05-18. Retrieved 2010-08-07.
- "DOI System and standard identifier registries". Doi.org. Retrieved 2010-08-07.
- "DOI Handbook, Chapter 7: The International DOI Foundation". Doi.org. 1997-10-10. Retrieved 2010-08-07.
- "about_the_doi.html DOI Standards and Specifications". Doi.org. 2010-06-28. Retrieved 2010-08-07.
- "About "info" URIs - Frequently Asked Questions". Info-uri.info. Retrieved 2010-08-07.
- "The "info" URI Scheme for Information Assets with Identifiers in Public Namespaces". Retrieved 2010-08-07.
- "ANSI/NISO Z39.84-2000 Syntax for the Digital Object Identifier". Techstreet.com. Retrieved 2010-08-07.
- The DOI system
- Factsheet: DOI System and Internet Identifier Specifications
- The Handle System
- DOI-Registration agency for data
af:Digitale objek-identifiseerder ar:معرف الوثيقة الرقمية az:Rəqəmli obyektin identifikatoru bn:ডিজিটাল অবজেক্ট আইডেন্টিফায়ার zh-min-nan:Digital object identifier bg:Идентификатор на дигитален обект ca:DOI cs:Digital Object Identifier da:Digital object identifier de:Digital Object Identifier el:Digital Object Identifier es:Digital object identifier eu:Digital object identifier fa:نشانگر دیجیتالی شی fr:Digital Object Identifier gl:Digital object identifier ko:디지털 객체 식별자 id:Pengenal objek digital it:Digital object identifier he:מזהה עצם דיגיטלי jv:Pangidhèntifikasi Obyèk Digital lv:Digitālais objektu identifikators lt:Digital object identifier hu:Digital object identifier nl:Digital object identifier ja:デジタルオブジェクト識別子 no:Digital object identifier pl:DOI (identyfikator cyfrowy) pt:Digital object identifier ro:Digital object identifier ru:Идентификатор цифрового объекта simple:Digital object identifier sk:Digital Object Identifier sl:Identifikator digitalnega objekta sr:Digitalni identifikator objekta fi:DOI sv:Digital object identifier th:Digital object identifier tr:Sayısal nesne tanımlayıcısı uk:Цифровий ідентифікатор об'єкта vi:DOI zh:DOI | <urn:uuid:87ae5c83-519c-4203-8e7e-07862c892c8b> | CC-MAIN-2022-33 | https://www.worldafropedia.com/w/Digital_object_identifier | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573533.87/warc/CC-MAIN-20220818215509-20220819005509-00296.warc.gz | en | 0.807537 | 4,149 | 3.171875 | 3 |
Distance decay is an essential concept in geography. At its core, distance decay describes how the relationship between two entities generally gets weaker as the separation between them increases. Inspired by long-standing ideas in physics, the concept of distance decay is used by geographers to analyze two kinds of relationships. First, the term expresses how measured interactions (such as trade volume or migration flow) generally decrease as the separation between entities increases, as is analyzed by spatial interaction models. Second, the term is used to describe how the implicit similarity between observations changes with separation, as measured by variograms. For either type of relationship, we discuss how "separation" must be clearly articulated according to the mechanism of the relationship under study. In doing this, we suggest that separation need not refer to positions in space or time, but can involve social or behavioral perceptions of separation, too. To close, we present how the "death of distance" is transforming distance decay in uneven ways.
- Two Senses of Distant Decay
- Understanding "Distance" in Distance Decay
distance decay: the relationship between two entities decreases as the separation between them increases.
gravity model: a long-standing theoretical model that predicts an explicitly measured interaction, such as trade or migration, as a function of separation.
variogram: an empirical construct used to measure distance decay in similarity among measurements in geographical data.
travel cost: a method of defining separation that emphasizes how separation is often measured in terms of what is spent to move between destinations.
Distance decay is a common phenomenon in social, economic, and natural systems. It is also a fundamental concept in geography. Distance decay describes how the strength of a relationship between people, places, or systems decreases as the separation between them increases. The strength of relationship is usually some kind of measured interaction or implicit similarity between entities in a spatial process. For interaction, we often think of trade between countries or social networks between friends in a city. For similarity, we might think of how the temperature in your front yard is nearly identical to that of your next-door neighbor's. Separation, then, is usually defined according to the relationship being studied. For example, if the relationship being studied is trade between countries, a relevant separation could reflect the total cost of sending a shipping container between countries. But, to understand the similarity of temperature between parts of a city, straight-line/Euclidean distance may suffice. Our definition of "distance decay" uses these abstract concepts of relation and separation, though, to allow for the many alternative definitions that emerge in geography. Even though the rise and intensification of information technology has radically transformed how distance is experienced in social systems, distance decay still emerges in useful ways for both theories of interaction and empirical measures of similarity.
In the following sections, we discuss theses two senses of distance decay. We present a theoretical model of distance decay in interaction, the gravity model, and an empirical measure of distance decay in similarity, the variogram. We close by discussing how alternative notions of separation can be useful in more general contexts, too.
Distance decay is one of the oldest concepts in geography. The effect of distance on social interaction phenomena was recognized in late 19th century, when Ravenstein (1885) examined migration in the British Isles. Although Ravenstein only used "the simplest form of arithmetic" (Tobler, 1995), his observations that emigrants from Ireland largely moved to the closest county in Great Britain (p.177) inspired future researchers to investigate this relationship between interaction (as migration) and distance. Debate continues about Ravenstein's "proper" role in this, however. Grigg (1977) notes Ravenstein's interest in distance decay, but also claims "[Ravenstein] made no explicit discussion of the influence of distance." (p.46), and indeed Ravenstein only refers to "border effects" and "nearest neighbors" in his discussion while also emphasizing the decidedly non-local appeal of "London, Lancashire, and the West Riding."
An example of this is in Figure 1, which shows migration out of the London ward of Islington. In this image, darker colors indicate a greater number of migrants from Islington. Thus, the map shows that people who leave Islington tend to move to places near Islington, rather than to more distant locales. This principle was later explored by Reilly (1931), who developed an explicit specification of distance decay for migration models, and was then applied to trade flows by Isard (1954), flows of information by Gould and White (1985), and now sees extensive application for many different kinds of interaction (Oshan, 2021).
Figure 1. Out-migration from the central London borough of Islington. Darker colors indicate more families have moved from Islington to other boroughs in and around London, showing that people in Islington are more likely to move to closer boroughs than more distant boroughs. Map tiles by Stamen Design, under CC BY 3.0. Data by OpenStreetMap, under ODbL. Source: authors.
In this sense, distance decay refers to interactions that we directly measure and predict to decrease as distance increases. Separation is one predictor of interaction, possibly among many others. Together, this means that distance decay, the existence and strength of a negative relationship between separation and interaction, is a core question in this literature.
As a reflection of this, most studies in this literature all share the same basic traits that stem from their adaptation of (or reaction to) properties of distance decay in physics, which will be discussed in Section 3.1. Specifically, distance decay is usually expected to"exist;" in some sense. If it does not, then the model is assumed to be incorrectly specified, distance in-artfully measured, or parameters incorrectly estimated. Thus, in this spatial interaction modelling perspective, distance decay is a theoretical construct embedded in models that may "hold" to varying degrees.
While spatial interaction models were being formalized, empirical interest in distance decay in similarity was also growing. For example, Stephan (1934), was concerned that most statistical analyses ignored how nearby census tracts tended to have similar sociodemographic characteristics. The chief concern was not about explicit interaction between census tracts, but rather an implicit similarity between tracts that decayed with distance. To measure this, Matheron (1963) developed the variogram, which describes the distance decay in covariation for geographic measurements. In most geographic analyses, co-variation does tend to decrease as separation between measurements increases. However, this is not always the case, and there is often something useful revealed about the process generating the data when distance decay in similarity does not hold (Griffith, 2019). Thus, work using the variogram has become widespread in both social and physical sciences, driven by how useful the variogram (and related tools and concepts) are in describing and controlling for the statistical effects of distance decay in the similarity of measurements.
Thus, "distance decay" as used by geographers refers to one of these two relationships: either similarity implicit in the structure of geographic data (i.e. measured by a variogram) or interactions explicit between geographic entities (i.e. modeled by a spatial interaction model). Thus, these two senses of "distance decay" reflect separate perspectives for how distance decay emerges in geography (Wolf et al., 2020). The following sections discuss these two perspectives in detail, illustrating their fundamental similarities and showing how separation or relation may change, but distance decay will likely remain.
3.1 Distance Decay in Interaction: the Gravity Model
At its core, spatial interaction models seek to predict the interaction between two entities, like trade between countries, migration between regions, or instant messages between friends. The interaction (sometimes also called a "flow" in trade or migration studies) originates somewhere (we'll call this site the origin, i) and ends up somewhere else (the destination, j). This flow is is denoted fij.
Then, interaction models predict the size of each flow using three factors. First, "pull" factors (sometimes called "attractiveness," Aj) encourages agents to interact with a destination j. As a complement, the "push" factors (known also as "propulsiveness," spelled Pj) encourage interaction from an origin. These are generally different for different places, and will be defined differently problem-to-problem depending on the interaction being studied. For example, in studies of migration, a push factor might be an economic collapse or invasion in possible origin countries, while pull factors might be low unemployment or high standards of living in possible destination countries.
Figure 2. Distance decay functions for a gravity model of spatial interaction from Equation 1. Four different values of ß are shown to indicate different "speeds" of distance decay. For a small value of ß = - 0.1, interaction between an origin and a destination decays very slowly they get further apart. For a larger value of ß = -1, interaction decays very quickly. Source: authors.
The next part of the classic interaction model is a measure of distance between i and j, generally spelled dij. Again, this may be a physical distance, or may be measured using a more abstract notion of "separation." We will discuss this in more detail later.
Finally, to assemble the model, we need to weight each of these factors according to how important they are in generating interaction. Unfortunately, the proper values of these weights are unknown, so they have to be estimated from interaction data. Here, we use Greek symbols to stand-in for these unknown values: reflects an unknown "baseline" level of interaction, adjusts the importance of attractiveness Ak, adjusts propulsiveness, and reflects the importance of distance to flow. The gravity model of spatial interaction brings these factors together to predict interaction:
All of the unknown values () have to be estimated from the data, and the values of will depend on the structure of the push and pull factors. However, a negative estimate of ß is usually obtained from the data, suggesting that interaction between i and j decreases as distance increases. Indeed, a large literature focuses on understanding and adjusting for things that might "confound" distance decay, causing ß estimates to be positive and implying interaction increases with distance. See Oshan (2021) for a detailed discussion. As a result, contemporary analyses of interaction may use more sophisticated models that control for confounding effects (such as accessibility (Fotheringham, 1983)) and yield ß estimates that represent distance decay.
The structure of this decay is quite flexible in practice. Figure 2 shows three values of ß where the expected interaction strength (on the vertical axis) is plotted against the distance between origins and destinations (on the horizontal axis). All interaction fades as separation increases, but interaction decays more quickly as gets more negative. Alternatively, Figure 3 shows how different shapes of distance decay functions will "spread" interaction differently. The standard "power" decay function (used in Eq. 1, for example) will tend to predict that interactions between observations are highly concentrated among the closest pairs, while a linear decay will spread interactions out over more distant pairs.
Figure 3. Three distance decay functions that "spread" interaction differently among observations. The "power" decay function is generally the most concentrated, while linear (or logarithmic forms, not pictured) are the least concentrated. Source: authors.
Unfortunately, deciding how to measure "distance" or what form of "decay" to use can be quite difficult. Theoretical arguments about the form of decay suggest that the importance of distance decreases with the size of the trip (Haggett, 1979). For example, a walk that takes ten minutes will be fairly disrupted by a detour that takes ten extra minutes, but the same detour will matter much less in a three-hour trek. But, this might not be the same in all processes. Therefore, choosing between forms of distance decay is often a data-intensive process where many models are fit with different distance decay functions, and then compared based on their predictive ability.
In addition to the strength and the form of decay, the measurement of separation is also quite important. We have avoided this topic so far because it can be very complex. Note that the detour example above expressed distance in terms of time, not space. Other kinds of distance may not refer to space-time locations at all! Therefore, given its theoretical importance, we dedicate an entire section later to thinking about how to measure "distance" in distance decay, and discuss the social and material forces that are re-shaping distance itself.
3.2 Distance Decay in Similarity: the Variogram
Distance decay is also used in geography to refer to how nearby things tend to be more similar than distant things. This distance decay in similarity is intimately related to the idea of spatial autocorrelation, which applies in a much more general sense than the distance decay we see in interaction. This is because distance decay in similarity is not limited to studies of interaction (like trade, social networks, or migration), and so is frequently used to describe any set of geographic measurements. So, let us consider one of the most common ways of thinking about distance decay in similarity: the variogram.
At its most basic, the variogram measures the similarity between pairs of observations separated by a given distance. Variograms define similarity as the variance of the differences between measurements. It calculates this variance for sets of sites separated by some distance, h. For now, let us assume that we have some number of sites Nh that are separated exactly by h, meaning dij = h. Then, the value of the variogram for distance h, spelled , is defined as the sum of squared differences between pairs of sites i, j that are separated by exactly h:
In this instance, we can compute for any separation h, and plot the value of this function as h goes from describing very close measurements to very distant measurements. In practice, we usually compute ) for pairs of observations separated by approximately h. For example, if h = 2 kilometers, we might use pairs of observations separated by between 1.8 and 2.2 kilometers. This is because there are only a xed number of observation pairs, but infinitely many values of h for which the variogram has to be evaluated. A "best fit" curve is usually estimated to describe the shape of computed values of . As in spatial interaction models, the form of this curve is flexible. Regardless of this form, large values of indicate that observations separated by distance h are very different, and small values of indicate that they are very similar. So, the faster the variogram increases, the stronger the distance decay in similarity. Figure 4 shows this for many different "ranges," or separations at which the variogram attains its "stable" long-run value (called the "sill").
Figure 4.Three variograms plotted from simulated data. Three different "ranges" are shown that influence the speed of distance decay. The variogram with fastest distance decay reaches its stable long-run value (the "sill") first. Source: authors.
For variograms (and other related methods), distance decay means that similarity between measurements decreases as separation increases. Because variograms are not limited to studying distance decay in geographic data about interactions, they are widely used and heavily customized. As a consequence, many methods build upon the basic premise of the variogram. For example, spatial correlograms measure similarity using the correlation between pairs of values separated by a given distance. Alternatively, some scholars build variograms for distances in an ordinal sense, computing similarity among some number of nearest neighbors to each measurement. As is the case with gravity models in spatial interaction, variograms are only one part of a broad and flexible framework used throughout geographical analysis.
In previous sections, we have examined how distance decay is used and analyzed in two different senses. In both cases, distance decay describes how a kind of relationship gets smaller as the separation between things gets larger. However, spatial interaction models think of "relationship" as some kind of interaction that we directly measure, while variograms think of "relationship" as an implicit similarity between measurements. We also discussed how different mathematical forms can change how decay is distributed among pairs of observations.
This leaves one last term undefined: how should we measure "separation"? The previous sections simply referred to a "distance" or "separation," and did not specify how this was to be measured. This section will provide more detail on how separation may (or may not) be usefully defined and measured, as well as discuss how new technological and social developments affect how we conceptualize distance. Most of this discussion refers explicitly to ideas of distance decay in interaction, since these are questions where the presence (or absence) of distance decay is directly under study in a measurement of interaction. However, more abstract notions of distance and the \death of distance" affect studies of distance decay in separation, too.
4.1 Measuring Distance in Space and Time
Our initial definition of distance decay used the term "separation" to indicate that "distance" may need to be defined in very abstract ways for social or economic processes. Put simply, distance decay in a process might not be apparent if you use straight-line (a.k.a. "Euclidean") distance. In a very basic sense, humans live on a curved Earth; straight-line distance distorts this, and it is more accurate to measure distances using paths along the surface of a curved and bumpy planet. But, this is only one way that simple physical ideas of distance fail to capture our complex socially mediated reality. Paths that interactions take from their origins to their destinations are rarely straight, and often involve changes in the mode of travel. For example, a migrant might take a train to an airport, fly north to a layover, and then east to their final destination. Together, this implies a more abstract socially-mediated notion of distance, called travel cost, that reflects how expensive interaction is using some common resource like time or money. Travel cost definitions of separation have long been used to study distance decay (e.g. Harris, 1954), but can be very hard to measure or represent. In urban contexts (for example), some individuals prefer to chose paths that take less time (time costs), are straighter (turn costs), less expensive (monetary costs), more environmentally friendly (climate costs), or involve fewer changes between modes of transit (transfer costs). The considerations that any agent might make is too nebulous to be useful, so travel costs are usually measured using a weighted combination of a limited set of costs.
This method of analysis only gets us so far, however. Travel cost can vary from person to person, and individuals might strongly disagree about the "cost" of a route because they weight costs differently from one another. Costs may depend on the context in which actors make decisions (Eldridge and Jones III, 1991; Fotheringham, 1983). This means that the structure of travel costs can change within a system, or distance decay itself can behave differently depending on where in the system it is studied. Travel costs can also change over time, as individuals' judgments evolve or systems of transport are transformed. Thus, while a physical distance between two places may be fairly consistent, the relevant socially-mediated travel costs can and do vary widely between people, places, and times.
4.2 Beyond Space and Time Distance Decay
Going further, distance decay can even apply in situations where neither physical nor travel distance matters. Instead, behavioral or social measures of relation may be important when understanding how people move through their city or select where they want to live. Foundational work in geography shows that individuals perceive space in ways that may not map very well to physical or travel definitions of separation (Gould and White, 1985; Golledge and Hubert, 1982). For example, people may think of familiar or more desirable destinations as "closer," regardless of travel cost or distance. This more subjective notion of separation undoubtedly affects the way individuals reason about interaction, and thus affects how distance decay works in practice.
Alternatively, social or economic similarity itself can act as a separation over which distance decay acts (Gatrell, 1983). As an example, the idea of homophily in sociology describes how contact between similar people is more common than contact between dissimilar people (McPherson et al., 2001). Here, distance is defined as a social separation reflecting the demographic or psychological dissimilarity between people. In this case, as social separation between people increases, their interactions decrease - a social distance decay.
This removes the idea of distance decay from space and time entirely. As in our discussion of the two most common definitions of "relation," decisions about how to define "distance" or "relation" will determine how distance decay is made meaningful in a given study. This, in turn, structures any inquiry into distance decay itself. Therefore, it is important to critically articulate and defend the mechanism (or mechanisms) by which separation affects the relationships under study, and then measure separation and relation as accurately as possible.
4.3 The Present and Future Transformations of Distance Decay
As a consequence of the socially-mediated (and at times deeply personal) nature of separation, the way that distance "works" has changed substantially over time. This has been well-studied in historical work on economic geography and the urban system (Borchert, 1967). The continued relevance of distance (and, as a consequence, the existence of distance decay) is seriously debated today.
Some have speculated that physical distance is becoming less important because of the rise of online interaction (Cairncross, 2001). An instant message is approximately instant, whether you talk to your next-door neighbor or someone on the other side of the world. As a behavioral example, Lendle et al. (2016) shows that the internet supplants consumers' local knowledge networks. This decreases the importance of nearby people and/or marketplaces in learning about products and thus weakens distance decay in consumer interactions with sellers. Thus, the power of distance decay may weaken in consumer decision-making.
Ultimately, however, distance decay remains strong in many social processes. The death of distance (decay) in studies of urban transit systems may be less radical than anticipated (Rietveld and Vickerman, 2004). For studies in trade, Buch et al. (2004) discusses the "distance puzzle": even as transportation costs have been slashed dramatically (in part due to digital technologies), estimates of the strength of distance decay in trade remain stable. Buch et al. (2004) finds that this is because spatial interaction models estimate distance decay as a relative propensity for "near" things to interact more than "distant" things, not as an absolute determinant of interaction. Distant trade partners still pay more transit costs than near trade partners even though everyone pays less overall. Thus, despite the "death of distance," distance decay still exists.
Another contemporary debate about distance decay focuses on the COVID-19 pandemic. Because of "stay at home" orders and the attendant rise in remote working, some believe we are seeing structural changes in workforce composition may accelerate the "death of distance." However, there are good conceptual reasons to believe that distance decay will remain important in peoples' location and consumption decisions (Reades and Crookston, 2021). Indeed, as the internet increasingly mediates human interaction, distance decay is still quite strong according to digitally-relevant notions of separation (Couclelis, 1996; Tranos and Nijkamp, 2013). Distance decay may increase in relevance for human geography, but with digitally- or socially-mediated distances playing an increasingly important role. However, these structural changes in the geographic patterns of work and residence are still evolving rapidly as the COVID-19 pandemic evolves, so the ultimate effect of these changes remain unclear (Rose-Redwood et al., 2020).
Distance decay is a fundamental concept in geography. At its core, distance decay describes how the relationships between entities get weaker as separation increases; nearer things will tend to be more related than distant things. In practice, this applies both to cases where we see the relationships directly (like trade or migration), and cases where the relationships are only indirectly measured. We discuss how the gravity model of spatial interaction and the variogram represent these two cases, respectively. Both senses of distance decay are important to understand, as either (or both) are present in many processes. We discuss how the idea of "distance" may be more than physical, reflecting socially-mediated travel costs, individual perceptions of distance, or socioeconomic similarity. In addition to being one of the oldest concepts in geographic analysis, distance decay will remain relevant even as new technologies and world events transform how distance is experienced.
Borchert, J. R. (1967). American Metropolitan Evolution. Geographical Review, 57(3):301. doi: 10.2307/212637.
Buch, C. M., JKleinert, J., and Toubal, F. (2004). The distance puzzle: on the interpretation of the distance coefficient in gravity equations. Economics Letters, 83(3):293-298. doi: 10.1016/j.econlet.2003.10.022.
Cairncross, F. (2001). The Death of Distance: How the Communications Revolution Is Changing Our Lives. Harvard Business School Press, Boston.
Couclelis, H. (1996). The Death of Distance. Environment and Planning B: Planning and Design, 23(4):387-389, August 1996. doi: 10.1068/b230387.
Eldridge, J. D. and Jones III, J. P. (1991). Warped Space: A Geography of Distance Decay. The Professional Geographer, 43(4):500-511. doi: 10.1111/j.0033-0124.1991.00500.x.
Fotheringham, A. S. (1983). A New Set of Spatial-Interaction Models: The Theory of Competing Destinations. Environment and Planning A: Economy and Space, 15:15-36.
Gatrell, A. C. (1983). Distance and space: a geographical perspective. Oxford University Press, USA.
Golledge, R. G., and L J Hubert, L. J. (1982). Some Comments on Non-Euclidean Mental Maps. Environment and Planning A: Economy and Space, 14(1):107-118. doi:10.1068/a140107.
Gould, P. and White, R. (1985). Mental Maps. Penguin Books, Hoboken.
Griffith, D. A. (2019). Negative Spatial Autocorrelation: One of the Most Neglected Concepts in Spatial Statistics. Stats, 2(3):388-415. doi: 10.3390/stats2030027. https://www.mdpi.com/2571-905X/2/3/27.
Grigg, D. B. (1977). E.G. Ravenstein and the "laws" of migration. Journal of Historical Geography, 3:41-54,1977.
Haggett, P. (1979). Geography: a modern synthesis. Harper & Row series in geography. Harper and Row, New York ; London, 3rd ed. edition.
Harris, C. D. (1954). The Market as a Factor in the Localization of Industry in the United States. Annals of the Association of American Geographers, 44(4):315-348. doi: 10.2307/2561395.
Isard, W.. Location theory and trade theory: Short-run analysis. Quarterly Journal of Economics, 68:305-320, 1954.
Lendle, A., Olarreaga, M., Schropp, S., and Vezina, P-L. (2016). There Goes Gravity: eBay and the Death of Distance. The Economic Journal, 126(591):406-441. doi: 10.1111/ecoj.12286.
Matheron, G. (1963). Principles of geostatistics. Economic Geology, 58:1246-1266.
McPherson, M., Smith-Lovin, L., and Cook, J. M. (2001). Birds of a Feather: Homophily in Social Networks. Annual Review of Sociology, 27(1):415-444.
Oshan, T. M. (2021). The spatial structure debate in spatial interaction modeling: 50 years on. Progress in Human Geography, 45(5):925-950. doi: 10.1177/0309132520968134.
Ravenstein, E. G. (1885). The laws of migration. Journal of the Statistical Society of London, 48:167-235.
Reades, J. and Crookston, M. (2021). Why Face-to-Face Still Matters: The Persistent Power of Cities in the Post-Pandemic Era. Policy Press. Google-Books-ID: YI8kEAAAQBAJ.
Reilly, W. J. (1931). The law of retail gravitation. Knickerbocker Press.
Rietveld, P. and Roger Vickerman. (2004). Transport in regional science: The "death of distance" is premature*. Papers in Regional Science, 83(1):229-248, 2004. doi: 10.1007/s10110-003-0184-9.
Rose-Redwood, R., Rob Kitchin, Elia Apostolopoulou, Lauren Rickards, Tyler Blackman, Jeremy Crampton, Ugo Rossi, and Michelle Buckley. (2020). Geographies of the COVID-19 pandemic. Dialogues in Human Geography, 10(2):97-106.. doi: 10.1177/2043820620936050.
Stephan, F. F. (1934). Sampling Errors and Interpretations of Social Data Ordered in Time and Space. Journal of the American Statistical Association, 29(185A):165-166. doi: 10.1080/01621459.1934.10506245.
Tobler, W. R. (1995). Migration: Ravenstein, Thornthwaite, and beyond. Urban Geography, 16:327-343. doi: 10.2747/0272-36126.96.36.1997.
Tranos. E. and Nijkamp. P. (2013). The Death of Distance Revisited: Cyber-Place, Physical and Relational Proximities. Journal of Regional Science, 53(5):855-873. doi: 10.1111/jors.12021.
Wolf, L. J., Sean Fox, Rich Harris, Ron Johnston, Kelvyn Jones, David Manley, Emmanouil Tranos, and Wenfei Winnie Wang. (2020). Quantitative geography III: Future challenges and challenging futures. Progress in Human Geography, page 0309132520924722 . doi: 10.1177/0309132520924722.
- Explain the history of distance decay as a concept in geography
- Distinguish the kind of relationship being measured (such as interaction or similarity) in analyses of distance decay
- Interpret estimates of the strength of distance decay in spatial interaction models
- Compare the properties of variograms graphically
- Explain how distance decay may use more notions of \separation" that are relevant in geographic analysis, including travel costs, perceived distance, or socio-economic distance.
- What physical (or social) processes were influential in geographers' thinking about distance decay?
- What kinds of processes might exhibit very dramatic (that is, \fast") distance decay? What processes might not exhibit distance decay at all?
- What kinds of social processes might be well represented by the gravity model of spatial interaction? Which may not?
- Distance is important when measuring or modelling distance decay. What kinds of interaction costs do you experience in your daily life? | <urn:uuid:cc0e8e47-2990-4c65-bd4c-628a869a56d0> | CC-MAIN-2022-33 | https://gistbok.ucgis.org/bok-topics/proximity-and-distance-decay | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571987.60/warc/CC-MAIN-20220813202507-20220813232507-00297.warc.gz | en | 0.917952 | 6,687 | 3.46875 | 3 |
The transition from childhood to adulthood is a time of great change, both emotionally and physically, for any young person, but particularly for those living with a long-term condition. In England, more than 80,000 young people aged under 18 years are living with a life-threatening or life-limiting condition. In addition, 23% of those aged 11–15 years report having a long-term illness or disability.
The need to transition from paediatric to adult healthcare services presents a unique challenge for young people. Without appropriate planning and communication between healthcare providers, this transition is associated with increased morbidity and mortality[3–6].
Transition is defined by the National Institute for Health and Care Excellence (NICE) as “a purposeful, planned process that addresses the medical, psychosocial and educational/vocational needs of adolescents and young adults with chronic physical and medical conditions as they move from child-centred to adult-oriented healthcare systems”. Studies show that the transition process should begin at around 11–12 years of age as this leads to improved knowledge, skills and the confidence required to move to adult services[3,8]. A successful transition between paediatric and adult care improves long-term outcomes for patients and their families[4,9]. However, the quality of transition services within the UK varies, with only 50% of young people and their carers stating that they have received support from a healthcare professional overseeing the transition process prior to, during and immediately after transition to adult care.
This article aims to improve understanding of the issues young people face when transitioning to adult services and provides practical advice for pharmacy teams to help manage this process.
Guideline-based best practice
Good transition is crucial to ensure young people and their carers feel supported and empowered during a potentially unsettling time[9,11,12]. A period of overlap with paediatric and adult care services allows good collaboration and effective communication between care providers, the young person and their family. It also ensures the young person feels a part of the decision-making process, and allows time for any issues or concerns to be highlighted[9,11].
National Institute for Health and Care Excellence quality statements
In 2016, NICE published five quality statements to help improve care for young people transitioning to adult services. The quality statements cover the period before, during and after transfer of care for patients aged up to 25 years and include all health and social care settings. In summary:
- For young people with long-term conditions, transition planning should start by school year 9 (aged 13–14 years), or immediately if the young person enters paediatric services after school year 9;
- There should be an annual meeting to review transition planning;
- Each young person should have a named worker to coordinate care and support before, during and after transition;
- Young people should meet a practitioner from each of the adult services they will move to prior to transition;
- Young people who do not attend their first meeting or appointment should be contacted by adult services and given further opportunities to engage.
The transition team
Good transition to adult services requires a multidisciplinary team (MDT) approach to ensure holistic, well-rounded care by considering the medical, psychosocial and vocational needs of the young person[11,13–15]. The young person’s GP should be involved throughout the entire process, especially in situations where there is no equivalent adult service for patients who may be under a paediatric specialist team (e.g. cystic fibrosis). There should also be a named worker from both paediatric and adult services to coordinate the development and implementation of the transition care plan as described below[7,10].
Paediatric nurse specialists (e.g. diabetes or epilepsy) should be part of the transition process as they are able to highlight key issues to the adult team. Many hospitals now employ transition nurses, whose role it is to act as the named worker for the young person and to support the patient to take responsibility for their health needs in preparation for adult services.
Commissioner and/or service planner involvement is necessary to provide resource support for transition planning and to monitor the effectiveness of transition services (for example, using readmission rates or patient experience measures) to drive improvements.
At the core of the MDT should be the young person, their family and/or carers, who should be consulted and kept informed throughout the process[2,7].
The pharmacist’s role
The role of the pharmacist in transition is not well understood and needs development within the UK. Currently, there is no standardised guidance outlining pharmacist involvement in the transition process. Poor transition can result in problems with medication adherence and self-management — issues where pharmacists should be involved, regardless of sector or specialty.
Pharmacists should aim to inform and empower young people to take ownership of their treatment by addressing questions directly to the young person, where possible, and including them in decisions about their health and medications. Information and advice on medicine adherence, side effects, interactions and supply should also be provided. Pharmacists should allow young people and their families the opportunity to express any concerns and, if necessary, be willing to liaise with other healthcare providers on behalf of the patient. Pharmacists should also encourage patients to ask questions and to seek pharmacist advice whenever needed.
Along with the usual patient consultation questions, pharmacists can ask the following to determine a young person’s understanding of their condition and medication therapy:
- Has someone explained to you what this medication is for and how to take it?
- What information have you been provided?
- How do you usually take your medicine?
- Do you know what to do if you miss a dose?
- How do you find taking your medication?
- What concerns you most about your condition and taking your medication?
These questions allow pharmacists to address knowledge gaps and tailor advice to the individual patient and their family.
Community pharmacists can actively engage in conversation with young people, particularly around knowledge and perception of their condition and medications, as well as adherence and supply issues. Young people should be encouraged to be present in the pharmacy when their medicines are being supplied to create opportunities for conversation and to enable medicines optimisation.
Within a hospital setting, pharmacists working in specialist centres may already be involved in transition (e.g. cystic fibrosis, diabetes and transplant) and actively contribute to meetings with the young person, their family and members of the paediatric and adult care MDTs. This provides the opportunity for paediatric pharmacists to communicate the type of support patients and their families need to the pharmacy team within the adult service. In most instances, paediatric pharmacists can liaise with GPs or practice pharmacists to organise repeat prescriptions for patients and communicate any concerns. This ensures medicines supply is not interrupted and creates a dialogue between specialist and primary care. However, patients on high-cost (e.g. those commissioned by NHS England) or difficult to source medications may still be linked to their paediatric centres after transition because of medicine supply issues. Prescribing responsibility for such medications (e.g. tacrolimus) is usually transferred from specialist to adult or local services when it is deemed possible or appropriate. Paediatric pharmacists can ensure any changes to medications are communicated to relevant services to minimise the risk of medication-related harm. For example, a paediatric pharmacist is able to ensure the local adult service will provide ongoing supply of as many medications as possible for a particular patient. This is important because it can be difficult for families to obtain medications from multiple pharmacies.
Paediatric pharmacists can also coordinate temporary supply of certain drugs for a patient from their paediatric centre after their transition. This is to allow time for an adult service to apply for funding to supply those medications. It is crucial that pharmacists highlight this change to young people and their families to avoid confusion and minimise any disturbance to medication supply.
Unfortunately, established transition programmes are not guaranteed in district general hospitals, where a gap in service provision may exist. Regardless, pharmacists within paediatric and adult care services can still contribute at an individual patient level by providing medicines education to young people and their families to promote independent therapy management while in hospital and at home. For example, a pharmacist could prepare a take-home medication card (see Table 2) for a young patient newly diagnosed with a long-term condition (e.g. hyperthyroidism), setting out their new and existing prescribed medications, indication, dose, administration times and any other relevant information, such as how to differentiate between tablets and what side effects to look out for. Young patients may find their new diagnosis overwhelming and pharmacists are able to allay some of the patient’s concerns by using the card to facilitate discussion.
An effective, well-planned transition is essential for a smooth transfer into adult care. An ideal transition plan meets the needs of all involved, considers the thoughts and opinions of the young person and their family and allows the young person to manage their healthcare successfully.
Below is a suggested plan based on the work of Nagra et al. and current NICE guidance.
1. Timing and review
Around the age of 12 or 13 years, the paediatric team should begin planning for transition[3,7,8,10]. The exact timing differs for each child, and the move should take place during a period of relative stability rather than being determined by age alone.
Classically, it is thought that transition planning should start when a young person is aged 13–14 years, but other evidence suggests that it should start sooner and be a more gradual process[3,7,8,17]. Studies show that starting transition planning at around 11–12 years of age leads to better patient knowledge and skills[3,8]. It also means the young person and their family have time to gain the confidence needed to move to adult services.
Meetings should be held at least annually with parents/caregivers to review transition planning, with the outcome of each meeting shared with all those involved[7,10]. These meetings aim to:
- Facilitate communication between care providers;
- Involve the young person and their family in planning and decision making;
- Inform all parties of updates to care or the transition plan.
2. Named worker
A named worker should be appointed to coordinate and oversee transition[7,10]. They are the point of contact for the young person and their parents, and the link between healthcare providers.
It is the responsibility of the named worker to liaise with the MDT, and ensure details and information regarding care are up to date. This is in addition to ensuring the young person remains engaged with the process by providing support and advice relating to education, employment, independent living, health and wellbeing.
Named workers should be chosen based on the needs of the patient and can be from any health or social care background (e.g. a doctor, nurse or social worker). Therefore, a pharmacy professional actively involved in helping a young person struggling to independently manage their own therapy could be a named worker.
3. Building independence
A good transition includes encouraging young people to build social and recreational networks to aid transition and provide opportunities to gain independence[7,10]. The young person’s ability to manage their long-term condition should be assessed and support offered where appropriate (e.g. peer support groups, coaching and education, advocacy or mobile technology/digital health passports). See below for additional resources.
4. Before transition
At least three months prior to transfer of care, young people and their families should have the opportunity to meet with relevant health and social care workers from adult services[7,10]. This can be done via joint clinics with paediatric healthcare providers. If possible, the young person should be encouraged to share information about themselves in a manner of their choosing (e.g. written document). The information will be used to create a personal folder for the patient, which will facilitate development of the patient-clinician relationship prior to transfer of care and inform adult services of the health and social care needs of the young person, including emergency/contingency plans, history of unplanned admissions, the individual’s strengths and future goals, as well as the young person’s preferences for parent and/or carer involvement. It is important to respect these preferences while considering their ability to make such decisions as per the Mental Capacity Act[7,18].
5. After transition
The named worker should continue to play an active role in the young person’s care for up to six months post transition or for as long as the young person requires support. For consistency, it is also important that the young person sees the same adult services practitioner for the first two appointments. Should there be any issues with engagement, care or attendance at appointments, a discussion between the young person, their family and relevant members of the transition team should occur (e.g. GP and social care services).
A case of poor transition
The following anonymised case study is an example of poor transition. It highlights many of the issues young people face when moving to adult care services. These issues are explored further in Table 1. Consent for this case to be used was obtained from the patient’s parents.
Case study: RX is a male patient, aged 19 years, with a very rare genetic degenerative brain condition. He is registered blind, and has partial hearing, a gastrostomy, a suprapubic catheter, lung disease and is oxygen dependent. He is cared for full time by his parents and siblings. Owing to his medical conditions, RX is on several complex medications with significant input from pharmacists.
At the age of 18 years, RX’s care was moved from paediatric to adult services. At the time, his paediatric care was excellent and his parents felt they were experts on their child’s condition and their opinions were always considered. However, no transition plan was put in place for RX and he was discharged from the paediatric service.
Under the paediatric service, RX’s health needs were the responsibility of one named paediatric consultant; however, he is now under the care of multiple adult care physicians but does not have a primary consultant or a named worker. This causes complications for both RX’s care and his medications. RX’s parents feel the move has been a rollercoaster of emotions, which has left them feeling both alone and unheard. They feel lost in the adult world and feel disengaged from RX’s care.
The table below highlights the issues within the case study that could have been avoided if a thorough transition plan had been in place for RX.
Established transition programmes
While service provision is ad hoc in the UK, there are several established transition programmes, which have shown positive outcomes for young patients and their families:
1. ‘Ready Steady Go’ programme
Southampton Children’s Hospital has developed a resource, ‘Ready Steady Go’, which is a generic transition programme for patients aged 11 years and over. The programme engages young people from the beginning of their transition journey through to their move to adult services and supports young people to manage their healthcare successfully in any adult service across the country. The ‘Ready Steady Go’ programme has three questionnaires, which young people complete over their transition journey: 1) the getting ‘Ready’ questionnaire which is started at around the age of 11 years; 2) the ‘Steady’ questionnaire, which covers topics in greater depth at around age 13–14 years; 3) the ‘Go’ questionnaire which is started at around age 16 years to ensure that young people have all the skills and knowledge in place to ‘Go’ to adult services. The programme has been validated in the UK population and has been designed with the involvement of young people. This is a tool that could be used by the MDT to deliver a transition service.
2. 10 Steps Transition Pathway
Alder Hey Children’s Hospital has developed a 10 Steps Transition Pathway that describes the important steps for a young person, their parents and healthcare professionals as the young person moves to adult services. Several different transition plan formats are in use at Alder Hey, including ‘Ready Steady Go’ and some condition-specific transition plans. All young people at Alder Hey should have access to a handheld, personalised transition plan, which is commenced before their 15th birthday. The young person should also have a circle of support identified — the people who are there to help the young person, including family / friends and healthcare professionals, in addition to a named worker. A pharmacist could be part of this circle of support.
3. Great Ormond Street Hospital
A transition workbook has been curated by the Great Ormond Street Hospital to aid the young person’s journey and to maintain engagement. The workbook uses colours and shapes to help young people express their emotions, including their concerns and worries. In addition, each young person is expected to create a medication log to encourage them to be in charge of their own medication. At present, not all pharmacy team members are actively involved in this aspect of a patient’s journey; however, with more pharmacists taking on prescribing roles and carrying out clinics, this is a potential area for change.
- The Medicines for Children website has patient information leaflets on how to use medicines;
- Patient information leaflets (PILs) supplied with medications — however, pharmacists should be aware that often children and young people can be prescribed off-label or unlicensed medications, and therefore the use of a PIL is not always suitable;
- The Cystic Fibrosis Trust has a range of resources, including a transition booklet for young people with cystic fibrosis;
- The Royal College of Paediatrics and Child Health has information online, including webinars and transition resources, which look at the transition experiences of patients and their families with links to specific conditions (e.g. asthma, epilepsy and HIV);
- Great Ormond Street Hospital ‘Growing up, gaining independence’ resource for patients and parents/carers;
- Advocacy group Together for Short Lives has created a planning tool for young people with a life-limiting or life-threatening condition, and their families.
A good transition experience should not be a challenge for young people and their families. Pharmacists are medicines experts and therefore should be an integral part of the transition experience. Transition is something that should be considered in everyday practice and, although the role of the pharmacist in transition is still emerging, pharmacists should aim to harness their skills and knowledge to proactively provide medication-related support and establish themselves as members of the transition MDT.
We need your help to better understand the role of the pharmacist in transition services. The aims of this research project are:
- To establish the skills pharmacists have that can fulfil the needs of children and young people transitioning to adult services;
- To establish the potential barriers or issues there are with pharmacist involvement in transition services.
The following acknowledgement statement must be included in all publications which make reference to the ‘Ready Steady Go’ programme: ‘Ready Steady Go’ and ‘Hello to adult services’ were developed by the Transition Steering Group led by Arvind Nagra, paediatric nephrologist and clinical lead for transitional care at Southampton Children’s Hospital, University Hospital Southampton NHS Foundation Trust and are based on the work of S Whitehouse, MC Paone et al. and J.E. McDonagh et al.[13–15]. Further information can be found at www.uhs.nhs.uk/readysteadygo.
1. ‘Make Every Child Count’: Estimating current and future prevalence of children and young people with life-limiting conditions in the United Kingdom. Together for Short Lives. 2020. https://www.togetherforshortlives.org.uk/wp-content/uploads/2020/04/Prevalence-reportFinal_28_04_2020.pdf (accessed Jun 2021).
2. Hagell A, Shah R. Key data on young people 2019. Association for Young People’s Health. 2019. https://www.youngpeopleshealth.org.uk/wp-content/uploads/2019/09/AYPH_KDYP2019_FullVersion.pdf (accessed Jun 2021).
3. Nagra A, McGinnity PM, Davis N, et al. Implementing transition: Ready Steady Go. Arch Dis Child Educ Pract Ed 2015;100:313–20. doi:10.1136/archdischild-2014-307423
4. Kipps S, Bahu T, Ong K, et al. Current methods of transfer of young people with Type 1 diabetes to adult services. Diabet Med 2002;19:649–54. doi:10.1046/j.1464-5491.2002.00757.x
5. Bryden KS, Dunger DB, Mayou RA, et al. Poor Prognosis of Young Adults With Type 1 Diabetes: A longitudinal study. Diabetes Care 2003;26:1052–7. doi:10.2337/diacare.26.4.1052
6. Watson AR. Non-compliance and transfer from paediatric to adult transplant unit. Pediatric Nephrology 2000;14:469–72. doi:10.1007/s004670050794
7. Transition from children’s to adults’ services for young people using health and social care services. NICE guideline (NG43). National Institute for Health and Care Excellence. 2016. https://www.nice.org.uk/guidance/ng43 (accessed Jun 2021).
8. Shaw KL, Southwood TR, McDonagh JE. Young people’s satisfaction of transitional care in adolescent rheumatology in the UK. Child Care Health Dev 2007;33:368–79. doi:10.1111/j.1365-2214.2006.00698.x
9. Harden PN, Walsh G, Bandler N, et al. Bridging the gap: an integrated paediatric to adult clinical service for young adults with kidney failure. BMJ 2012;344:e3718. doi:10.1136/bmj.e3718
10. From the pond into the sea: children’s transition to adult health services. Care Quality Commission. 2014. http://www.cqc.org.uk/content/teenagers-complex-health-needs-lacksupport-they-approach-adulthood (accessed Jun 2021).
11. Shaw KL, Watanabe A, Rankin E, et al. Walking the talk. Implementation of transitional care guidance in a UK paediatric and a neighbouring adult facility. Child Care Health Dev 2013;40:663–70. doi:10.1111/cch.12110
12. Duguépéroux I, Tamalet A, Sermet-Gaudelus I, et al. Clinical Changes of Patients with Cystic Fibrosis during Transition from Pediatric to Adult Care. Journal of Adolescent Health 2008;43:459–65. doi:10.1016/j.jadohealth.2008.03.005
13. Whitehouse S, Paone M. Bridging the gap from youth to adulthood. Contemporary Pediatrics 1998:13–16.
14. Paone MC, Wigle M, Saewyc E. The ON TRAC Model for Transitional Care of Adolescents. Prog Transpl 2006;16:291–302. doi:10.1177/152692480601600403
15. McDonagh JE, Shaw KL, Southwood TR. Growing up and moving on in rheumatology: development and preliminary evaluation of a transitional care programme for a multicentre cohort of adolescents with juvenile idiopathic arthritis. J Child Health Care 2006;10:22–42. doi:10.1177/1367493506060203
16. Communicating with parents and involving children in medicines optimisation. The Pharmaceutical Journal. Published Online First: 2017. doi:10.1211/pj.2017.20203683
17. Hobart CB, Phan H. Pediatric-to-adult healthcare transitions: Current challenges and recommended practices. American Journal of Health-System Pharmacy 2019;76:1544–54. doi:10.1093/ajhp/zxz165
18. Mental Capacity Act 2005. UK government. https://www.legislation.gov.uk/ukpga/2005/9/contents (accessed Jun 2021).
19. 10 Steps Transition to Adult Services. Alder Hey Children’s NHS Foundation Trust. 2021. https://alderhey.nhs.uk/services/transition-adult-services (accessed Jun 2021).
20. Transition workbook. Great Ormond Street Hospital for Children. 2021. https://media.gosh.nhs.uk/documents/SH-255-GOSH-booklet-FINAL.pdf (accessed Jun 2021).
21. Carbimazole 5mg tablets. Summary of product characteristics. 2021. https://www.medicines.org.uk/emc/product/4267/smpc#gref (accessed Jun 2021).
22. Propranolol 10mg tablets BP. Summary of product characteristics. 2021. https://www.medicines.org.uk/emc/product/5888/smpc (accessed Jun 2021).
23. Carbimazole. British National Formulary for Children. 2021. https://bnfc.nice.org.uk/drug/carbimazole.html (accessed Jun 2021).
24. Propranolol. British National Formulary for Children. 2021. https://bnfc.nice.org.uk/drug/propranolol-hydrochloride.html (accessed Jun 2021).
This article, titled ‘Essential Elements of Valid Custom’, is written by Sanjana Shikhar. It covers custom as a source of law and aims to explain the concept and the essentials of custom in detail.
Have you ever envisioned a situation in which there were no written laws? You might wonder whether it would result in anarchy, or how a particular class or sect could be governed and regulated.
In ancient times, when there were no laws, people were ruled by the customs of their local society. Because of this historical significance, custom is a very authentic and binding source of law. There are several fundamental grounds that qualify a custom as genuine and thus allow it to be recognised by the judiciary and the legislature. These can also be called the tests required for a custom to be valid.
There is not just one or two such essentials or tests; there are several. But before delving into them, we should first have a clear understanding of exactly what custom is and the concept behind it.
The term ‘custom’ comes from the French word ‘Coustume’. Some claim the term is derived from the Latin word ‘Consuetudo’; others say it comes from ‘Consuescere’, which means ‘accustom’. According to some, it is derived from two words: ‘con’, meaning “to express intense energy”, and ‘suescere’, meaning “to become acclimated”.
A custom is a continual series of behaviour that has come to be considered as fixing the norm of conduct for members of society due to the acquiescence or express agreement of the community observing it.
When people find an act to be desirable and helpful, apt and pleasant to their nature and disposition, they use and practise it on a regular basis, and a custom is formed through the repeated use and multiplication of the act. One definition describes custom as a ‘rule of conduct, binding on those within its scope, established by lengthy usage’.
Custom can simply be defined as a long-standing practice or unwritten regulation that has taken on a binding or compulsory status. A valid custom must be of immemorial antiquity, certain and reasonable, compulsory, and not in conflict with statute law, even if it deviates from the common law.
General customs are those that apply to the entire country, such as the general customs of merchants. A usage with specific characteristics observed by a particular class or trade is referred to as a particular custom. Local customs are the customs of certain parts of the country. It is often said that ‘Custom is to Society what Law is to the State’, meaning that the influence of custom on society is parallel to that of law on the State; custom refers to the sum of behaviour patterns carried by tradition and lodged in the group.
Theorists have defined custom as follows:
Allen: Custom is the regularity of people’s habits or actions in similar circumstances.
Salmond: Custom embodies those values that have been recognised by the national conscience as principles of fairness and public benefit.
Holland: He defined custom as a frequently observed line of conduct.
Austin: Custom is a rule of conduct that the governed adopt voluntarily rather than in obedience to a law imposed by a political superior.
Not every custom can be legally enforced. Like anything else, a custom must be proven in court before it can have the force of law. Antiquity, reasonableness, morality, continuity, peaceable enjoyment, consistency, conformity with statute law, certainty and obligatory force are the judicial tests that have evolved for customs to be legally recognised by the courts and to gain the binding force of law.
II. Custom as a source of law
A custom is a set of rules that people follow consistently and voluntarily. In practically all communities, custom plays a significant role in regulating human behaviour. It is, in fact, one of the earliest sources of legal authority. As society evolves, however, customs fade away, and legislation and legal precedent become the dominant sources of law.
A custom is a specific norm that has existed, actually or supposedly, from time immemorial. Customary law is law founded on custom, and a people’s customary law has been regarded as the main pillar of their legal identity.
“A custom is a specific concept that has existed either actually or hypothetically from time immemorial and has obtained the authority of law in a specific territory, despite or in opposition to the general precedent-based law of the community,” according to Halsbury law.
III. Origin of Custom
“Custom is idea subsequent to that of Themistes or judgments,” writes Sir Henry Maine. Themistes were judicial prizes given to the King by the goddess of justice in Greece.
He described the development in distinct steps. They are:
- Law by rulers under divine inspiration: At first, laws were enacted by monarchs who sought divine approval for their decrees. They were thought to be God’s messengers, laying down the law for the people.
- Development of customs: People gradually develop the habit of following their rulers’ mandates, which eventually becomes customary law and a part of people’s daily lives.
- Knowledge of law in the hands of priests: A small group of people, mostly religious figures, study the understanding of customs and practices. This becomes possible as the rulers’ power over the people erodes. Priests research customs, discover patterns, comprehend their significance, and formalise them.
- Codification: The codification of these laws is the last and final step. Priests meticulously research customs and record them on paper. The code is then promulgated and spread to other locations and territories.
IV. View of the Scholars
- John Salmond: “Custom embodies those values that have been recognised by the national conscience as principles of fairness and public benefit.” A valid custom, according to Salmond, possesses absolute legal authority and has the force of law in and of itself.
- John Austin: “Custom is a norm of conduct that the governed observe spontaneously rather than in accordance with a law established by a political superior.” Austin’s views were often seen as hostile to customary law because he believed that the political superior was the sole source of law and that customs were not true law: to be considered law, they needed the Sovereign’s assent and direction.
V. Theories of Custom
1. Historical Theory
As this school points out, the custom has its own legitimacy because it would not exist unless it met some fundamental demands of the general population or met some specific societal needs.
The formation of law is independent of any individual’s subjective will. It is due to our understanding of the various groups and civilizations that have existed throughout history. The whole public’s common conscience is used to create custom. It comes from a deep sense of rightness. The collective will of the people gives law its existence.
Savigny called this the ‘Volksgeist’.
2. Analytical Theory
The Analytical theory’s main proponent was Austin. Customs, in his opinion, lacked any legal authority. Their legal status is always contingent on the Sovereign’s approval. Customs, he believed, were only a mirror of the law, not the ‘true law.’ For customs to have any binding force on people, they must be modified and approved by judges, jurists, or rulers. This corresponds to his belief that all law is the ‘Will of the Sovereign.’
VI. Essential elements of valid custom
As noted above, not every custom can be legally enforced; a custom must be proven in court before it can have the force of law. Certain judicial tests have therefore evolved for a custom to be legally recognised by the courts and to gain the binding force of law.
These tests given by Blackstone are as follows:
1. Immemorial Antiquity
A custom must be immemorial to be considered valid. It must be old or ancient, not recently created. Allen, Paton, Salmond and other jurists agree that in order for a custom to have legal validity, it must be proven to be of ancient origin. As Blackstone puts it, a custom, in order to be legitimate and binding, “must have been used as long as the memory of man runneth not to the contrary”.
The idea of immemorial custom was derived by the law of England from the Canon law, and by the Canon law from the Civil law. In Civil law and Canon law, and the systems derived from them, time immemorial refers to a period of time so distant that no living person can recall it or give evidence about it.
In England, a custom must date from the reign of Richard I; the limit of legal memory for a custom to be considered legitimate is 1189, the first year of his reign. In India, however, the English notion of ‘immemorial origin’ is not fully maintained.
In Kuar Sen v. Mamman, the Allahabad High Court held in 1895 that applying the English ‘rule of 1189’ in India would be inexpedient, since it would deny recognition to many customary rights that had grown up with the comparatively recent expansion of villages and other settlements. In Ambalika Dasi v. Aparna Dasi, the Calcutta High Court held that either 1773 A.D. or 1793 A.D. is the appropriate period for treating a custom as having existed since time immemorial.
If examples of an alleged custom have been acknowledged within the last 20 years, the assumption is that it is of immemorial antiquity, according to the Bombay High Court. Similarly, the Andhra Pradesh High Court, citing the Bombay High Court, held in Venkata Subba Rao v. Bhujangyya that a 40-year-old custom is enforceable.
The Supreme Court, however, ultimately settled the matter in Gokalchand v. Parvin Kumari, stating that the English rule that a custom must have been in use for so long that the memory of man runneth not to the contrary should not be strictly applied to Indian customs.
In India, it is believed that a custom must be of ancient origin, although, unlike English law, there is no set era during which it must have existed. The reason for not enforcing a modern custom is that many of the novel traditions would become law if they were enforced. The law recognizes that current or irrational custom should not be accepted in order to preserve the force and strength of precedent.
2. Reasonableness
According to the second legal criterion, a custom must also be reasonable; it cannot be unreasonable. It must be beneficial to society and practical. When a party challenges a custom, that party must show the court that the custom is unreasonable.
That is, the individual challenging the custom bears the burden of proof. To determine whether a custom is acceptable, it must be traced back to its inception. A custom’s unreasonableness must be such that enforcing it causes more harm than if there were no custom at all. As a result, a custom is invalid if it appears to be contrary to right and reason, and if it is enforced, it is likely to cause more harm than good.
A custom is contrary to reason, according to Sir Edward Coke, if it contradicts the principles of justice, equity and good conscience. Salmond rightly suggests that before a custom is denied legal status, it must be determined whether the harm caused by enforcing it would outweigh the harm of disappointing the natural expectations of the people.
It has been held that a custom which is not reasonable is bad in law and not obligatory. The practice of sati, for example, could not be recognised as a legal custom because it ran counter to a rational sense of justice and goodness. It was established in Newcastle-under-Lyme Corporation v. Wolstanton that courts will not enforce unreasonable customs, since the law will not accept what is unreasonable and inequitable.
In Lutchmeeput v. Sadaulla, the plaintiff, a zamindar, sued to prevent the defendants from fishing in certain bhils (ponds) that were part of his zamindari, and the defendants contended that they had a prescriptive right to fish under a custom according to which all residents of the zamindari could fish in the bhils. The alleged custom was found to be unreasonable, because it would have allowed the defendants to take away all of the fishing rights in the bhils, leaving nothing for the plaintiff, who was admittedly their owner.
When a custom is not opposed to the fundamental principles of morality, the law of the state in which it exists, or the principles of justice, equity and good conscience, it should be considered adequately reasonable. In Produce Brokers Co. v. Olympia Oil and Coke Co., the Divisional Court of the King’s Bench defined the criterion as ‘fair and proper, and such as reasonable, honest, and fair-minded men would accept’.
In the case of Budansa v. Fatima bi, a custom that would allow a woman to marry again during her husband’s lifetime without any established laws requiring the first marriage to be dissolved before the second marriage is contracted was found to be against public policy.
The criteria were articulated even more broadly in Robinson v. Mollett, by Brett, J: ‘whether or not it is in accordance with fundamental principles of right and wrong.’ It is not the validity of a custom that must be reasonable, but the duration of validity of a custom must be reasonable; yet, a precedent that is clearly and substantially unreasonable may be overruled rather than followed.
According to Prof. Allen, “The true rule appears to be that a custom will be admitted unless it is unreasonable, rather than admitted only if it is reasonable.”
3. Morality
The third criterion for a valid custom is that it must not be immoral. A well-established rule is that a custom should not be incompatible with decency and morality. In Mathura Natkin v. Esu Naikin, the Bombay High Court ruled that the practice of adopting girls for immoral purposes, such as dancing, is prohibited because it was designed to perpetuate the profession.
The custom of marrying a daughter’s daughter has also been declared immoral, in Balusami v. Balakrishna. In Gopi v. Jaggo, however, the Privy Council recognised and upheld a custom that sanctioned a woman’s remarriage after her husband had abandoned and deserted her.
In the case of Narayan v. Laving, the Bombay High Court ruled that a custom allowing a woman to quit her husband at her leisure and marry again without his consent was immoral. In Keshav Hargovan v. Bai Gandi, the same Court ruled that a custom in which a marriage connection might be severed by either husband or wife against the divorced party’s wishes for a fee was immoral.
4. Continuity
The fourth criterion for a genuine custom is that it must have been followed continuously for a long time. The general rule is that if a custom has not been followed constantly and uninterruptedly, the presumption is that it never existed. It must have existed and been acknowledged by the community for a period that may be considered reasonable under the circumstances.
It was established in Muhammad Hussain Faroki v. Syed Mian Saheb that there is no custom unless there is continuity. If one custom abrogates another, the latter ceases to exist. Blackstone distinguished between the interruption of a 'right' and the interruption of mere 'possession'.
A custom comes to an end when the 'right' itself is given up, however briefly. If possession is interrupted for a period of time but the claim to enjoy the custom is not abandoned, the custom survives; but if the right is relinquished, even for a single day, the custom is extinguished.
5. Peaceable Enjoyment
The next important test is that custom must have been enjoyed peaceably. The presumption that a custom began by consent, as most customs do, will be disproved if it has been questioned in a court of law for a long time. As a result, in order for a custom to be enforced, it must be demonstrated that the custom has been followed without interruption or competition. A custom is founded on consent or habit, and we cannot argue that it was based on the universal consent of the people until the tradition existed undisturbed.
6. Consistency
A custom must not be in conflict with other prevailing customs; it must be consistent with them. Differing customs would produce different rules of behaviour for the same situation. One custom cannot, therefore, be set up in opposition to another.
7. Conformity with Statute Law
A custom is valid only if it complies with the law; it must not be in violation of statute. This rule is observed as a positive legal principle in England and in other nations that follow English law, such as India. It is not, however, followed by Roman law or many continental systems: Justinian lists various statutes in his corpus juris that had fallen out of favour owing to later contrary custom.
Savigny commented on this subject, stating that customs and statutes are treated equally in terms of legal efficiency, and that customary law can alter or repeal a statute, as well as create a new norm to replace a statutory rule that has been repealed. In Scotland and ancient Greece, a statute may be rendered obsolete by subsequent conflicting custom.
However, in India, the position is clear that custom cannot be in conflict with statute law, as the Indian Supreme Court declared in Mohammad Baqar and Ors. v. Naim-Un-Nisa Bibi. Custom, obviously, cannot override newly adopted legislation. For example, the established legislation governing such issues has abolished all customary forms of marriage, adoption, succession, or property among Hindus. As a result, an ancient inconvenient and unjust custom cannot be used to justify a violation of the law.
According to Coke, "No custom or prescription can take away the force of an Act of Parliament." It should be noted, however, that different writers hold opposing viewpoints on this subject. If the enacted legislation comes first, a later custom can repeal or modify it.
If customary law is the earlier, it can likewise be superseded by later legislation. "If we evaluate conventions and legislation in terms of their legal efficacy, we must put them on the same level," Savigny says. "Customary law has the power to amend or repeal a statute, as well as to create a new rule to replace a statutory rule that has been repealed." "The power of customary law is equal to that of statute law," Windscheid claims. "As a result, it has the power not only to enhance but even to override current legislation."
8. Certainty
A valid custom requires certainty as a precondition; a custom cannot be undefined and uncertain. Wilson v. Wills established that a custom must be specific and not ambiguous: it is impossible to recognise a custom that is vague or indefinite.
It’s more of an evidentiary rule than anything else. A clear proof that a custom exists as a matter of fact, or as a legal presumption of fact, must be presented to the court. The plaintiff in a particular case claimed a customary right of easement for the shadow cast by the branches of trees hanging from the neighbour’s field. Mr Justice Pandalai of the Madras High Court declared that a custom relating to the shade of trees could not exist since it is so vague, ambiguous, and ephemeral that it cannot give rise to any customary right.
9. Obligatory Force
To be legally recognised as valid, a custom must be followed as of right. This means that the custom must have been observed by all parties involved without the use of force and without needing the permission of those adversely affected by it. It must be regarded by those affected as an obligatory or binding rule of behaviour, not merely as an optional guideline. These requirements are encapsulated in the rule that the user must be neither by force, nor by stealth, nor by leave.
In Hammerton v. Honey, the court found that if a custom is not observed for a lengthy period of time, it is presumed that the custom never existed.
In conclusion, it can be said that customs are the most significant, and in some cases, the only source of law, in the early stages of society. All legal systems are built on the foundation of customs. They are created as a result of society’s existence. The primitive society’s recurring habit is known as custom.
A custom is a norm or practice that has been observed by humans since the beginning of time. Customs are rationalized, and legal principles are assimilated and embodied. Any legal system can be traced back to the impact of custom.
The creative role of the magistrates in Roman law, of the equity judges in English law, of a galaxy of great legal writers such as Blackstone, and of the Smritikars, Commentators and Privy Council rulings in Hindu law have all had a significant impact on the form and substance of customs.
Custom is an important source of law. But it must be a valid custom. We have discussed all the essentials of a valid custom. Each and every element of valid custom is important. They are the prerequisites for a custom to be valid.
There was a time when most components of the law were based on custom and not codified, but such laws are now created through the due process of legislation. The advancement of science and technology gives rise to new mechanisms and techniques; new claims are recognised as new facets of life develop, and these are given the character of statutory rights.
The tests required for a valid custom are therefore very necessary: if meaningless practices were accepted or certified as customs, there would be a danger of their being codified as law. Many practices have been rejected by the courts on the above grounds; for example:
The custom of marrying a daughter’s daughter was declared immoral in Balusami v. Balakrishna, and it was established in Newcastle-under-Lyme Corporation v. Wolstanton that courts will not enforce unreasonable customs, since the law will not accept what is unreasonable and inequitable.
As we all know, the custom has always been an important source of law. Irrelevant and meaningless practices would have an overall adverse impact on people. In this way, we can say that the essentials of customs are very significant because they have an impact on the formation of law and in turn on the people’s lives.
DAVID J. BEDERMAN, CUSTOM AS A SOURCE OF LAW 25 (Cambridge University Press 2010).
Supra note 4.
Supra note 9.
Kuar Sen v. Mamman And Ors., (1895) ILR 17 All 87.
Srimati Ambalika Dasi And Ors. v. Srimati Arpana Dasi And Ors., 47 Ind Cas 402.
Nannapaneni Venkata Subba Rao v. Thummala Bhujangayya (Died) And Ors., AIR 1960 AP 412.
Thakur Gokalchand v. Parvin Kumari, 1952 AIR 231.
Supra note 1 at 10.
Newcastle-under-Lyme Corporation v. Wolstanton, Ch 92.
Lutchmeeput Singh v. Sadaulla Nushyo And Ors., (1883) ILR 9 Cal 698.
Produce Brokers co. v. Olympia oil and coke co., A. C. 314 ; 85 L. J. (K. B.) 160.
Budansa Rowther And Anr. v. Fatma Bi And Ors., 22 Ind Cas 697.
Robinson v. Mollett, (1875) LR 7 HL 802.
Supra note 1 at 11.
Mathura Natkin v. Esu Naikin And Ors., (1880) ILR 4 Bom 545.
Balusami v. Balakrishna, AIR 1957 Mad 97.
Gopi Krishna Kasaudhan v. Jaggo, (1936) 38 BOMLR 751.
Narayan Bharthi v. Laving Bharthi And Ors., (1878) ILR 2 Bom 140.
Keshav Hargovan v. Bai Gandi, (1915) 17 BOMLR 584.
Muhammad Mahmood Hussain Faroki alias Chan Bash v. Syed Abdul Huq alias Sabju Saheb, minor by guardian Syed Miah Saheb, (1942) 1 MLJ 564.
Supra note 1 at 12-13.
Mohammad Baqar And Ors. v. Naim-Un-Nisa Bibi And Ors., AIR 1956 SC 548.
Wilson v. Wills, 69 S.E. 755, 154 N.C. 105 (N.C. 1910).
Hammerton v. Honey, 24 WR 603.
Supra note 27.
Supra note 20. | <urn:uuid:1f3fb5b7-3ec7-41f6-ab02-0d2a1f835398> | CC-MAIN-2022-33 | https://www.legalbites.in/essential-elements-of-valid-custom/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570921.9/warc/CC-MAIN-20220809094531-20220809124531-00297.warc.gz | en | 0.958628 | 5,666 | 3.296875 | 3 |
APPENDIX 2 Convict Labor at the Ashio Copper Mine
The beginning of convict labor
Convict labor was first used at Ashio before the Furukawa takeover of the mine. Little is known about the precise details, but it would seem that convict labor was introduced around the time of the opening of the Ashio prison on 10th December 1872 1). The prison was established in accordance with the Prisons Law passed in April the same year which provided for terms of imprisonment in place of the floggings that had previously been the norm.
What is certain is that convict labor was being used when the Furukawa company took control of the mine in 1877. The following is an extract from "Buildings at Honzan Dezawa", dated March 1877, an inventory of buildings taken over by the Furukawa company from the contractor Fukuda Kin'ichi3).
1. Prefectural penal station 1 building (width: 2 ken, length: 3 ken)
That some 50 convicts were at Ashio in August 1877 we know from a report sent from Ashio to the Furukawa head office, which states that, as part of the customary greetings to be offered to the employees for the Obon festival, "three packs of vermicelli noodles (sômen) [are to be given to] each prisoner; [there are] 50 prisoners, therefore 150 packs"4). The "Autobiography of Kimura Chôshichi" also records that the contractor Kamiyama Seiya "used convicts from Tochigi prefecture. This was how Ashio came to acquire a bad reputation as a place where convicts worked"5). Kamiyama Seiya employed 41 miners and hauliers and was the most powerful of the shitakaseginin. It is evident, then, that convict labor was in use at Ashio before Furukawa took over. Yet far from wishing it to cease, the company actually had plans for its expansion, as the following extract from a letter (8th October 1880) of the newly appointed mine director, Kimura Chôshichi, makes clear:6)
"We are curently employing a total of 750 workers at the mine, and in order to expand our operations, we have requested the Tochigi Prefectural authorities to provide us with a further 200 convict laborers whom we intend to put to work hauling firewood and charcoal. By this means we intend gradually to expand our operations to the point where we shall obtain a most profitable result".
Further evidence of the expansion of convict labor under the new company regime is the increase in the size of the prisoners' quarters at Ashio, from about 94 sq. meters in 1877, when Furukawa took over, to 344 sq. meters in December 1886 7).
What kind of jobs were the convicts made to do? As noted from Kimura Chôshichi's letter quoted above, they were mainly laboring jobs, particularly those which were transport-related such as "hauling firewood and charcoal". They were relied on, for instance, to haul the fuel supplies of wood and charcoal needed in the refining process. Frequently cited in this connection is an episode from the "Life of Furukawa Ichibei"8):
At that time (1881) our greatest operational handicap was a shortage of timber and charcoal as well as a shortage of labor." How Ashio welcomed the arrival of the convicts can be seen from a report sent to (Furukawa) : "In addition, following requests to the prefectural office, the despatch of the convicts proceeded with unexpected ease, and on the first of the month we were sent 20 hardened criminals who had been convicted of serious crimes, for which we were extremely grateful". However, it was not an easy matter to put the convicts to work. One day in 1881, five convicts working in pit 53 tied up their guards, escaped via pit no. 1, crossed the mountain leading to Sunokobashi and headed for Jôshû. Four were later recaptured, but the fifth evaded arrest, so we hired 18 hunters from Jôshû and ordered them to shoot (the fugitive) on sight. Because of the disturbances, the other convicts were not allowed to work for two days, as a result of which, there were not enough men to haul charcoal, and the furnace workers' holidays had to be cancelled".
This episode suggests that convicts worked underground as hauliers and that the transport of timber and charcoal depended entirely on their labor.
In the early period of Furukawa control, convict labor supplied a large part of the unskilled workforce. But although such labor was cheap, the amount of it was limited and difficulties arose over the varying numbers sent at the convenience of the penal authorities. Chart 4 shows "Tochigi Prefecture Statistics" figures for the annual numbers of inmates at Ashio prison in the years 1882 to 1888 as well as the dates in each year of the maximum and minimum numbers of prisoners.
[Chart 4: annual numbers of inmates at Ashio prison, 1882-1888. Source: compiled from "Tochigi Prefecture Statistics", 1886, 1887 and 1888.]
The chart shows that only in the years 1882 and 1883 were the numbers of inmates stable, whereas after 1884 the gap between the maximum and minimum numbers went from twofold to threefold. The numbers of arrivals as against those of discharges also fluctuated greatly, and the number of prisoners entering the prison in 1886 was 4.3 times the daily average prison population for the year. The period when the prison held the fewest inmates was between September and November, which was the busiest time of year for farmers and the time when Ashio suffered most from labor shortages. There were two reasons for this situation. Firstly, there was the effect of the two 'regional concentration prisons' built at Tôkyô and Miyagi in 1879. The Interior Ministry's Directive No. 20 of March that year stated that all prisoners in the prefectures in the Kantô and Chûbu regions, which included Tochigi, "serving sentences in excess of 1 and a half years are to be held at special concentration prisons"9). This meant that the only prisoners left in Tochigi prisons were those serving less than one and a half years, which reduced the period of time they were available for work at Ashio.
The second reason was the restrictions laid down by the Penal Code of 1872, which specified five categories of prison sentence in a class system based on the length of sentence, ranging from sentences which involved fixed periods of hard labor down to those which incurred penalties of 'light' labor 10). According to the Code, a prisoner who received a class 5 sentence (hard labor) would be required to perform "such tasks as hauling earth and stone, clearing uncultivated land, pounding rice, pressing oil, and breaking rocks". After 100 days of this, however long the sentence, he would progress to class 4, which involved such jobs as "the construction of official buildings, repairing roads". As mining clearly involved class 5 labor, it was not possible for mining companies to use convicts for more than 100 days and comply with the law. The Penal Code was revised in 1881, and the class system of sentencing was abolished as it had been found to be impractical 11). Thereafter, it was permissible to use convict labor in mines for long periods, but its use was still subject to a number of official restrictions. For example, Article 42 of the 1881 Penal Code stipulated that in all cases where convict labor was employed outside prisons, a fixed number of prison wardens and overseers had to be present 12): "In cases where prisoners are engaged in labor beyond the confines of the prison establishment, the group must number no fewer than 10 and no more than 15 prisoners and must be supervised by a minimum of one warden and two overseers". Even when the Ashio gaol held many prisoners, not all could be put to work. The largest number of convicts which the 9 wardens, the 7 or 8 overseers and the 1 or 2 assistant overseers at the Ashio gaol13) could therefore legitimately manage at work was 5 groups of 15 prisoners, or 75 men in total. Even assuming that nearly all the gaol staff were engaged at any one time in outside supervision, there would still have had to be a number who remained at the gaol to guard those prisoners not working outside, so the maximum number of convicts able to work outside on any one day would have been in the order of 4 groups, or 60 men in total.
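The staffing arithmetic implied here can be set out explicitly. This worked calculation is added purely for illustration; it takes the upper staffing figures given above and assumes each group required its full statutory complement of one warden and two overseers (assistant overseers counted with the overseers).

$$
\text{groups limited by wardens} = \frac{9}{1} = 9, \qquad
\text{groups limited by overseers} = \left\lfloor \frac{8 + 2}{2} \right\rfloor = 5
$$
$$
\min(9, 5) \times 15 = 75 \text{ men}, \qquad (5 - 1) \times 15 = 60 \text{ men once one group's staff remained at the gaol.}
$$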
The situation was mentioned in a letter of 1881 from Kimura Chôbei to head office 14):
"We recently sent 15 serious offenders back to Tochigi, and of the 103 men currently here, we can use no more than 60 outside the gaol. There is therefore a shortage of convict labor here and the situation is very difficult. Consequently, we have decided to purchase three or four horses to transport charcoal from Kuzôzan."
The convicts' working conditions
We have no precise details as the working conditions under which the convicts at Ashio were forced to labor. The only evidence of the pay they received is given in the "Conditions at the Ashio Copper Mine" (1884) by Ôhara Junnosuke who notes that "the transport (of timber and charcoal) is mostly carried out by convict labor from Tochigi Prefecture (who receive 13 sen a day)"15). However, it would not be right, on the basis of this single piece of evidence, to assert that each and every convict was paid 13 sen a day, since we know of the payment regulations for prisoners16) instituted in July 1881 which decreed that "after 100 days of a prisoner's fixed sentence have elapsed, a schedule of remuneration will be assessed for each prisoner who will be allowed to keep 1/10th of the amount allotted to him; the remainder will be retained by the prison authorities". This regulation was incorporated as it stands into the 1881 Penal Code adopted in September that year. Prisoners thus had to work for nothing for the first 100 days of their sentence and thereafter received only 1/10th of their remuneration which was itself in any case far lower than the rate paid to non-convict workers17).
The regulation was, however, soon revised: "those (prisoners) serving under the present regulations will be paid 1/10th of their wage if serving sentences in excess of seven years; those serving under five years will be paid 2/10ths"18). This was indeed an improvement, but not much of one. The 13 sen a day paid out by the Furukawa company was not a wage paid directly to the prisoner but remuneration paid to the prison authorities. While the prisoner's meals were of course provided by the state, he was paid nothing for his first 100 days' labor, and the 1 sen 3 rin or 2 sen 6 rin a day he received in hand thereafter represented only 1/15th or 2/15ths of the daily wage of the ordinary worker.
Convicts' working hours were prescribed in detail by the Penal Code and were not as long as might be expected if seen in the light of the conventional image of convict labor, although others might think it only reasonable that convicts should not have had to work unduly long hours. We have already noted that, under the 1872 Penal Code, class 5 serious offenders' sentences only incurred 100 days of hard labor. The less serious class 4 offenders had to work 260 days in a 5 year period if serving a life sentence and 100 days if serving a one year sentence. Compulsory labor was further commutated for class 3 prisoners. Prisoners worked an 8 hour day, from 7 a.m. to 5 p.m. with a break from 11 a.m. to 1 p.m. The break was extended to three hours, from 11 a.m. to 2 p.m., between the 1st May and the 31st July. Work then ended at 6 p.m., but it was still an 8 hour day19). This regime was revised in 1881 when, as has been described above, the various classes of forced labor were abolished. This was in itself certainly a change for the worse, but working hours per month were prescribed in great detail, and the average working day, at 7 hours 41 minutes, was actually 20 minutes shorter than before20). From the point of view of working hours therefore, convicts were better off than ordinary workers. One can, of course, question the extent to which the new regulations were respected, but there is evidence from the Furukawa silver mine at Karuizawa in Fukushima Prefecture which refers to "reduced working hours for convicts"; the Penal Code could not be completely ignored. Although, strictly speaking, not a matter of 'working conditions', it should not be overlooked that, when working outside the jail, pairs of prisoners were chained to each other to prevent escape attempts, and there were numerous punishments for those who broke prison regulations. At the Ashio jail, 10 prisoners in 1886, 4 in 1887 and 3 in 1888 were punished by having their meals cut21). This penalty, which meant that the size of a prisoner's meal would "be cut by 1/2 or 2/3 of the normal amount and (would) not include soy sauce (shôyu) or two portions of vegetables"22) was particularly severe for those doing hard labor. However, the following two sources indicate more clearly than anything else what the working conditions of convicts at Ashio were really like. The first is part of an official account of the convicts' escape attempt referred to earlier in a quotation from "The Life of Furukawa Ichibei"
"[July 1881] Four convicts attempted to escape from Ashio jail. After rearrest, they resisted efforts to put them back in their cells and violently attempted to escape once again so that the authorities, in their efforts to restrain them, were forced to use staves which happened to be to hand with the result that three were beaten to death and one was seriously injured"23).
This short account clearly indicates the nature of the harsh treatment that was meted out to convicts who tried to escape. Killing three escapees and seriously wounding another was tantamount to lynching them. At a glance, it would seem that the action was a justifiable case of the authorities being 'forced to' use violent restraint in the face of violence on the part of the convicts, but in fact, the account makes it clear that violence was used against the convicts not in the attempt to arrest them, but after they had been arrested and were being put back in their cells. It is hard to imagine that men who had already lost their freedom could not be 'restrained' without beating them to death. Even if the convicts did indeed 'violently attempt to escape once again', they could easily have been prevented from doing so by the wardens and overseers who were normally armed with guns and swords. However, these were not the weapons used against them; they were killed with staves. This was not a case of guns and swords being used to restrain convicts who were trying to escape, but of the killing of three men and the wounding of a fourth with staves that just happened to be 'to hand'. The fact that this violence was committed in front of the prison cells, and the convicts were 'restrained' with "staves which happened to be to hand" is evidence that this was, in effect, a lynch party, a demonstration for the benefit of the other prisoners.
The second source which reveals something of the true manner in which convicts were put to work at Ashio is the statistics shown in chart 5 below. These are figures for the number of deaths in the three Tochigi Prefecture prisons.
[source] "Tochigi Prefecture Statistics" 1886, 1887, 1888. Death sentence inmates omitted.
The fact that the Ashio prison's death rate is considerably higher than those of Utsunomiya and Tochigi is telling evidence of the real nature of convict labor, especially in mines, despite the seemingly 'reasonable' conditions laid down for it in the Penal Code.
The significance of convict labor in the development of the mining industry
Much has been made of the importance of the role played by convict labor in the development of Japanese capitalism and particularly in that of the mining industry. Previous studies into the use of convict labor in the Miike and Horonai coal mines have greatly emphasized the significance of such forced labor in the mining industry, but after a detailed examination of the use of convict labor at Ashio, it is my view that such conclusions deserve to be challenged.
"The most expedient way of improving productivity (was) with the use of convicts 'on loan' from the government. That such labor was used at Takashima, Miike, Nakakosaka and Horonai is well-known, but it was also employed at Asato and Besshi. In particular, convict labor enabled the exploitation of the Ônaori (bonanza) at the Ashio copper mine in 1884. Convict labor can be said to have underpinned the development of the Japanese mining industry in its early phase"24).
As Tsuda affirms, the use of convict labor was certainly widespread throughout the mining industry. In addition to the places he mentions, from the 1870s to the 1890s, convict labor was also employed at Kanehiramura copper mine in Kanazawa Prefecture, a mine in Hôjô Prefecture, Seishozan mine in Shiga Prefecture, Yoshioka copper mine in Okayama Prefecture, the state-owned mine of Ikuno, the Handa and Karuizawa silver mines in Fukushima Prefecture, Chikuhô coal mines, and by the Nishitani mining company in Fukui Prefecture25). A high proportion of court sentences which included hard labor outside the prison specified that such labor was to be performed in mines. But was the importance of such labor in the development of the Japanese mining industry sufficient to warrant the claim that it "underpinned the development" of the industry?
At the Miike and Horonai coal mines convict labor was heavily employed in the actual process of extraction, a situation which continued at Miike for many years26), but as Hidemura Senzô points out, the convicts used at Miike and Horonai were men serving long-term sentences from the regional 'concentration' prisons (shûjikan) which took prisoners from various different prefectures. "They were from the beginning inserted into a system of control which aimed to supplement the workforce in coal mines with convict labor"27).
The first year for which we have reliable figures for the total number of mineworkers in the industry is 1893 when at the end of that year, as the author has shown elsewhere, there were 86,91730). Convict labor even in that early period therefore amounted to only 2.5% of the total labor force engaged in coal and metal mines nationawide, and the major proportion of them were concentrated in specially designated mines such as Miike. It is therefore an exaggerated and unjustified generalisation to assert that "convict labor...underpinned the development of the Japanese mining industry in its early phase".
The end of convict labor at Ashio
The use of convict labor at Ashio reached a peak in 1884. 249 inmates, the highest ever daily figure, were recorded on 1st July that year. The total number for the year was 68,216, a daily average of 187, and another all-time high. On the other hand, the regular Ashio workforce had tripled in size since the year before to reach 3067, and the proportion of convict labor soon declined accordingly. The spiralling demand for labor which accompanied the discovery of the bonanza went far beyond the limits of what the Tochigi Prefectural Prison was able to provide. It began to be felt that the presence of convict labor actually hampered efforts to recruit ordinary miners.
At the same time, there was an increasing number of escape attempts by convicts working outside and a policy of reducing outside work and increasing inside work was adopted. In 1884 the Police Department began to issue orders to 'prevent outside labor by convicts unless circumstances absolutely require it', and in 1885 the Interior Ministry ordered a tightening-up of controls on outside work. In response, the Gunma prefectural authorities banned outside work at all prisons in their area of responsibility31). When the Takashima Coal Mine Affair became a major scandal in 1888 and drew people's attention to the conditions under which miners had to labor, the tide began to turn against the use of convict labor in mines. A clear example of this was the Besshi copper mine. In November 1888, the Ehime prefectural assembly, citing "the many occurrences of illness and fatality" among the convict laborers at Besshi and the fact that the work they were required to do was "far more severe than other forms of (prison) service", resolved to recommend the closure of the Besshi and Tachikawa jails which were under the authority of Matsuyama Prison. Both jails were duly closed the following March32). Ashio's jail was also closed at the same time, on 31st March, and converted into the Utsunomiya Prison Ashio Work Station. On 30th September 1891 this too was closed33).
1) 'The History of Utsunomiya Prison' (Penal Association ed. "A History of Modern Japanese Penal Administration" Vol. 2, 1943) p. 1127.
2) Tashiro Zenkichi "A History of Tochigi Prefecture" (Shitano Historical Society, 1935) p. 446.
3) Itsukakai ed. "The Life of Furukawa Ichibei" (Itsukakai, 1926) p. 107.
4) "The Life of Furukawa Ichibei", appendix p. 32
5) Shigeno Kichinosuke ed. "The Autobiography of Kimura Chôshichi" (private publ., 1938) pp. 116-117.
6) Shigeno Kichinosuke "The Life of Kimura Chôshichi" (private publ., 1937) p. 46.
7) "Tochigi Prefecture Statistics" 1886, p. 308.
8) "The Life of Furukawa Ichibei" p. 126.
9) Prisons Association ed. "A History of Modern Japanese Penal Administration" Vol. 2, p. 64.
10) Cabinet Records Office "Complete Legal Statutes" (1891, reissued by Hara Shobô, 1980 Criminal and Penal Law Vol. 3 pp. 67,68,90.
11) Prisons Association ed. "A History of Modern Japanese Penal Administration" Vol. 2, pp. 1176-1177.
12) Cabinet Records Office "Complete Legal Satutes" Criminal and Penal Law, pp.168- 169.
13) "Tochigi Prefecture Statistics" 1886 p. 307, 1887 and 1888 p.333. Other staff members were the chief gaoler and the secretary. The numbers of wardens and overseers dropped In 1889/90 to 7 wardens and 3 overseers in 1889, and 8 wardens and 4 wardens in 1890.
15) Labor History Records Committee ed. "Historical Documents of the Japanese Labor Movement" Vol. 1, p. 82.
16) Cabinet Records Office "Complete Legal Statutes" Criminal and Penal Law, pp. 157- 158.
17) General laborers at Ashio in 1884 were paid between 20 and 23 sen a day. At the Karuizawa silver mine, also operated by Furukawa, "convicts were paid 1/5th of the wage of an ordinary worker" ("Journal of the Japan Mining Association" No. 60 Feb. 1890. No. 65 (July 1890) of the same journal also carries a report on the use of convict labor at the Karuizawa silver mine. Convicts working at the Handa silver mine in Fukushima Prefecture in 1888 were each being paid an average of 12 sen 5 rin remuneration a day ('The Handa Silver Mine' "Journal of the Japan Mining Association" No. 39, May 1888).
18) Cabinet Records Office "Complete Legal Statutes" Criminal and Penal Law, p. 195. The revision of prison regulations in 1889 decreed that long sentence prisoners were to be paid 2/10ths and short sentence prisoners 4/10ths. ("A History of Modern Japanese Penal Administration" Vol. 2, p. 1232).
19)19) Cabinet Records Office "Complete Legal Statutes" Criminal and Penal Law, pp. 68- 69.
20)20) See the chart of 'Convicts Working Hours' in "Complete Legal Statutes", p. 187, which details monthly work and rest hours. Prisoners were to wake up at sunrise, and have their evening meal finished and be back in their cells by sunset.
21) "Tochigi Prefecture Statistics" 1886 p. 313, and 1887/8 p. 339.
22)22) '1881 Penal Code' Article 103 ("Complete Legal Statutes" Criminal and Penal Law, p. 178)
23) Prisons Association ed. "A History of Modern Japanese Penal Administration" Vol. 2, p. 87. See under "Official chronological records".
24) Tsuda Masumi 'The Mining Industry in the Early Meiji Period' ("Hitotsubashi University Research Yearbook" Sociology Studies 13, 1974). Tsuda claims that "convict labor enabled the exploitation of the Ônaori (bonanza) at the Ashio copper mine in 1884", but as far as I is aware, there is no evidence of convicts at Ashio ever having been used for prospecting and ore extraction. In "A History of Modern Japanese Penal Administration" there is indeed a reference to "convicts extracting ore" (see p. 1120 and appendix p. 151). But all references to Ashio in the book are drawn from "The Life of Furukawa Ichibei", and there are no other facts to back up that single reference. In passing, it may be noted that Article 44 of the 1889 Penal Code Provisions of Enforcement states that: 'male prisoners shall be required to perform tasks such as rock-breaking, land clearance, ore mining, laboring, stone cutting, and agricultural haulage or any other work outside the prison establishment in accordance with the needs of the said establishment' ("A History of Modern Japanese Penal Administration" Vol. 2, p. 1235). The judiciary were accustomed to referring to work in the mines generally as 'ore mining'.
25) See "A History of Modern Japanese Penal Administration" Vol. 2, chapter 6 and appendix. For the Yoshioka mine, see "Mitsubishi Company Journal" Vol. 2 p. 282, and for the Karuizawa silver mine, see "Journal of the Japan Mining Association" No. 60, Feb. 1890. For the Chikuhô coal mines, see note 2)27).
26)) For convict labor at Miike, see Tanaka Naoki "Historical Studies of Japanese Coalminers" (Sôfûkan, 1984) pp. 240-269. The situation at Horonai is discussed on p. 245 of the same volume. There is good reason to believe that estimates of the numbers of convicts employed at Miike have been greatly exaggerated; see Nimura Kazuo 'Numbers of Mineworkers in the Early Years of the Mining Industry' (2)' ("Research Monthly" No. 290, Oct. 1982).
27) Hidemura Senzô 'Convict Labor in Fukuoka Coalmines in the Early and Mid-Meiji Periods - Prison Despatch Stations, Workplaces, and Jails' (Kyûshû University "Studies in Economics" Vol. 37, Nos. 1-6, Feb. 1972).
28)) Exceptional cases where convicts were sent to coal and metal mines outside the local prefecture were Miike coal mine which took convicts from all the Kyûshû prefectural jails, the Ikuno silver mine in Hyôgo Prefecture which took them from Okayama Prefecture, and the Handa silver mine in Fukushima which took convicts 'on loan' for a short period from the Miyagi regional concentration prison.
29)29) For 1886, see "The 7th Annual Statistics" no. 255, p. 592.
30) Nimura Kazuo ' 'Numbers of Mineworkers in the Early Years of the Mining Industry' (1)' ("Research Monthly" No. 289, Sept. 1982).
31) Prison Association ed. "A History of Modern Japanese Penal Administration" 2, pp. 1204-1205.
32)32) "The History of the Labor Movement in Ehime Prefecture" Vol. 1, pp. 78- 79, 85-86.
33)"Modern Japanese Penal Administration" 2, p. 1127. | <urn:uuid:da9d845e-ad14-4c2c-b88a-31124e2fc045> | CC-MAIN-2022-33 | http://nimura-laborhistory.jp/English/en-ashio-convictlabor.html | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570913.16/warc/CC-MAIN-20220809064307-20220809094307-00098.warc.gz | en | 0.971551 | 6,112 | 2.9375 | 3 |
On July 3, 1876 — eight days after Custer and his men were killed at the Battle of Little Bighorn — a cattle drive arrived in the Black Hills boomtown of Deadwood, South Dakota. Three thousand longhorn steers had been brought up from Texas via New Mexico, Colorado and Wyoming. One of the men driving this herd was a twenty-two-year-old cowhand named Nat Love. A large number of cowboys were already in town and, along with the freshly arrived Texans, they decided to hold an impromptu rodeo to determine the most skilled cowhand in Deadwood. Love, one of six African-Americans participating in the competition, won the roping contest by lassoing and mounting a wild mustang in nine minutes flat. He swept the shooting contests as well, nailing fourteen out of fourteen bullseyes with his rifle, and ten out of twelve with his pistol. Love — who had previously gone by the nickname Red River Dick — was awarded two hundred dollars and a new name: Deadwood Dick.
Love’s shooting skills had been honed over the course of seven long years on the cattle trails. In his 1907 autobiography, The Life and Adventures of Nat Love, Better Known in the Cattle Country as “Deadwood Dick,” which remains the main source of information on his life, Love writes, “In those days on the great cattle ranges there was no law but the law of might, and all disputes were settled with a forty-five Colt pistol. In such cases the man who was quickest on the draw and whose eye was the best, pretty generally got the decision.” He had become adept with firearms to defend himself and his trail mates against rustlers, Native American raiding parties, saloon brawlers and stampeding animals. The mighty buffalo were still abundant on the plains, and hunting them served as both sport and a source of food on the trail. Love relied on the mass-produced, repeating firearms that defined the era: the Winchester lever action rifle and the single-action, six-shot Colt revolver. “It was of the greatest importance,” Love writes, “that the cow boy should understand his gun, its capabilities and its shooting qualities.”
While Love was proficient with his firearms and willing to use them if he had to, he was not one of the many Western lawmen and outlaws who made their names as killers. Rather, he was an affable, hardworking cowhand who enjoyed sharing the free life of the range with his friends. At one point in his memoir, he describes a fight that broke out between his trail mates and some outlaws at a makeshift saloon. “A fuss was started between our men and some cattle rustlers resulting in some shooting, but fortunately without serious consequences. As we were not looking for trouble, and not wishing to kill any one we left at once for home.” Love would much rather ride off with his pals than have to engage in violence.
This is not to say that the man was mild-mannered. He was known for drunken, prankish acts of bravado, such as riding a horse into a saloon and ordering whiskey for both himself and the horse, or lassoing a cannon at an Army fort and attempting to ride off with it. More than these antics, though, Love was known for riding the cattle trails and doing what needed to be done in a rough situation. He prided himself on his “cool head and a steady hand.” Love was a cowboy’s cowboy. The tales of his life, as he wrote them, capture the essence of the cowboy experience in general, and the African-American cowboy experience in particular.
Nat Love is the emblematic African-American cowboy, but he was by no means the only one. While there are no comprehensive records of cowboys — who were by definition transient, and often used assumed names — historians estimate that at least a quarter, if not a full third, of all cowboys in the Old West were black. The first significant number of black cowboys was found in Texas prior to the Civil War. Most of them were slaves owned by white ranchers, but some were freemen. The famous western cattle trails were established in the 1860s, immediately after the Civil War, just as many former slaves were looking to begin new lives. These trails led north from the ranches of Texas to the booming cattle markets of Kansas, Nebraska and the Dakotas. It is no surprise that many black men who had been born in slavery, either in Texas or in other Southern states, found work as cowhands.
Nat Love’s life fits squarely into this narrative. He was born on a plantation in Tennessee in 1854. He and his family were the slaves of a man named Robert Love. The slave schedules of the 1860 census list a five-year-old male amongst Robert Love’s twenty-two slaves; this is presumably Nat, though the slaves were not listed by name. As a small child, Nat witnessed horrific violence inflicted on his fellow slaves by their overseers. “Young as I was,” he writes, “my blood often boiled as I witnessed these cruel sights.” When Robert Love returned from fighting in the Confederate Army, he did not inform Nat’s family that they were now free, and they remained in bondage for a time after the war. Nat developed useful skills in his youth, such as horsemanship and hunting, but there were few prospects for a young black man in 1860s Tennessee other than a lifetime of backbreaking farm work for former slave masters. In 1869, when Love was fifteen, he left home and headed west.
While the West was by no means devoid of racism, it offered freedoms that the South did not. Back in Love’s home state of Tennessee, he would not even have been permitted to own firearms, let alone compete and make a name for himself with them. As the legal scholar Michael Waldman notes, in the 1860s Southern states “passed Black Codes seeking to restore slavery in all but name. These laws disarmed African Americans but let whites retain their guns.” In the West, though, all men carried firearms, regardless of race. The prevalence of African-American troops — the famous Buffalo Soldiers — in the United States Army acclimated western whites to seeing black men bearing arms. The Buffalo Soldiers served under white officers, but they exercised authority over white lawbreakers and mobs. The regular presence of black soldiers in newly established towns not subject to Black Codes often meant that businesses such as hotels and saloons served black customers, even when they did not serve Mexican or Native American customers.
Black cowhands were particularly embraced by their white peers. The necessities of trail life meant that cowboys of all races had to work, sleep and eat side by side. In their influential 1965 book, The Negro Cowboys, Philip Durham and Everett L. Jones write of the racist social strictures of Reconstruction-era Texas. “Upon Negro cowboys, however, these sanctions fell less heavily than upon many other Negroes, for as cowboys they had a well-defined place in an early established social and economic hierarchy.” Durham and Jones do go on to note, however, that this unique social role did not offer upward mobility. Even experienced and well-respected cowhands and top hands had little chance of ever being promoted to foreman of a cattle outfit.
Love’s range life began when he made his way from Tennessee to Dodge City, Kansas, by walking and hitchhiking on farmers’ wagons. Dodge, he writes, “was a typical frontier city, with a great many saloons, dance halls, and gambling houses, and very little of anything else.” In Dodge, Love found a short-handed Texas cattle outfit made up of white and black cowboys. They fed him breakfast, then gave him a job interview which consisted of mounting a bucking bronco named Good Eye. To the cowboys’ surprise, Love was able to ride Good Eye without being thrown. Having proven that he was a skilled horseman, and no “tenderfoot,” the teenage Love was taken on as a cowhand.
The experiences presented in Love’s memoirs coincide with what is known about African-American cowboy life. The Dodge City that Love walked into was examined by C. Robert Haywood in his journal article, “No Less A Man”: Blacks in Cow Town Dodge City, 1876-1886. While the majority of African-Americans were trapped in menial service and labor jobs, some did fairly well in Dodge during this time period. The racial segregation that came to be enforced at the end of the nineteenth century was not yet present in the early days of Dodge City, and the sight of blacks and whites eating and drinking together in integrated saloons, restaurants and hotels was not uncommon. Black cowboys received more respect than the local black laborers, as they not only worked with white cowboys but also earned the same wages as they did. Haywood writes, “Although subject to some of the same attitudes and customs as the permanent black residents, the black cowboys expected and received better treatment. The freedom and equality of range life had conditioned them to a more integrated relationship.” Dodge City was the terminus of many cattle drives, and cowboys came off the trail looking to spend their money. None of them were turned away.
For over two decades, Love helped drive hundreds-strong herds of longhorn cattle over hundreds of miles of open, lawless land. He took great pride in his work record, stating, “By strict attention to business, born of a genuine love of the free and wild life of the range, and absolute fearlessness, I became known throughout the country as a good all around cow boy and a splendid hand in a stampede.” Stampedes, which were indeed quite dangerous and could easily crush a slow cowboy to death, were only one of the obstacles faced on the trail. Treacherous weather and rugged terrain — such as sudden hailstorms and unmapped sheer cliffs — took their toll on men and beasts. Cowhands would spend upwards of two months on the trail, sleeping on the hard ground in the elements, working every waking hour to keep the cattle from wandering off, being stolen or getting killed.
The image of cowboys — and other Western heroes — as white was enshrined through film and television, but originally in the Western pulp novels that were sold for nickels and dimes throughout the late nineteenth and early twentieth centuries. Before the advent of crime and detective stories, Western tales were the dominant form of pulp literature. The original Western novels were not about historical material; they were written at the height of the Wild West period, about contemporary events. Writers back east would eagerly gather news and rumors reported from the West and fill in the rest with their imaginations. At the same time, aspiring heroes and outlaws would devour the pulps in anticipation of their own adventures.
It has only been in the past few decades that a strong effort has been made to reclaim the stories of men of color behind many pulp novels featuring white protagonists. For instance, the widely panned 2013 film “The Lone Ranger,” starring two white men, had the unexpected effect of renewing public interest in Bass Reeves, an African-American United States Marshal whose adventures are believed to have been the inspiration for the white Lone Ranger. Unlike Reeves, and many forgotten African-American frontier figures, Nat Love made sure that his story was recorded by writing it himself. The publication of Love’s book established his claim as a Western hero.
Much of Love’s memoir is unverifiable, and a few episodes are downright dubious. Love claims to have been friends with several legendary Western figures such as Bat Masterson, Billy the Kid and Buffalo Bill, but it is unlikely that he had significant contact with any of these people. As Durham and Jones point out, it was something of a literary convention for Western memoirists to present “their travels throughout the Southwest so that they were in the right place and right time to see Billy the Kid in action.” The fact is, not that many people encountered Billy in his brief, twenty-one-year life. Charming as the image is, Billy probably did not take Love to see “the little log cabin where he said he was born.”
It is much more plausible that Love did encounter Buffalo Bill (a.k.a. William F. Cody), as Cody spent decades traversing the country and meeting Western figures. However, if Cody did know Love, he didn’t find Love notable enough to mention him in his own, detailed autobiography. Of course, considering the disparaging way Cody writes about African-American soldiers, he would not necessarily have taken note, let alone written admirably, of Love even if he did encounter him.
In one particularly exciting adventure in his memoirs, Love rides off from his outfit in search of stray cattle and is ambushed by a band of Native Americans led by a chief named Yellow Dog. A bullet passes through Love’s leg, killing his horse. He uses the horse’s corpse as a breastwork, holding off his attackers with his rifle for quite a while. Love kills five warriors in the battle, but is eventually captured when he runs out of ammunition. Impressed by the fight that he has put up, the tribe feeds Love and tends to his wounds. He is adopted into the tribe and offered one of the chief’s daughters as a bride. Despite the woman’s attractiveness, Love has no desire to stay with Yellow Dog’s band. He steals a horse and escapes back to his outfit.
While it is clear that Love’s prowess in battle is exaggerated, there is one particularly interesting element of his captivity story that grounds it in some historical reality: “Yellow Dog’s tribe was composed largely of half breeds, and there was a large percentage of colored blood in the tribe.” In fact, Love cites this as the reason he survived the ordeal. “As I was a colored man they wanted to keep me, as they thought I was too good a man to die.” Western dime novels tended to essentialize Native American communities and exclude racial complexity, but Love’s description rings true. The first Africans came to the American West as members of Spanish exploration parties in the early 1500s. As William Loren Katz documents in his book Black Indians, African-Americans frequently joined, and intermarried with, various Native American tribes. Furthermore, many frontier tribes had better relations with black (and mixed-race) scouts, trappers and traders than with the whites they encountered. It is quite likely that Love’s skin color was an aid in his interaction with this Indian band. Katz does not take issue with the general plausibility of Love’s account; rather, he is troubled by the violent racism against Native Americans which Love exhibits throughout his book.
The fact that Love embellished his life story with celebrity encounters and heroic feats does not discredit his core account of range life. Durham and Jones, who enjoyed nitpicking Love’s claims, admit, “Although much of Love’s autobiography reads like a dime novel…there is nothing inherently incredible about any one of his adventures.”
Love’s working life outlasted the Old West period. “With the march of progress came the railroad,” he writes, “and no longer were we called upon to follow the long horned steers or mustangs on the trail.” Always a man of his times, Love moved to Denver, married a woman named Alice and went to work as a Pullman porter on the Denver and Rio Grande Railroad. It is a bit disconcerting to read about the same man who describes engaging in wild shootouts with rustlers now serving customers in exchange for tips. But the fact remains that cowboys were laborers, even if their labor was especially difficult and glamorous. Love worked very hard for fairly modest wages his entire life. In the 1910 census, Love, though by now a published author, lists his occupation as “laborer.”
To be clear, serving as a Pullman porter — and later a “porter in charge” — carried a sense of pride for Love. He posed for a photograph in his porter’s uniform with the same swagger that he once posed for an iconic photograph in his cowboy gear. The position of the Pullman porter was not a bad job; in fact, it served as an entry point for many African-Americans into the middle class. The company established Pullman, Illinois (now part of Chicago), as a model town for workers and their families. Love visited Pullman, but was not impressed by the fact that the town was dry, and contained no saloons within its limits.
Pullman porters later became known for their role in African-American labor organizing, laying a foundation for the Civil Rights Movement. This began with a strike in 1894. There is no way to know exactly how Love felt about the strike, as his memoirs include no direct mention of the company after 1893. It seems that he was still working as a Pullman porter through the 1890s, because the 1900 census — which spells his name “Natt Love” and finds him and Alice living in Salt Lake City — lists Love’s occupation as “porter.” His omission of the historic strike perhaps stems from not wanting to speak negatively about either the Pullman Company or the striking workers. Love does make vague statements in his book about “the great trusts, corporations and brokers, who have for years been robbing the people of this country,” but stops short of actually endorsing labor organizing. The last mention of the Pullman Company in Love’s memoirs is when he personally approaches George Pullman to appeal for matching contributions for a proposed, collectively-owned “Porter’s Home” on one thousand acres of land.
In the decades prior to the publication of Love’s book, when he was busy working on the railroad, “Deadwood Dick” had grown famous as the name of the star of a series of dime novels by Edward Wheeler. In Wheeler’s books, Deadwood Dick is a daring outlaw who roams the Black Hills, often robbing stagecoaches en route to Deadwood. These robberies tend to end in shootouts, much like the ones depicted in Love’s memoirs. The only real difference is that Love was on the side protecting transports to Deadwood, whereas Wheeler’s Deadwood Dick is on the side attacking them. Dick always stays one step ahead of the law, despite the standing five-hundred-dollar reward “For the apprehension and arrest of a notorious young desperado who hails to the name of Deadwood Dick.” He is frequently shot or stabbed in his adventures but always pulls through. Any adversity or challenge is dismissed with a “wild laugh.”
The connection between Love and Wheeler’s Deadwood Dick character is unclear. Wheeler’s first Dick novel, Deadwood Dick; or, The Black Rider of the Black Hills was published in 1877, the year after Love claims to have won the mantle. It is entirely possible that Wheeler — who lived in Philadelphia, and never traveled to the West — heard an account about Love, and appropriated the name for his character. Wheeler does frequently play fast and loose with the names of real people. Sitting Bull, who was a respected spiritual and political leader, is portrayed by Wheeler as “the fiend incarnate,” bearing no resemblance to the actual man. In the first chapter of The Black Rider of the Black Hills, Sitting Bull and his “score of hideously painted savages” kidnap a white teenage girl, beat her and tie her to a stake, all for no apparent reason. Wheeler also appropriates the identity of the real-life Calamity Jane for a character in his novels.
On the other hand, Love could easily be the one playing fast and loose with the name. There is no evidence that Wheeler knew about Love, especially as Love’s name does not seem to have appeared in any newspapers as early as the 1870s. There is a chance that Love was still going by Red River Dick in 1877, and later borrowed the name Deadwood Dick from the novels to help establish himself as a marketable Western figure. The name was used by other men as well, such as Richard Clarke, a miner and pony express rider.
One key detail which circumstantially associates Love with Wheeler’s character is the latter’s identification as the “black rider.” It is made clear in his interactions with people of color that Dick is white, but he is referred to as a “black rider” due to his costume, which includes a black mask that Dick never takes off, even when he is shot in the chest and taken into captivity. It is possible that Wheeler, who had no experience with the racial complexity of the West, had heard rumors of Love and men like him, and thought a “black rider” was — or should be — a white rider cloaked in black, not an African-American rider. In Art Burton’s biography of Bass Reeves, Burton discusses how the “Black Marshal” became transformed into the masked Lone Ranger. “For most African Americans during this time in American history,” he writes, “their dark faces became a black mask to white America — they became invisible.”
Love died in Los Angeles in 1921. In the years immediately following his death, racist, reactionary movements, including the second wave of the Ku Klux Klan, would sweep the West. Hollywood would churn out scores of cowboy movies, almost all of them starring white actors. Some even featured characters called Deadwood Dick, but none would depict the story of the old cowhand living out his last years as a hired driver down the road from Hollywood, in the beachfront towns of Malibu and Santa Monica. It can be eternally debated whether Love earned his place in the mythology of the West through feats with a horse, rope and gun, or only through feats with a pen, but either way he earned it.
Today, Love’s image is preserved in a portrait from his cowboy days. An enlarged print hangs on the second floor of the Black American West Museum in the historically black Denver neighborhood of Five Points, where Love likely lived during his early days as a Pullman porter. In the black-and-white photo, Love wears his “fighting clothes.” A large white cowboy hat sits atop his long curls. The front of the hat’s brim is folded up to display Love’s face. A jaunty scarf is tied around his neck, and fringe runs along the outer seam of his buckskin pants. The thumb of his right hand is tucked in his bullet belt, while his left hand grips the barrel of his Winchester rifle.
Love’s thoughts, feelings and experiences are preserved in his autobiography. Towards the end of the book, Love acknowledges that, already in 1907, “the cowboy is almost a being of the past.” However, he is not resigned to let his experience pass into obscurity.
“I, Nat Love, now in my 54th year, hale hearty and happy, will ever cherish a fond and loving feeling for the old days on the range.” | <urn:uuid:d8444a69-aa25-4117-9863-a84c2e63ebdd> | CC-MAIN-2022-33 | https://narratively.com/the-fearless-black-cowboy-of-the-wild-wild-west/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572161.46/warc/CC-MAIN-20220815054743-20220815084743-00497.warc.gz | en | 0.985013 | 4,905 | 2.8125 | 3 |
Good morning, and thanks for letting me be part of your week. Today we are going to talk about the aviators who were the first test pilots and how their accomplishments made aviation what it is today.
I found this article on the web while researching a different topic, and was so impressed with the facts and the writing that I am going to present the article in its entirety without trying to merge different elements from other sources.
Not long after ex-World War I aviator John Macready left his California ranch at the age of 54 to serve again in World War II, he was checked out in one of the B-17 bombers he’d soon be flying over North Africa. A young lieutenant, eager to tout the modern, high-altitude capability of the Flying Fortress, pointed out the supercharger that made such missions possible. “Know anything about these, sir?” he asked the veteran of the Great War. Today, Sally Macready Wallace chuckles at the irony: “Daddy just looked at him and said, ‘Yes Lieutenant, I believe I do.’ ”
Twenty years earlier, as chief test pilot at McCook Field in Dayton, Ohio, John Macready had stunned the aviation world by flying a biplane fitted with the world’s first operational supercharger to an astonishing altitude of 34,500 feet. At one point during the flight, nearly seven miles up, it was so cold in the open cockpit that the pilot’s oxygen tube clogged with ice from his own breath. Just another day’s work at America’s first flight laboratory.
Variable-pitch propellers. Guided missiles. An operational rotorcraft 10 years before Sikorsky. Landing lights and radio navigation. The first nonstop transcontinental flight. The Gerhardt Cycle-plane, which collapsed in a heap. Around the world in an airplane—before anyone else. Higher, faster, farther.
Part Skunk Works and part research center, the R&D operation at McCook Field was the launch pad for much of 20th century aviation technology. More than 2,300 people worked there during the Roaring Twenties, in 70 buildings housing everything from wind tunnels to machine shops to offices. From 1917 to 1927, every pilot at McCook knew that his next experimental flight might represent a significant leap into aviation’s brave, bold future—and that when he landed, the guy shaking his hand might be Orville Wright.
The Wright brothers, though, were ancient history. Aviation may have been born in Dayton, but by the start of World War I, America’s early edge in flight had already slipped away. In 1912, the French had come to Chicago and walked away with the Gordon Bennett Trophy, after Jules Védrines piloted his Deperdussin racing monoplane at more than 100 mph. “No American competitor even flew against them,” says former Air Force historian Richard Hallion.
On the day in 1917 when the United States entered the war, the total U.S. inventory of military aircraft numbered less than 250, and all were trainers or observation platforms. Commercial aircraft production lagged. Assumptions that airplane development would grow out of the burgeoning auto industry proved unfounded. “Aircraft production at the time of the first world war was more akin to building pianos,” Hallion says. By the Armistice, the sole American-built airplane to see combat—the Dayton Wright Airplane Company’s de Havilland DH.4—was actually designed Over There, constructed to British blueprints.
While war in Europe raged without American airplanes, the U.S. government fast-tracked the establishment of an Army Signal Corps aviation research and development facility in Dayton. The project was assigned national defense priority, and crews worked overtime building wooden hangars, test facilities, classrooms, and barracks. Occupying 250 acres adjacent to the business district, McCook Field—named for the Fighting McCooks, a family of Civil War heroes who owned the property—was the most urban airfield in the nation.
McCook’s engineering division was charged with developing the technology to recapture American aviation’s lost mojo. Though the base was run by the Signal Corps, most of the engineers and designers were civilians, and the vibe was only quasi-military. Army red tape was minimized; Colonel Thurman Bane, commandant in the early years, believed a good idea took precedence over rank. The tempestuous Brigadier General Billy Mitchell, then chief of training and operations for the Air Service in Washington, butted heads with a military establishment he accused of preparing for the last war instead of the next. The looser hierarchy at Dayton suited his temperament, and provided a laboratory for his then-controversial theories of air supremacy. “Mitchell got every foreign aircraft he could find and had them all brought to McCook,” says Hallion. “Many were German, transferred to the U.S. as part of the terms of the Armistice.” Dayton residents soon became accustomed to the sight of a Fokker D.VII, still emblazoned with the Kaiser’s Iron Cross, wheeling alongside a British Sopwith Camel or a French-built Voisin 8 in the blue Ohio skies.
“I want tomorrow’s airplane today,” Mitchell told McCook engineers. Behind closed hangar doors, the German airplanes were stripped to the frame to reverse-engineer their secrets. Engineers searched for the perfect mix-and-match magic, installing American engines in European aircraft and vice versa. In the culture of experimentation Bane encouraged, any novel idea was granted at least a fair hearing, whether from a major company or a lone backyard inventor. The most promising designs were handed off to a crew that built prototypes in the cavernous assembly building, which were then flight-tested.
Among the concepts brought to life by the engineering division was a 16-ton behemoth known as the Barling Bomber. Based on a wartime idea that gargantuan airplanes staging night bombing raids could help decide future conflicts, the enormous triplane featured a 10-wheel landing gear, five gun stations, and a 5,000-pound bomb capacity. Though it completed testing and even a promotional tour, its range, just 170 miles, combined with a maximum speed below 100 mph doomed the outsized airplane. En route to Washington, D.C., for a demo flight before legislators, the Barling failed to clear the Appalachian mountain range and had to turn back. Cost overruns, including the requirement for a $700,000 hangar, were so big the project was canceled.
McCook’s greatest invention, though, may have been the professional U.S. military test pilot. No longer would aeronautical researchers rely on daredevils and barnstormers to check out their new machines. Europeans and Americans alike had started to take a more scientific approach to aviation, and for the pilots assigned to Dayton, technical training would be as important as flying skills.
One of the first of the new professionals was Eugene “Hoy” Barksdale, a Mississippian who flew for the British Exeter Cadet Squadron in World War I. Barksdale had three confirmed shoot-downs before he was downed behind enemy lines in France. After the Armistice, his aerial prowess—he set a speed record in a Curtiss biplane, for example—impressed Billy Mitchell, so in November 1923 he was transferred to the elite group of pilots in McCook’s Flight Test Section. “Mitchell put together the best of the best in the Air Service at McCook,” says Shawn Bohannon, a retired Air Force archivist. “And Barksdale was definitely one of them.” The 26-year-old pilot quickly developed a reputation, and he took on some of the boldest assignments. When the rear stabilizer separated from an experimental metal Boeing XCO-7, Barksdale bailed out in a spin and survived—an early beneficiary of new parachutes developed at McCook.
In 1925, as he made ground-skimming passes in a modified DH.4 to test wing loading, Barksdale felt a jolt. He landed the airplane to check the damage, only to discover he’d decapitated two Army surveyors riding in a flatbed truck, who had inadvertently strayed into the test area. Despite the shock, the next day Barksdale was back in the pilot’s seat testing another aircraft over the same course. “I sustained no injuries and I am subject to duty,” he told a Dayton newspaper reporter, adding, “Fliers must have lady luck with them sometimes if they are to keep going.”
Many of the traits later associated with the classic test pilot psyche came together in Hoy Barksdale. “He wasn’t a terribly excitable man,” says Bohannon. “He was an incredibly professional and stoic man—a gifted pilot who had the ability to just press forward with the mission at hand.” At the time, critical observations and recordings during a test flight had to be committed to memory or written on a clipboard strapped to a leg. Not only could Barksdale keep control of his aircraft in stressful situations, “he was also a very keen observer and recorder, fantastic qualities for a test pilot,” says Bohannon. In fact, Barksdale literally wrote the book on the subject, authoring the military’s first test pilot manual in 1926. In Flight Testing of Aircraft, he lays out a program for testing different aircraft, one per month, with the results meticulously recorded in a standardized seven-page report. Eventually, Barksdale paid the ultimate price for his methodical approach to taking on new risks. While testing a spin-prone Douglas O-2 observation airplane in 1926, he deliberately induced a left spin. “It went into a flat spin and he couldn’t recover,” Bohannon says. As he attempted to jump free of the plane, centrifugal force slammed him into the fuselage. The cords of his parachute were severed by the wing rigging, sending him plummeting to his death in front of scores of witnesses.
The crash traumatized the Air Service. “His death became the driving force behind extensive test work conducted solely to determine the cause of flat spins,” Bohannon says. Another McCook test pilot, Harry Sutton, made it his mission to discover techniques to counter the mysterious phenomenon, beginning with theoretical work that led to wind tunnel tests and ultimately successful flight experiments. When an airfield opened in Louisiana in 1933, it was named for McCook’s pioneering aviator; today it’s called Barksdale Air Force Base.
American pilots commonly returned from World War I steeped in stick-and-rudder sense but lacking formal training in aeronautics. McCook’s Air School of Application was set up to mold the most promising candidates into disciplined pilots with an engineering mindset. Lieutenant Edwin Aldrin, who would later get a Ph.D. in aeronautical engineering from MIT, was made assistant commandant, in charge of the school’s operations. The curriculum included courses like “Economic Analysis of Dirigible and Airship Lines,” and instructors taught topics from airfoil theory to propeller design.
Edwin’s son Buzz Aldrin, who later became a NASA astronaut, connects the dots between McCook and the aerospace research that culminated with his own lunar landing in 1969. “It’s all a big circle,” he says. The school his father helped organize at McCook in 1919 evolved directly into the Air Force Institute of Technology—“the same institution that sponsored my Ph.D. in astronautics [on orbital rendezvous] in 1963.” The senior Aldrin had studied physics at Clark University under Robert Goddard, inventor of the first liquid-fueled rocket. Edwin Aldrin also knew Charles Lindbergh, who in turn had connections to philanthropist Harry Guggenheim. When Goddard came to Dayton seeking backers for his rocket experiments, Lindbergh introduced him to Guggenheim. Forty years later, a giant liquid-fuel rocket would propel Edwin Aldrin’s son to the moon. A big circle indeed.
The students and staff at McCook were a Who’s Who of early aerospace. The legendary Jimmy Doolittle was in the class of ’23. Leigh Wade was a McCook test pilot before setting out in 1924—with seven other Army pilots—on the first round-the-world flight. Stanford-educated John Macready was chief test pilot for the Air Service from 1920 to 1926, during which time he won the Mackay Trophy for aviation achievement three times. He even designed the first aviator sunglasses, working with Bausch & Lomb to come up with a shape and tint that could protect a pilot’s eyes in the thin air at high altitudes.
In her biography of her father, Sally Wallace described his first day at McCook. Escorted by the officer in charge to observe the test of an experimental vehicle, Macready watched in horror as the aircraft stalled at 700 feet and spiraled in, exploding in flames and burning the pilot beyond recognition. “As you can see,” the unfazed officer next to him said, “we need replacements.”
No test pilot flew as many flights as “Mac” Macready, and under conditions as strenuous. In the 1920s, the development of pressurized cockpits was still a work in progress. The McCook engineers welded an airtight steel barrel incorporating flight controls, an altimeter, and a six-inch glass porthole into the open cockpit of a de Havilland DH.9. Sealed inside, Macready, hunched in what he termed “a metal coffin,” would take it aloft.
The Engineering Division was always eager to find new applications for airplanes, and when a Cleveland park system employee wondered if the job of spraying trees with insecticide couldn’t be done better by a hydrogen dirigible—or even a newfangled airplane—the idea drifted through the Department of Agriculture and ended up at McCook. Soon, a hand-operated hopper with the capacity for 100 pounds of lead arsenate poison was mounted on a Curtiss JN-4. With the hopper’s designer in the observer’s seat, Macready flew the Jenny at 80 mph, 35 feet above a grove of catalpa trees infested with caterpillars. The insecticide was dispensed in six passes, coating the trees and killing the pests. The science of cropdusting was born. As Macready landed, ecstatic Department of Agriculture observers swarmed the airplane. Today aircraft spray 71 million acres of cropland each year.
Collaboration between the public and private aviation sectors was practically invented at McCook. When he retired in 1954, Gene Eubank was the oldest active pilot in the Air Force. Thirty years earlier, he had been a McCook test pilot assigned to bombers and large aircraft. Eubank had been flying border patrol missions against Pancho Villa’s bandits when Billy Mitchell spotted him and brought him to Dayton.
In an Air Force oral history interview in 1982, Eubank described the daily life of a McCook pilot. Being the first to fly airplanes made by U.S. manufacturers was considered a perk for military test pilots, who at the time had no counterparts in private industry. While testing the XB-906, an all-metal design by McCook engineer Bill Stout that evolved into Ford’s famous Trimotor, Eubank would frequently visit Detroit. “If there was anything to go to the factory to make a suggestion about…I was the one,” he said. McCook pilots were treated like celebrities, the astronauts of their day. “Mr. Henry Ford had me to lunch with him,” Eubank recalled. “Mr. Ford’s chief engineer, Mr. Henry Mayo, came down to the train and met me, then took me to his private club and put me up, then put me back on the train when I went back to Dayton. Now, that was the accord that a young aviator got from the top people in this country.”
Mac Macready enjoyed similar respect from industry leaders. Anthony Fokker, the Dutch-born aviation manufacturer who had moved to the United States in 1922, was a frequent house guest at Macready’s Dayton residence. Sally Wallace recalls the day in 1925 when Fokker invited members of her mother’s bridge club for a flight on his new T-2 transport. Many of them had never flown before, but this game group of young Jazz Age women unanimously accepted the dashing Fokker’s offer and took to the sky. Macready piloted the T-2 while Fokker schmoozed with the bridge club in the cabin and passed around a box of chocolates.
World War I had shown military strategists that altitude was an advantage. Pre-war maximums averaging 8,000 feet were quickly surpassed by aircraft like the Fokker D.VII, with a ceiling above 20,000 feet. The limiting factor was not human physiology but the engine. The Liberty-12, a revolutionary water-cooled, 12-cylinder powerplant developed at McCook, delivered 400 horsepower at sea level but less than 90 in the oxygen-starved environment above 25,000 feet. So McCook engineers, working with General Electric, developed a turbo-supercharger to sustain horsepower at high altitudes, and applied it to a Liberty-powered LUSAC 11 fighter. Rudolph “Shorty” Schroeder made the first few high-altitude tests. On his last attempt, his oxygen supply faltered at just over 33,000 feet. When he momentarily lifted his goggles in the open cockpit to adjust the flow, his eyeballs were quick-frozen and he lost consciousness. After the airplane plunged six miles in two minutes, the sound of the nearly empty fuel tanks contracting in the higher air pressure at lower altitudes jarred Schroeder back to consciousness, and he was able to glide the airplane to a landing.
Mac Macready took over the high-altitude program and made 50 flights above 30,000 feet in the LUSAC. On September 18, 1921, he was well above that when teardrops in his eyes turned to icicles and ice formed in his oxygen flow. “At this point, his mind began to grow fuzzy,” his daughter wrote. “Glancing at the airspeed indicator he was surprised to see that it read only 65 miles per hour.” It took a long moment before he realized he’d been peering at the tachometer displaying 6,500 revolutions per minute. “He told himself ‘I’m losing it,’ ” Wallace writes. Her father had enough altitude experience to know that a lagging thought process and a fizzy sense of euphoria were symptoms of deadly hypoxia. Nevertheless, he nudged the biplane up past 34,000 feet, where, in the thin air, it dangled more than flew, refusing to climb further. “Mac took a look around for the first time,” Wallace writes. “The sky was a dazzling white, almost blinding in its intensity…. He was higher at that moment than any man had ever been before.” Macready circled the LUSAC down to McCook in 5,000-foot increments. Although his altimeter read 41,200 feet (his daughter still has the instrument’s barograph traces), post-landing calibration led the Fédération Aéronautique Internationale to downgrade the official number to 34,563 feet. It was still a world record—witnessed by Orville Wright himself, who later came by Macready’s office to congratulate him.
During the war, when bullets hit the fuel tanks in wood-and-fabric airplanes, the craft became flying crematoriums. Pilots could opt to leap to their death or ride the flaming airplane down. Balloon observers had a better choice: When they jumped from the gondola, a rudimentary parachute unfolded that they could grab onto. The balloon escape system was effective: No wartime observer ever died as a result of one failing. In an airplane, however, instantly deployed parachutes could get tangled in the wing rigging, and aviators were dragged into the spinning prop. Billy Mitchell brought the problem to McCook engineers. Floyd Smith, a former circus performer and a test pilot for Glenn Martin who later headed the Parachute Division at McCook, spearheaded intensive research, which led to the invention of the Type A free fall parachute, made of Japanese Habutai silk. The Type A’s innovations included delayed ripcord opening—which allowed the pilot to fall clear of the airplane before opening the chute—and a smaller pilot chute to yank the main chute out of the pack.
Six months after the backpack-style Type A was introduced, McCook pilot Harold Harris was flying a Loening monoplane when the aircraft began to disintegrate. Harris released his harness and stood up, and was immediately blown out of the cockpit by the propeller blast. Normally that would have meant certain death, but instead, moments later he floated down beneath a billowing white canopy, landing in a backyard grape arbor without a scratch and becoming the first aviator saved by the McCook emergency freefall parachute.
A year later, when the engine in his DH.4 conked out over Dayton, Mac Macready “hit the silk” and claimed honors for the first nighttime save. Far below, at the estate of the president of the Dayton Chamber of Commerce, guests at a dinner party on the terrace were discussing the Book of Revelation when Macready’s de Havilland streaked overhead like a meteor and exploded in a vacant field, illuminating the sky. Seconds later, a disembodied voice could be heard in the darkness above. “My father was yelling ‘Hello! Help!’ as he came down in the parachute,” Sally Wallace explains. The host of the gathering, an avid Bible scholar, later likened the event to witnessing the archangel Gabriel calling down from heaven. Harold Harris and Mac Macready became, respectively, the first and second charter members of the Caterpillar Club, an organization that still records saves by parachute.
McCook did its part to assure the public that airplanes were safe by staging two record-breaking flights. In May 1923, Macready and Oakley Kelly flew a McCook-modified Fokker transport from Roosevelt Field on Long Island to San Diego, nonstop, in 26 hours. By then, research at McCook’s Instrument and Navigation Branch had made “blind flying”—flying on instruments only—more precise and predictable. To get headings free of magnetic deflection errors, the pilots used a compass invented at McCook. A bank-and-turn indicator, another McCook original, kept them shiny-side-up in clouds and fog. By the time Macready flew the big T-2 over sun-drenched downtown San Diego, their instrument-guided track had strayed only a fraction of a mile from the course marked on the map. (Today the airplane is on exhibit in the National Air and Space Museum in Washington, D.C.)
Such long-distance flights became something of a McCook trademark. In June 1927, test pilots Lester Maitland and Albert Hegenberger flew a Fokker Trimotor christened the Bird of Paradise across 2,425 miles of open ocean between Oakland, California, and Honolulu. The airplane was crammed with the latest and greatest from McCook’s Instrument and Navigation Branch, along with an inflatable raft complete with 18-foot mast and sail. Two radio navigation beacons modeled on an experimental version at McCook were set up in San Francisco and on Maui. A navigational error of just four degrees would cause the Bird to miss Hawaii entirely and run out of fuel over the vast Pacific.
Charles Lindbergh’s flight to Paris had occurred just a few weeks earlier, and was still very much in the news. But notwithstanding the other risks he faced, Lindbergh could hardly have missed spotting the European continent as long as he kept flying. That fact was not lost on Maitland and Hegenberger. Lester Maitland’s grandson, David Knoop, remembers his grandfather’s take. “He certainly did believe [his] was a tougher flight than Lindbergh’s, and he knew Lindbergh well,” Knoop says. “As Lester always told it to me, it was a lot harder to find Hawaii than it was France back in those days.”
The Bird took off from an extended runway in Oakland on the morning of June 28, and soon after, most of its technology failed. Malfunction of the compass was followed by loss of the radio navigation signals from both California and Hawaii. Attempts to get a position via air-to-sea radio contact with a nearby Navy vessel were frustrated by poor reception. Maitland and Hegenberger navigated instead by plotting position lines from sun sightings, taking sextant fixes on stars, and observing the spume on the ocean below to estimate drift. They approached Hawaii in overcast conditions at 3:20 a.m., on the ragged edge of that four-degree margin of navigational error. They missed the Big Island entirely, and came dangerously close to bypassing the rest of the chain when the bright, flashing oil-vapor lamp of the Kilauea Lighthouse shone through the cloud cover. Maitland brought the Bird around and reversed course to Honolulu. While critical systems had failed, the flight of the Bird of Paradise is credited with revealing weak spots in navigation technology, leading to improvements that eventually established a regular air route to Hawaii. (Commercial airliners still included a sextant port in the cockpit as late as the 1960s.)
Later that same year, all functions at McCook were transferred to newly constructed Wright Field, east of Dayton, and McCook began its fade into obscurity. During its 10-year tenure as aviation’s R&D nerve center, a black sign with white letters large enough to read from considerable altitude had been mounted above the door of McCook’s main hangar: THIS FIELD IS SMALL—USE IT ALL. The first test pilots did—every inch of it.
exploring self-realization, sacred personhood, and full humanity
"It cannot be too often repeated that philosophy is everybody's business. To be a human being is to be endowed with the proclivity to philosophize. To some degree we all engage in philosophical thought in the course of our daily lives. Acknowledging this is not enough. It is also necessary to understand why this is so and what philosophy's business is. The answer, in a word, is IDEAS. In two words, it is GREAT IDEAS - the IDEAS basic and indispensable to understanding ourselves, our society, and the world in which we live." Dr. Mortimer J. Adler
Mortimer Adler's Syntopicon Essay: Philosophy
Editor's 1-Minute Essay: Philosophy
Editor's Essay: How did the ancient Greeks, a religious people, manage, almost single-handedly, to create what we call philosophy? Why is it that the beginnings of so many important modern fields of enquiry find their roots in the ancient Hellenic culture?
Alfred North Whitehead, On Mathematical Method: "According to one account given by Plutarch ... [Archimedes] was found by a Roman soldier absorbed in the study of a geometrical diagram which he had traced on the sandy floor of his room. He did not immediately obey the orders of his captor, and so was killed... The death of Archimedes by the hands of a Roman soldier is symbolical of a world change of the first magnitude: the theoretical Greeks, with their love of abstract science, were superseded in the leadership of the European world by the practical Romans. Lord Beaconsfield, in one of his novels, has defined a practical man as a man who practises the errors of his forefathers. The Romans were a great race, but they were cursed with the sterility which waits upon practicality. They did not improve upon the knowledge of their forefathers, and all their advances were confined to the minor technical details of engineering. They were not dreamers enough to arrive at new points of view, which could give a more fundamental control over the forces of nature. No Roman [ever] lost his life because he was absorbed in the contemplation of a mathematical diagram."
Michael Faraday: When Faraday, discoverer of the law of electromagnetic induction (1831), was asked, "What is the use of this discovery?" he answered, "What is the use of a child - it grows to be a man." Faraday's "grown man" now rules the world as the basis of all applications of electricity.
Benjamin Franklin: He was fascinated to see the first free balloon flight of humans, which took place in November 1783. When someone who was also watching the event questioned the usefulness of this new invention, Franklin replied with a question, "Of what use is a newborn baby?"
Osho: “With me, illusions are bound to be shattered. I am here to shatter all illusions. Yes, it will irritate you, it will annoy you - that's my way of functioning and working. I will sabotage you from your very roots! Unless you are totally destroyed as a mind, there is no hope for you.” Editor's note: This reminds me of a purported saying of Jesus in the Gospel of Thomas, to the effect, "Blessed are those who allow themselves to be disturbed."
notes from Will & Ariel Durant on the meaning of philosophy:
Something momentous happens when a society turns from priestcraft as the source of all wisdom - a craft that devolves into interpreting not the facts of nature but sacred texts and the words of the oracle.
"For what is philosophy but an art? - one more attempt to give form to the chaos of experience… Art is the creation of beauty; the expression of thought or feeling in a form, beautiful or sublime."
Confucius: "I seek unity, all-pervading"; the search for unity in all phenomena! i.e. a theory of everything. Is this not the work of philosophy?
"Philosophy itself, which had once summoned all sciences to its aid in making a coherent image of the world … found its task of coordination too stupendous for its courage … and hid itself in recondite and narrow lanes, timidly secure from the issues and responsibilities of life. Human knowledge had become too great for the human mind. All that remained was the scientific specialist, who knew 'more and more about
less and less,' and the [philosopher] who knew less and less about more and more… The gap between life and knowledge grew wider and wider."
"scholastic philosophy … a disguised theology"
Hermann Graf Keyserling (1880 – 1946), a German philosopher. (Believed that German militarism was dead and that Germany's only hope lay in the adoption of international, democratic principles.) "Philosophy is the completion of science in the synthesis of wisdom. The parts of philosophy are important branches of science. But it was an unmitigated evil that [as a result of this fractionalization] the sense for the living synthesis should have disappeared."
"wisdom is not wise if it scares away merriment [because] a sense of humor, born of perspective, bears a near kinship to philosophy; each is the soul of the other."
"Epistemology … the study of the knowledge-process [better viewed as a] science of psychology [rather than] philosophy [which is] the synthetic interpretation of all experience rather than the analytic description of the mode and process of experience itself. Analysis belongs to science and gives us knowledge; philosophy must provide a synthesis for wisdom."
"There is a pleasure in philosophy - [felt against the backdrop of] the coarse necessities of physical existence [which] drag [one] from the heights of thought into the mart economic strife and gain. Most of us have known some golden days in the June of life when philosophy was in fact what Plato calls it, 'that dear delight'; when the love of a modestly elusive Truth seemed more glorious, incomparibly, than the lust for the ways of the flesh and the dross of the world… 'Life has meaning,' we feel with Browning - 'to find its meaning is my meat and drink.' So much of our lives is meaningless, a self-cancelling vacillation and futility; we
strive with the chaos about us and within; but we would believe all the while that there is something vital and significant in us, could we but decipher our own souls. We want to understand; 'life means for us
constantly to transform into light and flame all that we are or meet with (Nietzsche)'; … we want to sieze the value and perspective of passing things, and so to pull ourselves up out of the maelstom of daily
circumstance… we want to see things now as they will seem forever - 'in the light of eternity.' … we want to be whole [not fearing death] … 'To be a philosopher,' said Thoreau, 'is not merely to have subtle thoughts, nor even to found a school, but so to love wisdom as to live, according to its dictates, a life of simplicity, independence, magnanimity, and trust.' We can be sure that if we can but find wisdom all things will [yet] be added unto us. 'Seek ye first the good things of the mind,' Bacon admonishes us, 'and the rest will either be supplied or its loss will not be felt.' Truth [may] not make us rich, but it will make us free."
"Is philosophy stagnant? Science seems always to advance, while philosophy seems always to lose ground… Yet this is only because philosophy accepts the hard and hazardous task of dealing with problems not yet open to the methods of science… Every science begins as philosophy and ends as art; it arises as hypothesis and flows into achievement... philosophy is a hypothetical interpretation of the unknown… it is the front trench of the seige of truth. Science is the captured territory; and behind it are those secure regions [of knowledge]… philosophy seems to stand still, perplexed; but only because she leaves the fruits of victory to her daughter the sciences, and herself passes on, divinely discontent, to the uncertain and unexplored… Science [does not] inquire into the values and ideal possibilities of things, nor into their total and final significance; it is content to show their present actuality and operation, it narrows its gaze resolutely to the nature and process of things as they are… But [philosophy] is not content to describe the fact [but] to ascertain its relation to experience in general, and thereby to get at its meaning and its worth… to combine things in interpretive synthesis … to put together, better than before, that great universe-watch which the inquisitive scientist has taken apart… Science without philosophy, facts without perspective and valuation, cannot save us from havoc and despair."
(a) logic: the study of ideal method in thought and research
(b) esthetics: the study of ideal form or beauty; it is the philosophy of art
(c) ethics: the study of ideal conduct
(d) politics: the study of ideal social organization
(e) metaphysics: unlike the other forms of philosophy, it does not seek to coordinate the real in light of the ideal; it is the study of ultimate reality: 1) ontology and 2) epistemology
These are the parts of philosophy, but so dismembered it loses its beauty and joy. "Great men speak to us only so far as we have ears and souls to hear them; only so far as we have in us the roots of that which flowers in them. We too have had the experiences they had, but we did not suck those experiences dry of their secret and subtle meanings; we were not sensitive to the overtones of the reality that hummed about us. Genius hears the overtones, and the music of the spheres; genius knows what Pythagoras meant when he said that philosophy is the highest music."
David Hume: "philosophy is common sense, methodized and corrected"
Kant and Hume: earlier philosophers had tried to understand the world outside of human behavior, as if it made sense on its own; only by understanding human beings can we understand the world; what we think of as an 'objective' world is merely that seen through human eyes; humanity is condemned to see the world from its own perspective, there is no other viewpoint to guide it; after Hume and Kant, it was seen that the structure of knowledge comes from within mankind and not from an external ideal or source; we are part of what we know, not detached spectators of what is known.
on Seneca: "philosophy is the science of wisdom and wisdom is the art of living. Happiness is the goal but virtue, not pleasure, is the road."
Alfred North Whitehead, On Mathematical Method: "From the earliest epoch (2634 B.C.) the Chinese had utilized the characteristic property of the compass needle, but do not seem to have connected it with any theoretical ideas. The really profound changes in human life all have their ultimate origin in knowledge pursued for its own sake. The use of the compass was not introduced into Europe till the end of the twelfth century A.D., more than 3,000 years after its first use in China. The importance which the science of electromagnetism has since assumed in every department of human life is not due to the superior practical bias of Europeans, but to the fact that in the West electrical and magnetic phenomena were studied by men who were dominated by abstract theoretic interests."
Nietzsche: "Even a thought, even a possibility, can shatter us and transform us."
Will Durant: "Philosophy is harmonized knowledge making a harmonious life; it is the self-discipline which lifts us to serenity and freedom. Knowledge is power, but only wisdom is liberty."
Stillman Drake, Galileo at Work: His Scientific Biography: "Philosophy itself cannot but benefit from our disputes, for if our conceptions prove true, new achievements will be made; if false, their refutation will further confirm the original doctrines... I truly believe the book of philosophy to be that which stands perpetually open before our eyes, though since it is written in characters different from those of our alphabet it cannot be read by everyone."
George Smith on Adam Smith's Wealth of Nations: "Adam Smith was not the first to write on economics, but his work was the most thorough. His influence has been rivaled by only one other economist, Karl Marx. Why were these two writers so successful? One reason is that neither man confined himself to economics; both combined it with philosophy, social theory, history, and psychology. Both were interdisciplinary thinkers, and this allowed them to produce books with distinct world views. Their theories stand on opposite sides of the fence, but both Smith and Marx appeal to audiences outside the field of economics."
Francis Bacon: "A little philosophy inclineth man's mind to atheism, but depth in philosophy bringeth men's minds about to religion."
David Hume, An Enquiry Concerning Human Understanding: "If we take in our hand any volume; of divinity or school metaphysics, for instance; let us ask, 'Does it contain any abstract reasoning concerning quantity or number?' No. 'Does it contain any experimental reasoning concerning matter of fact and existence?' No. Commit it then to the flames: for it can contain nothing but sophistry and illusion."
Oliver Wendell Holmes: "I was just going to say, when I was interrupted, that one of the many ways of classifying minds is under the heads of arithmetical and algebraical intellects. All economical and practical wisdom is an extension of the following arithmetical formula: 2 + 2 = 4. Every philosophical proposition has the more general character of the expression a + b = c. We are mere operatives, empirics, and egotists until we learn to think in letters instead of figures."
Thomas Carlyle, Sartor Resartus III: “It is a mathematical fact that the casting of this pebble from my hand alters the centre of gravity of the universe."
George Bernard Shaw: "For every difficult question, there is an answer that is clear and simple and wrong."
Pablo Picasso: "Computers are useless. They can only give you answers."
Woody Allen: "I was thrown out of college for cheating on the metaphysics exam; I looked into the soul of the guy next to me."
Alfred North Whitehead: Philosophy is "the endeavor to formulate a system of general ideas which shall be consistent, coherent and complete, in terms of which every aspect of our experience can be interpreted."
Cicero: "The whole life of the philosopher is a preparation for death."
Arthur Schopenhauer, The World as Will and Idea: "Animals learn death first at the moment of death... man approaches death with the knowledge it is closer every hour, and this creates a feeling of uncertainty over his life, even for him who forgets in the business of life that annihilation is awaiting him. It is for this reason chiefly that we have philosophy and religion."
William Wordsworth: "The human mind is capable of excitement without the application of gross and violent stimulants; and he must have a very faint perception of its beauty and dignity who does not know this."
Professor Daniel N. Robinson, Georgetown University: How did the ancient Greeks manage, almost single-handedly, it seems, to create what we call philosophy? Why is it that the beginnings of so many subjects find their roots in the Hellenic world? Various theories are advanced: there was an abundance of sunshine; a plentiful fish diet was most healthful; slave labor made possible ample leisure time - none of these explanations are at all satisfying: Pharaoh had no shortage of sunshine, but Egypt is rather lean on philosophical thought; other lands had managed to create reliable food sources, and slave labor was common in the ancient world. But the Greeks, it seems correct to assert, were different from all others in one area: The ancient Greek world never had a state religion, but the polis was never completely secular either; rather there is an "extraordinary integration of the secular and the devout." The ancient Greeks, it might be said, had a "religious attitude - but not a religion, as such." Prof. Robinson lays "great stress on the relationship, in any society, between the epistemological authority conferred on religious figures and the philosophical vitality of that age ... if you are fairly satisfied that the most burning questions are best answered by going to an authority [an oracle, a saint, a wise man] ... [then] I submit that the philosophical dimensions of that culture will be fairly thin and fragile, if present at all. There's something about philosophy that is at once humanizing and utterly human - when the oracles have failed us, when saints have grown silent, and when God has chosen not to reveal himself, then we must stand back in the dark shadows of confusion and fear and ask, What sort of being am I? What sort of life is right for me? ... The philosopher doesn't enter the arena of philosophy devoid of belief, purpose, plan, aspiration and values - all of that is in place; but there are those moments when we say no matter how much this means to me, no matter how centered my being is on this pattern of beliefs, no matter how close, emotionally, romantically, I am to those who hold these convictions, I'm going to be skeptical about those statements, I'm going to plumb the depths of those arguments to see finally what their true value is." To do otherwise is to be, as Plato said, a puppet on a string, a slave; but the truth will set you free. Editor's note: As I survey the great thoughts of history, I am impressed by many things; but one principle asserts itself continually: true progress, the advancement of humankind, takes place only when the dignity and sanctity of personhood is honored. Religious persons are so often threatened by philosophy, this wine of the Devil - but why should it be so? Should it be so difficult to accept that God might have intended for men and women to actually use their high-powered faculties of reason? - to learn, to plan, to make mistakes, to reason, to fail, to try again? - and in this process become more godlike? Instead, errant true-believers often reduce "faith" to a mindless exercise of blindly obeying whatever self-styled authorities serve up as definitions of "the truth."
Professor Daniel N. Robinson, Georgetown University: "We always look back on the long shadow of Socrates, who wrote not a line, while we proceed to write volumes."
Alfred North Whitehead, Science and the Modern World, 1925: "Philosophy asks the simple question, What is it all about?"
Henry David Thoreau: "To be a philosopher [literally: "to love wisdom"] is not merely to have subtle thoughts, nor even to found a school, but so to love wisdom as to live, according to its dictates, a life of simplicity, independence, magnanimity, and trust."
Editor's last word:
I very much like Thoreau's simple definition of what it means to be a philosopher. It's what I'd like to do, how I'd like to live, for the unending future.
We learn from the afterlife testimonies that in the next world we do not suddenly become omniscient with all mysteries unraveled. The citizens of Father Benson’s world employ both science and philosophy to advance Summerland-society in the struggle for knowledge. There as here, philosophy is the advance-guard offering its uncertain theories and hypotheses, with science as the captured territory of empirically-tested information. It’s a long and difficult road, this mapping of reality. It was meant to be – for in this process we evolve ourselves as human beings and claim more of that inner potential “without discernible upper limit.”
Volume 14, Number 10—October 2008
Cryptosporidium Species and Subtypes and Clinical Manifestations in Children, Peru
To determine whether clinical manifestations are associated with genotypes or subtypes of Cryptosporidium spp., we studied a 4-year longitudinal birth cohort of 533 children in Peru. A total of 156 infection episodes were found in 109 children. Data from first infections showed that C. hominis was associated with diarrhea, nausea, vomiting, general malaise, and increased oocyst shedding intensity and duration. In contrast, C. parvum, C. meleagridis, C. canis, and C. felis were associated with diarrhea only. C. hominis subtype families were identified (Ia, Ib, Id, and Ie); all were associated with diarrhea. Ib was also associated with nausea, vomiting, and general malaise. All C. parvum specimens belonged to subtype family IIc. Analysis of risk factors did not show associations with specific Cryptosporidium spp. genotypes or subtypes. These findings strongly suggest that Cryptosporidium spp. and subtypes are linked to different clinical manifestations in children.
Cryptosporidiosis is often observed as a pediatric disease in areas where Cryptosporidium spp. are endemic. Children <2 years of age are frequently infected in these areas in community (1–4) and hospital (5) settings. The spectrum of symptoms is diverse, ranging from acute diarrhea, severe chronic diarrhea (6), or vomiting to asymptomatic infections (2,3). In community-based studies in Peru, ≈30% of immunocompetent children with cryptosporidiosis reported diarrhea (2,7). In AIDS patients, the diversity of symptoms has been linked to immune status; severe chronic diarrhea affects patients whose CD4+ counts are <200 cells/mm3 (8). A recent study in HIV-infected patients in Peru showed that only 38% with Cryptosporidium infections had diarrhea (9), although 64% of participants had CD4+ counts <200 cells/mm3. However, the cause of these variations is not clearly understood.
The use of molecular tools in epidemiologic investigations has provided new insights into the diversity of Cryptosporidium spp. infecting humans and animals (10). There are at least 16 established Cryptosporidium spp. and >40 unnamed genotypes that are potentially different species. At least 8 of them have been reported in humans: C. hominis, C. parvum, C. meleagridis, C. felis, C. canis, C. muris, and C. suis, and the Cryptosporidium cervine genotype. Molecular characterization of the 60-kDa glycoprotein (GP60) gene of C. hominis and C. parvum has enabled further division into subtype families and subtypes (11).
Humans are most frequently infected with C. hominis and C. parvum (7,11,12); recent reports indicate possible associations between these 2 organisms and different clinical manifestations. In Brazil, children infected with C. hominis had increased parasite shedding, more frequent presence of fecal lactoferrin, and delayed growth when compared with those infected with C. parvum (13). In a study of sporadic cryptosporidiosis in the United Kingdom, illness was more severe in persons infected with C. hominis than in those infected with C. parvum (14,15). A recent study reported different clinical manifestations among Cryptosporidium spp. in HIV-positive persons, and C. hominis was linked to more severe symptoms. The high virulence of C. hominis was evident within its subtype family Id, while absent in subtype families Ia and Ie (16).
In this study, we analyzed the diversity of Cryptosporidium at the species, subtype family, and subtype levels in children living in an area with endemic cryptosporidiosis. We also analyzed the association between clinical manifestations and infections with specific Cryptosporidium spp. and C. hominis subtype families.
Specimens and data were obtained from a longitudinal birth cohort study of diarrheal diseases conducted during 1995–1998 in Pampas de San Juan de Miraflores, Lima, Peru. This community was initially settled in the 1980s by immigrants from rural areas. It is located in the outskirts of Lima and had at the time of the study ≈40,000 inhabitants. In this community, the prevalence of HIV infection was <1% (2,7). The study protocol was reviewed and approved by the institutional review boards of Johns Hopkins University and the Centers for Disease Control and Prevention. All participants provided informed consent before participation in the study.
The study participants were asked to provide weekly fecal specimens for microscopic detection of ova and parasites, including Cryptosporidium spp. Stool specimens were washed and concentrated by using the modified Ritchie formalin-ether method and examined for Cryptosporidium spp. oocysts by microscopy of smears stained with a modified acid-fast stain. Intensity of Cryptosporidium spp. oocyst shedding in stools was determined by counting the number of oocysts per 50 μL of concentrated sample. We used a 0 to 3+ scoring system in which 0, negative; 1+, 1–50 oocysts; 2+, 51–150 oocysts; and 3+, >150 oocysts.
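The 0 to 3+ score is a simple threshold mapping of oocyst counts. As a worked illustration only, the Python sketch below encodes that mapping; the function name and the assumption that counts are recorded as integers per 50 μL are ours, not part of the study protocol.

```python
# Hypothetical helper encoding the 0 to 3+ oocyst shedding score described above.
def shedding_score(oocysts_per_50ul: int) -> int:
    """Map an oocyst count per 50 uL of concentrated stool to the 0-3+ scale."""
    if oocysts_per_50ul <= 0:
        return 0   # negative
    if oocysts_per_50ul <= 50:
        return 1   # 1+: 1-50 oocysts
    if oocysts_per_50ul <= 150:
        return 2   # 2+: 51-150 oocysts
    return 3       # 3+: >150 oocysts

assert [shedding_score(n) for n in (0, 30, 100, 200)] == [0, 1, 2, 3]
```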
Genotyping and Subtyping
Cryptosporidium spp. were identified by using a small subunit rRNA-based PCR–restriction fragment length polymorphism genotyping tool (7,12,17). Subtyping of C. hominis and C. parvum was based on sequence analysis of GP60 genes (18). Each specimen was analyzed by either method at least twice. Subtype families within C. hominis and C. parvum were determined on the basis of sequence differences in the nonrepeat region of the gene. Within each subtype family, subtypes differed from each other mostly in the number of serine-coding trinucleotide repeats (TCA, TCG, or TCT microsatellite) located in the 5′ region of the gene. The previously established nomenclature system was used to differentiate subtypes within each subtype family (11,16,17). For C. parvum subtype family IIc, the original GP60 sequence (GenBank accession no. AF164491) was assigned as IIcA5G3a. Subtypes that diverged from this sequence were assigned subsequent alphabetical extensions.
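To make the nomenclature concrete, the hedged Python sketch below composes a GP60 subtype name from counts of serine-coding trinucleotides in an already-extracted, in-frame repeat region. It illustrates the naming convention only; the function names and example sequence are invented, and real subtyping relies on sequencing, alignment, and manual review of the nonrepeat region rather than simple codon counting.

```python
import re

def count_serine_codons(repeat_region: str) -> dict:
    """Count serine-coding trinucleotides in an in-frame microsatellite region."""
    codons = re.findall("...", repeat_region.upper())
    return {c: codons.count(c) for c in ("TCA", "TCG", "TCT")}

def subtype_name(family: str, repeat_region: str, variant: str = "") -> str:
    """Compose a subtype name such as 'IIcA5G3a' from codon counts."""
    n = count_serine_codons(repeat_region)
    name = f"{family}A{n['TCA']}"
    if n["TCG"]:
        name += f"G{n['TCG']}"
    if n["TCT"]:
        name += f"T{n['TCT']}"
    return name + variant  # trailing letter marks sequence variants outside the repeats

# Invented repeat region with 5 TCA and 3 TCG codons -> prints "IIcA5G3a"
print(subtype_name("IIc", "TCA" * 5 + "TCG" * 3, variant="a"))
```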
Associated Clinical Manifestations and Risk Factors
Daily information on clinical manifestations was gathered by using structured questionnaires. These data were collected by study personnel during interviews of adult caregivers of the participants. Data included relevant gastrointestinal symptoms such as abdominal pain, fever, general malaise, nausea, vomiting, number and consistency of bowel movements, and blood in stools.
Study of potential risk factors for infections was based on sanitation and socioeconomic data obtained at study enrollment. These factors included hygiene parameters (water piped inside the house and presence of flush toilets), presence of animals (dogs, chicken, ducks, guinea pigs, rabbits, parrots, and sheep), house infrastructure (sturdy walls and roof), and indirect economic indicators (house infrastructure and possession of electronic appliances).
For the epidemiologic and statistical analyses, we included data from eligible children who had >6 months of participation in the study and <20% noncompliance of study procedures. For the epidemiologic analyses we used the following definitions.
Duration of an infection episode was defined as an episode that started on the first date that Cryptosporidium spp. oocysts were microscopically detected in stools and ended on the date of the last positive stool that was followed by at least 3 weekly specimens that were microscopically negative. The length of the infection episode was the number of days between the start and end dates.
An episode of diarrhea was defined as a child having >3 liquid or semiliquid bowel movements on any day and the mother’s assessment that the child had diarrhea. Diarrhea was considered associated with an episode if it occurred within 7 days of a positive result for Cryptosporidium spp.
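As a hedged illustration of these two definitions, the sketch below computes episode duration (first to last positive stool) and flags diarrhea occurring within 7 days of a positive result. The pandas layout and column names are assumptions for the example; the rule that 3 consecutive negative weekly specimens close an episode only delimits episodes and is not re-implemented here.

```python
import pandas as pd

def episode_duration_days(stools: pd.DataFrame) -> int:
    """Days from the first to the last Cryptosporidium-positive stool in an episode."""
    positives = stools.loc[stools["crypto_positive"], "date"]
    return (positives.max() - positives.min()).days

def diarrhea_associated(stools: pd.DataFrame, diarrhea_dates: pd.Series) -> bool:
    """True if any diarrhea day falls within 7 days of a positive stool."""
    positives = stools.loc[stools["crypto_positive"], "date"]
    return any(abs((d - p).days) <= 7 for d in diarrhea_dates for p in positives)

# Toy episode: positives 12 days apart, diarrhea 3 days after the last positive.
stools = pd.DataFrame({
    "date": pd.to_datetime(["2020-01-01", "2020-01-08", "2020-01-13"]),
    "crypto_positive": [True, False, True],
})
diarrhea = pd.Series(pd.to_datetime(["2020-01-16"]))
print(episode_duration_days(stools), diarrhea_associated(stools, diarrhea))  # 12 True
```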
Statistical analyses included data from participants infected with 1 species of Cryptosporidium and compared children with a specific Cryptosporidium sp. or C. hominis subtype family with all other participants not infected with that species or subtype family. Subtype families were compared because of the extensive sequence polymorphism in the nonrepeat regions of GP60, and subtypes within families primarily differed from each other in the length of the serine stretch at the beginning of the protein. Data from the few children infected with >1 species or subtype determinations that were conflicting with genotype categorizations were excluded from that particular comparison. Because all C. parvum in this population belonged to 1 subtype family, results were presented at the species level. Few participants were infected with C. canis and C. felis and these species are genetically divergent from C. hominis, C. parvum, and C. meleagridis. Therefore, the data for these persons were pooled.
Poisson regression was used to compare incidence rates of gastrointestinal symptoms (dependent variables) and infections with Cryptosporidium spp. or subtype families (independent variables) detected in each infection episode. This model was used to incorporate individual incidence rates of infections and the duration that each person participated in the study. These regression analyses were conducted by using SAS Proc Genmod (SAS Institute, Cary, NC, USA) for linear models. The generalized estimating equations procedure was implemented to adjust for correlation among multiple infections for the same child. Statistical significance for a priori tests was set at α = 0.05. Whenever multiple subtypes were compared, a separate Bonferroni adjustment was used to maintain an overall experiment-wide α of 0.05.
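The study fit these models in SAS Proc Genmod; as a rough, non-authoritative analogue, the Python sketch below fits a GEE Poisson model with an exchangeable working correlation on a small synthetic episode-level data set. The variable names, the synthetic data, and the log person-time offset are assumptions made only to show the shape of such an analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic stand-in for the episode-level data: two episodes per child.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "child_id": np.repeat(np.arange(40), 2),
    "c_hominis": rng.integers(0, 2, 80),          # 1 = episode caused by C. hominis
    "followup_days": rng.integers(200, 400, 80),  # person-time observed
})
df["diarrhea_events"] = rng.poisson(0.002 * df["followup_days"] * (1 + df["c_hominis"]))

# Poisson GEE clustered on child, analogous to Proc Genmod with GEE adjustment.
model = smf.gee(
    "diarrhea_events ~ c_hominis",
    groups="child_id",
    data=df,
    family=sm.families.Poisson(),
    cov_struct=sm.cov_struct.Exchangeable(),
    offset=np.log(df["followup_days"]),           # person-time at risk
)
print(model.fit().summary())
```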
The χ2 or Fisher exact tests were used to analyze any association between Cryptosporidium spp. or subtypes and animal contacts or socioeconomic risk factors. Pooled t test was used to investigate the differences in age at first infection episode among Cryptosporidium spp. and subtype families. All statistical analyses were performed by using SAS version 9.1 (SAS Institute).
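The bivariate tests named above have direct counterparts in scipy.stats. The sketch below runs a χ2 test and a Fisher exact test on an invented 2 × 2 exposure table, and a pooled (equal-variance) t test on invented ages at first infection; none of the numbers come from the study.

```python
import numpy as np
from scipy import stats

# Invented 2x2 table: rows = infected with a given subtype family (yes/no),
# columns = dogs in the household (yes/no).
table = np.array([[12, 18],
                  [30, 49]])
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)
odds_ratio, p_fisher = stats.fisher_exact(table)

# Pooled t test on age (years) at first infection for two species groups.
age_group_a = [1.2, 1.5, 1.8, 0.9, 2.1]
age_group_b = [0.8, 1.1, 1.4, 1.0]
t_stat, p_t = stats.ttest_ind(age_group_a, age_group_b, equal_var=True)

print(p_chi2, p_fisher, p_t)
```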
A total of 533 children were enrolled, and their median age at enrollment was 14 days. They contributed 44,042 stool specimens for detection of enteric parasites and 324,067 child-days of clinical manifestation surveillance.
Prevalence of Cryptosporidiosis
Data from 368 participants who met the evaluable criteria were included in the epidemiologic analyses. Cryptosporidiosis was detected by microscopy for 109 participants, for a total of 156 infection episodes. Among them, 71 children had 1 infection, 30 had 2 infections, 7 had 3 infections, and 1 had 4 infections.
Cryptosporidium spp. Genotypes and Subtypes
Genotype data for Cryptosporidium spp. were obtained from 127 (81%) of 156 infection episodes. Among those genotyped, C. hominis (70%) was the species most frequently detected, followed by C. parvum (13%) and C. meleagridis (8%). In contrast, C. canis and C. felis were detected in 2% and 5% of cases, respectively (Table 1). Among 106 infection episodes with either C. hominis (89) or C. parvum (17), subtype analysis was successfully accomplished for 78 of 89 infections with C. hominis and 14 of 17 infections with C. parvum. Four subtype families were identified within C. hominis: Ia, Ib, Id, and Ie, the least frequent was Id. All infections with C. parvum belonged to subtype family IIc. Novel subtype sequences were deposited in GenBank under accession nos. EU095258–EU095267 (Table 2).
Several subtypes were found within subtype families Ia and Id of C. hominis and IIc of C. parvum. Subtype family Ia was the most diverse with 6 subtypes, followed by subtype families Id and IIc, each with 3 subtypes. In contrast, subtype families Ib and Ie each had only 1 subtype: IbA10G2 was the only subtype in subtype family Ib and IeA11G3T3 was the only subtype in subtype family Ie (Table 2).
Cryptosporidium spp. and Oocyst Shedding
The mean age at first infection was 1.6 years (median 1.4 years, range 0.2–4.7 years). Infections with C. parvum occurred at a younger age than those with other genotypes, and infections with C. canis or C. felis occurred in older children. However, these differences were not statistically significant after the Bonferroni correction (Table 3).
The mean duration of the first infection episode was 8.1 days (median 5.5 days, range 1–40 days). Infections with C. hominis (mean 10.3 days) lasted longer than infections with other species of Cryptosporidium (mean 5.8 days; p = 0.001). The length of the infection episodes among children infected with different subtype families of C. hominis was not significantly different (9.3, 13.1, 7.7, and 12.8 days for Ia, Ib, Id, and Ie, respectively).
Similar patterns were observed for intensity of parasite excretion. Children infected with C. hominis had higher parasite excretion scores (mean 1.93) than those infected with other species of Cryptosporidium (mean 1.42; p = 0.021). Among children infected with different subtype families of C. hominis, the intensity of parasite shedding was similar.
Sequential Cryptosporidium spp. Infections
Among children with complete genotyping data, sequential infections were detected in 17 children: 15 had 2 episodes of Cryptosporidium spp. infection and 2 had 3 episodes (total of 19 reinfection events). The median interval between infections was 10 months (range 2.1–26 months). The same Cryptosporidium sp. was detected in 6 of 15 children with 2 episodes and 1 of 2 children with 3 infections, all involving C. hominis (Table 4). When analysis of reinfections included C. hominis subtype family data, only 2 sequential infections occurred with the same subtype family: child 5395 had C. hominis subtype family Id in the first and second infections, and child 5076 had C. hominis subtype family Ie in the second and third episodes of cryptosporidiosis.
Cryptosporidium spp. and Subtypes and Associated Clinical Manifestations
Distribution of species and subtype families at first infection among 109 Cryptosporidium spp.–infected children was similar to the distribution in all infection episodes. A second model analyzed the data from all infection episodes (Table 5).
On the basis of microscopy results, 36% of infected children had diarrhea, 28.4% had general malaise, 16.5% had abdominal pain, 15.7% had vomiting, and 7.9% had nausea. None of the study participants reported fever or blood in stools. Overall, 44.1% reported >1 of the manifestations assessed in the study.
Associated clinical manifestations at first infection varied among different Cryptosporidium spp. First infections with C. hominis were associated with nausea, vomiting, general malaise, and diarrhea (Table 5). In contrast, infections with other species were associated with diarrhea only.
Patterns of clinical manifestations also varied among C. hominis subtype families. Infections with subtype family Ib were associated with nausea, vomiting, general malaise, and diarrhea. Infections with other subtype families (Ia, Id, and Ie) were generally associated with diarrhea only. A similar trend was also seen in the cumulative analysis of all infection episodes at the species and subtype family levels. A possible exception was C. hominis subtype family Ia, which showed an association with nausea and vomiting at first infections but did not show such an association in the cumulative analysis of all infection episodes (Table 5).
Rates of clinical manifestations in our study were lower than rates reported for a birth cohort in Brazil, where 81% of 42 participants infected with C. hominis or C. parvum had diarrhea (13). This difference can be attributed to differences in study designs. Our study analyzed weekly stool samples for the presence of Cryptosporidium spp. and other parasites in a cohort of healthy children. In contrast, the cohort study in Brazil was designed to identify causes of diarrhea, and the specimens were collected within 2 weeks of clinical identification of diarrhea.
C. hominis was the predominant species in this community-based longitudinal study, followed by C. parvum (7). This predominance of C. hominis has been observed in persons in other developing countries, such as pediatric populations from Malawi (19), Kenya (20), India (21), Haiti (22), and Brazil (13), children and elderly persons from South Africa (23), and hospitalized HIV-infected children from South Africa and Uganda (24,25). As reported in previous studies (21,24,26,27), we also detected few concurrent infections with multiple Cryptosporidium spp. or C. hominis subtype families.
We observed a comparatively large proportion of participants infected with C. meleagridis, a finding that was also reported at a high frequency in HIV-infected adults in Lima, Peru (12,16). This species has been rarely reported for studies from other locations such as Portugal (28), India (21,26,29), Taiwan (30), or Iran (31) that included either children or adults with or without HIV infections. It should be noted that the diversity of Cryptosporidium spp. is also affected by the methods used. We used a genotyping tool proven to distinguish several dozen species and genotypes. However, methods based on genes coding for a 70-kDa heat-shock protein (32), Cryptosporidium spp. oocyst wall protein (33), or a smaller fragment of the small subunit rRNA gene (34) discriminate fewer Cryptosporidium spp. and genotypes.
Overall, distribution of species and C. hominis subtype families in our study was similar to that found in an HIV study in Lima, Peru (12,16). These 2 studies were conducted in the same area but in different study populations. In both studies, all C. parvum specimens belonged to subtype family IIc, which is considered anthroponotic in origin (17). The normally zoonotic subtype family IIa was not seen in our study population. This finding is also supported by our risk factor data, which showed the lack of bovines in the study households and the absence of cattle farms in or near the community of Pampas de San Juan. The similarity of the species and subtype distribution in both studies is highly suggestive that the prevalence of Cryptosporidium spp. and subtypes in a specific location is independent of the immune status of the study population.
The role of parasite genetics in clinical manifestations of cryptosporidiosis is not clear. Studies of human volunteers showed that exposure provided some degree of protection against infection and illness; the infection rates and frequencies of infection-associated clinical manifestations were lower for subsequent infections (35). Thus, clinical manifestations caused by parasite differences would be better observed in primary infections. Our longitudinal birth cohort study enrolled children at an early age (median 14 days), which enabled us to study genotypes and subtypes present at first infections and their associations with different clinical manifestations.
First infections with all species and C. hominis subtype families were associated with diarrhea. However, only C. hominis subtype family Ib was also associated with nausea, vomiting, and general malaise, but C. hominis subtype families Ia, Id, and Ie, and other Cryptosporidium spp. were not. Previously, other studies had suggested that C. hominis might be more pathogenic than other species or might induce different clinical manifestations (13,15,21). Our results indicate that within C. hominis, subtype family Ib may be more pathogenic than Ia, Id, and Ie. Subtype family Ib of C. hominis is the most frequently detected Cryptosporidium spp. in waterborne outbreaks of cryptosporidiosis in industrialized nations (36).
A previous study of cryptosporidiosis in HIV-infected persons in Peru showed that infections with different species or subtype families were associated with different clinical manifestations. Patients infected with subtype families Ib and Id of C. hominis, C. parvum, or C. canis/C. felis were more likely to have chronic diarrhea, and patients infected with C. parvum were more likely to have infection-associated vomiting (16). Overall, subtype family Id was the most virulent in the HIV study and was strongly associated with diarrhea in general and chronic diarrhea in particular. Subtype family Ib was also marginally associated with diarrhea and vomiting but not with chronic diarrhea. In this study, however, Id was only associated with diarrhea. This difference may be caused by the fact that chronic cryptosporidiosis, the life-threatening manifestation of the disease in AIDS patients, was never detected in this study of pediatric patients, and few children in this study were infected with subtype family Id, which might have prevented us from assessing its clinical manifestations fully. Nevertheless, our study corroborated the previous observation of defined patterns of clinical manifestations associated with different Cryptosporidium spp. and C. hominis subtype families.
We also conducted a risk factor analysis for predictors of infection, including age at first infection, in which we did not identify statistically significant associations between any Cryptosporidium spp. or subtype families and any of the variables analyzed, although they covered basic aspects of sanitation and zoonotic, foodborne, and waterborne transmission. One possible explanation is that our questionnaires did not obtain data on factors that were relevant. However, the same questionnaire successfully identified infection risk factors for other organisms in the same community (2). A more likely explanation is that because most Cryptosporidium spp. in this study were anthroponotic in origin, children may be constantly exposed to these ubiquitous parasites through different transmission routes. Therefore, single exposure variables were not identified as risk factors. This constant exposure may also fit the age distribution pattern of cryptosporidiosis in the community, in which most cases are found in children <2 years of age, occasionally found in older children, and almost never found in immunocompetent adults. This finding is in contrast to transmission of Cryptosporidium spp. in industrialized nations, where infections have been frequently associated with waterborne transmission from either drinking water (37) or recreational water (38).
In conclusion, clinical manifestations of cryptosporidiosis in healthy populations in disease-endemic areas are likely diverse, and the spectrum of these clinical manifestations can be attributed in part to the different species of Cryptosporidium and subtype families of C. hominis. Although further laboratory and longitudinal cohort studies in other disease-endemic areas are needed to validate our observations, these results demonstrate that parasite genetics may play an important role in the clinical manifestations of human cryptosporidiosis. Future studies should be conducted in different geographic settings; they should overcome some potential limitations of this study, such as lack of data on other gastrointestinal pathogens, which might have confounded the clinical findings, and small sample sizes, which had limited the power of the statistical analyses.
Dr Cama is a microbiologist at the Centers for Disease Control and Prevention. His research interests are the molecular epidemiology and transmission dynamics of enteric pathogens, primarily Cryptosporidium spp., microsporidia, Cyclospora spp., and Giardia spp.
We thank our study personnel in Pampas de San Juan de Miraflores for excellent work; Carmen Taquiri for invaluable efforts in the parasitology laboratory; Marco Varela for data management; and Paula Maguiña, Ana Rosa Contreras, and Paola Maurtua for administrative support.
This study was supported in part by National Institutes of Health–National Institute for Allergy and Infectious Diseases (NIH-NIAID) grant U01-AI35894 and charitable RG-ER funds, which are concerned with health in developing countries. R.H.G and V.A.C. were supported in part by NIH-NIAID grants 5P01AI051976 and 5R21 AI059661.
- Mata L, Bolanos H, Pizarro D, Vives M. Cryptosporidiosis in children from some highland Costa Rican rural and urban areas. Am J Trop Med Hyg. 1984;33:24–9.
- Bern C, Ortega Y, Checkley W, Roberts JM, Lescano AG, Cabrera L, Epidemiologic differences between cyclosporiasis and cryptosporidiosis in Peruvian children. Emerg Infect Dis. 2002;8:581–5.
- Priest JW, Bern C, Roberts JM, Kwon JP, Lescano AG, Checkley W, Changes in serum immunoglobulin G levels as a marker for Cryptosporidium sp. infection in Peruvian children. J Clin Microbiol. 2005;43:5298–300.
- Simango C, Mutikani S. Cryptosporidiosis in Harare, Zimbabwe. Cent Afr J Med. 2004;50:52–4.
- Tzipori S, Smith M, Birch C, Barnes G, Bishop R. Cryptosporidiosis in hospital patients with gastroenteritis. Am J Trop Med Hyg. 1983;32:931–4.
- Sallon S, Deckelbaum RJ, Schmid II, Harlap S, Baras M, Spira DT. Cryptosporidium, malnutrition, and chronic diarrhea in children. Am J Dis Child. 1988;142:312–5.
- Xiao L, Bern C, Limor J, Sulaiman I, Roberts J, Checkley W, Identification of 5 types of Cryptosporidium parasites in children in Lima, Peru. J Infect Dis. 2001;183:492–7.
- Flanigan T, Whalen C, Turner J, Soave R, Toerner J, Havlir D, Cryptosporidium infection and CD4 counts. Ann Intern Med. 1992;116:840–2.
- Bern C, Kawai V, Vargas D, Rabke-Verani J, Williamson J, Chavez-Valdez R, The epidemiology of intestinal microsporidiosis in patients with HIV/AIDS in Lima, Peru. J Infect Dis. 2005;191:1658–64.
- Xiao L, Fayer R, Ryan U, Upton SJ. Cryptosporidium taxonomy: recent advances and implications for public health. Clin Microbiol Rev. 2004;17:72–97.
- Sulaiman IM, Hira PR, Zhou L, Al-Ali FM, Al-Shelahi FA, Shweiki HM, Unique endemicity of cryptosporidiosis in children in Kuwait. J Clin Microbiol. 2005;43:2805–9.
- Cama VA, Bern C, Sulaiman IM, Gilman RH, Ticona E, Vivar A, Cryptosporidium species and genotypes in HIV-positive patients in Lima, Peru. J Eukaryot Microbiol. 2003;50(Suppl):531–3.
- Bushen OY, Kohli A, Pinkerton RC, Dupnik K, Newman RD, Sears CL, Heavy cryptosporidial infections in children in northeast Brazil: comparison of Cryptosporidium hominis and Cryptosporidium parvum. Trans R Soc Trop Med Hyg. 2007;101:378–84.
- Hunter PR, Hughes S, Woodhouse S, Syed Q, Verlander NQ, Chalmers RM, Sporadic cryptosporidiosis case-control study with genotyping. Emerg Infect Dis. 2004;10:1241–9.
- Hunter PR, Hughes S, Woodhouse S, Raj N, Syed Q, Chalmers RM, Health sequelae of human cryptosporidiosis in immunocompetent patients. Clin Infect Dis. 2004;39:504–10.
- Cama VA, Ross JM, Crawford S, Kawai V, Chavez-Valdez R, Vargas D, Differences in clinical manifestations among Cryptosporidium species and subtypes in HIV-infected persons. J Infect Dis. 2007;196:684–91.
- Xiao L, Ryan UM. Cryptosporidiosis: an update in molecular epidemiology. Curr Opin Infect Dis. 2004;17:483–90.
- Alves M, Xiao L, Sulaiman I, Lal AA, Matos O, Antunes F. Subgenotype analysis of Cryptosporidium isolates from humans, cattle, and zoo ruminants in Portugal. J Clin Microbiol. 2003;41:2744–7.
- Peng MM, Meshnick SR, Cunliffe NA, Thindwa BD, Hart CA, Broadhead RL, Molecular epidemiology of cryptosporidiosis in children in Malawi. J Eukaryot Microbiol. 2003;50(Suppl):557–9.
- Gatei W, Wamae CN, Mbae C, Waruru A, Mulinge E, Waithera T, Cryptosporidiosis: prevalence, genotype analysis, and symptoms associated with infections in children in Kenya. Am J Trop Med Hyg. 2006;75:78–82.
- Ajjampur SS, Gladstone BP, Selvapandian D, Muliyil JP, Ward H, Kang G. Molecular and spatial epidemiology of cryptosporidiosis in children in a semiurban community in south India. J Clin Microbiol. 2007;45:915–20.
- Raccurt CP, Brasseur P, Verdier RI, Li X, Eyma E, Stockman CP, Human cryptosporidiosis and Cryptosporidium spp. in Haiti. Trop Med Int Health. 2006;11:929–34.
- Samie A, Bessong PO, Obi CL, Sevilleja JE, Stroup S, Houpt E, Cryptosporidium species: preliminary descriptions of the prevalence and genotype distribution among school children and hospital patients in the Venda region, Limpopo Province, South Africa. Exp Parasitol. 2006;114:314–22.
- Tumwine JK, Kekitiinwa A, Bakeera-Kitaka S, Ndeezi G, Downing R, Feng X, Cryptosporidiosis and microsporidiosis in Ugandan children with persistent diarrhea with and without concurrent infection with the human immunodeficiency virus. Am J Trop Med Hyg. 2005;73:921–5.
- Leav BA, Mackay MR, Anyanwu A, O’Connor RM, Cevallos AM, Kindra G, Analysis of sequence diversity at the highly polymorphic Cpgp40/15 locus among Cryptosporidium isolates from human immunodeficiency virus-infected children in South Africa. Infect Immun. 2002;70:3881–90.
- Gatei W, Das P, Dutta P, Sen A, Cama V, Lal AA, Multilocus sequence typing and genetic structure of Cryptosporidium hominis from children in Kolkata, India. Infect Genet Evol. 2007;7:197–205.
- Cama V, Gilman RH, Vivar A, Ticona E, Ortega Y, Bern C, Mixed Cryptosporidium infections and HIV. Emerg Infect Dis. 2006;12:1025–8.
- Matos O, Alves M, Xiao L, Cama V, Antunes F. Cryptosporidium felis and C. meleagridis in persons with HIV, Portugal. Emerg Infect Dis. 2004;10:2256–7.
- Das P, Roy SS, Mitradhar K, Dutta P, Bhattacharya MK, Sen A, Molecular characterization of Cryptosporidium spp. in children in Kolkata, India. J Clin Microbiol. 2006;44:4246–9.
- Hung CC, Tsaihong JC, Lee YT, Deng HY, Hsiao WH, Chang SY, Prevalence of intestinal infection due to Cryptosporidium species among Taiwanese patients with human immunodeficiency virus infection. J Formos Med Assoc. 2007;106:31–5.
- Meamar AR, Guyot K, Certad G, Dei-Cas E, Mohraz M, Mohebali M, Molecular characterization of Cryptosporidium isolates from humans and animals in Iran. Appl Environ Microbiol. 2007;73:1033–5.
- Sulaiman IM, Morgan UM, Thompson RC, Lal AA, Xiao L. Phylogenetic relationships of Cryptosporidium parasites based on the 70-kilodalton heat shock protein (HSP70) gene. Appl Environ Microbiol. 2000;66:2385–91.
- Spano F, Putignani L, McLauchlin J, Casemore DP, Crisanti A. PCR-RFLP analysis of the Cryptosporidium oocyst wall protein (COWP) gene discriminates between C. wrairi and C. parvum, and between C. parvum isolates of human and animal origin. FEMS Microbiol Lett. 1997;150:209–17.
- Sturbaum GD, Reed C, Hoover PJ, Jost BH, Marshall MM, Sterling CR. Species-specific, nested PCR-restriction fragment length polymorphism detection of single Cryptosporidium parvum oocysts. Appl Environ Microbiol. 2001;67:2665–8.
- Okhuysen PC, Chappell CL, Sterling CR, Jakubowski W, DuPont HL. Susceptibility and serologic response of healthy adults to reinfection with Cryptosporidium parvum. Infect Immun. 1998;66:441–3.
- Xiao L, Rayn U. Molecular epidemiology. In: Fayer R, Xiao L, editors. Cryptosporidium and cryptosporidiosis. 2nd ed. Boca Raton (FL): Taylor and Francis; 2007. p. 119–71.
- Sopwith W, Osborn K, Chalmers R, Regan M. The changing epidemiology of cryptosporidiosis in north west England. Epidemiol Infect. 2005;133:785–93.
- Yoder JS, Beach MJ. Cryptosporidiosis surveillance—United States, 2003–2005. MMWR Surveill Summ. 2007;56:1–10. | <urn:uuid:018ed3fb-4d2a-41aa-aaac-dc063eecfe31> | CC-MAIN-2022-33 | https://wwwnc.cdc.gov/eid/article/14/10/07-1273_article | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572304.13/warc/CC-MAIN-20220816120802-20220816150802-00495.warc.gz | en | 0.918255 | 7,540 | 2.515625 | 3 |
Written by: Sandeep Ravindran, Ph.D. | Issue # 43 | 2015
- Certain types of milk sugars, called oligosaccharides, form the third largest component of human milk.
- These human milk oligosaccharides (HMOs) have been shown to positively influence the gut microbiome and immunity.
- These sugars are structurally complex and diverse and, as a result, extracting or synthesizing them for use in formula has been challenging.
- Researchers are studying different ways to obtain these sugars, including extracting them from cow milk, chemically or enzymatically synthesizing them, or using microbes to produce them.
- At the moment, only a handful of simple HMOs have been produced at a large scale, but many others have been synthesized in smaller amounts.
- Extraction and synthesis techniques are continuing to improve, but we are still a long way from replicating the full diversity and complexity of sugars in human milk.
It’s well known that human milk is good for you (1-5). Sugars, called oligosaccharides, form the third largest component of human milk and have been associated with many beneficial effects. These human milk oligosaccharides (HMOs) have been shown to influence the composition of the gut microbiome, modulate the immune system, and help protect against pathogens (6-11, 22). HMOs act as prebiotics, promoting the growth of certain beneficial bacteria while suppressing the growth of other disease-causing bacteria (12-18). In addition, some HMOs have been found to mimic the attachment sites of harmful bacteria and thus block their ability to attach to and invade cells in the infant intestine (19, 20). HMOs may also be involved in the development of the infant gut, immune system, and brain (8-11).
Given the various benefits of HMOs, there has been a lot of interest in figuring out how to introduce HMOs into formula. However, more than 200 human milk oligosaccharides have been discovered so far, and their variety and complexity makes them challenging to synthesize (21-23). “Right now there are no formula where human milk oligosaccharides are being added,” says Geert-Jan Boons, Professor of Chemistry at the University of Georgia.
In an effort to deliver some of the benefits of HMOs, current dietary products sometimes include simpler oligosaccharides, often derived from plants (19, 24). Some of these simpler oligosaccharides have been reported to have prebiotic effects, but they do not have the structural complexity and diversity of HMOs. The effects of HMOs are considered to be highly structure-dependent, so in order to better replicate their function researchers are trying to produce oligosaccharides more similar to those in human milk.
“The bottom-line is that the carbohydrates that are being added right now to formula are not the carbohydrates found in human breast milk,” says Boons.
Extracting HMOs from milk
One way to obtain the same oligosaccharides found in human milk would be to purify them directly from breast milk. About a year ago, a press release by startup Medolac Laboratories announced its ability to commercially purify large amounts of native HMOs from donor human milk (25). But the difficulty of obtaining large amounts of human milk for commercial HMO production means that the majority of efforts have focused on other approaches to obtaining these molecules.
One such approach involves concentrating and extracting HMOs from cow milk. The oligosaccharides in cow milk are structurally similar to those in human milk, but their concentration is much lower (24, 26). Cow milk is already the most common milk used for infant formula in the U.S., so oligosaccharides extracted from it would be expected to be safe for human consumption. Researchers are trying to use filtration techniques to remove most of the lactose and salts from cow milk and increase the concentration of oligosaccharides.
The University of California, Davis milk processing lab is developing methods to extract large quantities of both human and bovine milk oligosaccharides from cow milk, according to Daniela Barile, an Associate Professor of food science and technology at UC Davis. These sugars could be tested in animal studies to determine whether they provide the beneficial effects associated with HMOs. Cow milk, and in particular whey—the liquid part of milk that separates from the curd during cheese production—could thus potentially serve as a way to produce commercial oligosaccharides with similar benefits to those in human milk.
“The technology’s in place, so we should be able nowadays to isolate oligosaccharides from whey,” says Barile. “Whey is a great source, but there are still great challenges if you want to really reach good purity and have a reproducible process batch to batch,” she says.
Individual oligosaccharides from cow’s milk are not exactly identical to those in humans, but an advantage of this technique is its ability to replicate some of the oligosaccharide diversity found in human milk, says Barile. Other methods have so far only been able to produce a handful of these sugars, she says. “If you really want to say that you want to mimic human milk, instead of having just one or two oligosaccharides you want to have the full complement,” says Barile. “The synthesis approach has been making a lot of progress, and they can now make bigger quantities, but it’s not representing the very complex constellation of different structures that is found in human milk,” she says.
“Right now, the isolation process can yield a better diversity than synthesis gets. So you can have more structures, you can have more molecules, so it’s more similar to human milk,” says Barile. “But there are still many challenges,” she says. “There is not a single product in the market right now made of oligosaccharide isolated from whey, so it’s all in the future,” says Barile. “We are at the beginning of the process, there’s still a long way to go.”
Oligosaccharides can be synthesized through a series of chemical reactions, and that’s another approach that researchers have been pursuing. “The challenge is, we do not have robust technology to make complex carbohydrates at this time,” says Geert-Jan Boons. Unlike the process by which DNA is used to produce RNA and RNA is used to produce proteins, carbohydrates are not biosynthesized through a template-mediated process, Boons says. “If DNA goes to RNA goes to protein, that gives exact copies. When carbohydrates are being biosynthesized, because it’s not a template, you create heterogeneity,” he says.
“There are laboratories that are trying to automate chemical oligosaccharide synthesis in the way a peptide can be synthesized on machines off a standardized protocol,” Boons says. “The protocols are still not very robust, but progress is being made,” he says.
Glycom is one company that is using chemical processes for HMO synthesis, although the company also uses other production methods. However, using chemical synthesis to create commercial quantities of HMOs without making them prohibitively expensive could be a challenge. “The beauty of chemical synthesis is, they can make any HMO,” says Yong-Su Jin, Associate Professor of Food Science and Human Nutrition at the University of Illinois. “However, the cost would be much, much higher,” he says.
Stefan Jennewein, Managing Director and cofounder of one of Glycom’s competitors, Jennewein Biotechnologie, believes there are multiple issues that make chemical synthesis impractical for commercial production of HMOs. “At the time we founded the company, several chemical processes were established relying on chemical synthesis, which however are based on the use of toxic reagents like pyridine and chloroform and other noxious chemicals,” says Jennewein. While these processes might work fine at a small scale with extensive purification, they have high costs and lack scalability, he says. “Many in the industry are convinced that these processes are not compatible with food production,” Jennewein says.
Instead of chemical synthesis, Jennewein Biotechnologie and many other companies and researchers use genetically engineered microbes to produce HMOs. “Microbial production is a very stable and safe method,” says Yong-Su Jin. It’s similar to techniques already used for large-scale production of amino acids and vitamins, he says. “It is a very robust and safe way to mass-produce food quality material,” says Jin.
Jin genetically engineers microbes to introduce the enzymes necessary to produce HMOs. At the moment he is using either the bacteria Escherichia coli, which is often used to produce proteins and metabolites, or the yeast Saccharomyces cerevisiae, used in baking, winemaking and brewing.
“So far we are using these two microorganisms to produce human milk oligosaccharides,” says Jin. “In particular we are making 2’-Fucosyllactose (2-FL), which is one of the most abundant HMOs in human milk,” he says (27). “We did very subtle chemical analyses, and we are very confident that our 2-FL is the same as 2-FL in human milk,” says Jin.
“That’s one advantage of biological production,” Jin says. “With chemical synthesis, you may have some minor modification somewhere, but enzymatic or microbial production have high fidelity in the reaction,” he says.
Jin says he is able to use genetically engineered E. coli to make up to 2-3 gm/L of 2-FL in the medium, similar to its levels in milk. Even though the E. coli strain he is using is very different from the ones that cause food disease, Jin says that, due to negative public perceptions about E. coli, he is now switching to using yeast. “Because they drink wine and beer or eat bread everyday, people believe that this strain is safer, so we are trying to make 2-FL in yeast right now,” he says. He says he is still optimizing 2-FL production in yeast to produce similar levels to that in E. coli.
Companies including Glycom, Jennewein Biotechnologie, and Glycosyn LLC are working on producing simple HMOs at a much larger scale for commercial use. “Several companies are currently developing formula containing 2’-fucosyllactose, but also other HMOs will soon enter the stage,” says Stefan Jennewein. Jennewein Biotechnologie produces 2-FL at a commercial scale using genetically engineered E. coli, and has been seeking market approval for their products. “We were the very first who filed for a Novel Food application in the EU for a food ingredient originating from a recombinant bacterial process,” says Stefan Jennewein. “In 2014 we obtained GRAS (Generally Recognized as Safe) status for Infant and Toddler Nutrition and General Nutrition in the US,” he says. The company is filing for registration in other major markets, and has been building production capacity for large-scale production of HMOs. “We completed the world’s first commercial multi-ton facility for HMOs in 2014, which is fully certified for food production,” says Jennewein.
Although microbial production can produce HMOs at a large scale, it has so far only been used to make a few of the simpler HMOs. Researchers are still trying to figure out how to expand the repertoire of HMOs that can be produced using microbes. “The good news is, although we have these 180 or so HMOs, they are not random chemical structures,” says Jin. “If we look at the basal structures, only 2-3 different sugars are connected with some rule,” he says. “So it doesn’t mean that we need to construct 180 strains with different biochemical pathways. Maybe if we make only 6 or 7 pathways, by mixing and matching combinations we will be able to create 180 different HMOs.”
“It’s like Lego blocks,” says Jin. “If you have 3 Lego blocks, then you can create 20 or 50 different shapes,” he says. “So in the future, we should be able to make any desired HMO by microbial production,” says Jin. “But I think it’s still very far off,” he says.
An enzymatic approach
To create some of the more complex oligosaccharides, Geert-Jan Boons and others have been focusing on enzymatic methods. “We have also very complex oligosaccharides in milk, and it is our belief that they are actually the compounds that perform very specific biological functions,” says Boons. “Those are not easily accessible right now,” he says.
Boons says his research group has been able to express almost every mammalian enzyme involved in modifying complex sugars, and he is working on using these enzymes to produce almost every human milk oligosaccharide. “The caveat is, we can produce only small amounts,” he says. Although the technology may not be able to produce commercial levels of HMOs, Boons says it will be very helpful for research purposes, to find out which HMOs are beneficial and what their functions are.
“I think that discovery, what these molecules actually do and which ones are the interesting ones, that will be done through chemical and enzymatic synthesis,” says Boons. “So, we will go through a discovery phase, find out how these molecules actually perform their beneficial properties, and which are really the active components, and create a mixture that can make a big difference for humans,” he says.
“Large scale production will be done through biotechnology, with cells that are engineered to produce these oligosaccharides,” says Boons. “I think what will be done in the next couple of years is, the relatively simple oligosaccharides, which can now be produced at a relatively large scale, they will move into the clinic,” he says. “Basic scientists like me, we will develop protocols to make the more complex ones, and they will be examined in cell culture and animal models, and when we begin to understand how they work, they will move into the clinic,” says Boons. “So, a lot of exciting things are happening,” he says.
However, it’s going to take a while before scientists or companies are able to produce formula that contains all the oligosaccharides found in human milk. “We are still a long way from making an artificial human milk oligosaccharide composition,” Boons says. “What we can do is begin to supplement cow milk with the main simple oligosaccharides found in human milk,” he says. “To make the whole structural diversity found in human milk, that is still quite far away,” says Boons.
Yong-Su Jin is confident that a combination of academia and industry will figure out ways to produce HMOs in the same manner they were able to achieve the production of many other oligosaccharides over the last five or 10 years. “The last 2-3 years have been very exciting,” he says. “Before that, although we had publications about the beneficial effects of HMOs, I didn’t see that much commercial activity, but now I see more and more infant formula companies interested in adding HMO into their product,” Jin says. “So, it’s a very exciting time,” he says. “I’m very optimistic, but we are still at a very early stage.”
1. Section on Breastfeeding. Breastfeeding and the use of human milk. Pediatrics. American Academy of Pediatrics; 2012 Mar;129(3):e827–41.
2. Duijts L, Jaddoe VWV, Hofman A, Moll HA. Prolonged and exclusive breastfeeding reduces the risk of infectious diseases in infancy. Pediatrics. American Academy of Pediatrics; 2010 Jul;126(1):e18–25.
3. Blaymore Bier J-A, Oliver T, Ferguson A, Vohr BR. Human milk reduces outpatient upper respiratory symptoms in premature infants during their first year of life. J Perinatol. Nature Publishing Group; 2002 Jul;22(5):354–9.
4. Ip S, Chung M, Raman G, Chew P, Magula N, DeVine D, et al. Breastfeeding and maternal and infant health outcomes in developed countries. Evid Rep Technol Assess (Full Rep). 2007 Apr;(153):1–186.
5. Quigley MA, Kelly YJ, Sacker A. Breastfeeding and hospitalization for diarrheal and respiratory infection in the United Kingdom Millennium Cohort Study. Pediatrics. American Academy of Pediatrics; 2007 Apr;119(4):e837–42.
6. Stepans MBF, Wilhelm SL, Hertzog M, Rodehorst TKC, Blaney S, Clemens B, et al. Early consumption of human milk oligosaccharides is inversely related to subsequent risk of respiratory and enteric disease in infants. Breastfeed Med. Mary Ann Liebert, Inc. 2 Madison Avenue Larchmont, NY 10538 USA; 2006;1(4):207–15.
7. Zivkovic AM, German JB, Lebrilla CB, Mills DA. Human milk glycobiome and its impact on the infant gastrointestinal microbiota. Proc Natl Acad Sci USA. National Acad Sciences; 2011 Mar 15;108 Suppl 1(Supplement_1):4653–8.
8. Bode L. Human milk oligosaccharides: every baby needs a sugar mama. Glycobiology. Oxford University Press; 2012 Sep;22(9):1147–62.
9. Kuntz S, Rudloff S, Kunz C. Oligosaccharides from human milk influence growth-related characteristics of intestinally transformed and non-transformed intestinal cells. Br J Nutr. Cambridge University Press; 2008 Mar;99(3):462–71.
10. Rabinovich GA, van Kooyk Y, Cobb BA. Glycobiology of immune responses. Ann N Y Acad Sci. Blackwell Publishing Inc; 2012 Apr;1253(1):1–15.
11. de Kivit S, Kraneveld AD, Garssen J, Willemsen LEM. Glycan recognition at the interface of the intestinal immune system: target for immune modulation via dietary components. Eur J Pharmacol. 2011 Sep;668 Suppl 1:S124–32.
12. Ward RE, Niñonuevo M, Mills DA, Lebrilla CB, German JB. In vitro fermentation of breast milk oligosaccharides by Bifidobacterium infantis and Lactobacillus gasseri. Appl Environ Microbiol. 2006 Jun;72(6):4497–9.
13. Harmsen HJ, Wildeboer-Veloo AC, Raangs GC, Wagendorp AA, Klijn N, Bindels JG, et al. Analysis of intestinal flora development in breast-fed and formula-fed infants by using molecular identification and detection methods. J Pediatr Gastroenterol Nutr. 2000 Jan;30(1):61–7.
14. Ninonuevo MR, Ward RE, LoCascio RG, German JB, Freeman SL, Barboza M, et al. Methods for the quantitation of human milk oligosaccharides in bacterial fermentation by mass spectrometry. Anal Biochem. 2007 Feb 1;361(1):15–23.
15. Yu Z-T, Chen C, Kling DE, Liu B, McCoy JM, Merighi M, et al. The principal fucosylated oligosaccharides of human milk exhibit prebiotic properties on cultured infant microbiota. Glycobiology. Oxford University Press; 2013 Feb;23(2):169–77.
16. Marcobal A, Barboza M, Froehlich JW, Block DE, German JB, Lebrilla CB, et al. Consumption of human milk oligosaccharides by gut-related microbes. J Agric Food Chem. 2010 May 12;58(9):5334–40.
17. LoCascio RG, Ninonuevo MR, Freeman SL, Sela DA, Grimm R, Lebrilla CB, et al. Glycoprofiling of bifidobacterial consumption of human milk oligosaccharides demonstrates strain specific, preferential consumption of small chain glycans secreted in early human lactation. J Agric Food Chem. 2007 Oct 31;55(22):8914–9.
18. Asakuma S, Hatakeyama E, Urashima T, Yoshida E, Katayama T, Yamamoto K, et al. Physiology of consumption of human milk oligosaccharides by infant gut-associated bifidobacteria. J Biol Chem. American Society for Biochemistry and Molecular Biology; 2011 Oct 7;286(40):34583–92.
19. Bode L. Human milk oligosaccharides: prebiotics and beyond. Nutrition Reviews. 2009 Nov;67:S183–91.
20. Espinosa RM, Taméz M, Prieto P. Efforts to emulate human milk oligosaccharides. Br J Nutr. Cambridge University Press; 2007 Oct;98 Suppl 1(S1):S74–9.
21. Ninonuevo MR, Park Y, Yin H, Zhang J, Ward RE, Clowers BH, et al. A strategy for annotating the human milk glycome. J Agric Food Chem. American Chemical Society; 2006 Oct 4;54(20):7471–80.
22. Ruhaak LR, Lebrilla CB. Advances in analysis of human milk oligosaccharides. Advances in Nutrition: An International Review Journal. American Society for Nutrition; 2012 May;3(3):406S–14S.
23. Fong B, Ma K, McJarrow P. Quantification of bovine milk oligosaccharides using liquid chromatography-selected reaction monitoring-mass spectrometry. J Agric Food Chem. American Chemical Society; 2011 Sep 28;59(18):9788–95.
24. Zivkovic AM, Barile D. Bovine Milk as a Source of Functional Oligosaccharides for Improving Human Health. Advances in Nutrition: An International Review Journal. 2011 May 10;2(3):284–9.
25. Medolac Laboratories Announces the First Large Scale Purification of Human Milk Oligosaccharides (HMO) [Internet]. 2014. Available from: http://www.medolac.com/uploads/8/8/1/4/8814177/medolac_hmo_final.pdf
26. Mehra R, Barile D, Marotta M, Lebrilla CB, Chu C, German JB. Novel High-Molecular Weight Fucosylated Milk Oligosaccharides Identified in Dairy Streams. Sim RB, editor. PLoS ONE. 2014 May 8;9(5):e96040–7.
27. Lee W-H, Pathanibul P, Quarterman J, Jo J-H, Han N, Miller MJ, et al. Whole cell biosynthesis of a functional oligosaccharide, 2′-fucosyllactose, using engineered Escherichia coli. Microbial Cell Factories. 2012;11(1):48–4. | <urn:uuid:6f865c98-ecf8-4d3d-a3c3-ebd6edcebd90> | CC-MAIN-2022-33 | https://www.milkgenomics.org/?splash=producing-human-milk-sugars-for-use-in-formula | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571056.58/warc/CC-MAIN-20220809155137-20220809185137-00697.warc.gz | en | 0.925863 | 5,215 | 2.90625 | 3 |
In most cases,
install.packages("arrow") should just
work. There are things you can do to make the installation faster (see
below). If, for any reason, it doesn’t work, set the environment
ARROW_R_DEV=true, retry, and share the logs with
The Apache Arrow project is implemented in multiple languages, and
the R package depends on the Arrow C++ library (referred to from here on
as libarrow). This means that when you install arrow, you need both the
R and C++ versions. If you install arrow from CRAN on a machine running
Windows or MacOS, when you call
a precompiled binary containing both the R package and libarrow will be
downloaded. However, CRAN does not host R package binaries for Linux,
and so you must choose from one of the alternative approaches.
This vignette outlines the recommend approaches to installing arrow on Linux, starting from the simplest and least customisable to the most complex but with more flexbility to customise your installation.
The intended audience for this document is arrow R package
users on Linux, and not Arrow developers. If you’re
contributing to the Arrow project, see
vignette("developing", package = "arrow") for resources to
help you on set up your development environment. You can also find a
more detailed discussion of the code run during the installation process
in the developers’
Having trouble installing arrow? See the “Troubleshooting” section below.
As mentioned above, on macOS and Windows, when you run
install.packages("arrow"), and install arrow from CRAN, you
get an R binary package that contains a precompiled version of libarrow,
though CRAN does not host binary packages for Linux. This means that the
default behaviour when you run
install.packages() on Linux
is to retrieve the source version of the R package that has to be
compiled locally, including building libarrow from source. See method 2
below for details of this.
For a faster installation, we recommend that you instead use one of the methods below for installing arrow with a precompiled libarrow binary.
If you want a quicker installation process, and by default a more fully-featured build, you could install arrow from RStudio’s public package manager, which hosts binaries for both Windows and Linux.
For example, if you are using Ubuntu 20.04 (Focal):
options( HTTPUserAgent = sprintf( "R/%s R (%s)", getRversion(), paste(getRversion(), R.version["platform"], R.version["arch"], R.version["os"]) ) ) install.packages("arrow", repos = "https://packagemanager.rstudio.com/all/__linux__/focal/latest")
Note that the User Agent header must be specified as in the example above. Please check the RStudio Package Manager: Admin Guide for more details.
For other Linux distributions, to get the relevant URL, you can visit the RSPM site, click on ‘binary’, and select your preferred distribution.
Similarly, if you use
conda to manage your R
environment, you can get the latest official release of the R package
including libarrow via:
conda install -c conda-forge --strict-channel-priority r-arrow
Another way of achieving faster installation with all key features
enabled is to use static libarrow binaries we host. These are used
automatically on many Linux distributions (x86_64 architecture only),
according to the allowlist.
If your distribution isn’t in the list, you can opt-in by setting the
NOT_CRAN environment variable before you call
Sys.setenv("NOT_CRAN" = TRUE) install.packages("arrow")
This installs the source version of the R package, but during the installation process will check for compatible libarrow binaries that we host and use those if available. If no binary is available or can’t be found, then this option falls back onto method 2 below (full source build), but setting the environment variable results in a more fully-featured build than default.
Except for the those built for gcc 4.8 (default on CentOS 7), the binaries include support for AWS S3 and Google Cloud Storage (GCS). These features require libcurl and openssl libraries installed separately; see below on how to install them. If you don’t have these installed, the libarrow binary won’t be used, and you will fall back to the full source build.
Generally, compiling and installing R packages with C++ dependencies requires either installing system packages, which you may not have privileges to do, or building the C++ dependencies separately, which introduces all sorts of additional ways for things to go wrong.
The full source build of arrow, compiling both C++ and R bindings, does handle most of the dependency management for you, but it is much slower. However, if using binaries isn’t an option for you, or you wish to fine-tune or customize your Linux installation, the instructions in this section explain how to do that.
If you wish to install libarrow from source instead of looking for
pre-compiled binaries, you can set the
Sys.setenv("LIBARROW_BINARY" = FALSE)
By default, this is set to
TRUE, and so libarrow will
only be built from source if this environment variable is set to
FALSE or no compatible binary for your OS can be found.
When compiling libarrow from source, you have the power to really
fine-tune which features to install. You can set the environment
FALSE to enable a
more full-featured build including S3 support and alternative memory
Sys.setenv("LIBARROW_MINIMAL" = FALSE)
By default this variable is unset, which builds many commonly used
features such as Parquet support but disables some features that are
more costly to build, like S3 and GCS support. If set to
TRUE, a trimmed-down version of arrow is installed with all
optional features disabled.
Note that in this guide, you will have seen us mention the
NOT_CRAN - this is a convenience
variable, which when set to
TRUE, automatically sets
Building libarrow from source requires more time and resources than
installing a binary. We recommend that you set the environment variable
TRUE for more verbose output
during the installation process if anything goes wrong.
Sys.setenv("ARROW_R_DEV" = TRUE)
Once you have set these variables, call
install.packages() to install arrow using this
The section below discusses environment variables you can set before
install.packages("arrow") to build from source and
customise your configuration.
When you build libarrow from source, its dependencies will be
automatically downloaded. The environment variable
ARROW_DEPENDENCY_SOURCE controls whether the libarrow
installation also downloads or installs all dependencies (when set to
BUNDLED), uses only system-installed dependencies (when set
SYSTEM) or checks system-installed dependencies first
and only installs dependencies which aren’t already present (when set to
AUTO, the default).
These dependencies vary by platform; however, if you wish to install these yourself prior to libarrow installation, we recommend that you take a look at the docker file for whichever of our CI builds (the ones ending in “cpp” are for building Arrow’s C++ libaries, aka libarrow) corresponds most closely to your setup. This will contain the most up-to-date information about dependencies and minimum versions.
If downloading dependencies at build time is not an option, as when building on a system that is disconnected or behind a firewall, there are a few options. See “Offline builds” below.
The arrow package allows you to work with data in AWS S3 or in other
cloud storage system that emulate S3, as well as Google Cloud Storage.
However, support for working with S3 and GCS is not enabled in the
default source build, and it has additional system requirements. To
enable it, set the environment variable
choose the full-featured build, or more selectively set
ARROW_GCS=ON. You also need
the following system dependencies:
gcc>= 4.9 or
clang>= 3.3; note that the default compiler on CentOS 7 is gcc 4.8.5, which is not sufficient
The prebuilt libarrow binaries come with S3 and GCS support enabled, so you will need to meet these system requirements in order to use them. If you’re building everything from source, the install script will check for the presence of these dependencies and turn off S3 and GCS support in the build if the prerequisites are not met–installation will succeed but without S3 or GCS functionality. If afterwards you install the missing system requirements, you’ll need to reinstall the package in order to enable S3 and GCS support.
In this section, we describe how to fine-tune your installation at a more granular level.
Some features are optional when you build Arrow from source - you can configure whether these components are built via the use of environment variables. The names of the environment variables which control these features and their default values are shown below.
||S3 support (if dependencies are met)*||
||GCS support (if dependencies are met)*||
||The JSON parsing library||
||The RE2 regular expression library, used in some string compute functions||
||The UTF8Proc string library, used in many other string compute functions||
There are a number of other variables that affect the
configure script and the bundled build script. All boolean
variables are case-insensitive.
||Allow building from source||
||Try to install
||Build with minimal features enabled||(unset)|
||More verbose messaging and regenerates some code||
||Directory to save source build logs||(unset)|
||Alternative CMake path||(unset)|
See below for more in-depth explanations of these environment variables.
LIBARROW_BINARY: By default on many distributions, or if explicitly set to
true, the script will determine whether there is a prebuilt libarrow that will work with your system. You can set it to
falseto skip this option altogether, or you can specify a string “distro-version” that corresponds to a binary that is available, to override what this function may discover by default. Possible values are: “centos-7” (gcc 4.8, no AWS/GCS support); “ubuntu-18.04” (gcc 8, openssl 1); “ubuntu-22.04” (openssl 3).
LIBARROW_BUILD: If set to
false, the build script will not attempt to build the C++ from source. This means you will only get a working arrow R package if a prebuilt binary is found. Use this if you want to avoid compiling the C++ library, which may be slow and resource-intensive, and ensure that you only use a prebuilt binary.
LIBARROW_MINIMAL: If set to
false, the build script will enable some optional features, including S3 support and additional alternative memory allocators. This will increase the source build time but results in a more fully functional library. If set to
trueturns off Parquet, Datasets, compression libraries, and other optional features. This is not commonly used but may be helpful if needing to compile on a platform that does not support these features, e.g. Solaris.
NOT_CRAN: If this variable is set to
true, as the
devtoolspackage does, the build script will set
LIBARROW_MINIMAL=falseunless those environment variables are already set. This provides for a more complete and fast installation experience for users who already have
NOT_CRAN=trueas part of their workflow, without requiring additional environment variables to be set.
ARROW_R_DEV: If set to
true, more verbose messaging will be printed in the build script.
arrow::install_arrow(verbose = TRUE)sets this. This variable also is needed if you’re modifying C++ code in the package: see the developer guide vignette.
ARROW_USE_PKG_CONFIG: If set to
false, the configure script won’t look for Arrow libraries on your system and instead will look to download/build them. Use this if you have a version mismatch between installed system libraries and the version of the R package you’re installing.
LIBARROW_DEBUG_DIR: If the C++ library building from source fails (
cmake), there may be messages telling you to check some log file in the build directory. However, when the library is built during R package installation, that location is in a temp directory that is already deleted. To capture those logs, set this variable to an absolute (not relative) path and the log files will be copied there. The directory will be created if it does not exist.
CMAKE: When building the C++ library from source, you can specify a
/path/to/cmaketo use a different version than whatever is found on the
Daily development builds, which are not official releases, can be installed from the Ursa Labs repository:
Sys.setenv(NOT_CRAN = TRUE) install.packages("arrow", repos = c(arrow = "https://nightlies.apache.org/arrow/r", getOption("repos")))
or for conda users via:
conda install -c arrow-nightlies -c conda-forge --strict-channel-priority r-arrow
You can also install the R package from a git checkout:
git clone https://github.com/apache/arrow cd arrow/r R CMD INSTALL .
If you don’t already have libarrow on your system, when installing the R package from source, it will also download and build libarrow for you. See the section above on build environment variables for options for configuring the build source and enabled features.
The previous instructions are useful for a fresh arrow installation,
but arrow provides the function
install_arrow(), which you
can use if you:
install_arrow() provides some convenience wrappers
around the various environment variables described below.
Although this function is part of the arrow package, it is also available as a standalone script, so you can access it for convenience without first installing the package:
install_arrow(nightly = TRUE)
install_arrow(verbose = TRUE)
install_arrow() does not require environment variables
to be set in order to satisfy C++ dependencies.
Note that, unlike packages like
blogdown, and others that require external dependencies, you do not need to run
install_arrow()after a successful arrow installation.
install-arrow.R file also includes the
create_package_with_all_dependencies() function. Normally,
when installing on a computer with internet access, the build process
will download third-party dependencies as needed. This function provides
a way to download them in advance.
Doing so may be useful when installing Arrow on a computer without
internet access. Note that Arrow can be installed on a computer
without internet access without doing this, but many useful features
will be disabled, as they depend on third-party components. More
arrow::arrow_info()$capabilities() will be
FALSE for every capability. One approach to add more
capabilities in an offline install is to prepare a package with
pre-downloaded dependencies. The
create_package_with_all_dependencies() function does this
If you’re using binary packages you shouldn’t need to follow these steps. You should download the appropriate binary from your package repository, transfer that to the offline computer, and install that. Any OS can create the source bundle, but it cannot be installed on Windows. (Instead, use a standard Windows binary package.)
Note if you’re using RStudio Package Manager on Linux: If you still
want to make a source bundle with this function, make sure to set the
first repo in
options("repos") to be a mirror that contains
source packages (that is: something other than the RSPM binary mirror
my_arrow_pkg.tar.gzto the computer without internet access
install.packages("my_arrow_pkg.tar.gz", dependencies = c("Depends", "Imports", "LinkingTo"))
cmakemust be available
arrow_info()to check installed capabilities
cpp/thirdparty/download_dependencies.shmay be helpful)
ARROW_THIRDPARTY_DEPENDENCY_DIRon the offline computer, pointing to the copied directory.
The intent is that
install.packages("arrow") will just
work and handle all C++ dependencies, but depending on your system, you
may have better results if you tune one of several parameters. Here are
some known complications and ways to address them.
If you see a message like
------------------------- NOTE --------------------------- There was an issue preparing the Arrow C++ libraries. See https://arrow.apache.org/docs/r/articles/install.html ---------------------------------------------------------
in the output when the package fails to install, that means that installation failed to retrieve or build the libarrow version compatible with the current version of the R package.
Please check the “Known installation issues” below to see if any
apply, and if none apply, set the environment variable
ARROW_R_DEV=TRUE for more verbose output and try installing
again. Then, please report an
issue and include the full installation output.
If a system library or other installed Arrow is found but it doesn’t
match the R package version (for example, you have libarrow 1.0.0 on
your system and are installing R package 2.0.0), it is likely that the R
bindings will fail to compile. Because the Apache Arrow project is under
active development, it is essential that versions of libarrow and the R
package matches. When
install.packages("arrow") has to
download libarrow, the install script ensures that you fetch the
libarrow version that corresponds to your R package version. However, if
you are using a version of libarrow already on your system, version
match isn’t guaranteed.
To fix version mismatch, you can either update your libarrow system
packages to match the R package version, or set the environment variable
ARROW_USE_PKG_CONFIG=FALSE to tell the configure script not
to look for system version of libarrow. (The latter is the default of
install_arrow().) System libarrow versions are available
corresponding to all CRAN releases but not for nightly or dev versions,
so depending on the R package version you’re installing, system libarrow
version may not be an option.
Note also that once you have a working R package installation based
on system (shared) libraries, if you update your system libarrow
installation, you’ll need to reinstall the R package to match its
version. Similarly, if you’re using libarrow system libraries, running
update.packages() after a new release of the arrow package
will likely fail unless you first update the libarrow system
If the R package finds and downloads a prebuilt binary of libarrow, but then the arrow package can’t be loaded, perhaps with “undefined symbols” errors, please report an issue. This is likely a compiler mismatch and may be resolvable by setting some environment variables to instruct R to compile the packages to match libarrow.
A workaround would be to set the environment variable
LIBARROW_BINARY=FALSE and retry installation: this value
instructs the package to build libarrow from source instead of
downloading the prebuilt binary. That should guarantee that the compiler
If a prebuilt libarrow binary wasn’t found for your operating system
but you think it should have been, please report an
issue and share the console output. You may also set the environment
ARROW_R_DEV=TRUE for additional debug
If building libarrow from source fails, check the error message. (If
you don’t see an error message, only the
----- NOTE -----,
set the environment variable
ARROW_R_DEV=TRUE to increase
verbosity and retry installation.) The install script should work
everywhere, so if libarrow fails to compile, please report an
issue so that we can improve the script.
On CentOS, if you are using a more modern
devtoolset, you may need to set the environment variables
CXX either in the shell or in R’s
Makeconf. For CentOS 7 and above, both the Arrow system
packages and the C++ binaries for R are built with the default system
compilers. If you want to use either of these and you have a
devtoolset installed, set
CC=/usr/bin/gcc CXX=/usr/bin/g++ to use the system
compilers instead of the
devtoolset. Alternatively, if you
want to build arrow with the newer
false so that you build the
Arrow C++ from source using those compilers. Compiler mismatch between
the arrow system libraries and the R package may cause R to segfault
when arrow package functions are used. See discussions here and here.
If you have multiple versions of
zstd installed on
your system, installation by building libarrow from source may fail with
an “undefined symbols” error. Workarounds include (1) setting
LIBARROW_BINARY to use a C++ binary; (2) setting
ARROW_WITH_ZSTD=OFF to build without
(3) uninstalling the conflicting
zstd. See discussion here.
As mentioned above, please report an issue if you encounter ways to improve this. If you find that your Linux distribution or version is not supported, we welcome the contribution of Docker images (hosted on Docker Hub) that we can use in our continuous integration. These Docker images should be minimal, containing only R and the dependencies it requires. (For reference, see the images that R-hub uses.)
You can test the arrow R package installation using the
docker-compose setup included in the
apache/arrow git repository. For example,
R_ORG=rhub R_IMAGE=ubuntu-gcc-release R_TAG=latest docker-compose build r R_ORG=rhub R_IMAGE=ubuntu-gcc-release R_TAG=latest docker-compose run r
installs the arrow R package, including libarrow, on the rhub/ubuntu-gcc-release image. | <urn:uuid:b61a19d9-bfc0-46cb-8298-5c660235df5c> | CC-MAIN-2022-33 | https://cran.gedik.edu.tr/web/packages/arrow/vignettes/install.html | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571538.36/warc/CC-MAIN-20220812014923-20220812044923-00496.warc.gz | en | 0.822973 | 5,473 | 2.515625 | 3 |
In chapters 2 & 3 of Revelation we find seven letters that Jesus Himself writes to seven churches that existed in Asia Minor in the first century. As we noted last week, these letters were intended to have at least four levels of meaning and application.
- Local application – these were 7 real churches in Asia Minor (modern day Turkey)
- We are told they are for any that have ‘ears to hear’
- These 7 letters were all intended to be passed around, to mutually encourage, edify and strengthen the churches at that time – and so are applicable to all churches through the last 2000 years
- However, as we noted last week, these letters are prophetic, and tell – in advance – the history of the church.
Chapter 2:12-17 The Letter to Pergamos ‘the church in the World’ 313 A.D. – 590 A.D.
Pergamos was situated about 50 miles north of Smyrna and about 15 miles inland from the Aegean Sea. The city was given over to idolatry and worshipped many pagan gods. There was an acropolis on top of the hill, along with an amphitheatre that could seat 10,000 people.
Pergamos was famous for the altar of Zeus – the Greek and Roman deity. The altar was huge, measuring 100ft on each side, and it had become a place of Caesar worship.
Asclepius (pronounced: sc-lay-pe-us) the serpent god was also worshipped and it was believed that he had power to heal. This idea was a distortion of the account recorded in Numbers 21 when God sent fiery serpents among the Children of Israel. When the people cried out to God, He commanded Moses to put a brass serpent on a pole. Whoever had been bitten then only needed to look at the pole and they would be healed. In John 3:14-15 Jesus explains that this was an anticipatory type of Himself dying on the cross and being judged for our sin (the pole represents the cross / the brass speaks of judgment / the serpent represents sin). On the cross, Jesus became sin for us and was judged in our place. Just as with the serpent on the pole in the wilderness, anyone who looks at Jesus will be saved. The worship of Asclepius was a distortion of this story that had been handed down through the generations.
NB: The medical profession still has the emblem of snakes on its belts / uniforms even today!
During this period of history, the Roman Empire was divided into two sections. Constantine, in a bid for total control, entered into a war with Maxentius. Just before the battle at the Milvian Bridge, legend has it that Constantine looked up and saw a flaming cross in the sky and heard a voice saying ‘in this sign conquer’. He then went out and won the battle and was so impressed that he ‘converted’ to Christianity. This brought about a dramatic change that has affected the world ever since. One day the Christians were being fed to the lions; the next day they were given equal rights and even had their land and property given back.
According to historians, Constantine then invited Christian Bishops from all over the area to come to Rome to talk about this new ‘faith’. One of these Bishops was a monk called Damasis. In an effort to please all the religious people of the day, Damasis had mixed some of the Babylonian religious practices with Christianity. Constantine was impressed with Damasis’ ability to please ‘everybody’ (after all, what Emperor wouldn’t want all his subjects to get along?), and as a result he gave Damasis the position of ‘Bishop of Rome’ and the title of ‘Supreme Pontiff’ (which up until that point had only been used by Emperors). This title is still used today by the head of the Roman Catholic Church.
Constantine also set up a basilica (a sacred Roman building) and had 13 sarcophagi (tombs) placed within it. His intention was to bring the bodies of the 12 Apostles to Rome, and lay them to rest in this basilica. The 13th place was intended to be for himself when he died.
The Catholic Church, as it became known (catholic simply means ‘universal’), had 5 major centres: Antioch, Byzantium (later to be renamed Constantinople – and now known as Istanbul) and Jerusalem – these three were in the east of the empire. The other two were Alexandria, in Egypt, and Rome – both in the west of the empire.
Constantine, named the church in Rome the ‘Roman Catholic Church’.
However, being concerned that the church at Rome was going off on a tangent (supported very much by those within the church in Alexandria, Egypt), the churches in the east named themselves ‘Orthodox’, meaning the original, or authentic. This is where we get the Greek Orthodox, Eastern Orthodox and Russian Orthodox churches.
The rivalry grew and before long the Roman Catholic Church started to collect ‘relics’ in an attempt to prove that they were the most authentic. (Relics are artefacts, or things that belonged to, were worn, or had some connection with Jesus or the 12 Apostles or other key early church figures).
The Roman Catholic Church then started telling people that they had the ‘best’ relics, so they must be the true church. To counter this the Greek and Eastern Orthodox churches started doing the same, thus it became a treasure hunt to find the best relics.
To underline the madness of this, until recently the Roman Catholic Church had the body of James but the Eastern Orthodox had his head! In a ‘gesture of good will’ the Eastern Orthodox church gave the head to the Roman Catholic Church – James can now be put back together again!
The Roman Catholic Church then started saying that people had been healed from touching the bones and relics, so the Orthodox did the same.
Now, on the positive side it was probably good for new Christians to see the tombs of the Apostles and the things that once belonged to them. This would help them to realise that they had an historic faith – that it was real, unlike the pagan religions that were based in mythology and superstition; but the problem was that this soon turned to idolatry as people started to worship these relics.
One of the other major problems at this time was that the whole theology of the church shifted. For years the church had been persecuted and in fear of death, eagerly looking for the return of the Lord, holding to the blessed hope. Now, overnight, the persecution had stopped and some Christians started thinking that maybe all the persecution had been the tribulation. If that was so, then this must now be the Millennium. This idea went hand in hand with another idea that began to surface, saying that you don’t have to take the Bible seriously, that it doesn’t mean what it says, but rather it’s just stories to illustrate good and bad. This idea was largely promoted by a man called Origen, and later by Augustine of Hippo, who built on his approach. Augustine became so influential in the church that even people like John Calvin during the reformation over 1000 years later leaned heavily on his teaching.
This is now a matter of history and can be easily verified (www.hallofchurchhistory.com is a good place to start)
Irenaeus, a pupil of Polycarp (who in turn was a disciple of John the Apostle), became the Bishop of Lyons in France. Irenaeus, like Justin Martyr (an early Christian historian), believed that Christ will reign on earth for a thousand years, and he vehemently protested against attempts to allegorize away the millenarian proof texts. Irenaeus also argued against the Gnostic doctrine of a secret teaching by appealing to apostolic succession — if there had been such a teaching, the apostles would have passed it on to their successors. The apostles, he claimed, taught the Rule of Faith (very similar to our Apostles’ Creed).
Irenaeus wrote, “The tradition of the Apostles is manifest throughout the whole world; and we are in a position to reckon up those who were, by the Apostles, ordained bishops in the churches, and the succession of those men to our own time. If the Apostles had known hidden mysteries, they would have delivered them, especially to those to whom they were committing the churches themselves. For they were desirous those men should be very perfect and blameless in all things, whom also they were leaving behind as their successors, delivering up their own place of government to these men.”
However, people like Augustine, realising that it was unpopular to teach that Jesus was coming back to usurp the rulers of this world (the same rulers that had just ‘legalised’ Christianity), popularised the Gnostic teachings from the 2nd century that scripture, and particularly the Book of Revelation, was to be taken symbolically, not literally.
Hence the church shifted from looking for the imminent return of the Lord to deliver them, to believing that the kingdom had now come and was being manifested on earth through the church. This is still the view taught in most Roman Catholic and Anglican churches today.
The name ‘Pergamos’ means mixed or elevated marriage.
Pergos – berg (German) = castle or something that is lifted up.
Gammos – marriage (as in polygamy, monogamy etc.)
The church had become elevated through its marriage to Rome, and now the influences of the pagan religions had come under its roof.
One day, the pagan priests were worshipping their idols in their temples; the next, they were told that they must convert to Christianity or die. The church, which for so long had been oppressed, suddenly became the oppressor with its newly found political power.
The Christians then started to use the lavish pagan temples and turn them into ‘churches’. As more churches began to be built, they were modelled on the existing pagan temples, but also became even more elaborate. This was a good move for architecture but bad for true Christianity. Now the buildings themselves became the focus, a practice that persists even to this day. It is because of this that so many people associate ‘church’ with a building and not with the people who have left all to follow Jesus Christ.
All in all, this was one of the worst times in the history of the Church, far from setting their minds on heavenly things, the Church had embraced the world and allowed countless compromises, including the watering down of the Word of God.
And to the angel of the church in Pergamos write; These things saith he which hath the sharp sword with two edges;
As we have already noted, the letter is addressed to the pastor of the church – Jesus holds him accountable for the sheep entrusted to his care. The pastor’s role is to feed the sheep and bring them to spiritual maturity. Sadly, in Pergamos, as with so many churches today, the pastor had let the influence of the world permeate the church. It is therefore no surprise that the characteristic of Jesus drawn from chapter one that He uses in the introduction to this letter is the image of Himself with the sharp two-edged sword. The two-edged sword is a symbol of the word of God (Heb 4:12). This was the very thing this church was moving away from.
I know thy works, and where thou dwellest……………………,
“Where thou dwellest.” The Lord commends this church for three very definite things. First, He takes note of their circumstances. He knew that these believers were living in a very difficult place. And, my friend, the Lord takes note of our circumstances. Sometimes we are inclined to condemn someone who is caught in a certain set of circumstances, but if we were in the same position, we might act in an even worse way than he is acting. (J V McGee)
…………………………..…….even where Satan’s seat is:
Satan had persecuted the Church from the outside, but this had only made the Church stronger. So now he switches tactics and introduces the world into the church. Pergamos was overflowing with idolatry and pagan religious practices. So this place became the center of Satan’s operations.
……………..and thou holdest fast my name, and hast not denied my faith,
In the midst of all of this compromise, the Church at Pergamos are commended for holding fast Jesus’ name and not denying the faith. Although much of the so-called church was shifting away from the truth, the true church was still standing.
However, it was during this time that Arius, a theologian, built on the teaching of the Gnostics and taught that Jesus was not God, he was the son of God, but not God. Alexander of Alexandria led the move to depose Arius and those who had joined him.
This led to the council of Nicaea (Nicaea is a town in northern Turkey) where Christian leaders from all over the Roman empire were called together to establish a creed, that would constitute a basis of Christian faith. The creed that they produced is still used in many churches today as a statement of faith:
“We believe in One God, the Father Almighty, Maker of all things visible and invisible:”- “And in One Lord Jesus Christ, the Son of God, begotten of the Father, Only-begotten, that is, from the essence of the Father; God from God, Light from Light, Very God from Very God, begotten not made, One in essence with the Father, by Whom all things were made, both things in heaven and things in earth; Who for us men and for our salvation came down and was made flesh, was made man, suffered, and rose again the third day, ascended into heaven, and cometh to judge quick and dead.” “And in the Holy Ghost.”
“And those who say, `Once He was not,’ and `Before His generation He was not,’ and `He came to be from nothing,’ or those who pretend that the Son of God is `Of other subsistence or essence ,’ or `created’ or `alterable,’ or `mutable,’ the Catholic Church anathematizes.”
Even though Arius had taught that Jesus was not God, the council ‘held fast Jesus’ name’ and did not deny the faith.
……………….even in those days wherein Antipas was my faithful martyr, who was slain among you, where Satan dwelleth.
Very little is known about Antipas except that apparently he was slowly roasted alive inside a hollow bronze calf.
Antipas means ‘against-all’ – he stood against all heresy.
But I have a few things against thee, because thou hast there them that hold the “doctrine”……..
A boy was in Sunday school and was having a quiz. The teacher asked, “What is false doctrine?” The boy’s hand shot up. “Yes” said the teacher. ‘I know’ said the boy. ‘It’s when a Doctor gives the wrong thing to someone who is sick’
This is exactly what false doctrine is! As with the physical body where the wrong medicine prescribed to a patient could kill them, or at least have harmful effects; so can false doctrine to the spirit. Jesus mentions doctrine twice in this letter.
Doctrine is mentioned 45 times in the New Testament. Anything mentioned that many times is worthy of our attention. Some people say that we should not insist on doctrine because it causes division. But this is exactly why we must ‘hold fast’ to the apostles doctrine (1 Tim 4:16 / 2 John 1:9). In fact we are even told to divide over doctrine. (See: Romans 16:17 / 1 Tim 6 / 2 John 1:10)
– It is a healthy body that purges itself of poison – Joe Focht
Well-meaning individuals within the church have suggested that we should lay aside our doctrine for the sake of love. After all is not love the most important thing –
“A new commandment I give unto you, That ye love one another; as I have loved you, that ye also love one another.” (John 13:34)
And it’s true that doctrine without love is hard and legalistic. Doctrine needs love to soften it, but love needs truth.– speak the truth in love
“That we [henceforth] be no more children, tossed to and fro, and carried about with every wind of doctrine, by the sleight of men, [and] cunning craftiness, whereby they lie in wait to deceive; But speaking the truth in love, may grow up into him in all things, which is the head, [even] Christ (Eph 4:14
Without love doctrine becomes hard, but without doctrine love has no strength
……………..of Balaam, who taught Balac to cast a stumbling block before the children of Israel, to eat things sacrificed unto idols, and to commit fornication. (See Numbers 22.)
“The doctrine of Balaam” is different from the error of Balaam (see Jude 11), which revealed that Balaam thought that God would curse Israel because they were sinners. It is also different from the way of Balaam (see 2 Pet. 2:15), which was covetousness. But here in the verse before us, it is the doctrine or teaching of Balaam. He taught Balac the way to corrupt Israel by intermarriage with the Moabite women. This introduced into the nation of Israel both idolatry and fornication. And during the historical period that the church at Pergamum represents, the unconverted world came into the church. (J Vernon McGee)
The Devil could not bring Israel down from the outside, so he did from the inside. He could not bring the church down through persecution, so he did it from within by infiltrating the church with pagan and Gnostic teaching.
So hast thou also them that hold the doctrine of the Nicolaitans, which thing I hate.
See notes on Rev 2:6 – This has now become a doctrine. Notice that Jesus hates this because it puts people (Priests / Religious leaders) between Himself and his children. We all have direct access to God through Jesus Christ.
“There is neither Jew nor Greek, there is neither bond nor free, there is neither male nor female: for ye are all one in Christ Jesus.” (Gal 3:28)
“But ye [are] a chosen generation, a royal priesthood, an holy nation, a peculiar people; that ye should shew forth the praises of him who hath called you out of darkness into his marvellous light:” (1 Peter 2:9)
“But he that is greatest among you shall be your servant.
And whosoever shall exalt himself shall be abased; and he that shall humble himself shall be exalted.” (Matt 23:11)
Repent; or else I will come unto thee quickly, and will fight against them with the sword of my mouth.
Fight against ‘them’. Who were the ‘them’? The ones teaching heresy. Jesus will fight against them with the Word of God.
What a mistake we make if we think that the church has the authority to decide what is right and what is wrong. – Chuck Missler
He that hath an ear, let him hear what the Spirit saith unto the churches;
to him that overcometh will I give to eat of the hidden manna………
“Our fathers did eat manna in the desert; as it is written, He gave them bread from heaven to eat. Then Jesus said unto them, Verily, verily, I say unto you, Moses gave you not that bread from heaven; but my Father giveth you the true bread from heaven. For the bread of God is he which cometh down from heaven, and giveth life unto the world. Then said they unto him, Lord, evermore give us this bread. And Jesus said unto them, I am the bread of life: he that cometh to me shall never hunger; and he that believeth on me shall never thirst. (John 6:32-35)
…………..and will give him a white stone, and in the stone a new name written, which no man knoweth saving he that receiveth it.
A white stone was a mark of acceptance in the Jewish council.
Those who overcome will receive a personal acceptance from Jesus.
We will be given a new name – the ‘old’ will finally have passed away! All of the regrets of the past will be gone – they are identified to our present name.
Chapter 2: 12-17 The Letter to Thyarira ‘the church of the dark ages’ 600 A.D–Tribulation
The historical application has been lost in the sands of time, although many commentators believe that there was a woman – probably labelled ‘Jezebel’ – who had set herself up in the church at Thyatira as a prophetess, and probably bring many Babylonian ideas and teachings into the church. However, that is all just speculation. What we do know is that when looked at in the light of prophecy, this church fits like a glove the medieval church from about 600 to 1520 (which was when the reformation began – which we will look at in the next chapter).
The name Thyatira means ‘continual sacrifice’ – and is again very apt.
All we know about Thyatira from the scripture is recorded in Acts 16 (and Rev of course!)
And a certain woman named Lydia, a seller of purple, of the city of Thyatira, which worshipped God, heard us: whose heart the Lord opened, that she attended unto the things which were spoken of Paul. And when she was baptized, and her household, she besought us, saying, If ye have judged me to be faithful to the Lord, come into my house, and abide there. And she constrained us. And it came to pass, as we went to prayer, a certain damsel possessed with a spirit of divination met us, which brought her masters much gain by soothsaying: The same followed Paul and us, and cried, saying, These men are the servants of the most high God, which shew unto us the way of salvation. And this did she many days. But Paul, being grieved, turned and said to the spirit, I command thee in the name of Jesus Christ to come out of her. And he came out the same hour. Acts 16:14-18
The ‘purple’ mentioned is rather significant. It was a dye that was extracted from a plant and/or a salt-water snail, used for dying material. It is therefore interesting to note that it is the colour worn by the woman in Rev 17:4 – and is the colour worn by the priests of the Roman Catholic Church.
And unto the angel of the church in Thyatira write; These things saith the Son of God, who hath his eyes like unto a flame of fire, and his feet are like fine brass;
‘the Son of God’ is a title always used in reference to His authority. It is used here to remind the Church who is in charge. This is then backed by the idiom of ‘eyes like unto a flame of fire, and feet like fine brass – always speaking of judgment.
Jesus presents Himself as the Judge to this church.
People get scared about some silly things sometimes – this however is the time to be afraid when the Creator presents Himself to you in this way.
I know thy works, and charity, and service, and faith, and thy patience, and thy works; and the last to be more than the first.
From a prophetic standpoint, all of the churches so far have had their own period of history. However the last four churches all appear to last up until the Tribulation (or Rapture in the case of Philadelphia).
Viewed in this light, it is a matter of fact that Christians within the Roman Catholic church have done marvellous works of charity. As we look around the world we see hospitals and care homes started by the church – Buddhists do not start hospitals, Muslims do not establish care homes. People like Mother Theresa etc. have worked tirelessly to tackle social and political injustices.
In recent years, these works have been considerable – Jesus says He knows this.
Notwithstanding I have a few things against thee, because thou sufferest that woman Jezebel, which calleth herself a prophetess, to teach and to seduce my servants to commit fornication, and to eat things sacrificed unto idols.
But Jesus is absolutely clear; He is against these Christians’ acceptance and toleration of the false religious system taught by this woman.
As we have noted, not only were these letters to be sent to real churches in the first century, with the clear intention that there would also be an application to all churches and all individuals, but we also find that in the order they are given, they lay out – in advance – the history of the church! That being so, we find that the church of Thyatira depicts the era that saw the birth and subsequent future of the Roman Catholic Church. It is no surprise therefore to see some incredible parallels between the character and nature of this woman Jezebel and the Roman Catholic Church.
- Firstly, this woman is called ‘Jezebel’.
From the Old Testament we know that Jezebel was the daughter of Ethbaal, king of the Zidonians, and was given to King Ahab of Israel, in marriage. Ethbaal was the high priest of Ashtaroth “goddess of sensuality and fertility” and Jezebel brought with her the pagan rituals and idolatry that ending up corrupting God’s people. She was also responsible for an inquisition (see 1 Kings 21), when she had a righteous man falsely accused in order to gain property for the ‘state’.
The Roman Catholic Church also merged pagan rituals and idolatry with true Christianity, which ended up corrupting the ‘church’. The Roman Catholic Church was also responsible for the famous ‘Inquisition’, where people were burned alive, tortured and their lands given to the ‘church’. There are historical records of people being tortured to death for owning their own copy of the scriptures – something that the Roman Catholic Church could not allow for fear of people finding out the truth for themselves. Even today Catholics are not encouraged to read the Bible; apparently only the specially trained theologians of the Roman Catholic Church are able to interpret it correctly!
(For more on this see Dave Hunt’s book – “A Woman Rides the Beast” ISBN-13: 978-1565071995 )
- Secondly, this woman Jezebel is self-appointed, “she calleth herself a prophetess”
This is also the case with the Roman Catholic Church. They profess to be in the office of St Peter – the first Pope – so they say! (This is easily disproved).
It is dubious weather Peter ever went to Rome, and even if he did, he did not establish a ‘throne’ to be inherited by his offspring, ruling over the Christian world.
And yet assuming this authority, subsequent Popes declared themselves the ‘Vicar of Christ’, ‘Pontifas Maximus’ – the title once held by Caesar.
- Thirdly, she teaches and leads God’s servants into fornication – The history of RC church is unbelievably shocking – again, see Dave Hunt’s book if you really want to know!
- Finally, she teaches and leads God’s servants into eating things sacrificed to idols. This is exactly what is done through the Roman Catholic doctrine of ‘Transubstantiation’.
And I gave her space to repent of her fornication; and she repented not.
God’s longsuffering – even in view of all that this ‘woman’ has done, God sill offered a way back – but she rejected it for worldly gain. Therefore…….
Behold, I will cast her into a bed, and them that commit adultery with her into great tribulation, except they repent of their deeds.
It is no accident that the great tribulation is referred to here. This wicked woman, prophetically depicting the Roman Catholic Church, will be subject to God’s wrath – a theme we will explore in more detail in Chapter 17 & 18.
However, what is significant here is that it is not just this woman who will go into the great tribulation, but those who ‘commit adultery’ with. That is, those who, rather than seek after God alone, are seduced by this woman and come under her power and authority.
The Reformation in Europe saw many give their lives to come out of the spiritual corruption in the Roman Catholic Church, yet today those fundamental differences are being eroded. Anglicans, Baptists, Methodists, Lutherans and many other Protestant group are gradually coming back under the sway of the Church of Rome, all for the sake of ‘unity’. But possibly even more significant is the fact that in January 2016 the Pope put online a prayer request, effectively calling for all religions to come together under one umbrella. The Pope made the outrageous claim that Muslims, Hindus, Jews, Christians, etc. are all worshiping the same God! This is not true, and the Bible repeatedly affirms that the God of Abraham, Isaac and Jacob is unique and that there is none like Him.
On the surface, this seems like the ultimate in acceptance and tolerance – all people of all faiths, religions and creeds working together for a common cause. This is seen as a good thing. But just stop and think about this for a moment: What kind of God would allow people to approach Him in any way they choose? What God would allow people to call Him by any name that suited them? What kind of God would establish a multitude of religions with contradictory beliefs and practices, see countless numbers of believers give their lives to maintain their religious freedoms and right to be different, and then reveal that actually they are all leading to the same place anyway!(?) Surely this would reveal a ruthless tyrant of a god on a par with the Roman Emperors who watched subjects in their empire fight to the death in the Coliseum? Although those that promote the ‘all roads lead to God’ philosophy hail themselves as tolerant and accepting, they are in actual fact nothing of the sort. Consider what they are saying: ‘Regardless of what you believe, regardless of your heritage, we all have to go to the same place’; they are telling us that there is only one destination for all (whether you want to go or not!). That is more narrow-minded and dogmatic than anything Jesus ever said. Jesus said there are two destinations and we each get to choose: ‘smoking’ or ‘non-smoking’! Nevertheless, the Roman Catholic Church is doing just this.
This verse also tells us that she (Jezebel/The Roman Catholic Church) and ‘they’ are not truly born again believers, for no born-again believer will go into the Tribulation. Matthew 13 tells that there are wheat and tares, each looking almost identical, yet at the time of the harvest there will be a separation, when the wheat will be gather into Christ’s barn (a beautiful picture of the Rapture) whereas the tares will be gathered into bundles – just as we are seeing – and then their destiny is to be burned – Again, the subject of Revelation 17 & 18.
Yet even in the mist of this great spiritual adultery we see God’s great Grace as He offers a way out…. ‘except they repent of what they have done’. God is not willing that any should perish, but that all should come to repentance (2 Peter 3:9).
Another thing we can deduce from this verse is that this woman and her teaching will carry on up until (and into) the Great Tribulation. Hence we know that the scope is not intended to be just the 1st century church at Thyatira, but a prophetic fulfilment is also in view.
And I will kill her children with death; and all the churches shall know that I am he which searcheth the reins and hearts: and I will give unto every one of you according to your works.
Notice that it is her children (not God’s) that will be killed.
Revelation 17:5 refers to the woman as ‘Mother of Harlots’. Who are her children? The churches that came out of the Roman Catholic Church – that is all of the reformation churches and any others that she adopts – which may will include Muslims whom the Pope has declared are all saved!
God searches the ‘reins’ – the controlling influence. Are your works out of a sense of duty to your controlling influence, or are they out of a deep, personal, love for God?
It is not the works that count, but the attitude of heart behind the works.
But unto you I say, and unto the rest in Thyatira, as many as have not this doctrine, and which have not known the depths of Satan, as they speak; I will put upon you none other burden.
Not all Christians were caught up in this satanic deceit. Not all in the Roman Catholic Church or the denominational churches will be swept away with this Satanic deception. God has His own and knows each one by name, and not one shall perish who is saved by the blood of the Lamb!
Jesus doesn’t ask for anything else – no other burden – than to………
But that which ye have already hold fast till I come.
This is a staggering situation. They are not told to preach more, or evangelise, or do works of charity, but simply to hold on to what (and who) they had until Jesus comes – (the Rapture).
And he that overcometh, and keepeth my works unto the end, to him will I give power over the nations: And he shall rule them with a rod of iron; as the vessels of a potter shall they be broken to shivers: even as I received of my Father.
This is one of many rewards for all true Christians.
“Know ye not that we shall judge angels? how much more things that pertain to this life?” 1 Cor 6:3
“He said therefore, A certain nobleman went into a far country to receive for himself a kingdom, and to return. And he called his ten servants, and delivered them ten pounds, and said unto them, Occupy till I come. But his citizens hated him, and sent a message after him, saying, We will not have this man to reign over us. And it came to pass, that when he was returned, having received the kingdom, then he commanded these servants to be called unto him, to whom he had given the money, that he might know how much every man had gained by trading. Then came the first, saying, Lord, thy pound hath gained ten pounds. And he said unto him, Well, thou good servant: because thou hast been faithful in a very little, have thou authority over ten cities. And the second came, saying, Lord, thy pound hath gained five pounds. And he said likewise to him, Be thou also over five cities.” Luke 19:12-19
And I will give him the morning star.
Rev 22:16 says: “I Jesus have sent mine angel to testify unto you these things in the churches. I am the root and the offspring of David, and the bright and morning star.”
The morning star is Jesus. Here we are told that Jesus will give us Himself (in marriage) to those who overcome, occupy, hold fast until He comes.
He that hath an ear, let him hear what the Spirit saith unto the churches.
We need to hear what each of these letters are saying, but as we go on through the letters they get more intense and therefore the call to hear becomes stronger.
“Ye are all the children of light, and the children of the day: we are not of the night, nor of darkness. Therefore let us not sleep, as do others; but let us watch and be sober.”
1 Thess 5:5-6
Next week will move into Revelation chapter 3 and look at the final three churches, Sardis, Philadelphia and Laodicea. | <urn:uuid:926bfae5-7521-45a9-8ab6-f0917a57e005> | CC-MAIN-2017-51 | http://www.calvaryportsmouth.co.uk/2016/02/pergamos-thyatira-lessons-for-today/ | s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948515309.5/warc/CC-MAIN-20171212060515-20171212080515-00727.warc.gz | en | 0.976625 | 7,790 | 2.71875 | 3 |
VII. LEGAL STANDARDS
By recruiting children into its armed forces and using them to participate in armed conflict, Burma is in violation of both its national laws governing recruitment, and its binding obligations under the Convention on the Rights of the Child. In addition, the treatment of children in its forces also violates numerous other provisions of the convention.
The Convention on the Rights of the Child defines a child as any person below the age of eighteen "unless under the law applicable to the child, majority is attained earlier."178 Burma's 1993 Child Law defines "a child as a person who has not attained the age of 16 years and a youth as a person who has attained the age of 16 years but has not attained the age of 18 years."179 In regard to military recruitment, the Regulation for the Persons Subject to the Defense Services Act establishes the minimum age for recruitment into Burma's armed forces at eighteen years.180
Burma ratified the Convention on the Rights of the Child in August 1991. Article 38 of the convention prohibits the recruitment of children under the age of fifteen or their direct participation in armed conflict. It further obliges states parties that recruit children between the ages of fifteen and eighteen "to give priority to those that are oldest."181 The convention states that none of its provisions should affect laws that are more conducive to the rights of the child. Since Burma's national law prohibits recruitment below age eighteen, this standard therefore prevails.
Since the adoption of the Convention on the Rights of the Child in 1989, other international standards have been adopted that strengthen protections for children affected by armed conflict. These standards reflect a growing international consensus that children under the age of eighteen should not participate in armed conflict-a principle reflected in Burma's own national law. Human Rights Watch takes the position that no child under the age of eighteen should be recruited-either voluntarily or forcibly-into any armed forces or groups, or participate in hostilities.
In 1999, the Worst Forms of Child Labour Convention (No. 182) was unanimously adopted by member States of the International Labour Organization (ILO). It commits each state that ratifies it to "take immediate and effective measures to secure the prohibition and elimination of the worst forms of child labour as a matter of urgency." It defines a child as any person under the age of eighteen and includes in its definition of the worst forms of child labor:
All forms of slavery or practices similar to slavery, such as the sale and trafficking of children, debt bondage and serfdom and forced or compulsory labour, including forced or compulsory recruitment of children for use in armed conflict.182
In May 2000, the United Nations General Assembly unanimously adopted an Optional Protocol to the Convention on the Rights of the Child on the involvement of children in armed conflict. The Protocol raises the standards set in the Convention on the Rights of the Child by establishing eighteen as the minimum age for any conscription or forced recruitment. Under articles 1 and 2:
States Parties shall take all feasible measures to ensure that members of their armed forces who have not attained the age of 18 years do not take a direct part in hostilities; States Parties shall ensure that persons who have not attained the age of 18 years are not compulsorily recruited into their armed forces.183
States parties may accept voluntary recruits into their armed forces from the age of sixteen. Upon ratification of the protocol, states must deposit a binding declaration establishing their minimum voluntary recruitment age, and if recruiting under the age of eighteen, the measures they have adopted to ensure that such recruitment is not forced or coerced. States parties accepting under-eighteen volunteers must also maintain other safeguards including the informed consent of a child's parents or legal guardians, reliable proof of age, and ensuring that the person is fully informed of the duties involved in military service.
The Optional Protocol also places obligations upon nongovernmental armed forces. Article 4 states that "armed groups that are distinct from the armed forces of a state should not, under any circumstances, recruit or use in hostilities persons under the age of eighteen." States parties must take measures to prevent such recruitment and use, including by criminalizing such practices.
The Statute for the International Criminal Court was adopted in July 1998. Although relying on the previous standard prohibiting the recruitment or use of children under fifteen, the statute defines such activities as war crimes, whether carried out by members of national armed forces or nongovernmental armed groups.184 The court may prosecute individuals for such crimes committed in the territories of ratifying states as well as for crimes committed anywhere by nationals of ratifying states.
Burma has not yet ratified ILO Convention 182, the Rome Statute for the International Criminal Court, or the Optional Protocol to the Convention on the Rights of the Child. However, these new international standards have been rapidly accepted within the international community. Convention 182 has become the most rapidly ratified treaty in ILO history, securing 129 ratifications by late August 2002. By the same time, the Optional Protocol had been signed by 110 governments, and ratified by thirty-seven. On July 1, 2002 the International Criminal Court came into being, having garnered the sixty ratifications necessary.
Apart from the prohibitions on the recruitment of children and their use in armed conflicts, Burma's treatment of children recruited into the military also violates numerous other provisions of the Convention on the Rights of the Child. Under the convention:
· All children have the right to life;
· All children should be registered immediately after birth;
· Children shall not be separated from their parents against their will;
· Children who are separated from their parents have the right to maintain direct contact with both parents on a regular basis;
· Children should be protected from all forms of physical or mental violence, injury or abuse, neglect or negligent treatment, maltreatment or exploitation;
· Children have the right to enjoy the highest attainable standard of health;
· Children have the right to education; each state is responsible for making primary education compulsory and available free to all and to encourage the development of secondary education;
· Children have the right to rest and leisure;
· Children have the right to be protected from economic exploitation and from performing any work that is likely to be hazardous or to interfere with the child's education, or to be harmful to the child's health or physical, mental, spiritual, moral or social development;
· Children should not be abducted, sold or trafficked;
· No child should be subjected to torture or other cruel, inhuman or degrading treatment or punishment;
· No child should be deprived of their liberty unlawfully or arbitrarily;
· Children who have been victim to exploitation, abuse or armed conflict should receive assistance for their physical and psychological recovery;
· Children alleged to have infringed the law have the right to due process, to be treated with dignity, and in a manner appropriate to their age and circumstance.185
Human Rights Watch has found that the SPDC's practices of child recruitment and its treatment of these children have violated each of these rights. Many of these violations occur on a systematic and routine basis.
Refugee Protection for Former Child Soldiers
Access to Status Determination Procedures
Former child soldiers, like all asylum seekers, must have access to refugee status determination procedures so that if they are refugees, they can receive the protection they are entitled to under international law. It is a primary obligation of states parties to the 1951 Geneva Convention Relating to the Status of Refugees (the Refugee Convention), or UNHCR as a part of its protection mandate, to establish regularized procedures to assess refugee status.186 Although earlier guidelines did not do so,187 UNHCR has since recognized that children, whether they are with family members or arrive in a host country "unaccompanied," may make their own individual claims to asylum.188 It is further understood that children in particular should be given priority and specialized attention while they are in the process of seeking asylum.
For example, UNHCR's Guidelines on Policies and Procedures in Dealing with Unaccompanied Children Seeking Asylum (UNHCR Guidelines on Unaccompanied Children) state that given their "vulnerability and special needs, it is essential that children's refugee status applications be given priority and that every effort be made to reach a decision promptly and fairly."189 The UNHCR Guidelines on Protection and Care of Refugee Children, 1994 (UNHCR Guidelines on Refugee Children) stress the importance of keeping children informed during the status determination process,190 and of giving unaccompanied children the benefit of the doubt when assessing the credibility of their refugee claims.191
The UNHCR Handbook on Procedures and Criteria for Determining Refugee Status (the Handbook), which provides guidance to states on procedures and criteria for determining refugee status, contains little direction on determining the refugee claims of minors, such as former child soldiers.
Protecting Former Child Soldiers from forcible return
Child soldiers who desert and flee to neighboring countries should be protected against forcible return to Burma. Under the Refugee Convention, refugees are protected against forcible return to a country where their lives or freedom would be threatened.192 The obligation of non-refoulement lies at the center of refugee protection and is now a well-established principle of customary international law.193 Former child soldiers, such as children deserting from military action in Burma, would undoubtedly face serious threats to their lives and freedom in Burma, including the threat of summary execution, and hence should be protected against forcible return. Countries that have not ratified the Refugee Convention, such as Thailand, are still bound by non-refoulement obligations under customary international law.
Moreover, the 1984 Convention against Torture and other Cruel, Inhuman or Degrading Treatment or Punishment (Convention Against Torture) prohibits the return of a person to a country where there are substantial grounds for believing that they would be in danger of being subjected to torture. The prohibitions against torture and against the return of persons to a country where they could face torture, or cruel, inhuman or degrading treatment have also been recognized as principles of customary international law.194 Thus even though Thailand has not ratified the Convention Against Torture it would, under customary international law, be prohibited from returning former child soldiers to Burma where they could face summary execution or other severe punishments.
Criteria For Granting Refugee Status To Former Child Soldiers
As discussed above, the Optional Protocol to the 1989 Convention on the Rights of the Child amends the prohibition on the recruitment of under fifteen year olds into military service under article 38, to a prohibition on forced recruitment of under eighteen year olds or their participation in armed conflict. There has been widespread international support for this position, including by international organizations. UNHCR, for example, has taken the position that all forms of child participation in armed conflict, whether direct or indirect, regardless of the child's or parent's consent, should be prohibited for all persons below the age of eighteen.195 UNHCR also took the position during the deliberations on the Statute for the International Criminal Court that the use of children in hostilities should be considered as a war crime.196
Under the Refugee Convention, a person can be considered a refugee if he or she fears persecution on the grounds of his or her race, religion, nationality, social group, or political opinion.197 UNHCR's Guidelines on Unaccompanied Children make specific reference to the Convention on the Rights of the Child and child recruitment when discussing the criteria for granting refugee protection to children. The UNHCR Guidelines state that:
It should be further borne in mind that, under the Convention on the Rights of the Child, children are recognized certain specific human rights, and that the manner in which those rights may be violated as well as the nature of such violations may be different from those that may occur in the case of adults. Certain policies and practices constituting gross violations of specific rights of the child may, under certain circumstances, lead to situations that fall within the scope of the Refugee Convention. Examples of such policies and practices are the recruitment of children for regular or irregular armies, their subjection to forced labour, the trafficking of children for prostitution and sexual exploitation and the practice of female genital mutilation.198 (emphasis added)
Given that the recruitment of children is increasingly condemned by the international community, that forced recruitment of children is widely accepted as a violation of international law, and that the recruitment of children under age fifteen is considered a war crime, the recruitment of children into the Burma army and opposition armies can be considered as giving rise to a well founded fear of persecution under the Refugee Convention.
Moreover, former child soldiers could also have a well-founded fear of future persecution if they are returned to Burma. As already described, children who have deserted are at serious risk of re-recruitment. Burma army deserters who are caught face jail sentences of three to five years, often followed by conscription back into the army. If they surrender to opposition resistance groups, they face possible execution if they are suspected of being spies. Child soldiers who have fought for opposition armies and surrender to the Burma army are often forced to join small proxy armies and fight against their former comrades. Whether by the Burma army or opposition forces, the re-recruitment of children under the age of eighteen constitutes a serious human rights abuse and violation of international law.
Desertion and persecution
Standing alone, fear of prosecution for desertion is not usually considered a ground for granting refugee protection. The Handbook states that
[w]hether military service is compulsory or not, desertion is invariably considered a criminal offence. The penalties may vary from country to country, and are not normally regarded as persecution. Fear of prosecution and punishment of desertion or draft-evasion does not in itself constitute well-founded fear of persecution. . . . A person is clearly not a refugee if his only reason for desertion or draft-evasion is his dislike of military service or fear of combat. . . . It is not enough for a person to be in disagreement with his government regarding the political justification for a particular military action.199
Association with condemned military action
However, the Handbook stipulates various exceptions to this norm that may apply to former child soldiers. Most significantly, the Handbook states that
[w]here, however, the type of military action, with which an individual does not wish to be associated, is condemned by the international community as contrary to basic rules of human conduct, punishment for desertion or draft-evasion could, in the light of all other requirements of the definition, in itself be regarded as persecution.200
The "type of military" actions that many child deserters do not wish to be associated with are contrary to the basic rules of human conduct. As discussed earlier in this report, child soldiers in the Burma army are frequently forced to round up civilians for forced labor, beat and kick them, burn houses, and even participate in massacres of women and children. As violations of the laws of war201, these military actions are contrary to the basic rules of human conduct. For former child soldiers who may reach the age of adulthood before seeking asylum in a neighboring country, their claims to asylum would best fit within this condemned military action exception to the Handbook's deserter rule.
In addition, child deserters are by definition seeking to disassociate themselves from a military force that recruits child soldiers, a practice that is widely considered to be a gross human rights violation and is prohibited under international law.
Given the fact that the forced recruitment of children below the age of eighteen is widely condemned by the international community and considered a violation of international law, former child soldiers who fear punishment for desertion could be entitled to refugee protection.
The refugee definition
In all of the above cases, former child soldiers should be provided with refugee protection if they meet the other requirements under the Refugee Convention. In other words, if a former child soldier could prove that he was likely to be persecuted on account of one of the five grounds-race, religion, nationality, membership of a social group, or political opinion-he would qualify for refugee protection. Refugee status inquiries must always be particularized to the individual. Although three of these grounds are examined briefly below, former child soldiers may have specific fears of persecution that fit any one or any combination of the above five grounds.
Some former child soldiers from Burma belong to an ethnic group, such as the Karen, Karenni, and Shan, which can cause them to have a well-founded fear of persecution on account of race. The government often targets members of Burma's approximately fifteen major ethnic groups (or their subgroups) for discrimination202 and serious human rights violations, and the targeting may be heightened for some ethnicities because opposition groups associated with them are literally at war with the regime.203
Second, some former child soldiers may face persecution in Burma on account of their political opinions. As documented in this report, some children were opposed to the practice of forced recruitment of children or to any of the other serious human rights violations committed by the Burma army or opposition groups. While some children may have been outwardly expressive of these views during the time they were soldiers, vocal opposition is not required in order to receive recognition under the Convention. Political opinions could also be attributed or imputed to the former child soldier by his persecutors.204
Many of the children we interviewed admitted being in opposition to the Burma army and its practices, and indeed mentioned this as a rationale for their desertion.205 In addition, several children explained how the fact of their flight to Thailand would place them at risk of increased persecution upon their return to Burma.206 As a result, some former child soldiers who return to Burma and encounter their former commanders or other representatives of the Burma government could be attributed with opposition views, both because of their desertion and because of their flight to a neighboring country, and therefore be at risk of persecution on political grounds.
Finally, some former child soldiers may face persecution in Burma because of their membership in the particular social group of children. In guidelines recently published by UNHCR on "membership of a particular social group," UNHCR uses the following definition:
a particular social group is a group of persons who share a common characteristic other than their risk of being persecuted, or who are perceived as a group by society. The characteristic will often be one which is innate, unchangeable, or which is otherwise fundamental to identity, conscience or the exercise of one's human rights207
This report has demonstrated that many children in Burma, including those who are former child soldiers, are at risk of forced recruitment. As discussed above, forced recruitment is a serious violation of human rights, and UNHCR has recognized that serious violations of human rights constitute persecution.208 As discussed previously in this report, military recruiters in Burma seek out children because they are more easily recruited and can be readily taught to perform difficult or gruesome tasks. Therefore, some former child soldiers may fear re-recruitment, and therefore persecution on account of their membership in the particular social group of children in Burma.
As noted earlier in this report,209 in practice, access to refugee camps is not available to the vast majority of former child soldiers from Burma, and most find it too difficult and dangerous to travel to a UNHCR office to seek a refugee status determination. Human Rights Watch is not aware of any cases in which former child soldiers from Burma have received protection documents or other assistance from UNHCR.
178 Article 1.
179 SLORC Law No. 9/93, adopted July 14, 1993, section 2 (a) and section 2 (b), cited in Myanmar's initial State party report to the Committee on the Rights of the Child, CRC/C/8/Add.9, 18 September 1995, paragraph 21.
180 Letter from the Permanent Mission of the Union of Myanmar to the United Nations, New York, to Human Rights Watch, May 8, 2002.
181 Convention on the Rights of the Child, Article 38, G.A. res. 44/25, U.N. Doc. A/RES/44/25 (adopted November 20, 1989; entered into force September 2, 1990). This provision is based on the 1977 Additional Protocols to the Geneva Conventions. Article 4(3)(c) of Protocol II, which governs non-international armed conflicts, states that "children who have not attained the age of fifteen years shall neither be recruited in the armed forces or groups nor allowed to take part in hostilities."
182 Article 3 (a), Convention concerning the Prohibition and Immediate Action for the Elimination of the Worst Forms of Child Labour (ILO No. 182), 38 I.L.M. 1207 (1999), entered into force November 19, 2000.
183 Optional Protocol to the Convention on the Rights of the Child on the involvement of children in armed conflicts, A/RES/54/263, adopted May 25, 2000, entered into force February 12, 2002.
184 Article 8 (2)(b)(xxvi) and Article 8 (2)(e)(vii), Rome Statute of the International Criminal Court, U.N. Doc. A/CONF.183/9, adopted July 17, 1998, entered into force July 1, 2002.
185 Convention on the Rights of the Child, articles 6, 7, 9, 19, 22, 24, 28, 31, 32, 35, 37, 39, 40.
186 See "The Determination of Refugee Status," ExCom Conclusion No. 8 (1977). The Executive Committee ("ExCom") is UNHCR's governing body. Since 1975, ExCom has passed a series of Conclusions at its annual meetings. The Conclusions are intended to guide states in their treatment of refugees and asylum seekers and in their interpretation of existing international refugee law. While the Conclusions are not legally binding, they do constitute a body of soft international refugee law and ExCom member states are obliged to abide by them.
187 The UNHCR Handbook on Procedures and Criteria for Determining Refugee Status requires revision in order to reflect the current state of international law, particularly the Convention on the Rights of the Child. The Handbook states that "in the absence of indications to the contrary - a person of 16 or over may be regarded as sufficiently mature to have a well-founded fear of persecution. Minors under 16 years of age may normally be assumed not to be sufficiently mature. They may have fear and a will of their own, but these may not have the same significance as in the case of an adult." UNHCR Handbook on Procedures and Criteria for Determining Refugee Status (Geneva: UNHCR 1979, 1992), pp. 50-51, paragraphs 213 and 215.
188 See, e.g. Guidelines on Refugee Children, pp. 98-99.
189 UNHCR Guidelines on Policies and Procedures in Dealing with Unaccompanied Children, Geneva, February 1997, p. 9, paragraph 8.1
190 UNHCR Guidelines on Refugee Children, p. 102.
191 UNHCR Guidelines on Refugee Children, p.101. In addition, as part of its staff training package, UNHCR teaches the importance of ensuring that unaccompanied children gain access to refugee status determination procedures and that they are fully informed about the procedure. UNHCR Training Module: Interviewing Applicants for Refugee Status, 1995, RLD4.
192 Article 33 (1) of the Refugee Convention.
193 The customary international law norm of non-refoulement protects refugees from being returned to a place where their lives or freedom are under threat. Customary international law is defined as the general and consistent practice of states followed by them out of a sense of legal obligation. That non-refoulement is a norm of customary international law is well-established. See, e.g. ExCom Conclusion No. 17, Problems of Extradition Affecting Refugees, 1980; No. 25, General Conclusion on International Protection, 1982; Encyclopedia of Public International Law, Vol. 8, p. 456. UNHCR's ExCom stated that non-refoulement was acquiring the character of a peremptory norm of international law, that is, a legal standard from which states are not permitted to derogate and which can only be modified by a subsequent norm of general international law having the same character. See ExCom Conclusion No. 25, General Conclusion on International Protection, 1982. The Executive Committee ("ExCom") is UNHCR's governing body. Since 1975, ExCom has passed a series of Conclusions at its annual meetings. The Conclusions are intended to guide states in their treatment of refugees and asylum seekers and in their interpretation of existing international refugee law. While the Conclusions are not legally binding, they do constitute a body of soft international refugee law and ExCom member states are obliged to abide by them. Thailand is an ExCom member state and as such is obligated to respect the international standards stipulated in the Conclusions.
194 See U.N. Human Rights Committee, General Comment No. 24 (52) (1994) at para 8 and Human Rights Committee, General Comment No. 20 (1992) at para 9.
195 See UNHCR Comments on the Optional Protocol to the Convention on the Rights of the Child on Involvement of Children in Armed Conflicts and UN bodies call for a prohibition on the recruitment and participation of children under age 18 in armed conflict, UNHCR statement, January 12, 2000.
196 Written communication between UNHCR and Human Rights Watch, July 17, 2002.
197 Article 1A of the Refugee Convention defines a refugee as someone with a "well-founded fear of being persecuted for reasons of race, religion, nationality, membership of a social group or political opinion, is outside the country of his nationality and is unable or, owing to such fear, is unwilling to avail himself of the protection of that country."
198 UNHCR Guidelines on Unaccompanied Children, paragraph 8.7.
199 UNHCR Handbook, pp. 39-40, paragraphs 167, 168, and 171.
200 Ibid., p. 40, paragraph 171.
201 Article 3 common to the four Geneva Conventions of 12 August 1949, which applies to internal (as opposed to international) armed conflicts, states that "Persons taking no active part in the hostilities, including members of armed forces who have laid down their arms and those placed hors de combat by sickness, wounds, detention, or any other cause, shall in all circumstances be treated humanely, without any adverse distinction founded on race, colour, religion or faith, sex, birth or wealth, or any other similar criteria. To this end the following acts are and shall remain prohibited in any time and in any place whatsoever with respect to the above-mentioned persons: (a) violence to life and person, in particular murder of all kinds, mutilation, cruel treatment and torture; (b) taking of hostages; (c) outrages upon personal dignity, in particular humiliating and degrading treatment; (d) the passing of sentences and the carrying out of executions without previous judgment pronounced by a regularly constituted court, affording all the judicial guarantees which are recognized as indispensable by civilized people."
202 UNHCR recognizes that "discrimination on racial grounds will frequently amount to persecution in the sense of the 1951 Convention. This will be the case if, as a result of racial discrimination, a person's human dignity is affected to such an extent as to be incompatible with the most elementary and inalienable human rights." UNHCR Handbook, para. 69.
203 See discussion at pp. 15-17, above.
204 UNHCR Handbook, para 80 (recognizing that some political opinions can be "attributed. . .to the applicant").
205 Under Burmese law, desertion is a political offence, which is not grounds for refugee status (see discussion above). However, UNHCR's Handbook states that "there may be reason to believe that a political offender would be exposed to excessive or arbitrary punishment for the alleged offence. Such excessive or arbitrary punishment will amount to persecution." UNHCR Handbook, para 85.
206 A person who was not a refugee when he or she left her country, but who becomes a refugee at a later date, is called a refugee "sur place." UNHCR's Status Determination Handbook notes that, "a person may become a refugee `sur place' as a result of his own actions. . . . Regard should be had in particular to whether such actions may have come to the notice of the authorities of the person's country of origin and how they are likely to be viewed by those authorities." UNHCR Handbook, para. 96.
207 UNHCR Guidelines on International Protection: "Membership of a particular social group" within the context of Article 1A (2) of the 1951 Convention and/ or its 1967 Protocol relating to the Status of Refugees, HCR/GIP/02/02, May 7, 2002.
208 See Office of the United Nations High Commissioner for Refugees, Handbook on Procedures and Criteria for Determining Refugee Status (Geneva: UNHCR 1979, 1992) at p. 51 (stating that "[o]ther serious violations of human rights. . .would also constitute persecution").
209 See section entitled "Options." | <urn:uuid:7ce1c251-ee11-4b34-a1a1-3d8c2c9dd7d2> | CC-MAIN-2022-33 | https://www.hrw.org/reports/2002/burma/Burma0902-07.htm | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570767.11/warc/CC-MAIN-20220808061828-20220808091828-00497.warc.gz | en | 0.950538 | 5,906 | 2.984375 | 3 |
Pakhtun Identity versus Militancy in Khyber Pakhtunkhwa and FATA: Exploring the Gap between Culture of Peace and Militancy
Pakhtun culture flourished between 484 and 425 BC, in the era of Herodotus and Alexander the Great. Herodotus, the Greek historian, was the first to use the word Pactyans for the people living in parts of the Persian satrapy of Arachosia between 1000 and 1 BC. The hymns' collection of an ancient Indian Sanskrit Veda used the word Pakthas for a tribe inhabiting the eastern parts of Afghanistan. The terms Afghan and Pakhtun were synonyms until the Durand Line divided Afghanistan from the Pakhtuns living in Pakistan. For these people the code of conduct has remained Pakhtunwali; it is a pre-Islamic way of life and honour code based upon peace and tranquillity. It presents an ethnic self-portrait which defines the Pakhtuns as an ethnic group having not o ...
1- Hikmat Shah Afridi, Ph.D Scholar, International Islamic University, Islamabad, Pakistan. 2- Manzoor Khan Afridi, Assistant Professor, Department of Politics and IR, Islamic International University, Islamabad, Pakistan. 3- Syed Umair Jalal, Junior Research Fellow, Humanity Research Council, Islamabad, Pakistan.
Conflict Resolution: Editorialization of Government-Tehreek-i-Taliban Pakistan Dialogue
Every newspaper publishes an editorial every day to state its official opinion on the most important issues, and editorials are taken seriously by the public and by official policymakers. This study examined the editorials of Pakistan's two leading newspapers - Dawn and The Nation - on the peace talks between the Pakistan government and the Tehreek-i-Taliban Pakistan (TTP). The editorials published on the dialogues between January 2014 and July 2014 were studied. Using an agenda-setting approach, the study found that Dawn published 67 and The Nation 61 editorials discussing the stakeholders' stances on the dialogue, the dialogue bodies, the disruption of the dialogues by terrorism, and the TTP's terms. The study measured the editorials to answer the research questions. ...
1- Hassan Shehzad, Lecturer, Media and Communication Studies, Islamic International University, Islamabad, Pakistan. 2- Ahsan Raza, Senior Journalist, Dawn Newspaper, Pakistan. 3- Zubair Shafi Ghauri, PhD Candidate, Department of History, Bahauddin Zakariya University, Multan, Punjab, Pakistan.
Pakistan has been following the prison system of the British Empire, though the Pakistani prison system has gone through many changes. Efforts have been made to bring the prison system in Pakistan into conformity with modern prison systems, and the restoration of democracy in Pakistan has paved the way for further reforms. Many suggestions have been forwarded to the authorities requesting modification of the conditions inside Pakistani jails. The data for this paper have been collected from the Human Rights Organization/Council of Pakistan, the Islamic Ideological Council and the jail training institute Lahore. The research under focus is an attempt to explore prison reforms in Pakistan in historical perspective and put forward suggestions to in tune the pr ...
1- Zahid Anwar, Professor, Department of Political Science, University of Peshawar, Peshawar, KP, Pakistan. 2- S. Zubair Shah, Ph.D. Scholar, Department of Political Science, University of Peshawar, Peshawar, KP, Pakistan.
Displacement from FATA Pakistan (2009-2016): Issues and Challenges
The Federally Administered Tribal Areas (FATA) of Pakistan are among the most neglected regions in the world as far as development is concerned. The region has been the hub of all sorts of illegal activities, including militancy and the export of terrorism. Thus, it became inevitable for the government of Pakistan to act against militants through military operations. Over the years, hundreds of thousands of people have been displaced from the tribal areas of the country due to conflicts. Moreover, military operation Zarb-e-Azb was launched in North Waziristan Agency in June 2014. Apart from its success, the operation displaced around 0.5 million people. This paper evaluates how the influx of these IDPs into the country is affecting the socio-economic situation. Secondly, attention has been paid t ...
1-Sohail Ahmad Assistant Professor, Department of Humanities, COMSATS University, Islamabad, Pakistan. 2-Sadia Sohail MS Scholar,Department of International Relations,COMSATS University, Islamabad, Pakistan.3-Muhammad Shoaib Malik Assistant Professor, Department of Pakistan Studies, , National University of Modern Languages Islamabad, Pakistan.
Pakistan's Foreign Policy: Initial Perspectives and Stages
Pakistan is a state like other states of the world. When it came out from the
British net the initial stages were very tough for it. It was considered that it will
rejoin India. But the administration of that time took sincere initiatives to
manage the affairs gradually. Cold war started at that time between the
Communist and Capitalist blocks. Newly established states joined one of them.
Pakistan was also one of them. Its foreign policy principles, rules and
regulations are highlighted in this paper. All these steps are discussed below
gradually with the help of primary and secondary sources. It is concluded that
Pakistan had no choice to join the capitalist block because of its financial
position that forced it to take such decisions as compared to India. But security
and sover ...
1-Muhammad Muzaffar Ph. D ScholarDepartment of Political Science,International Islamic University Islamabad.2-Zahid Yaseen Lecturer,Department of Political Science,Govt. Post Graduate College, Gujranwala, Punjab, Pakistan.3-Uroosa Ishfaq Junior Research Fellow,Humanity Research Council, Islamabad, Pakistan.
With the idea of ‘change’ in exiting social system of Pakistan mainly by
eradicating corruption, Pakistani Tehrik-e-Insaf emerged in 1996, under the
leadership of renowned cricket player Imran Khan. He pledged to promote
justice and the establishment of a welfare state. However, the party could hardly
attain electoral success until 2012 when it reached on national political peaks.
Majority of the Pakistani youth from urban areas follow the party agenda with
a new zenith. Unlike other political parties, PTI pledged to challenge the
existing transmissible-monarchy and decided to holds the intra party election.
By using different primary and secondary sources, this article tries to focus on
PTI’s intra party elections in order to denounce the existing socio-political
culture o ...
1-Muhammad Rizwan Assistant Professor, Department of Pakistan Studies, Abbottabad University of Science and Technology, Havelian, KP, Pakistan.2-Manzoor Ahmad Assistant Professor, Department of Political Science,Abdul Wali Khan University Mardan, Mardan, KP, Pakistan.3-Muhammad Bilal Assistant Registrar, Women University, Swabi, KP, Pakistan.
PTI, Political culture, Democracy, Intra party Elections
Elementary Education and Language Teachers' Professional Development Needs: The Context of Pakistan
Teachers professional development is central to meet the ever-growing
challenges at the elementary school level. In this article we describe the
development and use of Teachers Development Scale at the elementary school
level in Pakistan. An exploratory factor analysis (n=274) showed two basic
dimensions of teachers professional development: community development,
and individual development. Community-based developments included
collaborative projects, participation in conferences, and system of educators for
proficient advancement. On the other hand, individual developments related to
improvement in course work, coaching, observation visits to other schools and
qualification degree programs. The implication of the study identifies
constraints and suggestions for educators, educa ...
1-Fasih Ahmed HOD, Department of Humanities, COMSATS University, Islamabad, Pakistan. 2-Sana Hussan Lecturer,Department of English, Women University Mardan, KP, Pakistan.3-Muhammad Safiullah Research Assistant, Humanity Research Council, Islamabad, Pakistan.
Professional Development, Language Teachers, Elementary Education
Independent Judiciary and NationBuilding: A Case Study of Pakistan
Independent judiciary is the foundation of a fair and impartial and
constitutionally balanced society. Independence means that judges can freely
make lawful decisions whether involving influential politicians, governmental
personals or ordinary citizens. Thus, ensuring decisions are based on
constitution rather than is the result of political pressures or is favoring some
majority. Endowed with independence, the judicial system serves as a safeguard
of the peoples rights and freedom. In Pakistan, although, our Constitution
stipulates an independent judiciary but our governments, over the years, have
been bent upon ensuring that our judges always live in a climate of fear and
make biased and favorable decisions under the influence of the executives. The
paper concludes that indepe ...
1-Sadaf Farooq Assistant Professor, Department of Politics and International Relations,International Islamic University, Islamabad.2-Abida Rafique PhD Scholar, Department of Politics and International Relations, International Islamic University, Islamabad.3-Ghulam Qumber Deputy Director,ISSRA,National Defence University, Islamabad, Pakistan.
Independence, Accountability, Democracy, Media, Power
Employee Engagement and High Performance Work System: An Empirical Study
In the competitive working environment, organizations and researchers have
focused their attention to highlight means and ways to gain more advantage on
less resource consumption; hence more focus is paid on channelizing Human
Resource Practices to gain maximum return on Investment. The data is collected
from the banking sector of Pakistan from a sample of 400 employees. The theme
is to analyze the association among the variables at a multi-level i.e. the data is
collected from two pools of respondents, one the staff level employees and the
second are the middle management employees using an adapted questionnaire
comprising of two sections for each pool respectively. Reliability test,
correlation analysis and Regression analysis is conducted on the data. The
findings of the resul ...
Community Development Perspective in the Local Government System of District Mardan, Khyber Pakhtunkhwa
The present research study analyzed the Khyber Pakhtunkhwa Local
Government Act, 2013 and find out the role of elected leaders in community
development. The quantitative research design is used and sample is selected
through simple random sampling technique. The researchers interview 300
respondents from district Mardan. The statistical results show that elected
leaders are performing their effective role in infrastructure development i.e.
schools, basic health units, irrigation channels, roads and safety wall as well as
in dispute resolution and generating revenue for the local government. The
present research study recommended that government should release the
annual development budget to elected leaders on time for addressing the local
citizen needs. Similarly, the study sugg ...
1-Hussain Ali Lecturer, Department of Sociology, Abdul Wali Khan University Mardan, Mardan, KP, Pakistan. 2-Syed Ali Shah Assistant Professor,Department of Pakistan Studies,Abdul Wali Khan University Mardan, Mardan, KP, Pakistan.3-Ahmad Ali Assistant Professor, Department of Sociology, Abdul Wali Khan University Mardan, Mardan, KP, Pakistan.
Local Government, Community, Development, Leaders, Citizens
Traditional and Modern Education in Bukhara (1860s-1917): The Policy Juncture
This paper explores the nature of conflict that unfolded between the supporters
of traditionalist and modernist education in Tsarist Central Asia. The paper
explores the viewpoint of each camp and explores the causes of such approach.
The paper finds that the conflict or divergence was driven by the desire to ensure
the protection of political and economic interests each camp cherished. While
status quo offered traditionalists economic security and political power, the new
order and industrialization that came to Central Asia in the wake of Tsarist
conquests offered modernists a future in which their political power and
economic prosperity was ensured. Both camps diverged in rationalizing
education as means to sustaining their world view. However, they also
converged in their ins ...
1-Ayaz Ahmad Lecturer,Department of English, Abdul Wali Khan University Mardan, Mardan, KP, Pakistan. 2-Sana Hussan Lecturer,Department of English,Women University Mardan, KP, Pakistan.3-Muhammad Shoaib Malik Assistant Professor,Department of Pakistan Studies, National University of Modern Languages Islamabad, Pakistan.
Bukhara, Central Asian Education, Islamic education
Investigating the Role of Beliefs and Professional Values in HR Management
Human Resource Management has been accentuated as a theme for research in
various organizations and institutions and sufficient literature has been
introduced on this subject since 1980s in the developed world (Harley and
Hardy 2004). This aspect did not attract attention in developing / underdeveloped countries like Pakistan. Presence of literature regarding HRM at the
workplace and ideological orientation warrants its application in our
organizations/ institutions to help resolve managerial issues being confronted
by the managers, employees and the employers. HRM ideology is distinguished
by the unitarist and pluralist approach at workplace (Batt and Banerjee 2012). ...
1-Muhamamd Zia-ur Rehman Assistant Professor, Department of Leadership and Management Studies, National Defence University, Islamabad, Pakistan.2-Riasat Ali Khan Research scholar, Department of Leadership and Management Studies,National Defence University, Islamabad, Pakistan.3-Noor Hassan Lecturer,Department of Management Sciences, Abdul Wali Khan University Mardan, Mardan, KP, Pakistan.
Implications of Pak-Afghan Transit Trade for Regional Security
Pak-Afghan relations have almost remained far from being normal and under
the grip of allegations and counter allegation due to several bilateral political
issues. However, trade relations have remained unrestrained from several
decades. Afghanistan as a landlocked state always relied on Pakistani ports for
its trade requirements with the rest of the world. Despite ups and downs in the
relations, Pakistan provided the trade provision to Afghanistan under 1965
trade agreement which was replaced in October 2010 with agreement providing
better trade facilities to Afghanistan with India. Pakistan has security concerns
over India, as Indo-Afghan trade will reduce Pakistan’s imports of goods.
Growing Indian presence in the form of huge investment in Afghanistan has
threatened Pakista ...
1-Huma Qayum PhD Scholar,Department of Political Science and IR,International Islamic University Islamabad (IIUI), Pakistan.2-Syed Ali Shah Assistant Professor, Department of Pakistan Studies,Abdul Wali Khan University Mardan, Mardan, KP, Pakistan.3-Zubaria Andlib Ph.D Scholar,School of Economics, Quaid-i-Azam University, Islamabad, Pakistan.
Potential for Community Based Ecotourism (CBE) along Balochistan Coast, Pakistan
The term CBE is not only an ethic based approach to tourism but to make it sure
the community engages actively and enjoys the accruing profits of tourism
activities. Following the SFA Framework (Suitability, Feasibility, and
Acceptability), this study critically reviewed tourism resources in ecologically
sensitive coastal areas of Balochistan for assessing their potential for
establishment of CBE. Bio-physical, socio-cultural, and tourism information
were collected from coastal communities through a structured questionnaire
and the same was analyzed through SWOT analysis, while, coastal scenic
information was collected personally through a coastal scenic assessment and
analyzed through fuzzy logic analysis. The study confirmed that selected coastal
localities are potential CBE de ...
1-Zia Ullah Assistant ProfessorDepartment of Tourism & Hospitality, Abdul Wali Khan University, Mardan, KP, Pakistan.2-Muhammad Jehangir Assistant Professor, Institute of Business & Leadership, Abdul Wali Khan University, Mardan, KP, Pakistan3-Javed Iqbal Assistant Professor, Department of Economics, Abdul Wali Khan University, Mardan, KP,
Garbage Collection and Rag Picking: An Issue of Child Labor in Rawalpindi (An Anthropological Approach)
The nature of this study was qualitative and covers the children collecting the
Garbage and Rag picking in Rawalpindi city. In the study, Afghan scavengers
were selected for qualitative analyses. The family backgrounds of these
scavengers and the demographic factors were also analyzed. Most of the
qualitative methods including key informant interviews, visit and stay in the
area, In-depth interview of 50 participants was applied to observe the
phenomenon and collect the relevant information. The process of scavenging
and the situation explored presented that besides poverty and economic
pressure, the migration, independent nature of scavenging work, higher income
as capered to other forms of child labor and increased urbanization were the
major causes behind the phenomenon. The s ...
1-Syed Imran Haider Assistant Professor,Department of Sociology, Allama Iqbal Open University Islamabad, Pakistan.2-Muhammad Ali PhD,Goethe University,Frankfurt Main, Germany.3-Muhammad Bilal Lecturer,Department of Sociology,Abdul Wali Khan University Mardan, Mardan, KP, Pakistan.
The US presidential elections are over and to the astonishment of media, surveys
and polls; Donald Trump won a stunning victory over his democratic rival
Hillary by 289 electoral votes. The election results were surprising and may not
be digested by many Americans for long time and especially for the Democrats.
The stunning victory of Trump does not seem so dramatic if the past elections
are analyzed. This paper identifies few patterns through the analysis of past
elections that support the Republican victory in the Elections 2016. This article
highlights those patterns calling them the "Trump's Triumph Cards" and
correlates them with the results of the current election. This paper identifies that
the Role of the White Population, Population with 40+ years of age, The Rubio
1-Manzoor Ahmad Assistant ProfessorDepartment of Political Science,Abdul Wali Khan University Mardan, KP, Pakistan.2-Muhammad Rizwan Assistant Professor,Department of Pakistan Studies,Abbottabad University of Science and Technology, Abbottabad, KP, Pakistan.3-Zahir Shah Assistant Professor,Department of Political Science,Abdul Wali Khan University Mardan, Mardan, KP, Pakistan.
Presidential Elections, Voting Patterns, Primaries, white Population, Millennial Vote
United States Fundamental Interests in Chile and Cuba: A Historical Study
US hegemony as the result of its interventions in Cuba and Chile is a historical
reality. The United States used to be scared that imposition of Communism had
minimized the Americans dominance over there under the policy of
nationalization. Although, the United States had tried his luck in Cuba twice, in
decades of 1960's, to vanish communism dangerous roots, but unfortunately
faced defeat. Again in 1970's decade the United States faced the same threat of
communism (in form of Salvador Allende regime) in Chile. Chile has blessed
with such rich mineral resources like Cuba, so the United States also had
similarly established their strong hold inform of different significant companies.
In order to prevent the power of Salvador Allende and his nationalization policy,
the United State ...
1-Abdul Zahoor Khan Assistant Professor,Department of History & Pakistan Studies,International Islamic University, Islamabad, Pakistan.2-Nargis Zaman MPhil Scholar, Department of Political Science,University of Peshawar, Peshawar, KP, Pakistan.3-Zahir Shah Assistant Professor, Department of Political Science, Abdul Wali Khan University, Mardan, KP, Pakistan.
United States, Latin America, Caribbean, Communism, Capitalism, Cold-War
Teaching of Harry Potter and the Philosophers Stone in the Light of Barthes Narrative Codes at BS English Level
J. K. Rowling has written seven novels in the Harry Potter series. This fiction
series has also inspired the educationists and academicians and it has been
introduced in different western colleges as part of their syllabi. Warner
Brothers made the films based on all the novels of Harry Potter series. Harry
Potter World, the studio where these movies were made, is a tourist spot in
London and thousands of fans from all over the world visit it every week. The
present study explores teaching of Harry Potter and The Philosophers Stone
in the light of Barthes Narrative Codes with emphasis on hermeneutic codes
and their roles in the building blocks of narrative structure of the novel. The
result of the study shows the extensive use of enigma and delays in the series to
make it captivat ...
1-Ameer Sultan PhD Scholar,Department of English,International Islamic University, Islamabad, Pakistan.2-Rashida Imran Lecturer, Department of English Language and Applied Linguistics,Allama Iqbal Open University, Islamabad, Pakistan.3-Saira Maqbool Assistant Professor, Department of English Language and Applied Linguistics,Allama Iqbal Open University, Islamabad, Pakistan.
Harry Potter, Philosopher’s Stone, Barthes’s Narrative, Codes
Exploring the Attitudes of Undergraduate Students towards Plagiarism in Public and Private Institutions
The purpose of the study is to explore the attitude of undergraduate students
towards plagiarism from both public and private higher educational
institutions. A cross-sectional survey was used to collect the data through
adopted questionnaire which comprised of three subscales; positive attitude,
negative attitude and subjective norms towards plagiarism. Data was collected
from 309 students of BS-Mathematics (n=155) and BS-English (n=154)
programs in which 153 students are from public and 156 are from private
institutions. Descriptive and inferential statistics methods were used to analyse
the data. The results of the study revealed that undergraduate students from
both programs have medium level of positive and negative attitude towards
plagiarism. The findings show that there i ...
1-Shahzada Qaisar Assistant Professor, University of Education, Lahore, Punjab, Pakistan. 2-Sumaira Rashid Assistant Professor, Department of Education, F.C. College, Lahore, Punjab, Pakistan.3-Aashiq Hussain Dogar Associate Professor, University of Education, Lahore, Punjab, Pakistan.
Islamization in Pakistan: A Critical Analysis of Zias Regime
Pakistan was made on the Islamic ideology, it was made to secure the political
and religious rights of the Muslims. It was clearly illustrated in the Objectives
Resolution that no law shall be made repugnant to Quran and Sunnah. The
Islamic Provision of pre 1973 constitution provided base for Islamization. With
the passage of time institutions like Council of Islamic Ideology and Federal
Shariat Court were established. This study is an attempt to analyze whether the
islamization process in law making actually fulfilled the considerations of the
basis of this country. This paper will analyze the attempts in different eras,
specially the era of Zia. Zia claimed to implement and impose Islam in every
walk of life in Pakistan but his Islamization had adverse impacts and criticized
1-Ali Shan Shah Lecturer,Department of Political Science & IR,GC University Faisalabad, Punjab,
Pakistan.2-Muhammad Waris Assistant Professor, Government College Bhalwal, Sargodha, Punjab, Pakistan.3-Abdul Basit Lecturer,Department of Political Science & IR, GC University Faisalabad, Punjab,
Islamization, Zia regime, Political engineering, institionalization, Radicalization
Institutional Determinants of Firm Performance: Evidence from Pakistan
Owner structure (OS) is an imperative feature of a firm and firm performance
(FP). Recent studies have debated the effect of OS and FP around the world.
Studies argue that OS is one of the important factors of decision making since
principals expect wealth maximization while agents try to increase their
personal gains. However, investor protection (IP) adds to the decision making
with regards to OS since IP ensures that shareholders finds shall not be
expropriated and that IP enhances the trust of all stakeholders and help in
making informed decision. Based on this premise, we investigate the effect of
OS on FP in the presence of IP. Using secondary data from Pakistans capital
market for the years 2008-2015 and applying panel data techniques, we find
that not only OS affects FP b ...
Comparing Perceptions of Public versus Government School Teachers towards Job Satisfaction at District Malakand
Many studies have been carried out on the job satisfaction of employees at
various organizational levels all over the world. However, little is known
about the government versus private schools in district Malakand Khyber
Pakhtunkhwa. This study compares the perceptions of private versus
government school teachers job satisfaction related to its six component
i.e, pay and promotion, job security, workload, supervision, work condition
and nature of work. A questionnaire was used for data collection. The data
were collected from 100 teachers both public and private schools on a
convenient sample basis. This study showed that there was a significant
difference among the job satisfaction level of teachers in private versus
public schools on the job satisfaction scale. the results of ...
1-Asghar Ali Assistant Professor,Department of Education,University of Malakand, Chakdara, Dir (L), Pakistan.2-Iqbal Ahmad Lecturer,Department of Education,University of Malakand, Chakdara, Dir (L), Pakistan.3-M. Anees-ul-Husnain Shah Assistant Professor, University of Education,D.G Khan Campus, Punjab,Pakistan.
Job Satisfaction, Private and Public Schools comparison
Anti-Culture Machine: The War on Terror and its Effects on Pakhtuns and their Culture
This study critically evaluates the continuing campaign against terrorism. It
especially discusses the counter-terrorism policies of Pakistan and the United
States of America, which affects Pakhtuns and their culture. Figures show there
has been a surge not only in the number but the activities of militants in the
Pakhtun region after the inception of the war on terror. It is very important,
therefore, to know the effects of the war on terror on Pakhtuns culture. Mostly
relying on secondary data and interviews with experts in the area, the study is
a qualitative analysis of the counter-insurgency campaigns and the resultant
response of the local population in the area. The analysis shows two
interrelated facts. The first is that ignoring cultural values in counterinsurgency campai ...
Impact of Shia-Sunni Annoyances on the Contemporary Geopolitics in the Middle East: A Critical Appraisal
The modern-day Shiite-Sunni split between the Sunnite Kingdom of SaudiArabia and Shia theocracy Islamic Republic of Iran is predominantly portrayed
as a sectarian conflict. Instead, their rivalry constituted geopolitical, economic,
military, and religious supremacy and legitimacy in the region of the Middle
East. Riyadh and Tehran are convoluted in a complex rivalry over a volatile
region where both want their dominance and become a Muslim world leader.
Religious dissimilarities are of secondary worth for the political elite of both
the states, despite the doctrinal variance of Wahhabism and Shiism in their
socio-religious setup; the competition of geostrategic influence in the Middle
East makes the primary concern instead. Both countries have directly and
indirectly supported sec ...
1-Inayat Kalim Assistant Professor, Department of Humanities, COMSATS Institute of Information Technology, Islamabad, Pakistan.2-Muhammad Mubeen Assistant Professor, Department of Humanities, COMSATS Institute of Information Technology, Islamabad, Pakistan.3-Sohail Ahmad Assistant Professor, Department of Humanities, COMSATS Institute of Information Technology, Islamabad, Pakistan.
Shia-Sunni, Sectarianism, Political Rivalry, Theological Divide, Islamophobia, Muslim Unity, Middle East
Language and Cross-Cultural Construction: A Systemic Functional Interpretation of Hiroko Tanaka in Shamsie's Burnt Shadows
This study interprets and explains Hiroko Tanaka one of the major characters of Burnt Shadows from the perspective of cross-cultural construction by using Transitivity which is a major system for the explanation of experiential meta function at the level of clause in Systemic Functional Grammar. Shamsie’s Burnt Shadows (2009) covers five countries and fifty-six years. The novel is about diverse cultures and significant historical periods of the world represented through the intimate relations among the characters. Systemic Functional Linguistics describes language in four hierarchical levels, and the relation among these levels is that of realization/actualization of higher strata realized in the next lower level. This study exploits this realization from the lower level of lexico gramma ... | <urn:uuid:54329ea7-afea-4afb-948c-b8ce7e2a402a> | CC-MAIN-2022-33 | https://www.grrjournal.com/issue/1/1/2016 | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571758.42/warc/CC-MAIN-20220812200804-20220812230804-00698.warc.gz | en | 0.90201 | 6,356 | 3.015625 | 3 |
|KANSAS COLLECTION BOOKS|
This town is on the Kansas City, Fort Scott & Gulf Railroad, seven miles south of Pleasanton. It is on a slightly elevated plateau in the valley of the Little Osage, and surrounded by excellent farming and grazing land. The town was named in honor of C. H. Prescott, who was at the time Auditor and Treasurer of the Missouri River, Fort Scott & Gulf Railroad.
The town site was laid out in March, 1870, by Edward Billings, eighty acres of whose farm was a part of it, the remainder, twenty acres, being a part of the farm of W. H. Billings. A. D. Perrin built the first dwelling, a two story frame; the second by William Bower, the third by G. H. B. Hopkins. About the 1st of April, R. Odell started a general store, Dr. Rader a drugstore, and William Bower a blacksmith shop. In May or June, the postoffice was established, William Bower first Postmaster. He has been succeeded by R. Odell and L. H. Lane, the present incumbent. The Methodist Church was organized in 1873, with twelve members, by William Sibley, the first preacher. In 1881, this society erected a very neat frame church building, costing $1,500. The first school was taught in 1873, in a private residence by Miss Jane McCormick. A frame schoolhouse was built in 1876, and in the winter of 1882-83 an elegant and substantial two story brick schoolhouse was erected at a cost, including furniture, of $4,000. The first birth in the town was that of Andrew Bower, son of Mr. and Mrs. William Bower, August 30, 1870; the first marriage, that of M. L. Bowe to Miss Maria M. Ham, October 1, 1873, and the first death that of Willie H., son of Mr. and Mrs. A. D. Perrin, August 23, 1882. The growth of Prescott has been gradual but continuous. it now contains four general stores, one drug store, two hardware stores, one blacksmith and wagon shop, one hotel, one elevator, one grist mill and about 300 inhabitants.
WILLIAM ANTHONY, physician, Section 2, P. O. Prescott, born in Indiana County, Penn., October 13, 1829. In 1848, entered Elder's Ridge Academy, Pennsylvania; remained two years. He then removed to Jacksonville Academy, Penn., where he attended one year, and, in 1851, entered Jefferson College at Cannonsburg, Penn., where he completed his studies, after which he read medicine, and, in 1853, entered Jefferson Medical College at Philadelphia, where he remained one term; returned in 1854, and graduated in 1855. He then located at Olathe, Johnson County, Kan., where he practiced for three years. He then purchased a farm near Olathe, where he continued his profession until 1870, when he located in Linn County on his present place of 410 acres, where he is actively engaged in the duties of this profession. Married in Indiana County, Penn., September 20, 1854, Miss Jane D. McHenry, of Pennsylvania. They have two children - Cynthetta and Idilla V.
F. N. BROCK, merchant, born in McLean County, Ill., May 16, 1858, where he lived until nine years of age, when he removed with his parents to Linn County, Kan., and was there raised and educated. In 1879, he located in Prescott, and was employed as a clerk for a short time; in 1882, took a commercial course at Paola, Kan., and, in August, 1882, became a partner in the firm of Brock, Robinson & Co., where he is doing a business of about $15,000 per year.
M. W. EBY, merchant, born in Ross County, Ohio, August 27, 1852; when young, removed with his parents to Stark County, Ill., where he attended school until 1864, when he came to Kansas with his parents and settled in Linn County, after completing his education, he learned the blacksmith's trade, which he pursued for three years and a half. Went to Prescott July 2, 1874, and worked for J. D. Sweet as an apprentice for a year and a half, after which he bought Mr. Sweet's blacksmith tools and carried on the blacksmithing business for two years, and, in 1878, engaged in the lumber, furniture and hardware trade, doing a business of $22,000 per annum. January 1, 1883, he bought Mr. Perrin's interest in the business, and is now alone. Mr. Eby was married in Linn County, Kan., November 27, 1879, to Miss Margaret F. McNabb, of Missouri. They have one child - Oscar W.
FRANK GRAY, farmer, Section 23, P. O. Pleasanton, born in Madison County, Ind., November 18, 1837, was raised and educated in his native State, after which he assisted his father on the farm until 1859, when he came to Kansas and settled in Linn County. In 1862, enlisted in Company K, Twelfth Regiment Kansas Infantry, and was discharged in 1865. He then returned to Linn County and located on his present place of 160 acres, and is engaged in farming. Married in Linn County, Kan., July 7, 1861, Sarah A. Venable, of Texas. She was born February 13, 1838. They have two children - Oliver M. and Ercenus C.
CHARLES HALLER, farmer, Section 7, P. O. Prescott, born in Frederick County, Md., January 14, 1827; was raised and educated in his native State. In 1848, removed to Montgomery County, Ohio, where he was employed in improving public roads, etc., for three years. he then located in Madison County, Ohio, where he followed farming until 1857, when he moved to Cooper County, Mo., where he was employed by the Union Pacific Railroad for a short time. The following fall came to Kansas and located in Bourbon County, where he remained some time engaged in farming. In 1858, settled on his present place near Prescott. His estate consists of 425 acres. In 1861, enlisted in Company G, Seventh Regiment Kansas Cavalry; served through the war. Married twice, first in 1869 to Amanda Osburn, of Indiana. By this union they have one child - William O. Married the second time at Dayton, Ohio, August 13, 1871, Mary A. Woodman, of Ohio.
K. W. HARKNESS, farmer, P. O. Prescott, born in Peoria County, Ill., June 21, 1841, where he was raised and educated. In 1857, he came to Kansas and settled in Linn County on his present place of 1,000 acres, where he is actively engaged in farming ad breeding fine stock. In addition to his present occupation, he owns a half-interest in the new elevator located at Prescott. In 1861, he enlisted in the Eighth Missouri Infantry - served three months. Re-enlisted in 1863 in Company K, Third Regiment Illinois Cavalry; was discharged in 1865. He was married in Peoria County, Ill., December 24, 1865, to Miss Julia F. White, of North Carolina. They have seven children - Lee, Minnie A., Ernest, Isaac, Nettie, Ella, Capitola and Dexter.
DR. L. H. LANE, druggist, born in Turin, Lewis Co., N. Y., April 1, 1830; when; young was taken by his parents to Kendall County, Ill., where he was raised and educated. In 1855, began the study of dentistry at Elgin, Ill., afterward located at Bristol, Ill., where he engaged in the duties of his profession until 1870, when he came to Kansas and located at Prescott, and turned his attention to the drug trade. In 1872, was elected to the Legislature and served one term. He was for five years railroad, freight and ticket agent at Prescott; has served as notary public and held other minor offices. He has been twice married, first in Bristol, Ill., February 23, 1854, to Emily J. Kendrick, of Illinois; she died in September, 1864. By this union he has four children - Edwin C., Charles E., Lyman K. and Francis A. Was married the second time at Topeka, Kan., December 10, 1879, to Rebecca Flower, of Ohio.
JOHN McAULEY, farmer, Section 23, P. O. Pleasanton, born in Glasgow, Scotland, March 4, 1827; when young moved to America with his parents and first located in New York City, where he attended school for two years. His parents then moved to Canada and settled near Toronto, where John completed his course of studies, after which he followed agricultural pursuits for some time, and then returned to New York and located at New York Mills, where he was engaged in the dye works until 1854, when he emigrated to Marquette county, Wis.; farmed until 1859, then came to Kansas and settled in Linn County. His estate consists of 400 acres. Married in Rome, N. Y., August 27, 1848, Rachael Blasier, of New York. They have seven children _ Mary M., Joan, Eugene M., Marion E., Alford B., Mercy M. D. I. R. and Charlie C.
ED. H. MANLOVE, general merchant, born in Schuyler County, Ill., April 25, 1855, where he was raised and educated. In 1873, came to Kansas and located at Cherokee, where he was employed as a clerk for one year, and, in 1874, removed to Prescott, where he engaged in general merchandise under the firm name of Manlove Bros. He married at Marshfield, Mo., October 6, 1880, Miss Capitola Phoenix, of Wisconsin. They have one child - Clyde Edwin. Mr. Manlove is identified with the Republican party.
A. D. PERRIN, farmer, Section 8, P. O. Prescott, was born in Medina County, Ohio, July 2, 1834, where he was raised and educated, and soon after learned the carpenter's trade, which he pursued for some time. In 1855, he removed to Kendall County, Ill., where he was employed as millwright for three years. September, 1858, he came to Kansas and located in Linn County, where he worked at the carpenter's trade nearly three years. In 1861, he enlisted in Company E (Calvary), Third Regiment Kansas Volunteers, as musician. In April, 1862, was transferred with the company to the Fifth Kansas Cavalry as Company D; soon after was commissioned Second Lieutenant, afterward First Lieutenant, which position he held until discharged from service, when he returned to Kendall County, Ill., and followed agricultural pursuits until 1870. He then returned to Linn County, Kan., and located at Prescott and engaged in contracting and building, having built the first dwelling in the city of Prescott. From 1878 to 1883, was engaged in general merchandise at Prescott; selling his interest, he located on his present place. Was married in Kendall County, Ill., January 4, 1865, to Miss Mary A. Lane, of Bristol, Ill. They have one child living - Herbert Lane.
L. R. SELLERS, physician, born in Madison County, Ind., March 11, 1848. Was reared and educated in his native State, after which he was employed as teacher which he pursued for some time. In 1869, came to Kansas and taught school near Mound City for about six years. During his term of teaching he studied medicine, and, in 1875-76, attended the medical lectures at the University of Louisville, Ky., and graduated at the Indiana Medical College at Indianapolis in 1877. He then located at Prescott, Kan., where he is actively engaged in the duties of his profession. Married, in Linn County, Kan., December 25, 1878, Miss Alice Goss, of Indiana. They have one child - Pearl.
M. C. STARK, Notary Public, born in Osage County, Mo., March 21, 1837; when young, was taken by his parents to Pike County, Ill., where he was raised and educated, after which he followed farming in Pike and Logan Counties, Ill., until 1871, when he came to Kansas, and first located in Lyon County, where he engaged in farming and stock-raising until 1879, when he located at Prescott, and is engaged in general merchandising, real estate and loan agency. He has an estate of 150 acres, and is also proprietor of a harness and saddler's shop. Served in the late rebellion in Company I, Seventieth Regiment Illinois Infantry as Orderly Sergeant. He has been twice married, first in 1857 to Mary A. Chaney, of Illinois. She died in 1877. By this marriage he has six children - Rebecca G., John L., Thomas Y., Maggie E., Ida A and Mary B. Married, the second time, in Lyon County, Kan., October 6, 1878, Addie J. Soule, of Illinois. They have two children - Addie M. and Pearl.
H. H. WOY, farmer, Section 29, P. O. Pleasanton, born in Carroll County, Ohio, November 17, 1840; was raised and educated in his native State, after which he located on a farm and followed agricultural pursuits until 1864, when he removed to De Witt County, Ill., where he remained for one year. In 1866, moved to Bates County, Mo., and engaged in farming until 1870. He then removed to Linn County, Kan., and turned his attention to agricultural pursuits for two years. He then located on his present place. He served in Company F, Fifty-seventh Regiment Ohio Infantry in the late war. In 1877, was elected County Commissioner, which position he still holds. Married, in De Witt County, Ill., November 23, 1865, Miss Louisiana Hume, Of Illinois.
This town is situated fourteen and one half miles southwest of Mound City, on the line of the proposed St. Louis & Emporia Railroad. It is on high, almost level prairie, and surrounded by a country well adapted to farming and grazing. The first post office in the vicinity was opened on Blue Mound, and elevation one half mile north of the present town, in the year 1854, John Quincy Adams, the first settler in the township being appointed Postmaster. Some time afterward, it was moved one mile south, and later three miles to the west, where it remained until June 1, 1882, when it was finally moved into the village of Blue Mound by the present Postmaster, George T. Wolf. The elevation, called "Blue Mound," is about fifty feet high, and was so named by John Q. Adams, because from a distance it looks blue; the more moisture there is in the air the bluer it looks. The town was named after the mound, and was located where it is on the assurance of the St. Louis & Emporia Railroad authorities that that road should run near it. The Blue Mound Town Company was organized in April, 1882, and was composed of the following members: Capt. Barnes, President; Nathan Corbin, Secretary; H. A. B. Cook, Treasurer; O. R. Deland, H. M. Brook and Thomas Brook. The town site was surveyed in April, 1882, by Gen. Harrison. The first building on the town site was a store moved from the windmill, three miles southeast, by D. J. & W. S. Alley, May 1, 1882; the second was moved from Wall street, by Innis Bros., and was utilized as a hotel until their new hotel was completed in June; the third building was erected for a shoe shop by T. H. Blise. The blacksmith shop was moved to Blue Mound from the windmill. The first sermon in the town was preached by Rev. Mr. Hinton, a United Brethren minister. The first school was opened October 2, 1882, by Miss M. E. Weatherbie, with thirty scholars. The first birth was that of a son of Mr. and Mrs. Frank Stuteville, in August; the first death that of the wife of John Michael, September 29, 1882.
The growth of the town has been phenomenally rapid. On the 1st of May there was but one or two buildings on the town site; on October 1, there were fifty-six, and a population of nearly two hundred, with three general stores, one hardware store, one furniture store, two blacksmith shops, one drugstore, one harness shop, one lumber yard and one hotel. With a prospect of one railroad, possibly two, and a union depot, the people are full of enterprise and hope. Should they get neither, they will, upon a near approach, be a great deal bluer than the Mound looks at a distance. "Cross City" is a possible future town laid out one mile northeast of Blue Mound, as an opposition town to that village. Its fate will be determined by the location of the St. Louis & Emporia Railroad and its station.
BIOGRAPHICAL SKETCHES - BLUE MOUND TOWNSHIP.
A. A. ALLEN, JR., dentist, born in Jersey County, Ill., July 31, 1852. At the age of seven, removed with parents to Allen County, Kan., when he assisted his father on the farm, and attended the district school. He finished his studies at Geneva Academy, Kansas, in 1872, after which he began the study of dentistry, having located at Osborn City, Kan., in 1870. In 1882, removed to Blue Mound, where he is engaged in the duties of his profession. He was married in Allen County, Kan., October 3, 1873, to Miss Hattie C. Martin, of Illinois. They have two children - Elizabeth M. and Lillie May.
W. P. BARNES, farmer, Section 20, P. O. Blue Mound, born in Ashtabula County, Ohio, July 1, 1837, where he was raised until 1846, when he removed with his parents to Ripley County, Ind., and was there raised and educated, after which he taught school in his native State and Indiana for five years. In 1855, he located in Henderson County, Ill., where he engaged in teaching and farming until 1861, when he enlisted in Company E, Tenth Regiment Illinois Infantry; served three months and re-enlisted in Company C, Ninety-first Regiment Illinois Infantry. He was captured by Gen. John Morgan, in Kentucky, and held a prisoner for a short time. Discharged in 1863, on account of disability. He then returned to Henderson County, Ill., where he followed agricultural pursuits until 1872, when he came to Kansas and located on his present place of 1,700 acres. Mr. Barnes has also a seventh interest in the City of Blue Mound, which was purchased by a stock company. Served in the Legislature in 1876. Married in Henderson County, Ill., October 20, 1856, Maria J. Brook, of Illinois. They have ten children - John A., William L., Isaiah S., Charles T., Rufus A., Esther J., Mare E., Ruth E., Hugh and Rachel A.
A. T. BROOK, farmer, Section 20, P. O. Blue Mound, born in Henderson County, Ill., July 23, 1854. He was raised and educated in his native State, having completed his studies at Monmouth, Ill., in 1875, after which his time was occupied in farming until 1879, when he came to Kansas and located in Linn County, on his present place of 1,120 acres, where he is actively engaged in farming and stock-raising. In addition to his landed estate, Mr. Brook is a stockholder in the enterprising city of Blue Mound, which consists of one-seventh interest in 300 acres, in town lots. He was married in Burwich, Ill., September 13, 1881, to Miss Clara L. Cable. She was born in Warren County, Ill., in October, 1859. They have one child - Charley F.
J. W. VAN PELT, farmer, Section 26, P. O. Blue Mound, born in Highland County, Ohio, September 21, 1846. When young was taken by parents to Fayette County, Ohio, where he was raised to manhood and educated; after which he engaged in farming and trading in live stock, which he followed until 1877, when he came to Kansas and located in Linn County. His present estate consists of eighty acres of land, conveniently located to Blue Mound. Married in Fayette County, Ohio, September 23, 1871, Elvira McClure, of Ohio. She was born in 1845. They have four children - Carrie E., Fred L., Norma and William P.
GEORGE T. WOLFE, merchant, born in Harrison County, Ind., March 30, 1847, where he was raised until the age of nine, when he removed with parents to Vermillion County, Ill., where he matured to manhood and was educated. In 1867, removed to Gayoso, Mo., where he engaged in merchantile (sic) pursuits for four years, and in 1871, located at Point Pleasant, Mo., where he continued merchandising until 1879, when he came to Kansas and located at Garnett, where he was a merchant for one year. In 1880, settled in Linn County on his estate of 400 acres near Blue Mound, where he followed agricultural pursuits until the spring of 1882, when he located at Blue Mound and began anew merchandising. He is also Postmaster. Married at Metropolis City, Massac County, Ill., May 21, 1870, Miss Julia H. Kennedy, of Ohio. They have two children - Fred K. and William.
This town is located on the banks of the Marais des Cygnes, about four miles from the State line, and is one of the oldest settlements in the State. The land where it stands was purchased at an early day of a Frenchman named Jarien, by another Frenchman named Chouteau, the latter carrying on a heavy trade with the Indians; hence this post was called the Chouteau Trading Post. There was no town laid out here until 1865, when the Montgomery Town Company was organized, and the town of Montgomery laid out and platted October 17, that year, just east of the present town site of the trading post; but the town not being a success was finally abandoned. Trading Post is located on Section 5, Township 21, Range 25, and was laid out and platted in March, 1866, by Dr. Massey and George A. Crawford. But everything in this town dates from the Marais des Cygnes massacre. Previously to this time, John F. Campbell was keeping store here. Soon after it, Dr. Massey & White opened a store in a log house near the bridge. A grist-mill was erected in 1857. It has been purchased and much improved by J. & A. Brockett, and is now one of the finest mills in the State. It is two and a half stories high, and has two run of buhrs. There is a saw-mill attached. During most of the year, it is run by water, but during the dry season, in August and September, the motive power is steam. It is not ascertainable who preached the first sermon at the Post, but John R. Williams, a Baptist minister, preached to an outdoor congregation, in August, 1856. There are two church organizations in this vicinity, one Baptist, the other Southern Methodist, both of which use the "Swayback" church, situated three miles north and one mile east of the Trading Post as a house of worship. There is also a United Presbyterian organization, four miles east, known as the State Line Church. The present schoolhouse, a two-story frame, was built by the Masons in 1865, and the lower half sold by them to the district.
The first birth in the vicinity was that of Jasper and Newton Nichols, twins, in 1855; the first marriage that of Samuel Brown to Miss Hobbs, in 1856; and the first death that of Mrs. Bartemas, in 1856.
Blooming Grove Lodge, No. 41, A., F. & A. M. , was organized in 1862, with twelve members. Its charter officers were: A. C. Doud, W. M.; William Goss, S. W.; W. W. Silsby, J. W.; Samuel Brown, Secretary; Jackson Lane, Treasurer. The present membership is fourteen.
Trading Post contains at present three general stores, one drug store, two blacksmith shops, one agricultural implement dealer, and about 100 inhabitants.
Barnard is the trading post station, and nearly three miles distant toward the northwest. It is situated on "Hensley's Point;" the town-site, eighty acres, was purchased by J. B. Grinnell, of Arthur Barnard. Mr. Grinnell had a survey made, and held a sale of lots in the fall of 1869. The first building erected in the town was built for a store and grocery by John B. Leabo, who was appointed first Postmaster in the same Year. In March, 1870, David Sibbett was appointed Postmaster and has held the office ever since. The first dwelling erected in Barnard was the section house by the railroad company, the next by James Leabo, both in the fall of 1869. The first birth in this part of the county was that of Millie B. Ward, daughter of Sylvester and Nancy Ann Ward, October 11, 1866; the first marriage that of Robert Edwards to Mrs. Mary Bemus in 1871, and the first deaths those of Jacob and Richard Gudgel, father and son, which occurred at almost the same time, in 1872. The first school was taught in John Morrison's house, by William Stark, in the winter of 1869-70. Barnard now contains eight dwellings, and about forty inhabitants.
OTHER VILLAGES AND POSTOFFICES.
Hail Ridge is situated nine miles southwest of Mound City and five miles east of Blue Mound. There is here only a store and post office.
Oakwood is a country post office established in 1858, with John Jones, Postmaster. The post office was frequently moved from one farmhouse to another until 1878, when a grange store was started under the management of W. B. Scott, and the post office was permanently located therein. In addition to the grange store, there is a drug store, blacksmith shop and physician's office.
Woytown is situated on the open prairie, and was named after H. H. Woy, one of the first settlers. The first settlement was made by C. O. Best in the spring of 1881, who was appointed Postmaster in October of that year. The first birth was that of Winfred J. Darley, August 18, 1881. The first store was opened by S. W. Kiser. The town contains about twenty-five inhabitants.
Walnut Grove Post Office was established in 1871, John Brown first Postmaster.
Cadmus Post Office was established in 1877, J. S. Payne being appointed first Postmaster. | <urn:uuid:5c5b047a-d222-470c-9155-84f4030bcaa5> | CC-MAIN-2022-33 | http://www.kancoll.org/books/cutler/linn/linn-co-p10.html | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571950.76/warc/CC-MAIN-20220813111851-20220813141851-00695.warc.gz | en | 0.986584 | 5,897 | 2.5625 | 3 |
Squirrel monkeys (Saimiri collinsi) are seasonal breeders that live in large social groups in which females are dominant to males. Females have one infant per year, and the nursing period lasts six to eight months. Preliminary observations in the wild indicated that during the mating period (eight weeks, July and August in our population), infants show agonism directed at males who approach their mothers. Such directed sexual interference by infants has rarely been reported for neotropical primates. Our study reports observations on a natural population of Saimiri collinsi with the aim of describing the social behavior of infants during the breeding season, especially with regard to adult males in the group. Infants of both sexes were observed during three mating periods (2011, 2012, 2013) to test hypotheses about the possible function of infant harassment directed at adult males. The behavior of infants (variables: activity and nearest neighbor) was sampled by the focal-animal method, complemented by ad libitum observations. Using the all-occurrence method, we recorded 99 cases of agonism and 17 cases of tolerance between infants and nearby adult males; thus, 85% of interactions between adult males and infants involved agonism. These results suggest that infant interference can impose a cost on adult males during the breeding season.
Social conflict between adult males and unrelated infants/juveniles is often reported in primates. For example, juvenile yellow baboons (Papio cynocephalus) are not tolerated by adult males at feeding sites (Pereira, 1988, 1989). The most extreme form of male aggression toward infants is infanticide, a male reproductive strategy shown by many primates (Agoramoorthy and Rudran, 1995; Borries et al., 1999; Beehner and Bergman, 2008; Rimbach et al., 2012), in which males that have recently immigrated into a social group attack and kill unrelated, unweaned infants. However, in squirrel monkeys (Saimiri collinsi, formerly classified as S. sciureus; Lavergne et al., 2010), an inverse and seldom reported type of agonism occurs between males and infants. In this species, it is the infants who show agonism toward the adult males, usually in the presence of their mothers and without any retaliation from the males (Stone, 2014). This behavior occurs primarily during the mating season (approximately eight weeks; Stone, 2006) and appears to consist mostly of sexual interference. Specifically, a female's youngest dependent offspring (here called “infant”) shows agonism toward males who approach and attempt to copulate with its mother.
“Sexual interference” is considered any disruption that other individuals direct toward a copulating pair, whether through contact or non-contact behaviors (Nishida, 1997). Usually this behavior occurs among adults in a group and consists of actions by a third individual that can interrupt the pair's copulation. Intra-sexual competition among males is the most common form of sexual interference seen in primates, although female competition also results in sexual interference (Qi et al., 2011). Males may also direct aggression toward ovulating females, attempting to prevent them from mating with subordinate or non-resident males (Smuts and Smuts, 1993). To our knowledge, however, sexual interference by infants (in particular, targeted agonism toward adult males) has not been reported in primates, and this phenomenon merits investigation in order to understand the context in which it occurs and its possible ecological and adaptive function.
Squirrel monkeys are polygamous neotropical primates that live in large multi-male, multi-female groups of 25–75 individuals (Zimbler-de Lorenzo and Stone, 2011). Groups show female-biased sex ratios (Stone, 2004) and are characterized by weak male-female associations, with males remaining at the periphery of the group during most of the non-breeding periods (Izar et al., 2008). In addition, adult female S. collinsi are dominant to adult males (Izar et al., 2008). Squirrel monkeys are highly seasonal breeders (Di Bitetti and Janson, 2000) and males show weight gain (85 to 222 g; DuMond and Hutchison, 1967) during the brief mating period (two to eight weeks; Izar et al., 2008). The weight gain results from fat deposition and water retention, which produces a "fatted" appearance in the upper torso, arms and shoulders (Mendoza et al., 1978; Boinski, 1987; Mitchell, 1990; Stone, 2004). Male fattening in this species appears to be related to sexual selection (Stone, 2014). Gestation in Saimiri lasts five months (Garber and Leigh, 1997) and lactation lasts from six to eight months in S. collinsi, with the end of weaning coinciding with the start of the next mating season (Stone, 2006).
This study addresses the following questions: (1) What is the possible adaptive significance of infant sexual interference/agonism toward adult males (hereafter called IMA) seen in S. collinsi? (2) In which social and ecological contexts do these events occur? Several hypotheses (not mutually exclusive) could explain the behavior of the infants. For example, the weaning conflict (Trivers, 1974) could result in nursing infants trying to prevent pregnancy in their mothers, since a new pregnancy would reduce maternal investment in the infants themselves. Alternatively, due to the pattern of female dominance in this species, female infants rather than male infants may be the main aggressors toward adult males, in order to establish dominance over them (Smale et al., 1995). Finally, the possibility exists that infants preferentially direct agonism toward certain males, either lower-quality males who try to copulate with their mothers, or males who are not their fathers. In order to shed light on these hypotheses, this study investigates: whether there is an association between IMAs and nursing bouts; the effect of infant sex on activity budgets and nearest neighbors; and whether male robustness affects the frequency of IMAs. We also examine whether the infants are successful at blocking copulation attempts by adult males; that is, whether this infant behavior represents a cost to adult males.
This study was conducted near the village of Ananim (municipality of Peixe-Boi), 150 km east of Belém, state of Pará, Brazil (01°11′S, 47°19′W). The 800-hectare site consists of privately owned ranches that include primary forest and adjacent secondary forests. Rainfall is seasonal, with a wet season from January to June and a dry season from July to December. Fruit availability is highest during the wet season (Stone, 2007). Mating in this population of squirrel monkeys occurs during an 8-week period from mid-July to mid-September, and births occur in January and February of each year (Stone, 2006). Therefore, the wet season corresponds to births and lactation, and the dry season corresponds to mating and gestation. We collected the behavioral data presented here during three mating seasons (2011, 2012 and 2013).
We collected behavioral data on one social group of squirrel monkeys, with approximately 46 individuals (ca. nine adult males, 15 adult females, 12 juveniles and seven infants). Although most adult females give birth every year, infant mortality accounts for a reduction in the number of infants in the group by the next mating season (Stone, 2004). We classified individuals as adults when over five years of age (males) and three years of age (females; Mitchell, 1990; Stone, 2004). We define individuals observed nursing on their mothers, even if sporadically, as infants (between six and eight months of age during this time period). Four individuals (two adult males and two adult females) were individually recognized, either by natural marks or by beaded identification collars. During observations involving adult males, we classified each individual into a robustness category (see Stone, 2014): Grade 1 (barely noticeable fattening response; n=2 in 2013); Grade 2 (showing the fattening response in the upper arms and torso, but neck still visible; n=4 in 2013); Grade 3 (fattening response very pronounced in the arms and torso, relative to the rest of the body which remains unfattened; neck barely visible; n=3 in 2013).
Behavioral Data Collection
Observations in the three mating seasons totaled 129 hours. We followed the group for at least 10 days per month from 06:00 until approximately 14:00 hours (2011 and 2012) and between 11:00 and 15:00 hours (2013). In all mating periods, we collected all-occurrence data on infant-adult male interactions (whether agonistic or tolerant; see Table 1) and on nursing bouts, timing the duration of the latter whenever possible. We also always attempted to sex the infant and to classify the adult male into the aforementioned robustness categories. Specifically in the 2013 mating season, we also collected 64 10-min focal-animal samples (Altmann, 1974) on infants. During the focal period, we classified the infant into male, female or unknown. At each 1-min interval, we recorded the following variables: activity of the focal animal (eat, forage, rest, travel, social) and age-sex class of the nearest neighbor (hereafter NN), within 5 m (adult male; adult female; juvenile or infant; alone). Within the focal period, we also made continuous observations of any social behaviors that took place involving the focal infant (e.g., nursing, threatening adult male), noting initiation and directionality of interactions. We timed the duration of any nursing bouts observed.
Although non-identification of focal infants is a potential limitation of the study, we took steps to minimize any pseudoreplication. The order of observations of infants based on sex was not random, to avoid oversampling some of the infants. For example, if the first sample of the day was a female infant (determined randomly), we often sampled a second female infant immediately after the first in order to avoid repetition of the same infant. In addition, because the group was often spread over 50–150 m, we conducted successive samples on individuals that were distantly located.
Ethogram of social behaviors of infant Saimiri collinsi.
We used descriptive statistics to quantify the following variables: nursing bout duration, percent of social interactions toward adult males that were agonistic, percent occurrence of different types of IMA, percent IMA according to male robustness grade. We also conducted a Chi-squared analysis to test whether adult males differed in number of IMAs received, according to their robustness level. Instantaneous observations within each infant focal sample are not independent; therefore, we treated each sample (rather than each observation) as an independent data point. The categorical activities “activity” and “NN” were converted to quantitative variables as proportion of intervals. The effect of infant sex on each activity and on NN was then analyzed with unpaired t-tests, with the p value set at p<0.05. All tests were two-tailed.
General context of IMAs in S. collinsi
We observed 99 cases of IMA during the three mating periods, and 17 cases of infants tolerating adult males that were nearby. We did not observe affiliative interactions between infants and adult males. This indicates that 85% of the 116 interactions between infants and adult males involved agonism. In 76% of the 116 observations, an adult female (likely the infants mother) was within 5 m of the infant-male pair, forming a triad (infant, mother, adult male). In 44% of these 88 observations, we were able to determine that the male was, either, sexually pursuing the adult female, conducting genital inspections or mounting the female.
Number of agonistic interactions between adult males and infant Saimiri collinsi, over three mating seasons (2011, 2012, 2013). The first column indicates interactions initiated by infants, second column indicates interactions initiated by adult males.
As shown in Table 2, adult males directed agonism toward infants only on four occasions. IMAs consisted of vocal threats, chases and, rarely, physical aggression in the form of biting. On nine occasions, we also observed infants moving toward and chasing males that were on a nearby branch (that is, not interacting directly with an adult female). We also observed two cases in which a resting male was approached by an infant who jumped on and bit the adult male, resulting in the adult male leaving the scene. Finally, we note three cases when the infant effectively “blocked” adult males from mounting their mothers; specifically, the infant mounted his mother, blocking access by the male.
Occurrence of nursing within the mating period
We observed nursing bouts during all three mating seasons. Over the three seasons, we recorded 25 nursing events, with a mean duration of 29 ± 4 seconds (N=7 timed bouts). In 12 cases, we could not identify the sex of the infant due to its nursing position. We identified the infant as male in five cases and as female in one case. In three cases, we observed nursing bouts during a time when an adult male was pursuing the infant's mother. In one of these cases, a male infant threatened the adult male and then immediately nursed. Qualitatively, we observed an increase in weaning conflicts between mother and infant after August 15 (females forcefully removing infants from the nipple, with infants vocalizing in distress).
Effect of infant sex on its activities and nearest neighbors
Infants of both sexes spent over 50% of their time foraging independently (Fig. 1). We did not observe an effect of sex on the infants' activity budget (FO: t54=0.61, NS; RE: t5.=-0.69, NS; LO: t54=0.54, NS; SO: t54=0.70, NS). Infants of both sexes also spent over 50% of their time budget near other infants/juveniles (Fig. 2). There was no effect of infant sex on proportion time spent with adult males (t54=-0.97, NS), adult females (t54=0.43, NS), juveniles/ infants (t54=0.07, NS) or alone (t54=-1.52, NS). In the 99 cases of IMAs, we were only able to determine the sex of the infant in seven cases (five males and two females) because of the short duration of the IMA.
Effect of male grade on IMAs received
We were able to register male robustness level for in 18 IMAs (Table 3). The intermediate fat males received 44% of agonism cases, followed by the least fat males (39%) and the fattest males (17.6%) but this difference was not significant (χ2=1.81, df=2, p=0.40). We also highlight that only Grade 2 and 3 males were tolerated by infants when in proximity to females (n=5 cases where male grade was identifiable).
This study confirms prior qualitative observations of the occurrence of IMAs in Saimiri collinsi (Stone, 2014). However, this study is the first to quantify the occurrence of this behavior in the field, confirming that most of the interactions between infants and adult males during the mating season are agonistic, and that males do not retaliate against infants, often leaving the location. Most of the interactions consist of vocal threats and chases, but they may also reach physical aggression. We also confirm that IMAs occur predominantly within a context of sexual interference; that is, in most cases, the infant is near its mother when the adult male approaches her for copulation or genital inspection. In coatis (Nasua nasua), juvenile agonism toward adults is also observed commonly. Rather than reflecting social dominance, the interactions consist of tolerated juvenile aggression, particularly during feeding contexts, so that juveniles have better access to food sources during growth and development (Hirsch, 2007). The pattern that we observed in squirrel monkeys differs in that infant intolerance toward males occurs mostly within a socio-sexual context.
Number of IMAs received by adult males according to their robustness levels.
We observed overlap between the copulation period and the end of the nursing period. Specifically, we observed IMAs performed by infants who are not fully weaned. This observation supports the first hypothesis that the infants interference is an attempt to prevent its mothers pregnancy. However, data from captivity and from the field indicate that lactating squirrel monkey females are still able to get pregnant (J. Ruiz, personal communication for S. boliviensis; L. Kauffman, personal communication for S. sciureus), indicating that these primates do not undergo lactational anovulation. Therefore, a more likely, non-physiological explanation for our results then is that, nursing infants could be engaging in IMAs to prevent their mothers from spending time in mating activities, which would detract from time invested in nursing bouts. Mating activities can occupy a significant portion of a female's day; consortship pairs are common, in which males pursue adult females for several hours while conducting genital inspections, branch inspections and vocalizing to her (Stone, 2014). As such, this would still be a case of classic weaning conflict (Trivers, 1974). Our study only covered the mating season (two months in each year); therefore, we cannot affirm that IMAs occur exclusively during this season. However, we do know that adult males remain at the periphery of the group at other times of the year (Stone, 2004), reducing the chances of social contact between infants/juveniles and adult males. This suggests that IMAs probably are restricted to the mating season, which also supports the weaning conflict hypothesis.
The second hypothesis we considered was that most IMAs would be initiated by female infants, in order to establish early social dominance over adult males, a pattern similar to seen in hyaenas (Crocuta crocuta). Infant females in this species are highly aggressive (Smale et al., 1995) because adult females are dominant to adult males (Frank, 1986). Against this hypothesis, we did not observe sex differences in the amount of time infants spend near adult males, suggesting that female infants do not have more chances to show agonism toward adult males. We were only able to determine the sex of the infant in seven IMAs, which makes it impossible at this time to further evaluate this hypothesis. However, the prevalence of IMAs in the mating season, rather than all year round (Stone, 2014), does not lend support to the dominance hypothesis.
Although we did not find that the fattest males were targeted less for IMAs, given the small number of observations in which male grade was reliably determined, this hypothesis should be re-evaluated with additional field observations. However, given that adult females themselves spend more time in proximity to fatter males (Stone, 2014), it is possible that infants also are more tolerant of more robust males. An additional hypothesis, not tested here, is that infants may be targeting strange males (males that do not share genes with them). Otherwise, it is possible that the more robust males were also the more robust in the previous breeding season, and thereby have a higher likelihood of siring the infants. This interesting hypothesis can be examined once we collect DNA samples from infants and adult males. We hope to be able to test this hypothesis with the continuation of our trapping program, initiated in 2012.
Are infants effective in blocking the adult males who approach their mothers? This question can be addressed at several levels. Our behavioral data show that, in most cases, the male submits to the infant's threats, leaving the vicinity of the infant and adult female. Thus the male loses immediate access to the female. In this way, the behavior of the infant and the male suggests that the infants are successful in disrupting mating efforts of adult males. However, we know that most females get fertilized during the mating season. In November 2013, 10 out of 11 captured females were pregnant (Stone et al., in press). From this numeric point of view, infants are not effective in ultimately blocking adult males. However, without knowing whether the infants target specific adult males (e.g., subordinate males, unrelated males, less robust males) it is not possible to quantify their efficacy. For example, it is possible that the 10 females were fertilized by one or two dominant males, while the infants blocked attempts of the remaining males. Therefore, the question becomes; who are the adult males that the infants are targeting? This is an important question that merits future investigation. A final question is whether S. collinsi is unique in the existence of IMAs. We argue that this behavior likely occurs in other Saimiri species as well, but simply has not been investigated. All squirrel monkeys show highly seasonal breeding (Di Bitteti and Janson, 2000; Zimbler-DeLorenzo and Stone, 2011) and all show the “fatted male phenomenon” (Stone, 2014). Therefore, we suggest that these two life history traits likely contribute to the occurrence of IMAs in all squirrel monkey species.
The data collected in this study indicate that: (1) most interactions between adult males and infants during the mating season consist of harassment in the context of sexual interference and that they are mostly initiated by infants; (2) infants of both sexes avoid and harass adult males; (3) infants may be attempting to maintain maternal investment in the form of proximity and nursing, which is in conflict with time and energy expended in mating activities; (4) infant harassment may be an effective tactic in blocking approaches by specific, perhaps less robust adult males.
This research was supported by the American Philosophical Society, American Society of Primatologists, and the National Geographic Society. We thank Edmilson Viana da Silva, Francisco da Costa, and Nilda de Sales for their invaluable assistance in the field during this project. The comments of Paulo Castro and Ana Silvia Ribeiro also strengthened this manuscript. | <urn:uuid:7c9f14e8-6218-46ae-8da0-65a60d650fb1> | CC-MAIN-2022-33 | https://complete.bioone.org/journals/neotropical-primates/volume-21/issue-2/044.021.0201/Jealous-of-Mom-Interactions-Between-Infants-and-Adult-Males-during/10.1896/044.021.0201.full | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573029.81/warc/CC-MAIN-20220817153027-20220817183027-00297.warc.gz | en | 0.953814 | 4,590 | 2.578125 | 3 |
Lesson 4: Treatment of Malaria
- 1 Introduction
- 2 Lesson
- 2.1 The Principal Objectives of Malaria Treatment
- 2.2 The principle objectives of Malaria treatment are:
- 2.3 The Role of Health Workers in the Treatment of Malaria
- 2.4 Drug Used In the Treatment of Malaria
- 2.5 Treatment of Simple Malaria
- 2.6 First Line Treatment
- 2.7 Second Line Treatment
- 2.8 Treatment schedule
- 3 Management of Severe Complicated Malaria.
- 4 Self Assessment
- 5 Assignment
Welcome to Unit 4 on treatment of Malaria. In the last unit you learnt about severe and complicated Malaria and how to diagnose it.
In this unit we shall look at the treatment of both simple and severe Malaria. Let us start by looking at our objectives for this lesson.
By the end of this unit you should be able to: wn the principle objectives of malaria treatment;
- State the role of health workers in the treatment of Malaria;
- Describe the treatment of simple Malaria;
- Describe the management of severe and complicated Malaria;
- Treat and/or refer severe and complicated Malaria}}
The Principal Objectives of Malaria Treatment
It is important to bear in mind that although malaria is among the top 5 causes of morbidity and mortality in Africa, it can be managed with proper diagnosis and prompt treatment. One of the reasons behind the renewed interest in this disease is the emergence of drug resistant strains of the parasites towards the easily available like Chloroquine and Fansidar. That is why it is very important to have guiding principles on malaria treatment
Before you read on do activity 1. It should take you 5 minutes to complete.
Compare your answers with the information given in the following discussion.
The principle objectives of Malaria treatment are:
- To shorten the duration of the illness and cure it;
- To prevent the illness from becoming severe;
- To avoid death;
- To prevent further transmission;
- To serve as secondary prevention.
Having learnt about the principle objectives of malaria treatment, let us now turn to your role as health worker in treatment of Malaria.
The Role of Health Workers in the Treatment of Malaria
As a health worker you play a very important role in ensuring that the principle objectives of malaria treatment are achieved. How do you do this? Remind yourself by doing Activity 2. It should take you 3 minutes to complete.
Correct your response as you read the following discussion.
The following are some of your key roles you play in the treatment of Malaria.
- Making proper diagnosis;
- Starting prompt and correct treatment with antimalarials;
- Supervising treatment, and ensuring the first dose of antimalarials is administered using “Direct observed therapy (DOT)”.
- Giving patients and their families information about diagnosis, importance of taking treatment as prescribed, need for their participation in the recovery process and prevention of further attack of Malaria;
- Keeping confidentiality;
- Being alert to the possibility that patients may have sought and received antimalarial treatment from other sources.
Having learnt what your role is in the treatment of Malaria, let us now turn to drug treatment of Malaria. We shall start by looking at the drugs used in the treatment of malaria.
Drug Used In the Treatment of Malaria
There are many antimalarial drugs available in Africa. You may have noticed this from the many advertisements of antimalarial drugs available from your health centre pharmacy or local drug shop.
There are various antimalarial drugs available in Africa. Some act specifically on a stage of the malaria cycle while others are non-specific.
Some of these drugs include
- Amodiaquine (CAMOQUINE)
- Sulfadoxine/pyrimethamine (FANSIDAR)
- Halofantrine (HALFAN)
- Artemether (ARTENAM)
There are other drugs which have antimalarial activity but are not primarily used as antimalarials. These include: Azithromycin, Clindamycin, Doxycycline, Proguanil, Tetracycline and Septrine (Cotrimoxazole).
When deciding on the drug of choice for malaria treatment, it is important to take note of the following points:
- It is no longer advisable to use drugs that have shown high failure rates (e.g. Chloroquine, Amodiaquine, Sulphadoxine/Pyrimethamine and Sulfalene/Pyrimethamine) for the treatment of Malaria.
- The use of monotherapies such as Artesunate, Dihydroartemisinin, Artemether, Lumefantrine, Mefloquine, Chlorproguanil/Dapsone and Atovaquone/Proguanil are not recommended to avoid the rapid emergence of resistance to individual drugs;
- Where artemisinines are used as monotherapies, a 7 day course of treatment is recommended and adherence to treatment should be ensured.
The use of combination therapy/treatment is the recommended approach and especially atemisinin combined therapy (ACTs)
You have now learnt about the drugs used in the treatment of malaria. Next we shall discuss in turn, how to treat simple and severe/complicated Malaria. We shall start with treatment of simple malaria.
Treatment of Simple Malaria
As we mentioned in the last unit, Malaria may be described as simple or uncomplicated when the infection is not life threatening and is easily treatable.
There are four drugs that have previously been used for treatment of simple Malaria. These are *Chloroquine
- Sulphur Perimethamide (Fansidar]
However, due to the development of parasite resistance to some of these drugs, several changes have been introduced in the treatment of malaria. As a result, malaria is no longer treated with a single drug. A combination of drugs for treatment malaria is recommended. Although the choice of drug combination may vary from country to country, It is now recommended that malaria should be treated with a combination of two drugs one of which should be an artemisinin derivative. Combination drugs can either be co-formulated or co-packaged. Co- formulated drugs are two or more different drugs combined and taken as one tablet whereas co-packaged are two or more different drugs packaged together to be taken at once. The drugs can be given either as first line treatment or second line treatment. What does that mean? Let us see below.
First Line Treatment
First line drug combination for treatment of Malaria refers to the drugs used initially for treatment of simple Malaria. The recommended first line treatment for Kenya is a fixed dose combination of ARTEMETHER/LUMEFANTRINE (20/120mg).
NB: For countries in which the antimalarial drug policy has not been changed refer to Annex A (or use the country specific recommendations).
Second Line Treatment
Second line drugs are used for the treatment of Malaria after the parasites have failed to respond to the 1st line treatment. They are also in case a patient develops an allergic reaction to 1st line drugs. In Kenya the drug used for second line treatment is oral Quinine.
You should always treat a patient with oral preparations unless there is a contraindication, such as, vomiting, severe nausea, or difficulty in swallowing. In case you begin with parenteral route, change to oral drugs as soon as the patient is stable enough to take the medicine orally.
If a child below two (2) months of age is brought to your facility with fever, this is usually a very serious condition and the cause may not necessarily be Malaria. In highly endemic areas, you should exclude other causes of fever such as meningitis, septicemia, urinary tract infections, respiratory tract infections, local sepsis/abscess, or ear infections.
The following are the treatment guidelines for both first line and second line treatment of simple malaria in Kenya.
1st line Treatment of simple malaria
Table 1: DOSAGE FOR ARTEMETHER/LUMEFANTRINE (20/120mg) (COARTEMR )
NB: In children weighing less than 5 Kg quinine is recommended.
Second line treatment of Malaria
The Second line treatment should be oral quinine. A full course of quinine tablets should be given when the 1st – line treatment (Coartem) has failed due to any of the reasons we mentioned earlier.
Refer to the Schedule in Annex B
Now you are ready to treat a patient with simple Malaria. Let’s practice from the following case studies.
Based on the above case study, do Activity 4 below. It should take you 3 minutes to complete.
Confirm your answer as you read the following discussion.
From the case study, we can see that Tonny’s condition is Uncomplicated P. falciparum Malaria because it is not life threatening, that is, it has no danger signs. His management therefore includes:
- Drug treatment using Artemether/Lumefantrine (CoartemR) using 2 tablets each at 0,8,24,36,48 and 60 hours.
- Tablets panadol 1 ½, stat.
- Advising the mother on the benefits of using insecticide treated mosquito nets;
- Advising the mother on how to prevent mosquito bites through appropriate clothing, repellents and elimination of mosquito breeding places;
- Advising her to monitor the Tonny at home, he should be reviewed in 2 days time and thereafter start school if his condition is good;
- Giving Tonny an anti-emetic if vomiting is severe, or use injectable forms if available;
- Treating coexisting conditions such as dehydration by advising the mother on adequate fluid intake.
Based on the above information do Activity 5, it should take you 3 minutes to complete.
We hope you were able to diagnose that Tonny’s condition is now severe complicated Malaria. This is because he has presented with the following signs and symptoms of severe Malaria:
- Prostration/lethargy (danger sign).
- Severe anaemia,
- Vomiting every thing (danger sign).
This case study now leads us to the second part of this section where we discuss the treatment of severe Malaria.
Management of Severe Complicated Malaria.
The ideal conditions for the management of severe complicated Malaria dictate the need for an Intensive Care Unit (ICU). Unfortunately, this is not always possible in many developing countries such as Kenya. Figure 4.1 shows a section of an Intensive Care Unit.
Figure 4.1: Part of the Intensive Care Unit in a Hospital
Since ICUs are not widely available, it is therefore very important to equip yourself with the necessary knowledge and skills so that they can give the desired management with basic equipment wherever you are stationed.
Treatment of severe and complicated Malaria calls for close supervision between the clinician and the nursing staff. You should ensure proper recording of observations and careful nursing of unconscious patients. You should also give medication strictly on schedule and at required doses.
The management of severe Malaria depends on the level of your health facility. Let us look at management at the following two levels:
- peripheral level, that is health centre, dispensary, or community post;
- Hospital level.
Management at Peripheral level:
At this level you should do the following:
- Recognize severe Malaria;
- ive pre-referral treatment: Quinine 10mg salt/Kg body weight, I.M. In adults loading dose quinine 20mg/kg then maintenance at 10mg/kg 12 hourly (8hrly in adults) till can take orally then change to Coartem or oral quinine to complete 7 days of treatment. Repeat 12 hourly while awaiting transport;
- In the absence of quinine, start with any available antimalarial I.M.;
- Control temperature by tepid sponging, Paracetamol or fanning;
- Control any convulsions with rectal diazepam 5 mg in children, 1/M or I/V diazepam 10mg in adults;
- Pass Nasogastric tube for feeding, if patient is not able to take orally;
- Give oral glucose to correct hypoglycaemia;
- Give oral fluids to correct fluid imbalance;
- Look for danger signs and start management, for example, take blood for grouping and cross-matching, institute anti-meningitis treatment, do L. P. if possible etc
If referral is not possible, keep the patient at the unit and continue with:
- Quinine 10 mg salt/Kg body weight IM 8 hourly until the condition is better, that is, the patient is conscious, or can eat orally, can sit up, can talk, or symptoms have subsided. Then change to oral quinine to complete 7 days of treatment. (Refer to Quinine Treatment Schedule for dosage on page 9)
- If there is no quinine then continue with I/M chloroquin 3.5mg base/Kg body weight 6 hourly until a full dose of 25mg/Kg body weight is completed (i.e., 8 injections)( in countries where chloroquin is still sensitive).
NB: If the total is more than 3 ml, split the volume into two and inject one half in each thigh muscle.
- You MUST always refer the following conditions to hospital because they require intensive care:
- Persistent convulsions;
- Renal failure;
- Pregnant mother with severe Malaria;
- Pulmonary oedema;
- Severe anaemia See Figure 4.2;
- Coma, See Figure 4.3.
Figure 4.2 Transfusing a child with severe anaemia due to Malaria.
Figure 4.3 A malaria victim in coma.
Management of severe malaria at Hospital level.
At this level, you should do the following:
- A. Institute URGENT Antimalarial Treatment:
- Use IM/IV Quinine as the drug of choice for treatment of severe Malaria;
- Administer quinine dose as 10mg salt/kg body weight, 8 hourly to complete 7 days of treatment, both adults and children;
- Give 10mg of quinine salt/kg body weight (not to exceed 600mg as single dose) in 5% dextrose infusion, given over 4 hours period. Repeat this dose after every 8 hours until the patient can take oral medication;
- In an adult, put the required quinine dose in 500 mls of 5% dextrose then run it over 4 hours;
- In children you should give quinine 10mg/kg body weight in appropriate volume of 5% dextrose as 5-10mls/kg of body weight depending on the patient’s Onset fluid balance;
- Because of the danger of cardio-toxicity, never exceed a single dose of quinine of 600 mg salt, even when body weight exceeds 60 kgs;
- Give a bolus of 50% dextrose slowly IV over 1-2 minutes to correct hypoglycaemia.
IV 50% DEXTROSE DOSE
- Give a bolus of 1 ml/kg body weight of 50% dextrose in children.
- Give a bolus of 20 mg in an adult.
Please note the following regarding the duration of treatment with quinine in severe malaria case:
- Give IV quinine until the patient is able to take orally. Ensure that the patient continues with oral quinine to complete a 7-day’s course or artemether/lumefantrine full course;
- It is unusual to continue IV infusions of quinine for more than 4-5 days. This is because patients usually improve by 3rd day of intensive treatment.
B. Institute Supportive Treatment.
Supportive treatment is important in the management of severe Malaria and you should always provide it for the following conditions:
• Hypo-glycaemia. : To correct hypoglycaemia, you should:
- give a bolus of 50% dextrose in both adults and children.
• Dehydration. Ensure continuous adequate feeding. Assess the degree of dehydration and fluid requirement based on body weight and set up appropriate volume of fluids to run in the first four hours;
• Convulsions. To control convulsions, first correct any detectable cause of convulsions, for example, hypoglycemia, and hyperpyrexia. Give anti-convulsion drug, such as:
- I.V Rectal diazepam 5 mg in children(0.3mg/kg iv or 0.5mg/kg rectal),
- 10mg diazepam IV in adults.
• Temperature: Reduce body temperature if greater than 38.5o C, you can do this best by giving Paracetamol by mouth if possible. AL has no anti-pyretic activity. Other ways you can use to reduce body temperature are tepid sponging or fanning;
• Severe anaemia: To correct severe anaemia, you should give a transfusion with packed cells;
• Renal failure: To correct this you should correct dehydration. Pass a urinary catheter, this is necessary to guide fluid balance. If the patient is still oliguric, give furosemide 1-5 mg/kg body weight slowly I.V, if there is no response, consider dialysis.
Maintain proper fluid balance for those patients on IV fluids.
C. Carry out Vital investigations
- Ensure that the following vital investigations are done:
- Blood For malaria parasites;
- Blood Sugar;
- Full Blood Count (Hb, Wbcs);
- Laboratory monitoring of malaria parasites daily;
- Lumber puncture and csf analysis to exclude meningitis;
- Blood electrolytes: Na+, K+, urea;
- Blood culture to exclude any bacterial infection (septicemia).
D. Monitor the vital signs and laboratory indicators
- You should monitor the following vital signs:
- Level of consciousness (use Glasgow Coma Scale (GCS) for adults and Blantyre coma scale for children given in Annex B);
- Parasitaemia by (blood smears);
- Temperature at least twice daily;
- General condition of patient.
- Blood pressure
Your aims of monitoring include:
- Controlling delivery of drugs and infusion fluids;
- Detecting complications of Malaria;
- Detecting toxic effects of drugs given;
- Documenting the patient’s recovery and charting findings and treatment.
You have now come to the end of this unit. In this unit we looked at the principles of management of simple and severe or complicated malaria. We saw that the management of severe malaria should consist of:
- Establishing an I/V line as soon as possible;
- orrecting hypoglycaemia;
- Administering appropriate volume of fluids;
- Administering correct drug in correct dosages for treatment of Malaria;
- Controlling body temperature;
- Maintaining body fluid balance.
You should now review the learning objectives at the beginning of this unit to see if you have achieved all of them. If there is any you are not sure about go over the relevant section in the unit again. If you are satisfied that you have achieved all the objectives, complete the attached Tutor Marked Assignment before you proceed to the next unit. Make sure you also do the practical exercise below to reinforce your knowledge and skills in the management of malaria.
Enjoy the rest of the course! ANNEXE A
• PARENTERAL DRUGS:
- ANNEXE B
The Glasgow Coma Score (for Adults and Children over 12yrs)
- To obtain the Glasgow coma score obtain the score for each section add the three figures to obtain a total.
The modified Glasgow Coma scale (The Blantyre Coma Scale) for children < 12 years
- Press knuckles firmly on the patients sternum
- Press firmly on the thumbnail bed with side of a horizontal pencil
- Press firmly on the supra-orbital groove with the thumb
The scales can be used repeatedly to assess improvement or deterioration. | <urn:uuid:22192dc8-a254-4a2e-bf77-6d845b2248b1> | CC-MAIN-2022-33 | https://wikieducator.org/Lesson_4:_Treatment_of_Malaria | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573908.30/warc/CC-MAIN-20220820043108-20220820073108-00296.warc.gz | en | 0.897707 | 4,401 | 3.34375 | 3 |
Popular Science Monthly/Volume 81/November 1912/China's Great Problem
|CHINA'S GREAT PROBLEM|
By Professor THOMAS T. READ
SAN FRANCISCO, CAL.
NEITHER the institution of republican forms of government, nor the creation of a spirit of natural unity, not even the inculcation of republican ideals constitutes China's great and imminent problem. It is not inappropriate that a nation whose people are best known for their skill and probity in business affairs, at the close of revolution engendered in large part by financial considerations and brought to a speedy termination by that modern arbiter of warring factions and nations, the international money lender, should find her most imminent and pressing problem a plain one of business. The average man finds it necessary to give constant consideration to the relation between his income and expenditures and to possible sources of increase of the one and diminution of the other. Nations are no more fortunate and China is unusual only in that her monetary affairs, through her international loans, have become matters of cosmopolitan importance.
At the beginning of last year China had a total foreign indebtedness, secured by Imperial revenue, of approximately $700,000,000 corresponding to an annual interest charge of approximately $35,000,000. During the year a budget was prepared, the first in the history of the nation, which showed that the estimated annual income of the empire was some $180,000,000. The budget made evident to all what many had long known, that China was unable to make both ends meet, and like a spendthrift was using her capital to pay her debts. The fundamental causes of the revolution of 1911 have been much obscured by the natural human desire to weave adventure and romance into war, but it is true, nevertheless, that just as the "embattled farmers" were irritated beyond bearing by a tax on tea, so were the "sons of Han" roused to arms by burdening them with a foreign loan of which they did not approve.
It will be remembered that after the American financiers who had acquired a concession to build a railway from Hankow to Canton perfidiously sold it to the Belgian interests, whom the Chinese especially wished not to secure it, the concession was bought back by China and the people of the provinces through which the road passed attempted to raise the funds for its construction. Considerable sums were raised, but were neither wisely managed nor well spent, and as time passed the funds gradually melted away without any material return in the form of roadbed and rolling-stock. Meanwhile, centralization of power in the Peking government was increasing with rapid strides, and finally the Peking authorities began to negotiate with England, France and Germany a loan for the construction of this and other railways. The means by which the United States claimed and secured the right of participation in this loan, while of importance to China as well as the banker powers, is outside the present question. It was evident from the first that there was great popular opposition to the loan, and foreigners, who knew the popular temper, prophesied that it could never be consummated. Finally the negotiations were concluded, however, and the terms were announced. Among other provisions it was announced that for the sum already spent by the people of the provinces, stock in the railway enterprises to half its par value would be allotted. The pot of revolution, which is always seething in southern China, at once boiled over. The Chinese people had suffered an incompetent government by alien officials as long as it did not greatly trouble them. When it began to waste their money they promptly revolted.
It is generally agreed that war is hell. It is also expensive. Bombardments, bloody combats, fire and looting figure in the head-lines of the daily journals, but the real work of the revolution was done in the financial council chamber. At the beginning of the outbreak neither side was provided with adequate funds, for the revolutionists were unable to secure any considerable sums, though able to cut off a large part of the normal income of the Peking government, and the imperial household with business caution refused to give up their store of private treasure for what bade fair to be a losing contest. The struggle at once resolved itself, therefore, into a competition to secure foreign financial assistance.
Perhaps some day the true history of the negotiations at Peking and Nanking will be made known. But from a business standpoint it was at once evident that the Peking government had immensely the more advantageous position. It had for some time been in negotiation with representatives of the banker nations in regard to loans, and was easily able to continue its negotiations. Bankers seek for stability and naturally preferred to deal with a government whose peculiarities had been learned through years of experience, rather than to take a chance with an oriental republic, of which it would be safe only to prophesy that it would do the unexpected. The revolutionists were out of touch with the financiers, many of them were young and inexperienced, and they were unable to make any impression except upon the Japanese and Russians, who hoped that after a new deal in China they might hold a better hand.
The Chinese are an eminently reasonable people; their natural motto is to suit themselves to circumstances. When it became clear that the revolutionists could not secure funds to put themselves in control of the country, and that the Peking government, while it could secure funds upon evidencing an ability to tranquillize the country, could not do so as long as the Manchus remained in nominal control, the inevitable followed. The Manchu emperor and his associates gracefully abdicated, announcing that the will of Heaven, speaking through the voice of the people, desired the institution of a republican form of government, and the revolution was fait accompli.
In the months which have since elapsed China has been crowded from the stage of international attention by other affairs, yet the negotiations which have been going on are of even greater importance than the more spectacular events of the war. The banker powers are good business men, and, having an opportunity to make China "pay through the nose," were not unlikely to underestimate it. The Chinese are business men, too, and struggled to secure as favorable terms as possible. The difficulty was a complex one—China wished to avoid Egyptianization, while the powers wished to be sure that their money would be well spent, and reasonably secure, while inextricably interwoven were the political aspects of the loan. The Chinese played their ancient game of pitting one interest against another. Many loans were proposed, a loan of $30,000,000 from Baron Cottu was seriously considered, and part of an Anglo-Belgian loan was paid over. The four powers which had been in negotiation for the past two years were increased to six by the addition of Eussia and Japan, and finally to seven by the addition of Austria before it was agreed that all other loans should be abandoned and $300,000,000 advanced to China by this septuple syndicate in a series of instalments.
The problem has not been solved, however; only expanded. Soon China will be indebted to foreign nations by nearly $1,100,000,000, a greater sum than the national debt of the United States. This calls for an annual interest payment of over $50,000,000, over and above the expenses of government. Her total annual income has so far apparently been about $180,000,000 or about one fourth that of the United States. In other words, to an already large annual deficit she has added an annual interest charge of over $15,000,000. Can she increase her income to meet it? A corporation in such a precarious situation would procure the services of the ablest business manager whom money could secure. China's problems are fit tasks for supermen; will the supermen be forthcoming?
A brief survey of China's economic condition will be of service. In former times China was like a "balance-tank" in an aquarium, self-supporting. As Boss has recently accurately remarked, the nation is an exemplification of the law of Malthus, the balance between population and means of existence. To us of America a true mental picture of the economic status of the Chinese is almost an impossibility. A comparison may serve, and by pointing out that the present degree of comfort and convenience enjoyed by the average Chinese demands a coal production 1/175 of that in the United States, and until recently an iron and steel production only 1/1,200 that of the United States, it may be more clear that the Chinese nation as a whole is close to the margin of mere existence. The problem with the average Chinese is an elemental one; enough food to preserve life and enough clothes to keep warm and subserve modesty. China's present unenviable position is not unlikely largely due to the fact that when international trade developed and the export of tea began to meet the import of the ubiquitious blue cotton cloth that forms the Chinese national dress, the acreage formerly devoted to the cultivation of cotton was sown to the opium poppy and the national wealth vanished in curls of smoke that wafted away at once the substance and virility of the people.
Now the use of opium is almost suppressed, soon will be completely so, and the land devoted to its cultivation, will be sown to grain, sugar beets and other crops of real value. The problem is still an elemental one, however. It is idle to simply point out that by opening mines, building railways and developing manufacturing industries, the scale of living of the Chinese citizen can be raised to approximately as luxurious a plane as in the United States. The real question is—will the increase in the wage of the average citizen bring him increased comfort and convenience, or will it bring a few more mouths to feed and another approximation to the margin of existence? If the latter, the Chinese expression for the management of a household—"Kuo jih-tze" to get over the day—will remain always, as now, the index of national economy. Upon the answer to this question hangs China's future.
The further elaboration of this topic would take me into a field in which I scarcely dare venture. It is still a subject of discussion in this country whether the restriction of the size of families is compatible with good morals and good economics. Apparently the pragmatic answer is in the affirmative. The great desire of the Chinese parent for offspring to maintain the rites of ancestral worship further complicates an already complex problem and I will leave it in abeyance, in order to discuss the problem of securing national prosperity from the standpoint of the scanty facts available.
Of China's present foreign indebtedness nearly $350,000,000 represents indemnities, largely the outcome of the outbreak of 1900. The remainder has partly been invested in railway and other industrial enterprises, and partly used in a variety of minor ways. To meet the interest and principal upon this debt the returns of the Maritime Customs has been the security, and the service has for years been under foreign direction. Besides being a source of income the custom service has been an object lesson and a training school for the Chinese. Out of it has grown the excellent postal, telegraph, and telephone service which China enjoys. In 1905 35,110,000 taels were collected from the import and export tax on merchandise. In the budget recently published in the Chinese press, following the address given by President Yuan before the National Assembly, the expected receipts from the Maritime Customs this year is given as 35,140,000 taels. The growth is inconsiderable while the fluctuations in exchange during the past year correspond to a difference of about 5,000,000 taels in the conversion of this sum into gold.
Consideration of this budget as a whole offers an interesting study and I therefore give the estimated revenues and expenditures under the new government, as printed in Chinese journals, and translated by the well-informed National Review of Shanghai.
|Salt and Sea||46,312,355|
|Income from Official property||36,600,899|
|Government Credit Notes||3,560,000|
|Grand Total||Tls. 296,862,721|
|Naval and Military Affairs||83,498,811|
|I. M. Customs Indemnities||11,263,547|
|Native Customs Indemnities||1,256,491|
|I. M. Customs||9,163|
|Naval and Military||14,000,540|
|National Credit Notes||4,772,613|
|Grand Total||Tls. 336,236,062|
Lest the reader be unduly impressed by the appearance of accuracy given by carrying out these sums to the nearest unit, let me add that the probability of error in them is very great and other published estimates put the expenditure as high as 576,000,000 taels, while the Board of Revenue in 1910 prophesied a deficit of 80,000,000 taels in 1911. The indicated deficit of 40,000,000 taels may therefore be regarded as a fairly optimistic view of the financial situation.
As to expenditure, it is seen that over 50,000,000 taels is required to pay interest on indemnities. Communications (railways, post-office and telegraphs) consume nearly an equal sum and return only about two thirds as much in the form of revenue, the difference being partly due to expansion and partly to a present lack of profits from many enterprises. Naval and military affairs consume a large sum, but the present temper of the Chinese public is strongly contrary to a reduction of the effort to make China self-protecting. Educational expenditures should be increased, rather than curtailed, and it may similarly be said of most of the other items that though the moneys might perhaps be more wisely and efficiently expended they can not very well be decreased if the country is to prosper. China's hope lies, not in decreasing her expenses, but in increasing her income.
To greatly increase the income from the Maritime Customs scarcely seems feasible. The present rate of 5 per cent., imposed equally on imports and exports, is certainly low, but the commercial treaties existing with the principal countries only provide for a moderate increase, and it scarcely seems possible that the banker nations would look with favor upon a proposal to tax foreign trade in order to secure income to meet the interest upon their loans. The likin (internal transit tax) should be abolished; like the ridiculous prohibition of the export of grain from one province to another, it hangs like a vampire on the industrial body of the nation, sucking out its life. The conception that certain parts of the country are best suited to the production of certain commodities, while others can best produce something else, and that the best interests of the whole are secured by offering every facility for the free exchange of products, is so elementary that it is strange that even such pronounced individualists as the Chinese have not earlier perceived it. The salt gabelle, similarly, is a financial anachronism. The income from the government-owned enterprises can be greatly increased by better, more intelligent, more careful, and more honest management. In fairness it should be said that the lack of profits from these is not all to be laid at the door of the Chinese; foreign engineers have built $40,000,000 railroads where the probable trade only justified a $10,000,000 road, and foreign supervision of enterprises has often brought with it fat contracts for the foreign merchant.
The land tax might be increased, but the farming class, the large landowners, are already barely above the margin of subsistence, as a whole. But by development of agriculture, as in the United States, the income of the farming class could be greatly increased, with a corresponding taxable margin. Agriculture is the fundamental industry of any country, and the new government will be stupidly negligent if it does not make provision for its scientific development. Progress has already been made in this regard in Manchuria. The improvement of yield and of product by the judicious selection of seed is an idea which has never occurred to the Chinese; indeed, it may be broadly said that the improvement of anything by the elimination of its bad features and increasing of its excellences has never characterized Chinese industrial activity in recent times. I think it unquestionable that the people of China can be better fed and made correspondingly more vigorous simply by government aid to agriculture and the allowing of the free transit of the products of one part of the empire to any other part. The productive energy of the nation as a whole can thus be immensely increased. The human body is an engine for the conversion of food into useful work. Like any other engine, if it is supplied with only enough power to keep it going the useful output is small, since nearly all is used up in driving the machine. But give it all the power it can economically use and the useful output is many-fold greater. The simile is a crude one, but none the less accurate.
It will be noticed that no provision is made for the taxing of incomes, or of industrial enterprises. Under the old system either the tax on land or the tax on trade reached nearly all of these. This is no longer the case and such companies as Standard Oil, British-American Tobacco, Singer Sewing Machine, and numerous native enterprises carry on a large trade without being subject to any tax. This will constantly increase, and by the imposition of a just tax on these new forms of industry considerable sums can be derived. Every means should be taken to encourage the development of such enterprises. The mineral resources of China should be studied and mapped by qualified engineers, the country should be mapped topographically as an aid to the development of railway, irrigation and industrial enterprises, and every effort should be made to increase the agricultural and mineral productivity. A well-fed people with material to work with can upbuild China into a nation of solid wealth and substance. But if the proceeds of the new loan are expended unwisely and unprofitably then China must inevitably within a few years become another Persia. Business principles, rather than political considerations, must be preeminent in the conduct of the new government.
- Only approximate figures can be given, for the varying rates of exchange and diverse rates of interest make exact figures impossible.
- Since this article was written the republican government has refused to accept this loan on the terms proposed. The fundamental problem means essentially the same.
- The tael is a Chinese ounce of silver, and has different values at different places. The customs tael = 1.0164 Kuping tael. The latter is 575.8 grains of pure silver, and is doubtless the tael used in the budget. Naturally, its value in gold varies according to the rate of exchange. | <urn:uuid:2d2fa8f1-7a0a-4be8-8c85-a737db6284ab> | CC-MAIN-2022-33 | https://en.m.wikisource.org/wiki/Popular_Science_Monthly/Volume_81/November_1912/China%27s_Great_Problem | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573118.26/warc/CC-MAIN-20220817213446-20220818003446-00698.warc.gz | en | 0.965621 | 4,406 | 2.84375 | 3 |
Right from your seven-year-old kid to your seventy-year-old father, one general instruction we all have received from our doctors is, “eat citrus fruits, beets, and greens.” I, for one, got sick of listening to this rant.
Why do you think people of all age groups are advised to consume these fruits and veggies? If ‘vitamins’ is your answer, well, you are partly right. But the bigger hero here would be antioxidants.
Fruits (especially dark berries), greens, nuts, and the panoply of edible plants are natural reservoirs of antioxidants. These special foods protect you in powerful ways, so you can stay fit as a fiddle at seven, seventeen, or seventy! How? Keep scrolling down to find out!
Table Of Contents
- What Are Antioxidants?
- How Do Antioxidants Work?
- What Are The Different Types Of Antioxidants?
- Benefits Of Taking Antioxidants
- How About Antioxidant Supplements?
What Are Antioxidants?
Antioxidants are human-made or natural substances that may prevent or delay cell damage caused by oxidation (see below) Vegetables and fruits are excellent sources of antioxidants. Diets high in these elements enhance your immunity and promote longevity.
Some common antioxidants we all consume through our diets are vitamins C and E, β-carotene, lycopene, lutein, and zeaxanthin. These bioactive ingredients interact with harmful reactive chemical species called free radicals.
What do free radicals do to your body? How do these antioxidants eliminate them? And where can you get antioxidants abundantly? I’ve got the answers to all these questions. So, let’s start the quest!
First things first…
How Do Antioxidants Work?
To understand this, you must know more about free radicals, which are constantly produced when the body makes energy. They are the price we pay for using oxygen to generate that energy: high-energy, potentially damaging byproducts. These include species known as superoxide ion, hydroxyl radical, hydrogen peroxide, alkoxy radical, hypochlorous acid, peroxynitrite, organic hydroperoxide, and peroxyl radical.
Did those names just fly over your head? Let me simplify them for you. Look at this diagram.
The blue spheres are electrons in a free radical molecule, and the green spheres are electrons in an antioxidant molecule. As the picture shows, free radicals have an unpaired (single) electron in their outermost orbit (shell); let's call it a 'loner'. This loner has to pair up with another electron to become stable.
These loner electrons can interact with normal tissue and cause damage. This damage, referred to as “oxidative stress,” can lead to inflammation, changes in normal tissue structure and function, and cancers. But when these loner electrons interact with antioxidant molecules, they’re neutralized, mitigating the damage they can cause.
Free Radical Facts
- Free radicals are naturally formed when you exercise and when your body converts food into energy.
- Your body can also be exposed to free radicals from a variety of environmental sources, such as cigarette smoke, air pollution, and sunlight.
- In excess, free radicals accelerate aging, trigger inflammation, and cause cancers.
So, what do antioxidants do? How do they work?
Antioxidants neutralize free radicals: they are stable enough to donate an electron (another loner!) to a rampaging free radical, limiting the damage it can cause.
What Are The Different Types Of Antioxidants?
There are three primary types of antioxidants found in nature. These include phytochemicals, vitamins, and enzymes.
Phytochemicals are plant-based chemical compounds, some of which are very powerful antioxidants. (They evolved to help plants cope with ultraviolet light and other environmental stressors; when we ingest them, we get the benefit!)
Examples: Carotenoids, saponins, polyphenols, phenolic acids, flavonoids, etc.
Vitamins are another class of antioxidants. Our body makes some of them, while others come from natural sources like fruits, essential oils, microbes, and sunlight.
Examples: Vitamins A, C, E, and D, coenzyme Q10, etc.
Enzymes are types of antioxidants that we manufacture within our bodies from the proteins and minerals we eat as part of our daily diets.
Examples: Superoxide dismutase (SOD), glutathione peroxidase, glutathione reductase, and catalases.
Now, let’s get to know the sources of these antioxidants.
What Are The Natural Sources Of Antioxidants?
The antioxidant potential of a food source (which could be a fruit, a veggie, a nut, or a beverage) is measured by an assay called the Oxygen Radical Absorbance Capacity (ORAC).
Higher the ORAC value, stronger is the antioxidant potential of that particular food source (3).
For your convenience, I have clubbed various sources of antioxidants into 7 clusters along with their ORAC values.
Disclaimer: The U.S. Department of Agriculture (USDA) releases these numbers. But it has recently removed its ORAC database from its NDL website due to “mounting evidence that the values indicating anti-oxidant capacity have no relevance to the effects of specific bioactive compounds.” Though the ORAC values speak volumes about the antioxidant capacity of a food, they shouldn’t be the only indicator to judge the goodness of that food. Please look at other parameters like metabolism, bio-availability, and risks assocaited with such rich foods before zeroing down on one.
So, here we go!
|Vegetable||ORAC value (µmol TE/100g)|
|Cabbage, red, raw||2496|
|Cilantro, leaves, raw||5141|
|Cucumber, peeled, raw||140|
|Ginger root, raw||14840|
|Peppers, green, raw||935|
|Potatoes, white, raw (with skin)||1058|
|Sweet potato, raw||902|
|Tomatoes, red, ripe||387|
|Fruit||ORAC value (µmol TE/100g)|
|Apples, raw (with skin)||3049|
|Blueberries, wild, raw||9621|
|Currants, black, raw||7957|
|Dates, deglet noor||3895|
|Grapefruit, pink, raw||1640|
|Lemons, raw, (without peel)||1346|
|Lime juice, raw||823|
|Plum, black diamond, raw (with peel)||6100|
|Raisins, golden, seedless||10450|
3. Spices And Herbs
|Spice/Herbt||ORAC value (µmol TE/100g)|
4. Nuts And Seeds
|Nuts/Seeds||ORAC value (µmol TE/100g)|
|Brazil nuts, unblanched, dried||1419|
|Cashew nuts, raw||1948|
|Macadamia nuts, dry roasted||1695|
|Pine nuts, dried||720|
|Pistachio nuts, raw||7675|
5. Cereal Grains And Beans
|Cereal||ORAC value (µmol TE/100g)|
|Sorghum grain, red||14000|
|Sumac bran, raw||312400|
|Sumac grain, raw||86800|
|Pinto beans, raw||8033|
|Kidney beans, raw||8606|
|Black beans, raw||8494|
|Beverage||ORAC value (µmol TE/100g)|
7. Miscellaneous Products
|Product||ORAC value (µmol TE/100g)|
|Cocoa, unsweetened powder||55653|
|Peanut butter, smooth||3432|
|Olive oil, extra virgin||372|
|Fish and seafood||30-6500|
|Meat and meat products||0-850|
|Poultry and poultry products||50-1000|
|Snacks and biscuits||0-1170|
By now, you should know what foods to eat and why! Like I said, we already consume a fair fraction of antioxidant-rich foods. The only change needed is to have these foods unprocessed and raw. Wherever possible, replace frying with boiling and sautéing. This prevents the disintegration of antioxidants.
Let’s assume that you took pains to switch from fried to boiled and blanched foods. What changes should you look out for? What will the antioxidants in these foods do to your body? Continue reading to find out.
Benefits Of Taking Antioxidants
1. Skin: Anti-aging, Skin Lightening, And Protective Effects
Like other organs, the skin too continuously produces free radicals like peroxides, superoxides, and singlet oxygen. This happens because of its metabolic activities and exposure to UV rays and visible light.
These free radicals need to be eliminated from your body. Due to accumulation, these highly reactive species trigger cellular damage, accelerate aging, cause cancer (melanomas), inflammation, pigmentation, and acne.
Your body produces enzymes to fight such oxidative stress. But by supplementing antioxidants through diet, you can achieve younger, glowing, and clear skin
2. Liver: Anti-inflammatory, Anticancer, Hepatoprotective Effects
When liver diseases involve oxidative stress, antioxidants are the best medicine. Diseases like cirrhosis, jaundice, hepatocellular carcinoma (cancer), and parasitic infections are aggravated by free radicals.
Antioxidants like curcumin, resveratrol, caffeine, quercetin, naringenin, and silymarin can be effective in this regard.
Also, to combat this oxidative stress, your body produces antioxidant enzymes like superoxide dismutase, catalase, and glutathione reductase.
3. Heart: Cardioprotective And Hypolipidemic Effects
After the age of 50, 52% of men and 39% of women in the US experience major cardiovascular diseases (CVDs), viz. coronary artery disease, atherosclerosis, cardiomyopathies, cardiac hypertrophy, angina, and myocardial infarction.
Excessive oxidative stress generates reactive oxygen species (ROS) in your body, which can cause CVDs. In such cases, what is the best way to stay safe?
Yes, you are right! Antioxidant vitamins C and E and phytochemicals like quercetin, beta-carotene, lycopene, and lutein have shown excellent cardioprotective effects.
They scavenge the free radicals and trigger the production of antioxidants in your body.
4. Brain: Antitumor, Antidepressant, Cognition-Promoting Effects
Apart from mutations, high levels of free radicals in the blood increase the incidence and malignancy of brain tumors.
Supplementing your diet with antioxidants has a direct impact on the brain, cognition, learning, and memory.
Higher dietary intake of vitamins E and C during pregnancy helps in protecting the infants from brain tumors and deformities. Antioxidants also inhibit the pro-cancer activities that go on in your body – and can hence prevent tumors altogether.
5. Fertility: Therapeutic And Stimulating Effects
An elevated level of free radicals affects various systems in your body. And, believe it or not, fertility is one among them.
Reactive oxygen species (ROS) can have an impact on the sperm count, motility of the sperms, and the maturity or viability of the sperm. In females, poor egg quality, fallopian tube defects, endometriosis, and even ovulatory disorders can arise partly due to ROS in the blood.
Applying antioxidant therapy (vitamins and phytochemicals) to such infertility issues can restore the hormonal balance, scavenge free radicals, and improve spermatogenesis, oogenesis, ovulation pattern, and boost chances of conception.
Did You Know?
- Diseases like arthritis, irritable bowel syndrome, asthma, Crohn’s disease, GERD, psoriasis, and periodontitis arise due to inflammation.
- Free radicals are one of the reasons behind these disorders.
- Consuming antioxidant-rich foods like green leafies, asparagus, broccoli, dark berries, citrus fruits, and nuts will help manage the inflammation involved in these disorders effectively.
With all these benefits, it is difficult to ignore antioxidants, isn’t it?
Think about the number of antioxidant-rich dishes you can whip up using the foods listed above – endless lists of recipes pop up in my mind already!
And for those of you that feel your diet alone is not enough for getting adequate antioxidants, we have something for you. Read on!
How About Antioxidant Supplements?
1. Do They Work?
Not sure, because there is not enough research evidence to conclude. Some cite side effects while others the benefits. Most of the negative studies look at the effects of a single antioxidant on outcomes of a single (usually end-stage) disorder. Most experts in the nutritional medicine field agree this is not an appropriate way to test the effects of these nutrients.
2. Are They Available In The Market?
While most nutrition experts agree that food is the best way to obtain powerful antioxidants in your diet, there are good quality supplements available on the market. These can be used when deficiencies are determined or if there is an increased need due to illness or health challenges.
3. Will There Be Side Effects?
Taking supplemental antioxidants is generally quite safe, though a few problems are possible. The excessive vitamin-B complex would cause nerve damage due to excessive vitamin B6 ingestion. An overdose of vitamin C can cause nausea, diarrhea, orheadache. Though vitamins B and C are water-soluble and toxicity is quite rare, do take care. And some antioxidant supplements might interact with medications you’d be taking. So, check with your health care provider first.
Nature has given us the most potent weapon to fight the deadliest of disorders – in the form of antioxidants.
Our diet should essentially consist of raw or minimally processed fruits and veggies, and cooked meats in regulated portions. On the contrary, the “standard American diet” provides overprocessed, fried, and greasy food with barely any antioxidants left.
It’s time you switch to healthy eating. And by now, you’d know that food is the simplest, cheapest, and sustainable source of obtaining potent antioxidants.
What’s even better is that there are not many documented side effects of taking these chemical scavengers orally. Safe bet!
So, pick up the ingredients of your choice, create and cook some quick recipes, and write your stories to us. We’d also be happy to address your comments, suggestions, and feedback – put them in the box below. | <urn:uuid:13189c33-48b6-4260-848d-3d6cb51c15d8> | CC-MAIN-2022-33 | https://healthylivingwell.ca/2022/07/05/decoding-antioxidants-why-are-they-good-for-you/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571502.25/warc/CC-MAIN-20220811194507-20220811224507-00098.warc.gz | en | 0.848938 | 4,174 | 2.8125 | 3 |
” Remote Aboriginal Australians with kidney disease have demanded equitable access to life-saving treatment closer to home to prevent the removal of people from their traditional homelands.
In a new Menzies School of Health Research report, patients and carers from across northern and central Australia called on state, territory and federal government health ministers to overhaul the system to provide more holistic care.”
Report lead author Dr Jaquelyne Hughes says the current model meets medical needs, but missed the mark in helping indigenous people feel connected to their country, families and culture.
“We heard, overwhelmingly, of how people felt lonely, distressed and isolated following relocation to access treatment,” Dr Hughes said.
Some patients reported homelessness and desperation because of this disconnect, describing having to stay in the long grass when Darwin hostels are booked out.
A Torres Strait Islander said many sick people are forced to travel up to 1000 kilometres to Cairns and Townsville to receive dialysis.
“And they cry, their tears are running, because they want to go back home, they miss their families, they miss the lifestyle of the islands, because they are islanders,” the patient said.
Many noted the disease can fracture communities as elders become ill and are relocated together with their relatives, who miss out on cultural obligations and suffer disruptions to education and employment.
“We want them (the elders) to stay in communities. They are the old people; they have to hold country and family together for us,” one patient said.
“Families living in Darwin (for dialysis) are missing out on ceremonies, funerals and other important stuff,” another person said.
Dr Hughes said the only type of care available to most indigenous renal failure sufferers was designed by and for people in cities at the expense of those in the bush.
MENZIES Press Release
Indigenous people with kidney disease living in remote and rural Australia as well as their support networks have made a resounding call for equitable health care closer to home in a report released today by Menzies School of Health Research (Menzies).
In the ‘Indigenous Patient Voices: Gathering Perspectives, Finding Solutions for Chronic and End-Stage Kidney Disease’ 2017 symposium report, renal patients and carers from across northern and central Australia highlighted the need for more holistic care and services to be made available closer to home.
Report lead author Dr Jaquelyne Hughes said current health care systems met medical care needs, but missed the mark in helping Indigenous people feel connected to their country, communities and culture while they received treatment.
“We heard, overwhelmingly, of how people felt lonely, distressed and isolated following relocation to access treatment,” Dr Hughes said.
“Some patients reported homelessness and desperation because of this disconnect. They are not rejecting the desire to live well; they are rejecting the only model of care available to them.
“The care available to kidney patients was designed by and for people who live close to cities. This automatically excludes people who live further away and in the bush.”
The report follows the Indigenous Patient Voices Symposium held during September in Darwin in conjunction with the 53rd Annual Scientific Meeting of the Australia and New Zealand Society of Nephrology (ANZSN).
Dr Hughes is one of many health practitioners urging the Australian state, territory and federal government health ministers to respond to this call to action.
“Consumer engagement is a national priority of Australian health services, and the symposium showed many Aboriginal and Torres Strait Islander people are willing to provide feedback to support the necessary health care transformation,” she said.
“We’ve highlighted the patient-reported barriers to accessing quality services for chronic and end-stage kidney disease, how and where services are delivered, how information is communicated and developing pathways and career opportunities for Indigenous Australians within the renal health care workforce.”
‘ Almost half of heart-related deaths are caused by 10 bad eating habits.
Diets high in salt or sugary drinks are responsible for thousands of deaths from heart disease, stroke and type 2 diabetes, according to a study. Scientists also blamed a lack of fruit and vegetables and high levels of processed meats.
Researchers looked at all 702,308 deaths from heart disease, stroke and type 2 diabetes in the US in 2012 and found that 45 per cent were linked with “suboptimal consumption” of 10 types of nutrients. They mapped data on dietary habits from population surveys, along with estimates from previous research of links between foods and disease, on to data about the deaths to come up with the figures.”
The highest proportion of deaths, at 9.5 per cent, was linked with eating too much salt, while a low intake of nuts and seeds was linked with 8.5 per cent.
Eating processed meats was linked with 8.2 per cent of deaths and a low amount of seafood omega-3 fats with 7.8 per cent. Low intake of vegetables accounted for 7.6 per cent and low intake of fruit 7.5 per cent.
Sugary drinks were linked with 7.4 per cent, a low intake of whole grains with 5.9 per cent, low polyunsaturated fats with 2.3 per cent and high unprocessed red meats with 0.4 per cent.
The research, published in the journal JAMA, also found men’s deaths were more likely to have links to poor diet than women’s.
Question What is the estimated mortality due to heart disease, stroke, or type 2 diabetes (cardiometabolic deaths) associated with suboptimal intakes of 10 dietary factors in the United States?
Findings In 2012, suboptimal intake of dietary factors was associated with an estimated 318 656 cardiometabolic deaths, representing 45.4% of cardiometabolic deaths. The highest proportions of cardiometabolic deaths were estimated to be related to excess sodium intake, insufficient intake of nuts/seeds, high intake of processed meats, and low intake of seafood omega-3 fats.
Meaning Suboptimal intake of specific foods and nutrients was associated with a substantial proportion of deaths due to heart disease, stroke, or type 2 diabetes.
Importance In the United States, national associations of individual dietary factors with specific cardiometabolic diseases are not well established.
Objective To estimate associations of intake of 10 specific dietary factors with mortality due to heart disease, stroke, and type 2 diabetes (cardiometabolic mortality) among US adults.
Design, Setting, and Participants A comparative risk assessment model incorporated data and corresponding uncertainty on population demographics and dietary habits from National Health and Nutrition Examination Surveys (1999-2002: n = 8104; 2009-2012: n = 8516); estimated associations of diet and disease from meta-analyses of prospective studies and clinical trials with validity analyses to assess potential bias; and estimated disease-specific national mortality from the National Center for Health Statistics.
Exposures Consumption of 10 foods/nutrients associated with cardiometabolic diseases: fruits, vegetables, nuts/seeds, whole grains, unprocessed red meats, processed meats, sugar-sweetened beverages (SSBs), polyunsaturated fats, seafood omega-3 fats, and sodium.
Main Outcomes and Measures Estimated absolute and percentage mortality due to heart disease, stroke, and type 2 diabetes in 2012. Disease-specific and demographic-specific (age, sex, race, and education) mortality and trends between 2002 and 2012 were also evaluated.
Results In 2012, 702 308 cardiometabolic deaths occurred in US adults, including 506 100 from heart disease (371 266 coronary heart disease, 35 019 hypertensive heart disease, and 99 815 other cardiovascular disease), 128 294 from stroke (16 125 ischemic, 32 591 hemorrhagic, and 79 578 other), and 67 914 from type 2 diabetes.
The authors, from Cambridge University and two US institutions, said that their results should help to “identify priorities, guide public health planning and inform strategies to alter dietary habits and improve health”.
In an editorial, Noel Mueller and Lawrence Appel, of the Johns Hopkins Bloomberg School of Public Health, said: “Policies that affect diet quality, not just quantity, are needed … There is some precedence, such as from trials of the Mediterranean diet plus supplemental foods, that modification of diet can reduce cardiovascular disease risk by 30 per cent to 70 per cent.”
It is important to maintain a healthy weight for your height. The food you eat, and how active you are, help to control your weight.
Healthy eating tips include:
Eat lots of fruit, vegetables, legumes and wholegrain bread and rice.
At least once a week eat some lean meat such as chicken and fish.
Look at the food label and try to choose foods that have a low percentage of sugar and salt and saturated fats.
Limit take-away and fast food meals.
It’s recommended that you do at least 30 minutes of physical activity most days of the week – exercise leads to increased strength, stamina and energy.
The key is to start slowly and gradually increase the time and intensity of the exercise. You can break down any physical activity into three ten-minute bursts, which can be increased as your fitness improves
Drink plenty of fluids and listen to your thirst.
If you are thirsty, make water your first choice. Water has a huge list of health benefits and contains no kilojoules, is inexpensive and readily available.
Sugary soft drinks are packed full of ‘empty kilojoules’, which means they contain a lot of sugar but have no nutritional value.
Some fruit juices are high in sugar and do not contain the fibre that the whole fruit has.
The role of the kidneys is often underrated when we think about our health.
In fact, the kidneys play a vital role in the daily workings of your body. They are so important that nature gave us two kidneys, to cover the possibility that one might be lost to an injury.
We can live quite well with only one kidney and some people live a healthy life even though born with one missing. However, with no kidney function death occurs within a few days!
The kidneys play a major role in maintaining your general health and wellbeing. Think of them as a very complex, environmentally friendly, waste disposal system. They sort non-recyclable waste from recyclable waste, 24 hours a day, seven days a week, while also cleaning your blood.
Most people are born with two kidneys, each one about the size of an adult fist, bean-shaped and weighing around 150 grams each. The kidneys are located at both sides of your backbone, just under the rib cage or above the small of your back. They are protected from injury by a large padding of fat, your lower ribs and several muscles.
Your blood supply circulates through the kidneys about 12 times every hour. Each day your kidneys process around 200 litres of blood. The kidneys make urine (wee) from excess fluid and unwanted chemicals or waste in your blood.
Urine flows down through narrow tubes called ureters to the bladder where it is stored. When you feel the need to wee, the urine passes out of your body through a tube called the urethra. Around one to two litres of waste leave your body each day as urine.
February – May : Get NDIS Ready with a Roadshow NSW Launched
The Every Australian Counts team will be hitting the road from March – May presenting NDIS information forums in the NSW regional areas where the NDIS will be rolling out from July.
We’ll be covering topics including:
What the NDIS is, why we need it and what it means for you
The changes that the NDIS brings and how they will benefit you
How to access the NDIS and get the most out of it
These free forums are designed for people with disability, their families and carers, people working in the disability sector and anyone else interested in all things NDIS.
Please register for tickets and notify the team about any access requirements you need assistance with. All the venues are wheelchair accessible and Auslan interpreters can be available if required. Please specify any special requests at the time of booking.
Every Australian Counts is the campaign that brought about the introduction of the National Disability Insurance Scheme.
Now it is a reality, the team are focused on engaging and educating the disability sector and wider Australian community about the benefits of the NDIS and the options and possibilities that it brings.
2 March : Disability research within Aboriginal communities : Alice Springs
Dr John Gilroy, a Koori man from the Yuin Nation of the the South Coast of New South Wales, will be presenting a seminar on disability research in Aboriginal communities in the Rubuntja Building, at the Alice Springs Hospital, Northern Territory (NT), on Thursday 2 March 2017 from 12pm – 1pm.
John, a senior lecturer at the University of Sydney (USYD) and a member of the Poche research family will present his journey from being a client of disability services to becoming one of the leading scholars in disability research within Aboriginal communities. His discussion will touch on disability research and scholarship undertaken with Aboriginal people and its implications for the National Disability Insurance Scheme, including the current disability research projects underway with the Anangu of the Ngaanyatjarra Pitjantjatjara Yankunytjatjara (NPY) lands
There are limited seats and registration is required, so book by email using contact below.
3 March : The National Indigenous Youth Parliament (NIYP) applications close
Is your chance to come to Canberra, meet Australia’s leaders, learn about democracy and have your say on important issues. Fifty young Aboriginal and/or Torres Strait Islander people will be selected, six from each state and territory and two from the Torres Strait, to come to Canberra for the week-long program
Aboriginal and/or Torres Strait Islander people aged 16 to 25 years who are willing to stand up and speak about important issues, work as part of a team, travel to new places, meet new people and learn.
How do I apply?
Complete and submit the online application form below. Applications close Friday 3 March 2017.
Please contact us if you do not receive an email confirmation of your application within 3 days. The AEC accepts no responsibility for lost, damaged or late applications.
All information you provide in your application is managed and stored appropriately in accordance with the Privacy Act 1988.
Letter of support
All applications must include a letter of support from your teacher or tutor, employer, coach, youth worker, community leader, family friend or other referee. The letter of support should support the claims made in your application and explain why you are suitable for the NIYP.
Tips for completing this form
Write your answers on a document saved to your computer first in case your connection is lost.
Have a scanned copy of your letter of support ready to upload with your application.
Contact us if you don’t receive an email confirmation within 3 days of submitting this form to make sure we received it.
Apply online now
3 March: AMSANT: APONT Innovating to Succeed Forum – Alice Springs
Following our successful 2015 AGMP Forum we are pleased to announce the second AGMP Forum will be held at the Alice Springs Convention Centre on 3 March from9 am to 5 pm. The forum is a free catered event open to senior managers and board members of all Aboriginal organisations across the NT.
Come along to hear from NT Aboriginal organisations about innovative approaches to strengthen your activities and businesses, be more sustainable and self-determine your success. The forum will be opened by the Chief Minister and there will be opportunities for Q&A discussions with Commonwealth and Northern Territory government representatives.
To register to attend please complete the online registration form, or contact Wes Miller on 8944 6626, Kate Muir on 8959 4623, or email firstname.lastname@example.org.
Wellington Aboriginal Corporation Health Service
Aboriginal Health Services Community Forum
14 March 2017, 10.00am–1.00pm
Novotel Hotel, 33 Railway St, Rooty Hill
16 March Close the Gap Day
Aboriginal and Torres Strait Islander Peoples die 10-17 years younger than other Australians and it’s even worse in some parts of Australia. Register now and hold an activity of your choice in support of health equality across Australia.
Resource packs will be sent out from 1 February 2017.
We will also have a range of free downloadable resources available on our website
Indigenous Eye Health at the University of Melbourne would like to invite people to a two-day national conference on Indigenous eye health and the Roadmap to Close the Gap for Vision in March 2017. The conference will provide opportunity for discussion and planning for what needs to be done to Close the Gap for Vision by 2020 and is supported by their partners National Aboriginal Community Controlled Health Organisation, Optometry Australia, Royal Australian and New Zealand College of Ophthalmologists and Vision 2020 Australia.
Collectively, significant progress has been made to improve Indigenous eye health particularly over the past five years and this is an opportunity to reflect on the progress made. The recent National Eye Health Survey found the gap for blindness has been reduced but is still three times higher. The conference will allow people to share the learning from these experiences and plan future activities.
The conference is designed for those working in all aspects of Indigenous eye care: from health workers and practitioners, to regional and jurisdictional organisations. It will include ACCHOs, NGOs, professional bodies and government departments.
The topics to be discussed will include:
regional approaches to eye care
planning and performance monitoring
initiatives and system reforms that address vision loss
health promotion and education.
Indigenous Eye Health – Minum Barreng
Level 5, 207-221 Bouverie Street
Melbourne School of Population and Global Health
The University of Melbourne
Carlton Vic 3010
Ph: (03) 8344 9320
22 March: 2017 Indigenous Ear Health Workshop in Adelaide
The 2017 Indigenous Ear Health Workshop to be held in Adelaide in March will focus on Otitis Media (middle ear disease), hearing loss, and its significant impact on the lives of Indigenous children, the community and Indigenous culture in Australia.
The workshop will take place on 22 March 2017 at the Adelaide Convention Centre in Adelaide, South Australia.
The program features keynote addresses by invited speakers who will give presentations aligned with the workshop’s main objectives:
To identify and promote methods to strengthen primary prevention and care of Otitis Media (OM).
To engage and coordinate all stakeholders in OM management.
To summarise current and future research into OM pathogenesis (the manner in which it develops) and management.
To present the case for consistent and integrated funding for OM management.
Invited speakers will include paediatricians, public health physicians, ear nose and throat surgeons, Aboriginal health workers, Education Department and a psychologist, with OM and hearing updates from medical, audiological and medical science researchers.
The program will culminate in an address emphasising the need for funding that will provide a consistent and coordinated nationwide approach to managing Indigenous ear health in Australia.
Those interested in attending may include: ENT surgeons, ENT nurses, Aboriginal and Torres Strait Islander health workers, audiologists, rural and regional general surgeons and general practitioners, speech pathologists, teachers, researchers, state and federal government representatives and bureaucrats; in fact anyone interested in Otitis Media.
The workshop is organised by the Australian Society of Otolaryngology Head and Neck Surgery (ASOHNS) and is held just before its Annual Scientific Meeting (23 -26 March 2017). The first IEH workshop was held in Adelaide in 2012 and subsequent workshops were held in Perth, Brisbane and Sydney.
29 April : 14th World Rural Health Conference Cairns
The conference program features streams based on themes most relevant to all rural and remote health practitioners. These include Social and environmental determinants of health; Leadership, Education and Workforce; Social Accountability and Social Capital, and Rural Clinical Practices: people and services.
The program includes plenary/keynote sessions, concurrent sessions and poster presentations. The program will also include clinical sessions to provide skill development and ongoing professional development opportunities :
” The National Indigenous Human Rights Awards recognises Aboriginal and Torres Strait Islander persons who have made significant contribution to the advancement of human rights and social justice for their people.”
The first National Sorry Day was held on 26 May 1998 – one year after the tabling of the report Bringing them Home, May 1997. The report was the result of an inquiry by the Human Rights and Equal Opportunity Commission into the removal of Aboriginal and Torres Strait Islander children from their families.
2-9 July NAIDOC WEEK
The importance, resilience and richness of Aboriginal and Torres Strait Islander languages will be the focus of national celebrations marking NAIDOC Week 2017.
The 2017 theme – Our Languages Matter – aims to emphasise and celebrate the unique and essential role that Indigenous languages play in cultural identity, linking people to their land and water and in the transmission of Aboriginal and Torres Strait Islander history, spirituality and rites, through story and song.
” The Australian Chronic Disease Prevention Alliance recommends that the Australian Government introduce a health levy on sugar-sweetened beverages, as part of a comprehensive approach to decreasing overweight and obesity, and with revenue supporting public education campaigns and initiatives to prevent chronic disease and address childhood obesity.
A health levy on sugar-sweetened beverages should not be viewed as the single solution to the obesity epidemic in Australia.
Rather, it should be one component of a comprehensive approach, including restrictions on children’s exposure to marketing of these products, restrictions on their sale in schools, other children’s settings and public institutions, and effective public education campaigns.
Health levy on sugar-sweetened beverages
ACDPA Position Statement
The Australian Chronic Disease Prevention Alliance (ACDPA) recommends that the Australian Government introduce a health levy on sugar-sweetened beverages (sugary drinks)i, as part of a comprehensive approach to decreasing overweight and obesity.
Sugar-sweetened beverage consumption is associated with increased energy intake and in turn, weight gain and obesity. Obesity is an established risk factor for type 2 diabetes, heart disease, stroke, kidney disease and certain cancers.
Beverages are the largest source of free sugars in the Australian diet. One in two Australians usually exceed the World Health Organization recommendation to limit free sugars to 10% of daily intake (equivalent to 12 teaspoons of sugar).
Young Australians are the highest consumers of sugar-sweetened beverages, along with Aboriginal and Torres Strait Islander people and socially disadvantaged groups.
Young people, low-income consumers and those most at risk of obesity are most responsive to food and beverage price changes, and are likely to gain the largest health benefit from a levy on sugary drinks due to reduced consumption.
A health levy on sugar-sweetened beverages in Australia is estimated to reduce consumption and potentially prevent thousands of cases of type 2 diabetes, heart disease and stroke over 25 years. The levy could generate revenue of $400-$500 million each year, which could support public education campaigns and initiatives to prevent chronic disease and address childhood obesity.
A health levy on sugar-sweetened beverages should not be viewed as the single solution to the obesity epidemic in Australia. Rather, it should be one component of a comprehensive approach, including restrictions on children’s exposure to marketing of these products, restrictions on their sale in schools, other children’s settings and public institutions, and effective public education campaigns.
i ‘Sugar-sweetened beverages’ and sugary drinks are used interchangeably in this paper. This refers to all non-alcoholic water based beverages with added sugar, including sugar-sweetened soft drinks and flavoured mineral waters, fortified waters, energy and electrolyte drinks, fruit and vegetable drinks, and cordials. This term does not include milk-based products, 100% fruit juice or non-sugar sweetened beverages (i.e. artificial, non-nutritive or intensely sweetened). 2
The Australian Chronic Disease Prevention Alliance (ACDPA) brings together five leading non-government health organisations with a commitment to reducing the growing incidence of chronic disease in Australia attributable to overweight and obesity, poor nutrition and physical inactivity. ACDPA members are: Cancer Council Australia; Diabetes Australia; Kidney Health Australia; National Heart Foundation of Australia; and the Stroke Foundation.
This position statement is one of a suite of ACDPA statements, which provide evidence-based information and recommendations to address modifiable risk factors for chronic disease. ACDPA position statements are designed to inform policy and are intended for government, non-government organisations, health professionals and the community.
Chronic diseases are the leading cause of illness, disability, and death in Australia, accounting for around 90% of all deaths in 2011. One in two Australians (i.e. more than 11 million) had a chronic disease in 2014-15 and almost one quarter of the population had at least two conditions.
However, much chronic disease is actually preventable. Around one third of total disease burden could be prevented by reducing modifiable risk factors, including overweight and obesity, physical inactivity and poor diet.
Overweight and obesity
Overweight and obesity is the second greatest contributor to disease burden and increases risk of type 2 diabetes, heart disease, stroke, kidney disease and some cancers.
The rates of overweight and obesity are continuing to increase. Almost two-thirds of Australians are overweight or obese and one in four Australian children are already overweight or obese. Children who are overweight are also more likely to grow up to become overweight or obese adults, with an increased risk of chronic disease and premature mortality.
The cost of obesity in Australia was estimated to be $8.6 billion in 2011-12, comprising $3.8 billion in direct costs and $4.8 billion in indirect costs. If no further action is taken to slow obesity rates in Australia, the cost of obesity over the next 10 years to 2025 is estimated to total $87.7 billion.
Free sugars and weight gain
There is increasing evidence that high intake of free sugarsii is associated with weight gain due to excess energy intake and dental caries. The World Health Organization (WHO) strongly recommends reducing free sugar intake to less than 10% of total energy intake (equivalent to around 12 teaspoons of sugar), or to 5% for the greatest health benefits.
ii ‘Free sugars’ refer to sugars added to foods and beverages by the manufacturer, cook or consumer, and sugars naturally present in honey, syrups, fruit juices and fruit juice concentrates.
In 2011-12, more than half of Australians usually exceeded the recommendation to limit free sugar intake to 10%. There was wide variation in the amounts of free sugars consumed, with older children and teenagers most likely to exceed the recommendation and adults aged 51-70 least likely to exceed the recommendation. On average, Australians consumed around 60 grams of free sugars each day (around 14 teaspoons). Children and young people were the highest consumers, with adolescent males and females consuming the equivalent of 22 and 17 teaspoons of sugar each day respectively .
Beverages contribute more than half of free sugar intake in the Australian diet. In 2011-12, soft drinks, sports and energy drinks accounted for 19% of free sugar intake, fruit juices and fruit drinks contributed 13%, and cordial accounted for 4.9%. 3
Sugar-sweetened beverage consumption
In particular, sugar-sweetened beverages are mostly energy-dense but nutrient-poor. Sugary drinks appear to increase total energy intake due to reduced satiety, as people do not compensate for the additional energy consumed by reducing their intake of other foods or drinks[3, 7]. Sugar-sweetened beverages may also negatively affect taste preferences, especially amongst children, as less sweet foods may become less palatable.
Sugar-sweetened beverages are consumed by large numbers of Australian adults and children, and Australia ranks 15th in the world for sales of caloric beverages per person per day.
One third of Australians consumed sugar-sweetened beverages on the day before the Australian Health Survey interview in 2011-12. Of those consuming sweetened beverages, the equivalent of a can of soft drink was consumed (375 mL). Children and adolescents were more likely to have consumed sugary drinks than adults (47% compared with 31%), and consumption peaked at 55% amongst adolescents. Males were more likely than females to have consumed sugary drinks (39% compared with 29%).
Australians living in areas with the highest levels of socioeconomic disadvantage were more likely to have consumed sugary drinks than those in areas of least disadvantage (38% compared with 31%). Half of Aboriginal and Torres Strait Islander people consumed sugary drinks compared to 34% of non-Indigenous people. Amongst those consuming sweetened beverages, a greater amount was consumed by Aboriginal and Torres Strait Islanders than for non-Indigenous people (455 mL compared with 375 mL). 4
The health impacts of sugar-sweetened beverage consumption
WHO and the World Cancer Research Fund (WCRF) recommend restricting or avoiding intake of sugar-sweetened beverages, based on evidence that high intake of sugar-sweetened beverages may increase risk of weight gain and obesity[7, 11]. As outlined earlier, obesity is an established risk factor for a range of chronic diseases.
The Australian Dietary Guidelines recommend limiting intake of foods and drinks containing added sugars, particularly sugar-sweetened beverages, based on evidence of a probable association between sugary drink consumption and increased risk of weight gain in adults and children, and a suggestive association between soft drink consumption and an increased risk of reduced bone strength, and dental caries in children.
Type 2 diabetes
Sugar-sweetened drinks may increase the risk of developing type 2 diabetes. Evidence indicates a significant relationship between the amount and frequency of sugar-sweetened beverages consumed and increased risk of type 2 diabetes[12, 13]. The risk of type 2 diabetes is estimated to be 26% greater amongst the highest consumers (1 to 2 servings/day) compared to lowest consumers (<1 serving/month).
Cardiovascular disease and stroke
The consumption of added sugar by adolescents, especially sugar-sweetened soft drinks, has been associated with multiple factors that can increase risk of cardiovascular disease regardless of body size, and increased insulin resistance among overweight or obese adolescents.
A high sugar diet has been linked to increased risk of heart disease mortality[15, 16]. Consuming high levels of added sugar is associated with risk factors for heart disease such as weight gain and raised blood pressure. Excessive dietary glucose and fructose have been shown to increase the production and accumulation of fatty cells in the liver and bloodstream, which is linked to cardiovascular disease, and kidney and liver disease. Non-alcoholic fatty liver disease is one of the major causes of chronic liver disease and is associated with the development of type 2 diabetes and coronary heart disease.
There is also emerging evidence that sugar-sweetened beverage consumption may be independently associated with increased risk of stoke.
Chronic kidney disease
There is evidence of an independent association between sugar-sweetened soft drink consumption and the development of chronic kidney disease and kidney stone formation. The risk of developing chronic kidney disease is 58% greater amongst people who regularly consume at least one sugar-sweetened soft drink per day, compared with non-consumers.
While sugar-sweetened beverages may contribute to cancer risk through their effect on overweight and obesity, there is no evidence to suggest that these drinks are an independent risk factor for cancer. 5
A health levy on sugar-sweetened beverages
WHO recommends that governments consider taxes and subsidies to discourage consumption of less healthy foods and promote healthier options. WHO concludes that there is “reasonable and increasing evidence that appropriately designed taxes on sugar-sweetened beverages would result in proportional reductions in consumption, especially if aimed at raising the retail price by 20% or more”.
Price influences consumption of sugar-sweetened beverages[24, 25]. Young people, low-income consumers and those most at risk of obesity are most responsive to food and beverage price changes, and are likely to gain the largest health benefit from a levy on sugary drinks due to reduced consumption. While a health levy would result in lower income households paying a greater proportion of their income in additional tax, the financial burden across all households is small, with minimal differences between higher- and lower-income households (less than $5 USD per year).
A 2016 study modelled the impact of a 20% ad valorem excise tax on sugar-sweetened beverages in Australia over 25 years. The levy could reduce sugary drink consumption by 12.6% and reduce obesity by 2.7% in men and 1.2% in women. Over 25 years, there could be 16,000 fewer cases of type 2 diabetes, 4,400 fewer cases of ischaemic heart disease and 1,100 fewer strokes. In total, 1,600 deaths could potentially be prevented.
The 20% levy was modelled to generate more than $400 million in revenue each year, even with a decline in consumption, and save $609 million in overall health care expenditure over 25 years. The implementation cost was estimated to be $27.6 million.
A separate Australian report is supportive of an excise tax on the sugar content of sugar-sweetened beverages, to reduce consumption and encourage manufacturers to reformulate to reduce the sugar content in beverages. An excise tax at a rate of 40 cents per 100 grams was modelled to reduce consumption by 15% and generate around $500 million annually in revenue. While a sugary drinks levy is not the single solution to obesity, the introduction of a levy could promote healthier eating, reduce obesity and raise revenue to combat costs that obesity imposes on the broader community.
There is public support for a levy on sugar-sweetened beverages. Sixty nine percent of Australian grocery buyers supported a levy if the revenue was used to reduce the cost of healthy foods. A separate survey of 1,200 people found that 85% supported levy revenue being used to fund programs reducing childhood obesity, and 84% supported funding for initiatives encouraging children’s sport.
An Australian levy on sugar-sweetened beverages is supported by many public health groups and professional organisations. | <urn:uuid:2a5ea6ff-ad64-4666-a8a2-b788cf314e49> | CC-MAIN-2022-33 | https://nacchocommunique.com/tag/kidney-health-australia/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572215.27/warc/CC-MAIN-20220815235954-20220816025954-00697.warc.gz | en | 0.940914 | 7,284 | 2.609375 | 3 |
They Were Her Property by Stephanie Jones-Rogers discusses the role that White women played in the institution of slavery. It's pointed out that, in part because of patriarchy, White women are often not perceived as having played an active role in slavery. This is because White women were also subjugated by the patriarchal society of the time and did not have voting or other basic civil and civic rights that were afforded to men. But as we see throughout history, and as They Were Her Property explains, an individual or group of people being oppressed does not mean that they are incapable of oppression.
There are various races and ethnic groups in America (ex: Black, White, Hispanic, Asian, Native American, etc.) and there are different income levels within those groups. And yet that doesn’t mean a poor White person who is economically exploited by a wealthy White person can’t then be racist or themselves exploit people from other groups. Or that a Black man who experiences racism can’t also be sexist.
This also means that while White women might have been disadvantaged by sexism, oppressed really, they in turn could also be racist and/or classist. In this case, Southern White women were second-class citizens within White society. But being White gave them privilege over people who were lower on the social hierarchy, especially if the White woman was wealthy to some degree.
All people should be treated fairly. Regardless of race, gender, or economic standing, everyone should be given the same basic rights. At a minimum, people should have equal opportunities for a shot at success and their merits should dictate where things fall beyond that.
But there’s an idea that’s promoted in the telling of history which is influenced by patriarchy. White women who were alive during slavery and even into Jim Crow benefited from institutions of white supremacy. In some cases, they even helped to perpetuate them. But because they were women, their involvement in the system is often overlooked or downplayed.
They're viewed as unwilling participants who were not in a position to go against the norms of their time. Playing devil's advocate, that might have been true for some. But the reality is that not only did many benefit from these institutions, some played an active role in perpetuating and instigating the oppression of Black people.
White men are portrayed as the primary slaveholders of the South, the ones who controlled and managed the slaves, plantations, businesses, etc. And that's probably true for the most part. It's often implied that White women's involvement with slavery was indirect, that their contact was through their relationship with a male family member, as they did not directly run plantations or otherwise manage slaves.
This assumption is based on the view of Southern White women of the time as housewives, with the men working while the women took care of the home, and on the belief that these women only managed plantations and slaves if their male relation had died, become incapacitated, or been absent for an extended period. But as They Were Her Property earnestly points out, this wasn't always the case.
Not everyone in the South owned slaves. Slaveholding was concentrated among wealthier, more financially comfortable people, let's say middle class and up, as lower-income people most likely did not have the money needed to purchase and maintain slaves.
At that income level, females, like males, would inherit property from their relatives. And since slaves were considered property at the time, they could be, and often were, included in an inheritance. You could have a woman inheriting a plantation or some other form of property from a relative, which might include slaves.
These transfers of slaves often occurred at a child's birth, at childhood milestones, at marriage, or upon the death of a relative. It's assumed that ownership and management of slaves and any other property automatically transferred to a husband at marriage. But in some cases, as They Were Her Property points out, women chose to continue to actively and independently manage these assets. These women, and at times the legal system, considered them the owners of whatever assets they brought into the marriage.
Much is made about women marrying for money. But it was eye-opening that a woman who owned property, including slaves, might be sought after by men who were ambitious and trying to elevate themselves financially and socially. And I’d imagine that men who already owned property might look at a woman who also owned property and view her as an opportunity to obtain more.
Through marriage, he might get access to those assets but not necessarily ownership. It was at the wife's discretion to decide how she wanted to handle the ownership and management of assets she brought into a marriage. There are examples of wives loaning money or enslaved people to their husbands so that they could build themselves up through their ventures. But these arrangements weren't transfers of ownership and weren't necessarily permanent or long-term management agreements. Sometimes the wife would in turn expect those properties at the very least to be returned, if not to provide her with some degree of profit.
This dispels the notion of White women being completely removed from the system of slavery, especially at higher income levels, and with it the patriarchal view that absolves White women of involvement in the institution while placing accountability only on their male counterparts. To be clear, White males were certainly involved and deserve their fair share of the blame.
But White women of the slaveholding class were also culpable, first through benefiting from the institution, whether directly or indirectly. It's an important note because even women who had no involvement themselves, whose male relatives solely traded and managed slaves, were still culpable because they benefited from the institution. And it's especially applicable in instances where White women were actively involved in the trade.
As They Were Her Property points out, White women were by default second-class citizens and in some instances would have to take additional steps to establish and maintain their ownership rights when entering into a marriage. Ownership of property and other assets brought them a degree of financial freedom. There was a difference in their standing and circumstances compared to women whose households did not own property or slaves. And even a difference between women who directly owned and managed property and slaves versus those who benefited from their relation to males who did.
Women who had slaves in their household in any form had a degree of freedom in comparison to those who did not. This was because it often freed them from household chores and even child-raising responsibilities that they might otherwise have to perform themselves. And women who directly owned and managed slaves could also experience a different kind of freedom due to their financial independence. As such it was within their best interest to maintain this system of slavery because it granted them rights and privileges that they might not otherwise have.
While They Were Her Property focuses on slavery, I thought forward to suffrage and the Women's Rights Movement. These women's efforts to own and maintain their property during slavery were not a push for feminism or anything like that, but rather were based on their individual best interests. Within the later movements, there was a division of sorts, where White women pushing for their rights did not necessarily extend their position to include the needs of Black women.
During suffrage, women were pushing for voting rights. And for some members of the Suffrage Movement, one of their talking points was that giving White women the right to vote at the very moment that Black men were gaining the right to vote would help to expand the White voting bloc. That could help to maintain White control over the political process. As part of pushing for expanded rights for White women, there was a willingness to reinforce racial prejudice and oppression in exchange.
Looking back at slavery, this was true as well. Here you have White women who were otherwise second-class citizens within society. But they decided against working across racial and economic lines to help all oppressed groups obtain equal rights. Their intent was not to reshape society and make it fair for everyone but rather to join or remain within the privileged class that was oppressing everyone else.
Consider how normalized this all was, given that in slaveholding society from a very young age girls would be given a female slave as a playmate. As they grew up, they and this enslaved child might play but the enslaved child was really their first slave. Imagine that from the time you’re a toddler, you have a female slave to play with. And this is as you’re growing up in an environment where the adults around you own slaves.
You would learn from a very young age that, in having ownership over this person, you could tell them what to do. Just think about children and the behaviors they see exhibited by their parents. Often, children imitate what they see from the adults around them. Seeing examples of how adults interacted with enslaved people would give a child ideas about what they should do. It was like a training ground for later owning slaves of your own. Throughout their childhood, these girls were raised within a social structure that showed and reinforced various aspects of owning and maintaining a plantation, or just owning and managing slaves.
Imagine that: there were also publications dedicated to teaching not just girls but also boys of the slaveholding class about the ownership and management of slaves and plantations. One publication mentioned provided instruction for young girls on how to manage a household, and another taught young men about being slave masters. It was propaganda aimed at indoctrinating children from a very young age.
Keep in mind that a lot of these people were barely literate, if they were literate at all. In the South, the literate people were usually of the slaveholding class, because their families had the money to pay for private education; public education was not a thing at the time. But women, whether poor or of the slaveholding class, did not have the same educational opportunities, so they were less educated than their male counterparts. Yet for the few children who could read, this was the kind of content being made for them?
Given how slave society functioned, slave-owning men typically had control over both the men and women on a plantation. Slave-owning women, by contrast, more frequently had ownership and control over the women on a plantation. This in part came about because slave-owning women were usually given female slaves by their family members during childhood and/or when they married. Families sometimes did this because female slaves would more typically remain under the control of their female owners, whereas male slaves given to a female slave owner were more likely to be used by her husband in his business endeavors.
This was a way of giving slaves to a female slave owner while increasing the likelihood that she, rather than her husband, would maintain ownership and control over that slave. It was a means of helping a female relative with a start in life, whereas males were seen as needing to build their potential and create a livelihood on their own.
It takes a man and a woman to create a child. But because of how slavery was structured, the offspring produced by an enslaved woman would typically become the property of that woman's owner rather than the father's owner. Giving female slaves to female family members was a way of ensuring these women's economic future, as over time their slaves would likely have children, thereby increasing the female slave owner's property holdings.
Unless there was some kind of agreement or something to the contrary, a man would often assume ownership and management of his wife’s property at marriage. What sounds like a prenuptial agreement would be drawn up stipulating that this property belonged to the wife and whatever debts or responsibilities the husband had would remain separate. This was to prevent the wife’s property from being taken to cover the husband’s debts or losses.
There’s an interesting point made about reputations. You had some wealthy families that went back for generations and as the country was still relatively small, they knew each other and had an idea of what everyone else had. Granted, you wouldn’t know the specific details of anyone’s finances without seeing their books. But you’d have a good idea of the property that a family owned.
When negotiating a marriage, you could entertain a proposal from the sons from such families with some assurance that they’d be able to provide for a potential bride. As the country began to expand after the Revolutionary War, more young men began going west and deeper into the South to seek their fortunes. When these men proposed marriage to women from well-off families, the accuracy and truthfulness of what they claimed to own were more difficult to verify.
An example is given where a woman was proposed to by a man who told her that he was the owner of a huge plantation with many slaves in Mississippi. She married and moved with him to what she thought was his plantation in Mississippi. Upon arriving she realized that he was not the owner but rather was employed by the plantation’s owner as an overseer. The woman came from what sounded like a comfortable if not wealthy family. And he was planning to use the assets that had been given to her by her family to establish himself. (It’s like an old-school version of the “Tinder Swindler”.)
I don’t know why I was surprised but who would have thought that being a gigolo or kept man was a thing back then. I’ve heard other descriptions of fortune hunters where young men would try to woo or seduce the daughters of wealthy families in hopes of getting access to their money. Male suitors would lie about the amount of money that they had intending to marry a wealthy heiress or at least a woman with some property so they could utilize her assets to establish themselves. Essentially using their wife’s wealth to become wealthy themselves.
After marriage, a wife would have to go through a legal process to maintain control of her assets as control would typically default to the husband. Women had limited legal rights and in some jurisdictions, this could affect their ability to file suit or seek other legal redresses. Thus it was incredibly important to ensure that a potential husband wasn’t just attempting to use the wife to get access to her assets. Thus some families structured how property, slaves, and other assets were given to females in a manner to ensure that they wouldn’t be left destitute or fully at their husband’s mercy. While they lacked several civil and civic rights, this was a way of trying to provide some degree of long-term financial security.
To a degree, people mentally limit the involvement of White women in the system of slavery because White women were, to some extent, shut away from society. Once married, women did not socialize or move about in society as freely as their male counterparts. But that varied by the individual.
There are several examples provided where some White women went to slave markets themselves and some even owned slave pens. Others who did not venture to slave markets or public auctions had slave traders come to their homes where they could view slaves without venturing out in public. Those who wanted to be even less involved would have a representative such as an overseer purchase or sell slaves on their behalf.
The author dispels this notion that any one way of doing things was applicable across the board. Things varied by the individual woman, her husband, and other aspects of her family structure. This is also the case with the idea of slaveholding women shrinking from the responsibility of managing and disciplining their slaves. That also wasn’t something that they necessarily shied away from due to their gender.
You had some male slave owners who did not whip or beat their slaves. This was not out of any humanity or care for the enslaved, but rather because they didn't do physical labor and didn't want to get their hands dirty. They might have another enslaved person or an overseer carry out the punishment. You also had some slave owners who simply never beat their slaves, whether directly or indirectly. But you also had some who were incredibly vicious, while others fell somewhere in between.
As part of Southern propaganda, their society was portrayed as being genteel with White women being delicate shrinking violets. But in reality, we see things were quite different based on court records and recollections of formerly enslaved people from the WPA project. Multiple examples prove these commonly held beliefs and characterizations to be inaccurate and untrue as things varied based on the individual and how they chose to run things.
It was interesting when They Were Her Property got to the point of discussing wet nursing. I previously knew this as a practice where a woman who was lactating would breastfeed another woman’s child. Yet, I didn’t give it much thought beyond that. I just assumed that formula wasn’t around at the time so this was an option for women that were not producing milk. Or possibly because they outsourced everything else and didn’t seem to do much of anything that this was just another facet of raising children that they left to enslaved women. I thought it was weird because why would you choose to have another person breastfeed your child?
But the explanation and breakdown were quite unexpected. Sure you had some women who were physically incapable of breastfeeding their children. There wasn’t as much known then about prenatal care to increase the likelihood of a safe and healthy pregnancy. And because circumstances were different, life was different and arguably a bit harder.
But as They Were Her Property explains, some women just used this as an opportunity to have a bit of freedom. Back then people had a lot more children and started earlier than they do now. A slaveholding woman might have upwards of four children. Pregnancy is very tough on the female body and can put a woman’s life in danger. And because there was less knowledge back then, you might have more complications.
Some women decided to have a wet nurse so they could rest after the birth of a baby. But for others, it was more of a social thing. They didn’t have to stick as close to home to be available to feed the child whenever it was hungry. This became a way to absolve themselves from select aspects of raising their children and it gave them more freedom to socialize. They might not necessarily negotiate the procurement of such an enslaved woman but they tended to dictate when the service was needed and arrangements would be made.
There is a division between what's deemed skilled and unskilled labor, and between what's regarded as labor versus what's regarded as the natural course of things. It's often overlooked that some enslaved women's function on the plantation was to care for children, and for some that meant breastfeeding, which is still work. Women's bodies produce milk to feed their children, so feeding your own child is natural. But there's an exchange of commerce if you're feeding someone else's child, especially if you're being purchased or hired for that purpose. It becomes a form of work.
Taking that into consideration helped put into perspective the reality that pregnant enslaved women were made to work until very far into their pregnancies. In the present, and I'm sure it was the case for White women back then, you try not to have a pregnant woman do any kind of strenuous work. But consider that back then, pregnant enslaved women were not just working late into their pregnancies but were likely not receiving healthy or adequate food. How long a pregnant woman worked into her pregnancy and what type of food she got was at the discretion of her owner.
Bear in mind that White women in the South were put on a pedestal for the role they played in White society. This was not the case for Black women as they received no special recognition or respect as wives and mothers. Family relationships, including marriages, between slaves were ignored when convenient. Even the bonds of motherhood with regards to enslaved women were tenuous at best. If push came to shove there was no hesitation to sell a mother away from her children or to sell children away from their mothers.
When Southern White women used Black women as wet nurses, the arrangement might or might not include whatever children the enslaved woman had. For a woman to be lactating, she would have either been recently pregnant, given birth, or still been breastfeeding a child of her own. Having to also provide milk for another woman's child might mean that her own child wouldn't get enough, especially if she wasn't getting proper rest, food, and nutrition.
And because enslaved women were also tasked with raising their slave masters' children or performing other types of work, they would have less time for their own children. Much ado was made about Southern belles and their important roles as mothers, but it was seemingly not considered that enslaved women would want to bond and spend time with their children as well. Southern White women were placed on a pedestal for playing this role in society, all while using enslaved women to relieve themselves of some of the perceived burdens of motherhood.
These are individual points that have been made in other books I’ve read. But the way it’s broken down here by cohesively pulling everything together gave it even further weight.
There’s also some discussion about the sexual abuse and coercion that are associated with slavery. That’s not to say that White women engaged in direct acts of sexual abuse towards Black women or enslaved people as a whole. It seems like if that did happen, it was probably rare. But more surprising was the explanation of prostitution or the “fancy” trade in New Orleans.
The term describes the practice of brothels providing enslaved Black women for sexual exploitation by customers. Some slave owners, including owners who were White women put female slaves that they owned to work in brothels despite being women themselves. This is just another instance of slaveholding White women having no qualms about exploiting other women in this fashion.
During the Civil War, most able-bodied White men went off to fight while White women remained behind at home. Slavery did not officially come to an end until the end of the war. If White women were at home with relatively few White men around but plantations continued operating, who was running them? It’s also worth noting that later there would be frequent claims of Black men lusting after and/or attempting to rape White women. There were now plantations where Black people outnumbered White people during the war. Yet, there were no reported cases of Black men raping White women or other large-scale acts of violence perpetrated by Black people. For the most part, they were trying to survive or were more concerned with trying to escape.
Yet, there are plenty of examples of White women trying different strategies to maintain control over their assets. Not just their property concerning their homes and plantations but also slaves, jewels, and other physical goods. They made an effort to keep slaves on the plantation going so far as to utilize agents to retrieve slaves if necessary. The system surrounding slavery might not have been operating in the same manner as before the war but the trade was still taking place.
Crops might not have been coming in at the same scale and past a certain point, people were experiencing economic hardships. It’s unfortunate but previously discussed in They Were Her Property that some White women who owned slaves might sell a slave to buy a dress or make some other random purchase. Thus in need of cash during the war, some sold slaves as a way to purchase items they needed more. And later sensing the potential end of slavery some sold slaves in hopes of turning a profit before the market collapsed.
The Union offered restitution to slaveholders in the states that remained loyal to the Union. In time they emancipated their slaves or at least Washington D.C. did. Slaveholders began looking to take advantage of the reimbursements being provided by the government. They could hold on to these slaves only to have them be freed later and not receive anything in return. Or they could cut their losses and take whatever they could get for them. Some of these individuals actually made out quite well. They benefited from this unpaid labor in the years before the Civil War and received more money after slavery ended. It’s like getting a financial reward for having exploited these people.
The owners of enslaved people who were pressed into service for the Confederacy were supposed to be able to appeal to the Confederacy for restitution. Or at least that's the way it was supposed to work, but it sounds like people didn't actually get their money. There was a lot of fuss about supporting the war effort, but there are examples of women who owned slaves fighting tooth and nail to either reclaim their slaves or receive compensation. Some went so far as to take officers to court, so the slave trade and the economic exploitation of enslaved people continued.
Some people were desperate to hold on to slaves because they constituted a major part of their property, this was especially true for women. In attempts to avoid the Union army and make it difficult for slaves to escape, they might move from one part of a state to another or even to a whole other state. They might even go so far as to physically hide away or lock up their slaves. Yet you had these false narratives of enslaved people being happy in bondage. If that was the case, you wouldn’t have to plot and scheme to keep them from running away.
These finer details of history are overlooked, or maybe outright ignored is more accurate. There are different layers to history, but They Were Her Property does a good job of pushing aside a lot of the fables and fallacies that were told about slavery. Southern gentility, during slavery and even after, was a farce. And these women were not all meek, feeble, or timid individuals. They were just as hard-driving and inhumane as their male counterparts, and in some instances even more so.
This implication or often assumption that White women were unwilling participants is shown to not have been the case. They benefited from and to a degree, some were directly involved with the slave trade. I don’t think you’ll learn anything new specifically about the overall history of slavery as the events are the events. But They Were Her Property does help to dispel quite a number of myths or rather it helps to clarify the involvement of different groups in the institution of slavery.
Unlike men, a lot of Southern women primarily inherited enslaved people rather than land. With the end of slavery, these women were especially affected because so much of their wealth was tied up in the value of their slaves. With slavery ending and the slaves now being set free, it meant that a lot of slaveholding women lost a tremendous amount of their wealth. And with that, you had this rather tremendous reversal of fortune for several White women and some realized that their financial position was quite precarious.
In desperation, some of them had to appeal to family, neighbors, friends, and sometimes even the people that they formerly held in bondage for assistance. Keep in mind that these women likely hadn’t done much of any work themselves so this would be their first time venturing to do so. And depending on their age that might not have been a feasible option. For the most part, the option of free labor was now gone and they like others would have to either hire and pay the formerly enslaved or White workers.
That’s not to say some people didn’t try to finagle their way into continuing the institution. It’s explained that people would attempt to place minors into what they referred to as apprenticeships. The kids wouldn’t be paid and it was often formerly slaveholding people attempting to regain control over children who likely had parents, from whom they might have been separated. Or even when their parents were present the former slave owners would claim that the children were orphans. And that was just one facet of the new exploitative system for kids.
For adults, systems of sharecropping, tenant farming, and convict leasing developed. And that’s in addition to just plain old hiring people, agreeing to terms, and then coming up with excuses or just refusing to pay. It shows how as the society was trying to rebuild itself after slavery you still had White women managing and maintaining things until men returned. But some of those men had died in battle and would not return leaving the women to figure something out for at least the foreseeable future.
I could pretend to care about these former slave owners experiencing hardships or being destitute. But I do not. I feel no sympathy for them. The ones I do feel sympathy for are the formerly enslaved, who were now put in the position of starting from scratch, tasked with trying to build lives for themselves after generations of being held in bondage. That's where my sympathies lie, not with individuals who played a role, whether directly or indirectly, in exploiting people for generations. They don't deserve sympathy, and I'm not going to pretend to care about them.
- Stamped from the Beginning [Book Review]
- When Affirmative Action Was White [Book Review]
- Why I’m No Longer Talking to White People About Race [Book Review]
- Women, Race, & Class [Book Review]
Disclosure: Noire Histoir is a participant in the Amazon Services LLC Associates Program, an affiliate advertising program designed to provide a means for the website to earn fees by linking to Amazon.com and affiliated sites. Noire Histoir will receive commissions for purchases made via any Amazon Affiliate links above. | <urn:uuid:4466a7dc-5255-4688-b421-4b80ee57f4dc> | CC-MAIN-2022-33 | https://noirehistoir.com/blog/they-were-her-property-book-review/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570879.1/warc/CC-MAIN-20220808213349-20220809003349-00496.warc.gz | en | 0.98842 | 6,176 | 3.875 | 4 |
Do you want to learn how much 2.59 kg is equal to in lbs and how to convert 2.59 kg to lbs? You are in the right place. You will find in this article everything you need to make a kilogram to pound conversion - both theoretical and practical. We also want to emphasize that this whole article is devoted to one specific number of kilograms - 2.59 kilograms. So if you want to learn more about the 2.59 kg to pound conversion - read on.
Before we move on to the practice - that is, the 2.59 kg to lbs conversion - we will give you some theoretical information about these two units - kilograms and pounds. So let's start.
We will start with the kilogram. The kilogram is a unit of mass. It is a base unit of the metric system, formally known as the International System of Units (abbreviated SI).
Sometimes the kilogram could be written as kilogramme. The symbol of the kilogram is kg.
The kilogram was first defined in 1795, as the mass of one liter of water. This definition was simple but totally impractical to use.
Then, in 1889 the kilogram was described using the International Prototype of the Kilogram (in abbreviated form IPK). The International Prototype of the Kilogram was made of 90% platinum and 10 % iridium. The International Prototype of the Kilogram was in use until 2019, when it was substituted by another definition.
Nowadays the definition of the kilogram is based on physical constants, especially the Planck constant. Here is the official definition: “The kilogram, symbol kg, is the SI unit of mass. It is defined by taking the fixed numerical value of the Planck constant h to be 6.62607015×10−34 when expressed in the unit J⋅s, which is equal to kg⋅m2⋅s−1, where the metre and the second are defined in terms of c and ΔνCs.”
One kilogram is 0.001 tonne. It is also divided into 100 decagrams and 1000 grams.
You know some facts about the kilogram, so now we can move on to the pound. The pound is also a unit of mass. It is worth highlighting that there is more than one kind of pound. What does that mean? For example, there is also the pound-force. In this article we are going to focus only on the pound-mass.
The pound is used in the Imperial and United States customary systems of measurement. Of course, this unit is also used in other systems. The symbol of this unit is lb.
There is no descriptive definition of the international avoirdupois pound; it is defined as exactly 0.45359237 kilograms. One avoirdupois pound can be divided into 16 avoirdupois ounces or 7000 grains.
The avoirdupois pound was enforced in the Weights and Measures Act 1963. The definition of the pound was written in first section of this act: “The yard or the metre shall be the unit of measurement of length and the pound or the kilogram shall be the unit of measurement of mass by reference to which any measurement involving a measurement of length or mass shall be made in the United Kingdom; and- (a) the yard shall be 0.9144 metre exactly; (b) the pound shall be 0.45359237 kilogram exactly.”
The theoretical part is already behind us. In this part we are going to tell you how much 2.59 kg is in lbs. Now you know that 2.59 kg = x lbs. So it is time to get the answer. Have a look:
2.59 kilogram = 5.7099725858 pounds.
That is the exact result of converting 2.59 kg to pounds. You can also round off the result. After rounding, the result is: 2.59 kg = 5.71 lbs (approximately).
You know how many lbs 2.59 kg is, so let's see how many kg 2.59 lbs is: 2.59 pounds = 2.59 * 0.45359237 = 1.1748042383 kilograms.
Of course, this time you can also round it off. After rounding, the outcome is: 2.59 lb = 1.17 kg (approximately).
We also want to show you 2.59 kg to how many pounds and 2.59 pound how many kg outcomes in tables. Look:
We are going to begin with a table for how much is 2.59 kg equal to pound.
|Kilograms (kg)||Pounds (lb)||Pounds (lbs) (rounded off to two decimal places)|
|2.59||5.7099725858||5.71|
|Pounds||Kilograms||Kilograms (rounded off to two decimal places)|
|2.59||1.1748042383||1.17|
Now you learned how many 2.59 kg to lbs and how many kilograms 2.59 pound, so we can move on to the 2.59 kg to lbs formula.
To convert 2.59 kg to US lbs you need a formula. We are going to show you the formula in two different versions. Let's start with the first one:
Number of kilograms * 2.20462262 = the result in pounds (here: 2.59 * 2.20462262 = 5.7099725858)
The first version of the formula will give you the most exact outcome. In some cases even the smallest difference can be significant. So if you need an accurate result, this version of the formula will be the best option to find out how many pounds are equivalent to 2.59 kilograms.
So let’s go to the second version of a formula, which also enables calculations to learn how much 2.59 kilogram in pounds.
The second formula is down below, have a look:
Number of kilograms * 2.2 = the outcome in pounds
As you can see, this formula is simpler. It can be the best choice if you need to convert 2.59 kilograms to pounds quickly, for example, while shopping. Just remember that the final outcome will not be as exact.
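If you prefer to see the two formulas (and the reverse, pound-to-kilogram, conversion) as code, here is a minimal Python sketch; the function and constant names are ours, not part of this article or of any calculator it mentions:

```python
# Conversion factors quoted in this article.
KG_TO_LB_EXACT = 2.20462262   # first (more exact) formula
KG_TO_LB_QUICK = 2.2          # second (quick, less exact) formula
LB_TO_KG = 0.45359237         # exact definition of the avoirdupois pound

def kg_to_lb(kilograms, quick=False):
    """Convert kilograms to pounds using the exact or the quick factor."""
    return kilograms * (KG_TO_LB_QUICK if quick else KG_TO_LB_EXACT)

def lb_to_kg(pounds):
    """Convert pounds to kilograms."""
    return pounds * LB_TO_KG

print(kg_to_lb(2.59))              # about 5.7099725858
print(kg_to_lb(2.59, quick=True))  # about 5.698
print(lb_to_kg(2.59))              # about 1.1748042383
```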
Now we want to show you how to use these two formulas in practice. But before we convert 2.59 kg to lbs, we want to show you an easier way to find out how many lbs 2.59 kg is without any effort.
An easier way to check what is 2.59 kilogram equal to in pounds is to use 2.59 kg lbs calculator. What is a kg to lb converter?
The calculator is an application. The converter is based on the first formula which we showed you above. Thanks to the 2.59 kg to pound calculator you can effortlessly convert 2.59 kg to lbs. You only have to enter the number of kilograms you want to convert and click the 'calculate' button. You will get the result in a second.
So let’s try to convert 2.59 kg into lbs using 2.59 kg vs pound calculator. We entered 2.59 as an amount of kilograms. It is the outcome: 2.59 kilogram = 5.7099725858 pounds.
As you see, this 2.59 kg vs lbs converter is easy to use.
Now we can go to our chief issue - how to convert 2.59 kilograms to pounds on your own.
We will start the 2.59 kilograms to pounds conversion with the first formula to get the most accurate result. A quick reminder of the formula:
Number of kilograms * 2.20462262 = the result in pounds
So what do you have to do to check how many pounds are equal to 2.59 kilograms? Just multiply the number of kilograms, in this case 2.59, by 2.20462262. It equals 5.7099725858. So 2.59 kilograms is equal to 5.7099725858 pounds.
You can also round off this result, for example, to two decimal places. Then 2.59 kilograms = 5.71 pounds (approximately).
It is high time for an example from everyday life. Let's calculate 2.59 kg of gold in pounds. So how many lbs is 2.59 kg? Again, multiply 2.59 by 2.20462262. It equals 5.7099725858. So the equivalent of 2.59 kilograms in pounds, when it comes to gold, is 5.7099725858.
In this example you can also round off the result. This is the outcome after rounding off, in this case to one decimal place: 2.59 kilograms = 5.7 pounds (approximately).
Now we can move on to examples calculated using short formula.
Before we show you an example - a quick reminder of shorter formula:
Number of kilograms * 2.2 = the outcome in pounds
So how many lbs is 2.59 kg? Again, you need to multiply the number of kilograms, this time 2.59, by 2.2. Have a look: 2.59 * 2.2 = 5.698. So 2.59 kilograms is approximately 5.698 pounds.
Let's do another calculation using this version of the formula. Now convert something from everyday life, for example, 2.59 kg of strawberries to lbs.
So let's convert: 2.59 kilograms of strawberries * 2.2 = 5.698 pounds of strawberries. So 2.59 kg is approximately 5.698 pounds.
If you know how much is 2.59 kilogram weight in pounds and are able to calculate it using two different versions of a formula, we can move on. Now we want to show you all outcomes in tables.
We realize that results shown in tables are so much clearer for most of you. It is totally understandable, so we gathered all these outcomes in charts for your convenience. Due to this you can quickly compare 2.59 kg equivalent to lbs outcomes.
Let’s start with a 2.59 kg equals lbs table for the first version of a formula:
|Kilograms||Pounds||Pounds (after rounding off to two decimal places)|
|2.59||5.7099725858||5.71|
And now see the 2.59 kg to pound chart for the second formula:
|Kilograms||Pounds|
|2.59||5.698|
As you can see, after rounding off, when it comes to how much 2.59 kilograms equals in pounds, the outcomes are very close. The bigger the amount, the more significant the difference. Keep that in mind when you need to convert an amount bigger than 2.59 kilograms to pounds.
Now you know how to calculate 2.59 kilograms how much pounds but we want to show you something more. Do you want to know what it is? What about 2.59 kilogram to pounds and ounces conversion?
We want to show you how you can calculate it step by step. Let’s begin. How much is 2.59 kg in lbs and oz?
First things first - you need to multiply the number of kilograms, in this case 2.59, by 2.20462262. So 2.59 * 2.20462262 = 5.7099725858. One kilogram is 2.20462262 pounds.
The integer part is the number of pounds. So in this case there are 5 pounds.
To convert 2.59 kilograms to pounds and ounces, you need to multiply the fraction part by 16. So multiply 0.7099725858 by 16. It equals 11.3595613728 ounces.
So your result is 5 pounds and 11.3595613728 ounces. You can also round off the ounces, for example, to two decimal places. Then your result is 5 pounds and 11.36 ounces.
As you see, converting 2.59 kilograms into pounds and ounces is easy.
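If it helps, here is the same pounds-and-ounces split as a small Python sketch; the function name and structure are ours, not from the article:

```python
KG_TO_LB = 2.20462262  # factor used throughout this article

def kg_to_lb_oz(kilograms):
    """Split a mass in kilograms into whole pounds plus remaining ounces."""
    total_pounds = kilograms * KG_TO_LB
    whole_pounds = int(total_pounds)              # integer part -> pounds
    ounces = (total_pounds - whole_pounds) * 16   # fractional part * 16 -> ounces
    return whole_pounds, ounces

pounds, ounces = kg_to_lb_oz(2.59)
print(pounds, round(ounces, 2))  # 5 11.36
```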
The last calculation which we want to show you is the conversion of 2.59 foot pounds to kilogram meters. Both foot pounds and kilogram meters are units of work.
To convert foot pounds to kilogram meters you need another formula. Have a look:
Number of foot pounds * 0.13825495 = the result in kilogram meters
So to convert 2.59 foot pounds to kilogram meters you have to multiply 2.59 by 0.13825495. It equals 0.3580803205. So 2.59 foot pounds is 0.3580803205 kilogram meters.
You can also round off this result, for example, to two decimal places. Then 2.59 foot pounds is approximately 0.36 kilogram meters.
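The same calculation in Python, as a minimal sketch using the factor quoted above (the function name is ours):

```python
FT_LB_TO_KG_M = 0.13825495  # foot pound -> kilogram meter factor quoted above

def foot_pounds_to_kg_meters(foot_pounds):
    """Convert work in foot pounds to kilogram meters."""
    return foot_pounds * FT_LB_TO_KG_M

print(foot_pounds_to_kg_meters(2.59))            # about 0.3580803205
print(round(foot_pounds_to_kg_meters(2.59), 2))  # 0.36
```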
We hope that this calculation was as easy as 2.59 kilogram into pounds calculations.
This article was a huge compendium about kilogram, pound and 2.59 kg to lbs in conversion. Due to this calculation you learned 2.59 kilogram is equivalent to how many pounds.
We showed you not only how to make a calculation 2.59 kilogram to metric pounds but also two another conversions - to know how many 2.59 kg in pounds and ounces and how many 2.59 foot pounds to kilograms meters.
We also showed you another way to do the 2.59 kilograms to pounds conversion, that is, with the use of the 2.59 kg to pound converter. This will be the best option for those of you who do not like converting on your own at all, or who need to make 2.59 kg to lbs conversions in a quicker way.
We hope that now all of you can make 2.59 kilogram equal to how many pounds conversion - on your own or using our 2.59 kgs to pounds calculator.
Don’t wait! Let’s calculate 2.59 kilogram mass to pounds in the way you like.
Do you want to make a conversion other than 2.59 kilograms to pounds? For example, for 10 kilograms? Check our other articles! We guarantee that calculations for other numbers of kilograms are as simple as for 2.59 kilograms.
At the end, to summarize the topic of this article - that is, how much 2.59 kg is in pounds - we gathered answers to the most frequently asked questions. Here you have the most important information about how much 2.59 kg is equal to in lbs and how to convert 2.59 kg to lbs. It is down below.
What is the kilogram to pound conversion? It is a mathematical operation based on multiplying 2 numbers. Let’s see 2.59 kg to pound conversion formula . See it down below:
The number of kilograms * 2.20462262 = the result in pounds
See the result of the conversion of 2.59 kilogram to pounds. The correct result is 5.7099725858 pounds.
There is also another way to calculate how much 2.59 kilogram is equal to pounds with another, easier type of the formula. Have a look.
The number of kilograms * 2.2 = the result in pounds
So this time, how many lbs is 2.59 kg? The answer is 2.59 * 2.2 = 5.698 lb.
How to convert 2.59 kg to lbs in an easier way? You can also use the 2.59 kg to lbs converter , which will do all calculations for you and you will get a correct result .
|2.01 kg to lbs||=||4.43129|
|2.02 kg to lbs||=||4.45334|
|2.03 kg to lbs||=||4.47538|
|2.04 kg to lbs||=||4.49743|
|2.05 kg to lbs||=||4.51948|
|2.06 kg to lbs||=||4.54152|
|2.07 kg to lbs||=||4.56357|
|2.08 kg to lbs||=||4.58562|
|2.09 kg to lbs||=||4.60766|
|2.1 kg to lbs||=||4.62971|
|2.11 kg to lbs||=||4.65175|
|2.12 kg to lbs||=||4.67380|
|2.13 kg to lbs||=||4.69585|
|2.14 kg to lbs||=||4.71789|
|2.15 kg to lbs||=||4.73994|
|2.16 kg to lbs||=||4.76198|
|2.17 kg to lbs||=||4.78403|
|2.18 kg to lbs||=||4.80608|
|2.19 kg to lbs||=||4.82812|
|2.2 kg to lbs||=||4.85017|
|2.21 kg to lbs||=||4.87222|
|2.22 kg to lbs||=||4.89426|
|2.23 kg to lbs||=||4.91631|
|2.24 kg to lbs||=||4.93835|
|2.25 kg to lbs||=||4.96040|
|2.26 kg to lbs||=||4.98245|
|2.27 kg to lbs||=||5.00449|
|2.28 kg to lbs||=||5.02654|
|2.29 kg to lbs||=||5.04859|
|2.3 kg to lbs||=||5.07063|
|2.31 kg to lbs||=||5.09268|
|2.32 kg to lbs||=||5.11472|
|2.33 kg to lbs||=||5.13677|
|2.34 kg to lbs||=||5.15882|
|2.35 kg to lbs||=||5.18086|
|2.36 kg to lbs||=||5.20291|
|2.37 kg to lbs||=||5.22496|
|2.38 kg to lbs||=||5.24700|
|2.39 kg to lbs||=||5.26905|
|2.4 kg to lbs||=||5.29109|
|2.41 kg to lbs||=||5.31314|
|2.42 kg to lbs||=||5.33519|
|2.43 kg to lbs||=||5.35723|
|2.44 kg to lbs||=||5.37928|
|2.45 kg to lbs||=||5.40133|
|2.46 kg to lbs||=||5.42337|
|2.47 kg to lbs||=||5.44542|
|2.48 kg to lbs||=||5.46746|
|2.49 kg to lbs||=||5.48951|
|2.5 kg to lbs||=||5.51156|
|2.51 kg to lbs||=||5.53360|
|2.52 kg to lbs||=||5.55565|
|2.53 kg to lbs||=||5.57770|
|2.54 kg to lbs||=||5.59974|
|2.55 kg to lbs||=||5.62179|
|2.56 kg to lbs||=||5.64383|
|2.57 kg to lbs||=||5.66588|
|2.58 kg to lbs||=||5.68793|
|2.59 kg to lbs||=||5.70997|
|2.6 kg to lbs||=||5.73202|
|2.61 kg to lbs||=||5.75407|
|2.62 kg to lbs||=||5.77611|
|2.63 kg to lbs||=||5.79816|
|2.64 kg to lbs||=||5.82020|
|2.65 kg to lbs||=||5.84225|
|2.66 kg to lbs||=||5.86430|
|2.67 kg to lbs||=||5.88634|
|2.68 kg to lbs||=||5.90839|
|2.69 kg to lbs||=||5.93043|
|2.7 kg to lbs||=||5.95248|
|2.71 kg to lbs||=||5.97453|
|2.72 kg to lbs||=||5.99657|
|2.73 kg to lbs||=||6.01862|
|2.74 kg to lbs||=||6.04067|
|2.75 kg to lbs||=||6.06271|
|2.76 kg to lbs||=||6.08476|
|2.77 kg to lbs||=||6.10680|
|2.78 kg to lbs||=||6.12885|
|2.79 kg to lbs||=||6.15090|
|2.8 kg to lbs||=||6.17294|
|2.81 kg to lbs||=||6.19499|
|2.82 kg to lbs||=||6.21704|
|2.83 kg to lbs||=||6.23908|
|2.84 kg to lbs||=||6.26113|
|2.85 kg to lbs||=||6.28317|
|2.86 kg to lbs||=||6.30522|
|2.87 kg to lbs||=||6.32727|
|2.88 kg to lbs||=||6.34931|
|2.89 kg to lbs||=||6.37136|
|2.9 kg to lbs||=||6.39341|
|2.91 kg to lbs||=||6.41545|
|2.92 kg to lbs||=||6.43750|
|2.93 kg to lbs||=||6.45954|
|2.94 kg to lbs||=||6.48159|
|2.95 kg to lbs||=||6.50364|
|2.96 kg to lbs||=||6.52568|
|2.97 kg to lbs||=||6.54773|
|2.98 kg to lbs||=||6.56978|
|2.99 kg to lbs||=||6.59182|
|3 kg to lbs||=||6.61387| | <urn:uuid:8e451c36-2600-49ac-8b75-1b46196bf337> | CC-MAIN-2022-33 | https://howkgtolbs.com/convert/2.59-kg-to-lbs | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572192.79/warc/CC-MAIN-20220815145459-20220815175459-00097.warc.gz | en | 0.882006 | 4,871 | 3.1875 | 3 |
G Babu, S Ramachandra, U Garikipati, T Mahapatra, S Mahapatra, S Narayana, H Pant
infant mortality rate, neonatal mortality rate, prevalence, tribal areas
G Babu, S Ramachandra, U Garikipati, T Mahapatra, S Mahapatra, S Narayana, H Pant. Maternal Health Correlates Of Neonatal Deaths In A Tribal Area In India. The Internet Journal of Epidemiology. 2012 Volume 10 Number 2.
Globally, it is estimated that around 5 million newborn deaths occur each year, of which 98% occur in developing countries, with the majority in Asia and Africa.(1) India accounts for 30% of all neonatal deaths globally. The neonatal mortality rate is defined as the number of deaths during the first 28 completed days of life per 1000 live births in a given year or other specified time period.(2-4) India is a signatory to the Millennium Development Goals(5) and has national-level goals with respect to the reduction of the Infant Mortality Rate (IMR). Therefore, the country has to fulfill its commitment to reducing IMR as per the established goals.(6)
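For clarity, the rate definition in the preceding sentence can also be written as a formula (this is only a restatement of the definition above, not additional data from the study):

```latex
\[
\text{NMR} = \frac{\text{deaths during the first 28 completed days of life}}
                  {\text{live births in the same period}} \times 1000
\]
```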
According to the conceptual framework of Mosley and Chen (1984)(7) for understanding the proximate determinants of infant mortality, the proximate determinants are divided into five categories: maternal factors, environmental contamination, nutrient deficiency, injury, and personal illness control.
Several factors such as women’s status in the society, nutritional status at the time of conception, early child bearing, closely spaced pregnancies and harmful practices such as inadequate cord care, not keeping baby warm, discarding colostrum and feeding other foods contribute to this. The most common causes of neonatal deaths are prematurity (25%), infection (36%), birth asphyxia (23%) and neonatal tetanus (4%). In India, it is reported that time of seeking antenatal care by mothers is usually delayed owing to the lack of awareness and recognition of early signs of pregnancy and the need of antenatal care. The delay is usually aggravated by cultural, socio-demographic and logistic factors.(8) According to NFHS-III, it is evident that inappropriate newborn care practices are highly prevalent in India.(9) It is also known that in India, residents of remote tribal areas with poor transportation facilities suffer from inadequate access to health care facilities, poor quality of care and lack of knowledge regarding health services.(10)
It has long been established that health workers and the community have independent as well as collective roles in preventing neonatal deaths.(11-13) Albeit most women are uninformed and/or unaware of health facilities, there is abundant evidence that home-care strategy by health workers reduces neonatal mortality by more than a third and improves key maternal and newborn-care practices.(12, 13) In the areas with high burden of infant mortality, it is often reported that health workers at the preventive cadre lack or retain inadequate skills to meet the required standards of newborn care.(8, 14-17) On a dissimilar platform, educated or informed mothers may ensure better access to and have better chances of using health systems and consequently can have better health indicators.(18)
Globally, tribal people are in the minority and their health status is often neglected for several reasons. With 84.3 million people belonging to recognized tribal groups, India has almost half of the world's tribal population and contributes close to 1 million neonatal deaths annually.(19) Indian tribal people follow traditional norms, are socially and economically weaker and are conservative in nature, apart from being under-privileged.(19, 20) Habitually, they live in areas with scarce resources and are often deprived of medical facilities. In tribal areas, about 80% of deliveries occur at home and are attended by unskilled traditional birth attendants.(20) It is reported that tribal areas have a high neonatal mortality rate of around 43 per 1000, and that neonatal deaths contribute 65% of all infant deaths in those areas.(19-22)
As per the 2011 census report, Andhra Pradesh has 50.24 lakhs of tribal population spread across 23 districts.(22) In these areas, there are eight integrated tribal development agencies for the implementation of developmental programmes under the control of the Commissioner of Tribal Welfare.(23, 24) The tribal division of the Parvathipuram agency area has scattered and remote habitations, including endemic and epidemic-prone areas. This region is also influenced by Naxalism/Maoism.(25-27) The tribal population of Vizianagaram district has a low literacy rate and traditional lifestyles with varied cultural values and taboos.(19-22) Water contamination, poor sanitation, illiteracy, lack of personal hygiene and lack of awareness of endemic diseases are all reflected in poor health indicators.(13) It is vital to understand the socio-economic barriers, the proportion of skilled deliveries, accessibility to healthcare services and their determinants, as each of these has a role in improving neonatal health in tribal areas. In this study, we aimed to explore specific determinants deterring the provision of quality neonatal care services in Vizianagaram District in Andhra Pradesh, India. The current study also examined maternal factors, explicitly focusing on antenatal care and maternal health-seeking patterns in relation to neonatal health in tribal areas of Andhra Pradesh, India.
This community-based study was conducted in two phases. The first phase involved the use of qualitative methods (semi-structured and open-ended in-depth interviews) conducted in the local language, Telugu. This phase aided in obtaining relevant information from mothers who had delivered in the one year prior to the study. Information from the analysis of qualitative data was used to construct a questionnaire schedule which was administered in the subsequent quantitative phase, wherein a population-based survey was undertaken. Reported infant deaths were investigated through verbal autopsy. Additional information was also obtained from relatives. The verbal autopsy tool included identification particulars, a verbatim open-ended history, care-seeking behavior during the fatal illness, screening questions to identify the common causes of death, and a detailed history of age at death, time and place of death and major signs and symptoms of illness. Field work was undertaken over a two-month period (1st April, 2011 - 31st May, 2011).
Inclusion Criteria: All women of the reproductive age group (15-45 years) were eligible for inclusion in the study. The total population in the tribal areas of the district (study area) was 4,18,670. The sample setting includes 19 tribal primary health centres (PHC’s). Our sampling frame included mothers in the age group of 15-45 years in the tribal area of Vizianagaram District, Andhra Pradesh.
Sampling Technique: A multi-stage systematic random sampling was used to identify study participants. All nineteen tribal PHC’s were included. One sub-centre was randomly selected in each PHC area. This was done by preparing the list of all sub-centres and selecting one sub-centre randomly
Written informed consent was obtained from all the participants by trained interviewers, and all data was recorded in a specially designed format, which was administered in the local vernacular. The questionnaire was prepared in English, translated into Telugu and back translated to English independently for standardization. The questionnaire-schedule was used to collect detailed information regarding history of ante natal care (ANC), intrapartum and postpartum care, contraceptive methods, infant morbidity and mortality and on health seeking behavior of mothers. All collected data was coded and entered into a specially designed database. Ethical clearance was obtained from institutional ethical committee, IIPH, Hyderabad
Statistical Analysis: After data collection and data cleaning, analysis was done using Stata SE version 10.1 for Macintosh (Stata Corporation, TX, USA) and MS Excel (Microsoft Corporation, USA). Descriptive analyses, including proportions for all the variables included in the study, were conducted. Further, specific models were run to check crude measures of association.
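As a rough illustration of the crude measures of association reported in the results (for example, the odds ratios with 95% confidence intervals comparing delivery attendants), the following sketch computes a crude odds ratio with a Woolf-type confidence interval from a 2x2 table. The counts shown are made up for illustration only, and this is not the analysis code used in the study.

```python
import math

def crude_odds_ratio(a, b, c, d, z=1.96):
    """Crude odds ratio and 95% CI (Woolf/log method) for a 2x2 table:
    a = exposed with outcome, b = exposed without outcome,
    c = unexposed with outcome, d = unexposed without outcome."""
    odds_ratio = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(odds_ratio) - z * se_log_or)
    upper = math.exp(math.log(odds_ratio) + z * se_log_or)
    return odds_ratio, (lower, upper)

# Illustrative counts only, not data from this study.
or_, ci = crude_odds_ratio(30, 170, 20, 180)
print(round(or_, 2), tuple(round(x, 2) for x in ci))
```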
Around 74% of women utilized a public health facility, but 10% of women did not avail themselves of any ANC services. Among the 74% of women who utilized a public health facility, only 32% of mothers had 2 ANC visits and 23% had 1 ANC visit. Further, 87% received ANC care at home, of which 63% of women were given ANC by an ANM and 23% by ASHA workers.
The provision of antenatal care services was good, with high coverage of tetanus toxoid (97%), better inclusion of diagnostics such as Hb and blood pressure readings (80%) and excellent provision of IFA tablets (93%). A very high proportion of pregnant women (83-91%) were advised to deliver at public health institutions (Table 2). 63.9% of mothers had received a health checkup in the last trimester by an Anganwadi Worker (AWW) (Fig.1), 23.9% were examined by an ANM, and 12% by an ASHA (Fig.2).
Our results suggest that 56% of women delivered at home, 38% at a public health facility and 5.2% could avail themselves of a private facility. It was reported that 55% of deliveries followed safe delivery practice by using a clean blade for cord cutting; other practices were not known or implemented by TBAs in tribal areas (Fig.3).
Qualified doctors conducted only 10% of deliveries, and 29% were conducted by ANMs. 45% of women were not willing to go to a health facility and/or were not allowed by the family, while 28% cited inaccessibility as the reason. Mothers were largely unaware of the use of the Disposable Delivery Kit (DDK); only 4% knew that a DDK was used. Also, 32% of women did not know the importance of keeping the baby warm after birth, while 55% of mothers knew that a new blade was used (Table 3). The results indicated that although antenatal services were good in Vizianagaram (Table 1) and reasonably delivered by local health authorities (Table 2), the majority of women nevertheless delivered at home, often in an unhygienic environment (Table 3).
We asked mothers what they would do when the baby was sick and they wanted to get advice or treatment (Table 4). The majority of mothers answered in the affirmative about having the knowledge to recognize the symptoms of a sick neonate. On exploring whether they would take a sick baby to hospital, several important determinants emerged. Interviewers coded the answers in this section as a "significant problem" if the mother said this was the only problem that impeded her from taking the sick baby to hospital and it could not be overcome easily by her efforts alone. The answer was coded as a "minor problem" if it sometimes posed a problem but could be overcome by the mother without any challenges. It was coded as "no problem" if it was not a challenge at all for the mother to handle.
Note: "Does not arise" includes services not available, no access, not aware, and others.
As an example, we asked questions regarding the mother getting permission to visit the hospital. It would be coded as a significant problem if the mother was never permitted by her husband or mother-in-law to go to the hospital and she could not convince them at all. It would be a minor problem if there was opposition, but they agreed after an explanation without much effort. Other such determinants were the health facility being very distant, concern that no female health provider would be present, and no drugs being available. The mother's smoking status and alcohol use were also significantly associated with infant deaths (Table 4).
Age of mother, total number of women in the house, total number of children the mother has, and years since marriage were significantly associated with infant deaths (Table 5).
Though not statistically significant, the relative odds of developing symptoms for babies delivered by a local Dai were almost twice as high (OR: 1.64, 95% CI: 0.7565-2.465) compared to those delivered by gynecologists. Mothers who did not complete at least secondary education were at higher odds (OR: 1.60, 95% CI: 0.70-3.69) of having infant deaths compared to mothers who had completed at least secondary education (results not shown). This result was not found to be statistically significant.
It is estimated that about 80% of deliveries occur at home in tribal areas.(19) Analogously, our study reported that 55% of women endure the arduous process of childbirth at home. The neonatal deaths in tribal areas are mainly due to severe infections, preterm births, birth asphyxia and neonatal tetanus. It is estimated that nearly three fourths of neonatal deaths occur within the first week, mostly during the first 24 hours, accounting for the early Neonatal Mortality Rate (NMR).(28) Also, neonatal mortality constitutes two thirds of the Infant Mortality Rate (IMR) and half of the under-five mortality rate.(28) It is very important to study the factors operating around the time of birth, including the first 24 hours. Lack of skilled care at birth may lead to an increase in the neonatal mortality rate; contributing factors include unhygienic delivery practices, unhygienic newborn care practices, excessive invasive procedures and lack of essential preventive newborn care. There is a need to focus on early neonatal mortality, and the states need to effectively implement early newborn care by upgrading the skills of health workers and encouraging community participation.(19)
The records of the District Medical and Health Officer (DM&HO), Vizianagaram show that about 90% of deliveries in this district occur at home, and the district has a reported IMR of 22 per 1000. The data of the present study showed that 57% of deliveries occurred at home and at least 70%-80% of infants were exposed to infection, diarrhea and other illnesses. Also, our results from an earlier paper indicate that the IMR in Vizianagaram is 239 per 1000,(29) ten times more than that reported by the district. We have reported an IMR of 230 infant deaths per 1000 live births, discussing all the limitations and caveats, in our earlier paper. Even assuming that there might be some overestimation, there are several important aspects of maternal factors which deserve attention.(29)
Firstly, we conducted a cross-sectional study and hence do not have any temporality attached to the study design. Hence, one cannot claim or verify any causal inferences from the results of our study. Secondly, we consider that there might be a possibility of misclassification of infant deaths due to many reasons such as interviewer bias, migration and others. The study might additionally be vulnerable to selection bias, residual confounding by unknown factors and lack of generalizability. Hence the validity of the data from the field might either underestimate or overestimate the IMR results. Third, the selection of the sample was such that only hilly and remote tribal areas were included in the study, while most of the plain tribal and other areas of the district were not taken into account. This will definitely provide a different estimate than the IMR reported by the district, and the estimate was higher in our study. We are confident that our estimates are representative of the true IMR burden because of the rigorous care taken during methodology and implementation to represent the true source population for the study question.
On the other hand, it can be reasoned that there might be specific reasons for under-reporting of neonatal deaths by the district authorities. Vizianagaram District has a tribal division which lies geographically on the border of Andhra Pradesh and Orissa states.(17, 18) There is migration of the tribal population from Orissa to Vizianagaram district and vice versa. This could be one reason for lacunae in the reporting of infant deaths.
Most tribal girls get married at an early age, between 15 and 20 years, resulting in early and teenage pregnancies coupled with a lack of knowledge of safe motherhood practices. We infer that early marriage, lack of awareness and lack of services have led to an increased NMR in this tribal area. Similar findings were obtained in the study conducted by Kushwala P et al,(11) wherein they reported a higher incidence of LBW, neonatal morbidity and mortality associated with adolescent and teenage pregnancies.(11) We explored our data and found that the adoption of both permanent and temporary contraceptive methods was very low among tribal women due to cultural and social barriers. This might result in high fertility and an increase in the number of births. High fertility and a low couple protection rate are associated with high IMR and NMR.(30, 31)
We argue that encouraging institutional deliveries is imperative and should return rich dividends in terms of reducing IMR in other areas of the country. However, there are several contextual constraints which limit this objective, as in the case of our study population. We discuss three important aspects of understanding why tribal women did not prefer to deliver at hospitals even when provided with free transportation and assistance.
First, as ANC coverage is high and most pregnant mothers had health worker contact, it can only be assumed that the communication skills of workers in convincing the husbands, mothers-in-law and the pregnant mothers to accept institutional delivery were not sufficient. Also, contact by health workers in the last trimester of pregnancy was low, due to which timely referral of some high-risk pregnancies might not have taken place.
Second, 57% of births are still occurring at home in the study area. It is reported that two thirds of infant deaths occur in the immediate neonatal period and hence efforts at reduction of NMR should be coupled with efforts that improve maternal care during pregnancy, delivery and the postnatal period. The important interventions recommended are practicing clean delivery, basic newborn resuscitation when needed, prevention of hypothermia, early and exclusive breast-feeding and tetanus vaccination. The interventions could be most effective when deliveries are supervised.(4, 32, 33) Our results are in conformity with other research in India.(28, 34)
Third, the reach of Anganwadi workers (AWW) was high and they could play a greater role in communicating and convincing the family members to accept institutional delivery.
In Conclusion, high prevalence of home deliveries and inaccessibility of neonatal care in tribal area indicate there is a need to develop and promote home based neonatal care practices. The ASHA/TBA are the anchor workers at village level. By improving the skills of these health workers at community level, a lot of improvement can be achieved in reducing IMR and NMR. There should be separate plans of implementing programs for tribal and non-tribal areas. The local cultural values and taboos need to be considered while planning for tribal areas. The review of MCH services should not be based on overall condition of the district. Region specific strategies are to be planned and implemented.
Through this study, we have focused on simple factors that can be targeted through interventions to reduce MMR and IMR in tribal areas of Vizianagarm district. In summary, this study revealed a huge burden of neonatal ill health. A key challenge for effective implementation of neonatal intervention packages is developing and sustaining constructive linkages between families, communities and health facilities through engaging existing cadres of community health workers in neonatal health. There are proven models that are cost efficient and have shown good impact in implementing evidence-based interventions in tribal areas.(7, 8, 11, 15-18, 28, 30, 32, 34) We recommend that Government of Andhra Pradesh adapt such culturally appropriate innovative interventions to improve neonatal health in the state. It is crucial for the development of effective regional specific strategies to save newborn lives.
An integrated package of antenatal, intra-natal & postnatal services that reduce newborn deaths should be implemented. This is the very basis on which, Integrated Management of Neonatal Childhood Illnesses (IMNCI) was introduced in India. However, the progress of IMNCI is very slow at least in the tribal areas of the Vizianagaram district. To aid the revival of an efficient program, we have outlined some points, which can be used as pointers towards improving provision of neonatal health services in tribal areas. There can be renewed inter-sectoral coordination for comprehensive approach in tribal areas for better awareness and information. Demand from community to receive quality health services from public health facility should be encouraged by way of creating awareness regarding national programs, responsibilities and available resources. The Government should ensure provision of high quality training and supportive supervision to TBA, front line health workers and supervisors regarding upgrading their skills. Better communication trainings to be given to AWW and ASHA in gathering acceptance of tribal families for institutional deliveries (and thereby for effective newborn care in the first 48 hours). It might be helpful to also mandate home visit to newborns within 24 hrs by ASHA/Community Health Worker (CHW)
In our study, both the proportion of pregnant women who delivered at home and the proportion having ANC checkups were very high. This indicates that there are several opportunities to convert antenatal visits into institutional deliveries. In the absence of intensive interpersonal communication strategies, it is high time that policy makers reconsider improving delivery services at home. Increased visits by ANMs to homes and training ASHAs to facilitate normal deliveries are some of the important options to be considered, at least in tribal areas.
We have conducted an important study in an inaccessible tribal area. The results of this study are applicable only locally, yet they may have an important influence on policy makers to revisit the issue of managing deliveries in tribal and other such areas where institutional deliveries are low.(28, 35, 36) Currently, while appropriate emphasis is being given to the promotion of institutional deliveries under NRHM, our study underlines the importance of not neglecting safe home deliveries. Even with greater momentum in terms of resource allocation and reviews by the central and state Governments, it is evident that institutional deliveries are still in a state of transition towards uptake and improvement. There are several reasons for the varied results, including non-availability of doctors and equipment. It might be important that during this transition phase, adequate attention is also paid to creating human resources (Skilled Birth Attendants) who can be helpful in decreasing IMR. NRHM should consider allocation of resources for building the capacity of birth attendants.
Future studies done with rigorous epidemiologic methods and new public health programs can warrant community participation and family centered approach while planning health services in such inaccessible areas.(8, 19, 28, 36-38) It is also equally imperative that adolescent girl child and community gets better education regarding safe motherhood practices.
We thank Public Health Foundation of India (PHFI) for institutional support offered to authors for carrying out this work. We thank K Shanth Kumar for technical check and help with manuscript completion. | <urn:uuid:bd856b17-1c43-47b9-8fde-8dfeb49e85df> | CC-MAIN-2022-33 | https://ispub.com/IJE/10/2/14442 | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571719.48/warc/CC-MAIN-20220812140019-20220812170019-00294.warc.gz | en | 0.960455 | 4,746 | 2.671875 | 3 |
Even 2,000 years later, the greatness of ancient Rome fires people's imaginations. The Roman Empire's impressive ruins dot the landscape of Europe and the Mediterranean with roads, aqueducts and amphitheaters. By around A.D. 180, the city of Rome likely became the first city of a million people.
Within this City of Seven Hills, the famous Colosseum housed up to 50,000 spectators for sporting events. And possibly a quarter million fans watched chariot races at the nearby Circus Maximus.
In a few centuries this would come to an end. The official date for Rome's fall is A.D. 476, when the Germanic Chief Odoacer made himself king after deposing the last Roman emperor, Romulus Augustulus. For centuries historians have analyzed Rome's past to explain why such an enormous empire of such civilization and wealth collapsed into primitive barbarism.
The story of Rome's fall isn't just a history lesson. It's important for us to understand today. Could the same forces that turned Rome into ruins also take down Britain, which not long ago ruled a fourth of the world? Or what about America, which is still the world's leading military and economic power?
If American and British citizens think they're invincible, they're in as bad a place as the Romans were when their empire reached its peak. Today, the same forces that helped to destroy Rome are undermining America and Britain. Can they learn from the past so they don't repeat it?
Creeping government control over citizens' lives
What do America and Britain have in common with ancient Rome? One factor is the way government expands its role to expand its control over the lives of citizens.
During the centuries after the first Roman emperor Augustus (who reigned from 27 B.C. to A.D. 14), the empire became more heavily regulated. Emperor Diocletian (A.D. 284-305) supported using coercion to finance legions, pay the civil bureaucrats and support a large, imposing palace court.
In A.D. 332, Emperor Constantine helped to lay the foundation for medieval serfdom by binding farmers to the soil. Finishing the process that Diocletian began, Constantine ordered the sons of farmers to become farmers, the sons of soldiers to become soldiers, the sons of bakers to become bakers, and so on. The members of town councils couldn't quit their positions. Often they had to make up for shortfalls in the collections of local taxes out of their own pockets. Individuals couldn't change occupations or even leave their place of birth.
Over time, this expansion of government control and regulation turned the empire into a type of prison for tens of millions of its citizens. The already-high taxes roughly doubled in the 50 years after Diocletian.
Of course, lack of freedom in the English- speaking world today isn't that extreme. But many of the trends over the past 100 years or so are ominous.
Consider how government has grown progressively bigger and more powerful. One way to measure this is by looking at government expenditures as a percentage of gross domestic product (GDP). For the United States, in less than a century this ratio quadrupled from under 9 percent in 1913 to over 40 percent in 2010. Such numbers hold serious implications for the future of Western democracies. Freedom could be damaged by the fact that lawmakers are letting regulatory bodies make law with little or no oversight.
Note an example from 1932. A British Parliamentary committee found that Parliament delegated law-making authority because "many of the laws affect people's lives so closely that elasticity [i.e., arbitrary power] is essential" (quoted by F.A. Hayek, The Road to Serfdom, 2007, p. 107).
Arbitrary power is essentially unrestricted legislative authority. Think about the long-standing trend of more and more laws that are too complex for most to understand. In America, that can be measured in part by the number of pages of regulations issued in the Federal Register annually and the size of the IRS income tax code.
In recent years the Federal Register—a compilation of federal government regulations—has grown to around 80,000 pages. The IRS tax code includes some 3.4 million words and, according to its own documentation, forces American taxpayers and businesses to spend about 7.6 billion hours each year complying with its filing requirements—the equivalent of almost 4 million full-time jobs.
As any American who has traveled through an airport since the 9/11 terror attacks has seen, times of crisis can lead to vast expansion of government control over the lives of citizens.
What if another disastrous national security or economic crisis hits America or Britain? History shows that such crises could be followed by a headlong descent into societal regimentation along the lines of Mussolini's Italy or Hitler's Germany. No one should assume what author Sinclair Lewis gave as the title of his 1935 novel about a fascist takeover of America: It Can't Happen Here.
Destroying personal wealth by currency inflation
Inflation occurs when governments dilute the money supply by creating more money, typically to finance more government spending. With more dollars (or pounds, or euros) chasing the same amount of products, prices on those products naturally rise.
Like many modern politicians frustrated by inflation, Diocletian tried to prevent prices from rising. The Law of Maximum Prices (A.D. 310) threatened death penalties against people who charged too much for food.
However, the Roman government's own decisions had been the primary cause of rising prices. The empire systematically devalued silver coinage for decades, since government expenses chronically exceeded government income. From the time of Augustus to Diocletian, the denarius (Roman currency) fell from being 100 percent silver to only 5 percent silver. Emperor Marcus Aurelius (A.D. 161-180) alone knocked down its value by 25 percent.
We see that same pattern at work today when governments run huge and inflated budget deficits and "print money" to finance the added debt.
In recent years the Federal Reserve initiated three programs of "quantitative easing" (QE1, QE2 and QE3) designed to stimulate the U.S. economy. As a result, from 2008 to 2012 America's central bank hiked the money supply by 61 percent, and the "monetary base" by more than 200 percent. QE3, announced in September 2012, in effect creates out of thin air $40 billion a month to inject into the U.S. economy. The program is open-ended, meaning it will continue indefinitely.
These increases will lead to future inflation, meaning higher prices for everything. America's federal government has run up over $5 trillion of deficits (about 9 percent of its annual GDP) in its four most recent fiscal years—more than $4 billion per day. Its total debt passed $16 trillion in 2012 and now exceeds the nation's total GDP.
Great Britain's deficits are similarly ugly, despite the commitment of its ruling government coalition to austerity. Recently the country's deficit was the third highest in the European Union (at 10.4 percent in 2010), only slightly better than that of shell-shocked Greece. Excluding the bank bailouts (which more than double the final figure), Britain's public sector debt escalated from 37 percent in 2007 to 63 percent of its GDP in 2012.
Any government that recklessly follows such economic principles while unloading its debt on the international bond market should heed Proverbs 22:7 Proverbs 22:7The rich rules over the poor, and the borrower is servant to the lender.
American King James Version×: "The borrower is servant to the lender." Greece is already learning the hard truth of this text. America and Britain will too, if they don't quickly change course.
Growing government by increasing taxation
Over the centuries, Rome imposed an increasingly heavy tax burden on its citizens. This was to pay for growing cost and welfare measures, like entertaining the city-based population.
The biggest governmental expense by far was paying for the army, which doubled in size from A.D. 96 to 180. Even long before in the closing years of the Republic, Julius Caesar found that 320,000 people were on the list to receive free grain every month. Augustus was able to get the number down to 200,000 during his rule. Yet this was still a huge drain on Rome for decades afterward.
It also wasn't cheap to supply games for the Roman mob. Just imagine the scale of the type of entertainment provided for citizens. For example, when Emperor Trajan in A.D. 107 celebrated conquering Dacia (mainly Romania today), 10,000 gladiators fought. About 11,000 animals died in the gory spectacle. When Marcus Aurelius ruled, enormous amounts of money were spent for both free games and for the daily allowance of pork, oil and bread given to the capital city's poor residents. His gifts would equal more than $1,000 per person today. He provided free spectacles 135 days a year.
But all this liberal giving had its downside. In A.D. 167, he sold his palace's furniture to help pay for wars against the barbarians and Persians. It was a lot like the tax revolt that King Solomon's son Rehoboam experienced in the Bible, which cost him most of his kingdom (1 Kings 12:3-19 1 Kings 12:3-19 That they sent and called him. And Jeroboam and all the congregation of Israel came, and spoke to Rehoboam, saying,
Your father made our yoke grievous: now therefore make you the grievous service of your father, and his heavy yoke which he put on us, lighter, and we will serve you.
And he said to them, Depart yet for three days, then come again to me. And the people departed.
And king Rehoboam consulted with the old men, that stood before Solomon his father while he yet lived, and said, How do you advise that I may answer this people?
And they spoke to him, saying, If you will be a servant to this people this day, and will serve them, and answer them, and speak good words to them, then they will be your servants for ever.
But he forsook the counsel of the old men, which they had given him, and consulted with the young men that were grown up with him, and which stood before him:
And he said to them, What counsel give you that we may answer this people, who have spoken to me, saying, Make the yoke which your father did put on us lighter?
And the young men that were grown up with him spoke to him, saying, Thus shall you speak to this people that spoke to you, saying, Your father made our yoke heavy, but make you it lighter to us; thus shall you say to them, My little finger shall be thicker than my father's loins.
And now whereas my father did lade you with a heavy yoke, I will add to your yoke: my father has chastised you with whips, but I will chastise you with scorpions.
So Jeroboam and all the people came to Rehoboam the third day, as the king had appointed, saying, Come to me again the third day.
And the king answered the people roughly, and forsook the old men's counsel that they gave him;
And spoke to them after the counsel of the young men, saying, My father made your yoke heavy, and I will add to your yoke: my father also chastised you with whips, but I will chastise you with scorpions.
Why the king listened not to the people; for the cause was from the LORD, that he might perform his saying, which the LORD spoke by Ahijah the Shilonite to Jeroboam the son of Nebat.
So when all Israel saw that the king listened not to them, the people answered the king, saying, What portion have we in David? neither have we inheritance in the son of Jesse: to your tents, O Israel: now see to your own house, David. So Israel departed to their tents.
But as for the children of Israel which dwelled in the cities of Judah, Rehoboam reigned over them.
Then king Rehoboam sent Adoram, who was over the tribute; and all Israel stoned him with stones, that he died. Therefore king Rehoboam made speed to get him up to his chariot, to flee to Jerusalem.
So Israel rebelled against the house of David to this day.
American King James Version×).
Popular support for Rome fell as taxes rose. Between the third and fifth centuries peasants fought back against tax collectors and judges in areas that are now France and Spain. Some of them even found that being ruled by barbarians or leaving the empire was better than living with Rome's harsh tax collectors.
The biggest material reason that Rome fell was that its economy was too weak. It was a low-income agricultural economy, and it couldn't support the armies needed to keep out the barbarians.
Compare Rome's disastrous economic experience with America's federal government spending. The Pentagon's budget more than doubled in only 10 years. It went from under $305 billion in 2001 to over $693 billion in 2010 while the nation fought two major wars against Islamic extremists in Iraq and Afghanistan. At the same time, the cost of Social Security, Medicare, Medicaid and other income support programs almost doubled from $1.07 trillion to $2.11 trillion.
As America's population gets older and the Baby Boom generation retires, these big expenses will only grow larger. The federal government's unfunded liabilities (unpaid promises of future benefits) were estimated at $61.6 trillion in 2011, which is about four times the annual GDP. That's $528,000 per household! And some figure the liabilities to be much higher. Think about what this means for your future. Can you really expect to receive everything you were promised?
Low birthrates lead to collapse, not prosperity
Let's look at another way that the United States and Britain are like ancient Rome. From the mid-200s A.D. onward, Rome's population began to drop. Disease, barbarian invasions, wars and economic decline in the second, third and later centuries all contributed to the fall of the empire. Even worse, the fact that slavery was institutionalized meant that the slaves didn't want to have children. After all, why bring children into a world where they'll know only harsh slavery? As Roman laws and taxes turned many free people into bitter, apathetic slaves to their state, the birthrate among the common people went down as well. As Rome's educated upper class stopped having many children, the empire's high culture decayed.
Historian W.H. McNeil, in The Rise of the West, explained that "the biological suicide of the Roman upper classes" weakened "the traditions of classical civilization" (1991, p. 328). Unlike their Germanic neighbors outside the empire, the Romans limited family size (resulting in the practice of infanticide). Instead they invested more in educating and raising their surviving children. The illiterate Germans chose to have many children. Even in rich families, though, they treated them with benign neglect. This difference helped the Germanic peoples to overwhelm Rome by sheer numbers.
Europe, and the United States to a lesser extent, is facing a similar problem today. High birthrates and less desire to assimilate into European cultures by immigrants signal an ominous trend. Secular people, no matter what background, have fewer children than religious people. So if the trend continues, the future belongs to the staunchly religious.
Fracturing families through divorce
One cause of the low birthrate for Rome's elite, which worried the first emperor Augustus, was their high divorce rate. All a husband needed to do to legally divorce his wife was to say three times, "Go home." By 55 B.C., a Roman wife could divorce her husband almost as easily.
In the first century, the philosopher and playwright Seneca described how Roman upper-class women regarded their marriages: "They divorce in order to re-marry. They marry in order to divorce." The satirist Martial fired one of his pointed short poems at a woman who married for the 10th time. He accurately labeled it legalized adultery.
Homosexual behavior was so widespread that many Roman writers, like "the arbiter of elegance" Petronius, the gossipy historian Suetonius, and Martial, assumed all Roman men were bisexual. The fact that they often engaged in such behavior reduced the birthrate even more. It's obvious that high divorce rates, lower birth rates and gay subcultures aren't new social innovations. It's just picking up where pagan Rome left off.
America's no-fault divorce laws, a product of the 1960s' "Sexual Revolution," caused the nation's divorce rate to explode. It became one of the worst for any major country (3.2 per 1,000 in population per year). Britain's rate isn't far behind (2.9 per 1,000). What socially liberal people regard as "forward-looking social legislation" often just resembles a failed ancient pre-Christian past.
Immigration changes society
What happened when Rome's population declined? In North Africa, one estimate found that a third of the land was no longer cultivated. As farmland was abandoned, tax receipts fell. To recruit enough soldiers for its armies and to till its empty fields, the imperial government resorted to immigration.
That's the same solution Europe has resorted to in more recent decades. Barbarian allies of Rome along the empire's northern frontier and elsewhere were enticed into military service through land grants and offers of citizenship. Even by A.D. 180, according to historian W.G. Hardy, a major part of the Roman army was made up of foreigners and semi-civilized tribesmen.
The legions were increasingly filled with non-Romans. As a result, when the barbarian Vandals invaded North Africa, the Roman governor protected the city of Hippo there with Gothic mercenaries. The local Roman population provided little help. Since many thought the barbarians were better or no worse than the Roman tax collectors and officials, in a lot of cases they didn't even want to preserve the empire.
A growing culture of corruption
Let's consider deeper spiritual, religious and philosophical reasons for Rome's decline and then ask ourselves if America and Britain are experiencing the same things today.
The satirist Juvenal famously painted the average Roman as only caring about bread and circuses (i.e., athletic contests). Today, how many Americans, Britons, Australians, Canadians and New Zealanders are just as content to sit and be entertained, heedless of the world's gathering storm so long as they have their chips, beer and TV? Empty desire for material things dulls our spiritual senses. Petronius mocked the rich people of ancient Rome for obsessing over luxuries and wealth.
Especially throughout the empire's first two centuries, the worship of material things and overemphasis on enjoying luxuries characterized the lifestyle of the rich. During huge, extended banquets, the rich Romans would vomit so they could keep gorging themselves. Seneca described them by saying, "They vomit so that they may eat, and eat so that they may vomit."
It's not that different in the United States and Britain today. Millions give themselves up to sexually lawless and materialistic lives. They don't care about God's law and spiritual principles. The apostle Paul condemned materialism and sexual sins in
1 Corinthians 6:13 1 Corinthians 6:13Meats for the belly, and the belly for meats: but God shall destroy both it and them. Now the body is not for fornication, but for the Lord; and the Lord for the body.
American King James Version×: "Foods for the stomach and the stomach for foods, but God will destroy both it and them. Now the body is not for sexual immorality but for the Lord, and the Lord for the body."
Each person's religious and philosophical worldviews have a major impact on how they deal with the pressures of life. Pessimism, materialism and hedonism start with anti-religious skepticism. Like so many of today's intellectuals, ancient pagan Rome's scholars had no infinite God or way to relate their lives to having true meaning or an ultimate purpose.
By contrast, the Bible's revelation gives people an integrated view of life. Faith and reason, purpose and pleasure, the infinite and the finite, general universal values and particular human lives are all reconciled. The Bible's total-life knowledge and values bring meaning to individual lives.
The most important things can only come to humanity by divine revelation. The Bible's worldview brings meaning and purpose to human life that simply can't be known by human reason or emotions alone. But as this general heritage of the Protestant Reformation has been assaulted for over two centuries, a growing crisis of civilization is brewing.
This has ominous implications for the survival of Western culture. It goes even deeper than its economic, social and demographic problems. According to famed sociologist Daniel Bell of Harvard University, "The lack of a rooted moral belief system is the cultural contradiction of [a post-industrial] society, the deepest challenge to its survival" (quoted by Francis Schaeffer, How Should We Then Live? 2005, p. 225).
Abandoning long-held beliefs
America and Britain share in a culture based mostly on ancient Greco-Roman culture and the Judeo-Christian religion. But like falling Rome's scholars didn't believe in their gods anymore, many of today's highly educated people have lost faith in their traditional faiths of Judaism and Christianity.
Few academics believe in the true God or take the Bible seriously anymore. Many are secular humanists who think man is the measure of all things. But significant numbers have also grown more apathetic, skeptical, uncertain and pessimistic. They doubt that human reason can provide an integrated unified worldview of existence or can offer any real meaning to life.
Over the past two and a half centuries since the rough mid-point of the Enlightenment (ca. 1745), their faith in human reason's effectiveness declined nearly as quickly as their faith in God's existence. It's no coincidence that they have rejected both reason and faith in God. Catholic theologian Thomas Aquinas (1224-1274) reconciled the two so the West could have both in the High Middle Ages. As Emile Cammaerts summarized the thinking of English author G.K. Chesterton, "The first effect of not believing in God is to believe in anything."
The apostle Paul once explained the consequences of false religion in terms that apply to us in the modern world. First, people have "without excuse" rejected the proof of God evidenced in nature's design and perfection. As a result, "although they knew God, they did not glorify Him as God, nor were thankful, but became futile in their thoughts, and their foolish hearts were darkened" (Romans 1:20 Romans 1:20For the invisible things of him from the creation of the world are clearly seen, being understood by the things that are made, even his eternal power and Godhead; so that they are without excuse:
American King James Version×).
As so many Western intellectuals and others, who "professing to be wise became fools" (verse 22), their anti-Christian worldview unleashed damaging sins, including the homosexual lifestyle. It's as the noted American scholar Richard Weaver noted in the title of his 1948 book: Ideas Have Consequences.
Rejecting truth while embracing error
The huge upswing in the West's interest in eastern religions, the occult, reincarnation and "New Age" ideas is proof that empty, atheistic modern thought just doesn't meet most people's needs. The ideology of multiculturalism, which ultimately stands for no values other than accepting all ideas as equally valid, reflects Western intellectuals' philosophical bankruptcy. Such self-contradictory clichés as "All is relative" and "There are no absolutes" ultimately prove to be empty and meaningless.
By contrast, many of the Muslim immigrants who are flooding Europe uphold a dogmatic certainty about their faith. They see no need to apologize for their imperialist, jihadist past. Like their medieval ancestors, many of today's Islamists believe they are obligated to force their beliefs and values on others.
There's a serious ideological battle between skeptical, uncertain secularists and devout, dogmatic Islamists. History inevitably favors the latter over the former. When people lose confidence in their own civilization's values and virtues, it's been seen that they won't fight strongly to prevent their own collapse. It happened with Rome, and it's happening today to the West, and to the United States and Britain in particular.
The 18th-century English historian Edward Gibbon, in his classic work The History of the Decline and Fall of the Roman Empire, famously blamed traditional Christianity for undermining the empire's ability to survive. But even if his interpretation is blindly accepted, it's important to realize that patterns of history don't always repeat themselves exactly.
Unlike ancient Rome, modern America's and Britain's lack of faith and commitment to living as truly Christian nations will be the biggest cause of their downfall. In fact, a lot of their economic and demographic problems are directly related to their lack of regard for God's law and His wisdom.
As these nations turn their backs on God, He will turn His back on them. God is increasingly withdrawing His blessings and protections from them. His words are recorded in Hosea 4:6 Hosea 4:6My people are destroyed for lack of knowledge: because you have rejected knowledge, I will also reject you, that you shall be no priest to me: seeing you have forgotten the law of your God, I will also forget your children.
American King James Version×: "My people are destroyed for lack of knowledge."
Many will be surprised to learn that the Bible and other historical evidence reveal that America and Britain are the main recipients of the great birthright blessings promised in Genesis to Abraham, Isaac, Jacob and Joseph. (Read the Bible study aid booklet The United States and Britain in Bible Prophecy.)
Because these nations have been so blessed by God, they are much more responsible to God for what they do. They became great not because of their own goodness, but because Abraham obeyed God, who was faithful in His promises to this great biblical patriarch (Genesis 27:4-5 Genesis 27:4-5 And make me savoury meat, such as I love, and bring it to me, that I may eat; that my soul may bless you before I die.
And Rebekah heard when Isaac spoke to Esau his son. And Esau went to the field to hunt for venison, and to bring it.
American King James Version×).
But now these nations' disobedience to God's law will cause them to lose their high status. Only heartfelt repentance, coupled with a commitment to obey God's law and to have faith in Jesus Christ, will save them from the coming national calamity referred to in the Bible as the Great Tribulation (Matthew 24:21 Matthew 24:21For then shall be great tribulation, such as was not since the beginning of the world to this time, no, nor ever shall be.
American King James Version×).
No matter what others choose to do in whatever nation we live in, we're all individually responsible to God. We all need to come to know and have faith in Jesus of Nazareth, repent, and obey God's law. This is what brings true meaning and real purpose to our lives. Whether nationally or individually, let's trust in God's love when He promises in Jeremiah 29:13 Jeremiah 29:13And you shall seek me, and find me, when you shall search for me with all your heart.
American King James Version×, "You will seek Me and find Me, when you search for Me with all your heart." | <urn:uuid:be275cd6-e563-47c2-9b04-677e19afea4b> | CC-MAIN-2022-33 | https://www.ucg.org/the-good-news/what-could-america-and-britain-learn-from-romes-fall | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573533.87/warc/CC-MAIN-20220818215509-20220819005509-00295.warc.gz | en | 0.965341 | 5,891 | 3.578125 | 4 |
Table of Contents
Many already do and if you don't now, you will. Testing for drug-resistant HIV among treatment-naive patients, once thought to be an exercise in futility, is becoming commonplace. This follows several studies from the United States and Europe demonstrating that a significant proportion of such patients harbors detectable levels of drug-resistant virus.1-3
The transmission of HIV that has reduced susceptibility to antiretrovirals has been well described and resistance testing of patients with acute HIV infection has become standard practice.4-11 Until recently, the dominant paradigm held that following primary HIV infection, in the absence of drug selection pressure, the relatively fitter "wild-type" virus would overgrow resistant variants and only wild type virus would be sampled during subsequent resistance testing. However, several recent studies of cohorts of chronically infected individuals have challenged the conventional thinking regarding the detection of drug resistance and have revealed that certain resistance mutations can persist for much longer than previously believed.1,12
The results of 2 studies focusing on the prevalence of drug resistance among HIV-infected persons, previously presented during conferences, were published this month and are described below. Although they examine different but overlapping patient populations, both studies add to an emerging picture of resistance rates among patients in clinical care.
Richman DD, Morton SC, Wrin T, et al. The prevalence of antiretroviral drug resistance in the United States. AIDS. July 2, 2004;18(10):1393-1401.
To estimate the prevalence of drug resistance among persons with HIV in the United States during the years following the availability of highly active antiretroviral therapy (HAART), investigators of the HIV Cost and Service Utilization Study (HCSUS) examined phenotypic resistance patterns in study participants who had received care in 1996 and survived at least until 1998 when their blood was drawn during the study.
HCSUS is a large cohort study comprised of HIV-infected persons under care in urban clinics nationwide. Its participants are considered to be representative of persons with HIV infection in the United States. Therefore, HCSUS investigators frequently extend the findings from their sample to the population of HIV-infected persons at large. The 2,864 study participants thus represent 231,000 individuals under HIV care in the contiguous United States, which makes for some impressive-sounding results although it should remain clear that 231,000 people did not actually participate in the study.
Among the cohort, 1,797 of the original 2,864 participants had survived from 1996 to 1998; of these, 1,099 had a viral load >500 copies/mL and phenotypic drug resistance testing results available. Among the 1,099 subjects, 76% had evidence of resistance by phenotype to 1 or more antiretrovirals.
Broken down by drug class, the rates of resistance were 71% for nucleotide/nucleoside reverse transcriptase inhibitors (NRTIs), 41% for protease inhibitors (PIs) and 25% for non-nucleoside reverse transcriptase inhibitors (NNRTIs). Lamivudine (3TC, Epivir) was the agent for which there was the highest rate of resistance (68%), which is not unexpected given the low resistance barrier of this ubiquitous antiretroviral. Almost half (48%) of the viremic subjects had multiple-class resistance, with 13% having resistance to all 3 drug classes. Extrapolating to the larger population, the investigators estimated that over 100,000 HIV-infected persons with a viral load >500 copies/mL have detectable drug resistance.
Several factors were identified as being associated with resistance, including, as can be expected, the use of HIV therapy and NRTIs in 1996, when the HAART era began. Not surprisingly, current use of antiretroviral therapy was associated with drug resistance among the patients with a detectable viral load, as was a higher current viral load, lowest self-reported CD4+ cell count and advanced disease stage.
Germane to the argument regarding resistance testing among patients not on HIV therapy, 30% of the participants who were not receiving antiretrovirals had evidence of drug resistance (22% for NRTIs, 11% for NNRTIs and 11% for PIs) and 11% were resistant to agents in 2 drug classes. Of the patients who had never been on an antiretroviral, 10.9% had drug resistance and almost all of these cases were due to NNRTI resistance (9.3%).
These data are interesting in a number of ways. Foremost, the characterization of the HCSUS cohort permits a quantitative reflection of how people with HIV currently in treatment, in general, are doing. A majority of the cohort (63%) had detectable viremia and, of these, 80% were receiving antiretrovirals. The finding that drug resistance was prevalent among those with viremia comes as little surprise to those of us who see patients, because patients with detectable viral loads who are on therapy are generally expected to have drug resistance. The high rates, however, are sobering and point to the need for thoughtful use of HIV therapy, including the need to properly balance potency, tolerability, salvage options and convenience, as well as the need for new agents that are effective against resistant variants. Fortunately, since the study period (1996-1998), HIV therapy has evolved and is arguably more potent and convenient.
In fact, data from on-going studies indicate that resistance rates among recently infected persons are actually starting to decline, presumably since improved therapeutics have led to a decrease in viremia and, therefore, reduced rates of transmission of resistant HIV by those on therapy.13,14
That said, the potential for viremic patients with drug-resistant variants to transmit drug-resistant virus to others remains a concern and, although not a focus of this particular report, these data do illustrate the degree to which drug resistance can be detected even among patients naive to HIV therapy -- findings that are consistent with other studies.1,15-17
The resistance seen in treatment-naive patients is likely to be the tip of the proverbial iceberg, as NNRTI-associated mutations have less impact on viral fitness and, therefore, are relatively more stable than mutations associated with other treatment classes. These other mutations may not persist in sufficient quantities following infection to be detected by resistance testing, even though the resistant strains remain archived in some infected cells and can become dominant under treatment selection pressure. Also, this study was not able to assess the presence of resistant virus among patients with viral loads <500 copies/mL who may have acquired resistant virus at the time of infection but had too few circulating virus for resistance testing to be performed.
These results support resistance testing among treatment-naive patients -- particularly given the current reliance on NNRTIs as initial HIV therapy. The detection of NNRTI resistance in an individual naive to treatment and about to start an antiretroviral regimen would steer most clinicians away from prescribing an NNRTI. But, even if therapy is not being considered at the time of presentation to clinic, resistance testing can be useful to document any resistance that is present and may wane with time. Therefore, it is this author's opinion that this study, the study below and those referenced, support resistance testing of persons naive to HIV therapy, regardless of duration of HIV infection.
Weinstock HS, Zaidi I, Heneine W, et al. The epidemiology of antiretroviral drug resistance among drug-naive HIV-1-infected persons in 10 US cities. J Infect Dis. June 15, 2004;189(12):2174-2180.
Similar to the HCSUS study, this Centers for Disease Control and Prevention (CDC) investigation aimed to characterize antiretroviral resistance in urban populations in the United States. However, unlike HCSUS, the CDC study enrolled only patients naive to therapy. The study was conducted between 1997 and 2001 in 39 clinics in 10 cities: San Francisco, San Diego, Denver, Detroit, Grand Rapids, Houston, New York, Newark, New Orleans and Miami. A total of 1,104 adult patients were enrolled, of whom 1,082 had genotypic resistance assays successfully performed. Three quarters of the participants were male, 73% were non-white and 60% were men who have sex with men. Approximately 19% were determined to be recently HIV infected (within the prior 4-6 months) using a de-tuned antibody assay.
Not dissimilar to the HCSUS results, overall, 8.3% (90 patients) of the cohort harbored mutations detected by genotypic resistance testing. The breakdown of the prevalence of resistance mutations by antiretroviral class was 6.4% for NRTIs, 1.9% for PIs and 1.7% for NNRTIs. Only 1.3% had evidence of drug resistance to 2 or more antiretrovirals. Men who have sex with men were more likely to have resistance mutations than women and heterosexual men (12% versus 6.1% and 4.7%, respectively). Likewise, whites had higher rates of resistance (13%) compared to African-Americans (5.4%) and Hispanics (7.9%). These demographic factors are likely to be indicators of access to HIV care as groups with relatively less exposure to antiretrovirals have fewer opportunities to develop resistance to transmit to others.
The most commonly observed NRTI-associated mutations were M41L (19 patients), K70R (9 patients), M184V (9 patients) and D67N (7 patients). However, 27 patients had one of the T215D/S/C/E/I mutations; these are mutants of the 215 codon that have back-mutated from the thymidine analogue mutation T215Y and which can revert back to T215Y under treatment pressure. K103N was the predominant NNRTI mutation and L90M the most common PI mutation.
Among patients with recent HIV infection, the prevalence of genotypic resistance during the study period ranged from 7.1% in 1998 to 14% in 1999 to 8.9% in 2000 -- differences that were not statistically significant. However, resistance in patients not recently infected increased from 3.2% in 1998 to 9.0% in 1999 to 12% in 2000 (p = 0.004). Notably, resistance was detected more often among patients who reported having sexual partners who were themselves receiving antiretroviral therapy, suggesting a potential route for the acquisition of drug resistance.
This study complements the HCSUS study and, with previous reports, indicates that drug resistance is present in about 1 out of 10 treatment-naive patients. While the impact of pre-treatment resistance on response to therapy is being studied, many clinicians are deeming it prudent to test patients at baseline. Continued monitoring of resistance patterns in the treatment-naive will be essential to understanding the need for on-going pre-therapy testing and gauging whether drug resistance patterns are evolving with treatment trends.
Resistance to HIV therapies can develop for a number of reasons, including nonadherence, sub-optimal plasma concentrations of antiretrovirals due to malabsorption or drug interactions, and, as is evident from the above discussion, infection with drug-resistant virus. Most clinicians blame nonadherence for the lion's share of resistance seen. However, how much nonadherence it takes to produce drug resistance is still unclear. Very low adherence to therapy may actually carry little risk of resistance as there is insufficient pressure placed on viral strains to select for resistant mutants. Likewise, high rates of adherence may shut down viral replication and, subsequently, cut back opportunities for mutants to emerge. These considerations have led to the conception that the relationship between adherence and antiretroviral resistance is bell shaped, with the risk of resistance low during low levels of adherence, peaking with intermediate adherence (not too little, not too much) and falling with high-level compliance.
Many researchers suspect, however, that the relationship between adherence and resistance is more complex and that the risk of resistance may actually increase well beyond the midpoint of adherence to include rates that are clinically common and actually considered fairly good. This is most evident when a patient's viral load remains detectable despite taking combination antiretroviral therapy and when HIV replication persists at a level sufficient to produce resistance mutations. In such a situation, it is not clear precisely where is the tipping point beyond which more adherence will risk more resistance. Certainly, more adherence may increase the rate of achieving an undetectable viral load, but for patients falling short of this goal, more adherence, although laudable, means more drug exposure in the face of on-going replication and, as a consequence, drug resistance. This is not to say that clinicians should not be egging their patients on to better levels of adherence. Rather, in certain circumstances in which viral replication continues, more adherence is a double-edged sword in that although it may help shut down the replicative machinery, it may also make resistance more likely. The study below demonstrates these points nicely, taking advantage of the unique opportunity to study cohorts with well characterized adherence and virological data.
Bangsberg DR, Porco TC, Kagay C, et al. Modeling the HIV protease inhibitor adherence-resistance curve by use of empirically derived estimates. J Infect Dis. July 1, 2004;190(1):162-165.
Can the risk of developing PI resistance at varying levels of adherence be predicted? That's what investigators from California, with expertise in studies of antiretroviral adherence, tried to do by creating a model based on data from their clinical research. The data were generated from 2 cohorts of patients receiving antiretroviral therapy who had viral load determinations and unannounced pill counts performed regularly. One of the cohorts also had genotypic resistance testing performed. Importantly, both of the populations who were used to create the model were heavily treatment experienced.
Based on their study of these cohorts, the authors calculated that the probability of viral suppression below 50 copies/mL increased with greater adherence. This meant, for example, that at 100% adherence, close to half of the treatment-experienced patients would have undetectable viral loads. Among viremic patients, the risk of drug resistance actually increases with adherence. That is, for patients with detectable virus, the more drug they took, the greater their risk of resistance -- with the greatest risk being at their highest level of adherence. When the model considered all patients -- viremic and aviremic -- the maximal rate of new PI mutation acquisition occurred at a level of adherence of 87%, with the risk of resistance fading with additional adherence. The graph of the adherence-resistance relationship was therefore slanted considerably to the right and looked more like a plot of the Dow Jones Industrial Average from 1970 to 2003 than a bell.
The data used to generate the model were derived from patients who were receiving unboosted PIs, which are clearly less potent than boosted PIs. However, adding to the model the regimen's effectiveness as determined by the percentage of patients on that regimen who had undetectable viral loads at 100% adherence, resulted in the shifting of the peak of the adherence-resistance curve back to the left and down, with the most potent regimens yielding resistance less often and maximally at adherence rates of around 50%. More potent regimens therefore reduce the overall prevalence of resistance mutations and are more likely to suppress viral replication at lower levels of adherence than less potent regimens.
This model is useful in several ways because the adherence rates of most clinical populations that have been studied to date are at the range of 80 to 90%, which has been found to be most associated with PI resistance.18 That the levels of adherence which were calculated in this study to be associated with the greatest risk of the development of resistance in treatment-experienced patients are quite similar to those seen clinically provides a potential explanation for the high prevalence of resistance seen in the HCSUS cohort.
The results are also provocative because they suggest that increasing adherence in treatment-experienced patients on regimens may paradoxically increase the risk of resistance if viral suppression is not achieved (i.e., according to the model, improving a patient's adherence from 70 to 87% would appreciably increase the patient's risk of cultivating resistance mutations to his or her PI).
These results have several implications for clinical practice. First, they make plain that greater adherence, if it is unable to drive down a patient's viral load to very low levels, is not necessarily always better. Second, clinicians need to remain vigilant in monitoring for resistance even when a patient's viral load has fallen but remains above the limits of detection despite the patient's high-level swear-on-a-stack-of-bibles adherence. Lastly, to avoid further resistance, a low threshold for changing therapy (when possible) is advisable in cases where viremia persists.
Certainly, the problem with such models is simply that they are models. Further, this particular model is based on data from patients on therapies that are less potent than those being commonly employed today. It also assumes virologic failure to be a gradual process, in which resistance mutations accumulate, which is not always the case. Despite these limitations, the model is thought provoking and challenges our assumption that greater adherence is always better in patients who are heavily treatment experienced. As the potency of regimens increase, the odds shift and adherence pays off but high levels of adherence are still required to suppress viremia and prevent the cultivation of resistant virus.
Exactly what do patients desire in an antiretroviral regimen? What keeps them from the kinds of adherence that makes treatment success likely? Most of us think we know the answers. Some of us consider what we would want if we were faced with having to start an antiretroviral regimen. Others base their opinions on what their patients tell them. However, specifically what patients themselves desire in a long-term regimen may surprise those on the other end of the prescribing pen.
Stone VE, Jordan J, Tolson J, Miller R, Pilon T. Perspectives on adherence and simplicity for HIV-infected patients on antiretroviral therapy: self-report of the relative importance of multiple attributes of highly active antiretroviral therapy (HAART) regimens in predicting adherence. J Acquir Immun Def Syndr. July 1, 2004;36(3):808-816.
In a rather straightforward study, conducted in 6 cities across the United States and sponsored by GlaxoSmithKline (the manufacturer of popular twice-a-day regimens), 299 patients who were receiving a minimum of 3 antiretrovirals were surveyed regarding their HIV treatment preferences.
Participants were asked to evaluate 10 therapy attributes and predict their adherence to each of 7 actual regimens. The 10 treatment attributes were: total pills per day, pill size, side effects, dietary restrictions, dosing frequency, number of prescriptions, number of refills, number of copayments, number of medication bottles and whether bedtime dosing was required. The 7 regimens all contained 3 active antiretrovirals and were dosed either once a day (QD) or twice a day (BID). Participants were told to assume the potency of the regimens was equal but the actual names of the components were not provided.
The 3 QD regimens, although never disclosed, are presumed to have contained efavirenz (EFV, Sustiva, Stocrin) since 1 agent was always required to be taken at bedtime. Only 1 QD regimen had all 3 agents taken simultaneously and this sounded a lot like efavirenz + didanosine QD (ddI-EC, Videx EC) + lamivudine. None of the QD regimens were described as 3 pills before bed with little or no food (i.e., tenofovir [TDF, Viread] + lamivudine [or emtricitabine (FTC, Emtriva)] + efavirenz). The BID regimens ranged from a zidovudine/lamivudine/abacavir (ZDV/3TC/ABC, Trizivir)-like regimen, to a nelfinavir (NFV, Viracept) + tenofovir + lamivudine combo weighing in at 13 pills with food requirements.
The subjects were mostly male (76%), African-American (45%) and men who have sex with men (57%). Two thirds were on their third or more antiretroviral regimen. Only 26% said they had missed no doses of their HIV therapy during the preceding 3 months.
All 10 attributes were deemed to negatively affect adherence. Mean number of pills per day having had the greatest impact followed by dosing frequency, adverse events and dietary restrictions. Bedtime dosing was rated by patients as having the least impact on adherence. When evaluating actual regimens, respondents rated the zidovudine/lamivudine/abacavir regimen the most likely to be adhered to and the most convenient. The runners-up included 4 regimens that had received similar ratings, of which 3 were the QD regimens. As can be expected, the regimens with the most pills fared the worst.
Women rated dosing frequency and side effects as less of an issue than men, food restrictions were more of a concern among whites compared with non-whites and African-Americans reported less of an impact from side effects than whites or Hispanics.
This study, despite some limitations described below, provides an interesting perspective on what matters most to patients who are confronted with the need to take daily life-long therapy. Pill count, frequency and side effects were considered the most troublesome aspects of therapy and the greatest threat to adherence. Bedtime dosing, interestingly, despite the potential inconvenience of having to take a medication at a specific time of day, was not considered a relatively significant treatment liability.
The flaws? For one thing, the comparison of regimens was, unfortunately, somewhat stacked in favor of the 1-pill, BID option (the pill count and frequency of Trizivir, manufactured by the sponsor of this study). The QD regimens described had more negative attributes than can be found in the currently popular QD combination of tenofovir + lamivudine + efavirenz. Additionally, the soft-pedal warning regarding the "very slight chance your body will react to this medicine, which would require you to stop this medicine" -- otherwise known as the abacavir (ABC, Ziagen) hypersensitivity reaction -- which was included in the 1-pill, BID regimen did not provide participants with a completely balanced picture of this option. Lastly, the assumption of equal potency, which was understandably made for the purposes of this study, nevertheless should not extend to the interpretation of the results when crafting an antiretroviral regimen. When the chips are down, potency counts and I suspect many patients would be willing, to a point, to trade some convenience for efficacy, but fewer would agree to the reverse. Obviously, when potency and convenience are combined, everyone wins and, as mentioned above, powerful once-a-day therapies with low pill burdens are available today.
The range of responses in this study also demonstrate how individual treatment preferences can be. For some, pill count is paramount to frequency, for others food restrictions were a more important attribute. These results serve to remind clinicians not to assume we know exactly what regimen our patients would desire. Despite our understandable embrace of once-a-day regimens, a potential treatment regimen is not only about frequency. A frank discussion of the pros and cons of the treatment options is essential prior to handing over those prescriptions.
This article was provided by TheBodyPRO.com. It is a part of the publication HIV JournalView. | <urn:uuid:d384d8d1-8007-48e9-8a5b-76076d77cb6f> | CC-MAIN-2017-51 | http://www.thebody.com/content/art42158.html | s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948589177.70/warc/CC-MAIN-20171216201436-20171216223436-00127.warc.gz | en | 0.960852 | 4,987 | 2.5625 | 3 |
THE WINGED PHYLLOXERA IN CALIFORNIA.
In August, 1873, the subject of phylloxera was first discussed in Sonoma. While it was generally known, for a few years previous to this time, that the vines in some localities were “sick,” yet the true cause of the decay was unknown; and it was not until a year later that the Sonoma Vinicultural Club proved, beyond a doubt, that the dreaded phylloxera had already a strong foothold on this coast.

Various theories were immediately proposed regarding the manner and time of their introduction, but none up to the present time can be relied upon with certainty. European vines were introduced in 1860 and 1862, and without doubt a portion of the trouble may be traced to these importations. The native vines shipped from the Mississippi Valley may likewise have been infested. The exact manner of their introduction still remains a mystery. Still more mysterious, however, is the non-appearance of the form which increased the rapidity of the spread in France and other European countries, and makes the pest far more destructive than we find it in this State. With the assistance of the winged form, a distance of a few miles between districts offers apparently no barrier to their progress. Not only was this form necessary to the rapid spread of the insect, but it has long been considered a necessary stage in the complete cycle of its life history. This most dreaded form, found in all other countries, has escaped the closest search upon the part of California vineyardists until recently, and its appearance at so late a date leaves a doubt as to whether it may not, at any time, develop into all forms common to the insect, and be as destructive to our vineyards as to those in France.

In order that we may understand more clearly the position which this new form holds in the life history of the fully developed insect, a short sketch of its changes during metamorphosis will be necessary.

The French recognize four distinct forms, as will be noticed in the following synopsis: Beginning with the form as it exists in winter, we will find a small, dormant, dark brown aphis, somewhat flattened, having no wings, and quite unlike the usual mother of the summer. With approaching spring, this insect becomes active, and either ascends to the upper part of the vine and becomes the gall insect, or descends to the roots and forms the root type, either direction of movement depending upon the surrounding atmospheric conditions.

The gall insect is not found in California, and therefore does not interest us. It is the insect which descends to the roots that will finally produce the winged form. After passing through three changes, or sheddings of the skin, the mother insect is developed. Several generations will thus be produced during the summer, and the increase will continue until the last mother louse dies, in the early part of winter; the younger insects are destined to become hibernants. If instead of three changes the insect passes through five, another form, called the pupa, is the result. This is the first indication of the winged form, and is easily distinguished by the small black pads on each side of its back; these contain the infolding rudimentary wings. The next change produces the fully developed winged form, which presents, with its beautifully colored body and four delicate wings, a striking contrast to the dull appearance of the winter form. The winged form lays the eggs for the development of the true sexual individuals, which are again wingless, destitute of suckers and digestive organs, and seem to have but one mission in life—to produce the winter egg for the rejuvenation of the species in the following summer.
It must not be understood that the insect passes through just three or five changes, or moltings, but this seems to be the average number under ordinary circumstances. The different number of changes produce two cycles of life—one incomplete—which the insect may pass through during a single summer. In California, the incomplete cycle is probably the prevailing one, and it would appear that they can go on indefinitely without developing further than the three changes which produce the mother form, the further stages being unnecessary. With the two different cycles there arise two different forms, larvæ and eggs, which may pass the winter in a dormant state. The eggs, in a country like the southern part of France, are frequently hatched at the beginning of winter, into a form of insect similar, if not identical, to the hibernating form of the mother louse; more frequently, however, they are not hatched until spring. Here, then, at the beginning of spring, the forms from both cycles are the same. It may be well to notice that in California no winter eggs, and only comparatively few winged insects, have been found.

A very peculiar phase in the development of the incomplete cycle was noticed by Balbiani while observing the Phylloxera quercus, a species closely allied to Phylloxera vastatrix, or grape phylloxera. He has since observed the same change to take place among the grape phylloxera. It had long been held that the last stages of the winged form of P. vastatrix alone produced true sexual individuals. By Balbiani's observations it was clearly shown that during the latter part of the season the wingless form sometimes performs the same function as the winged form in producing the sexual individuals. This offers an excellent explanation for the continued prolificacy, for so many years, of the Californian phylloxera without the intervention of the winged form. The number of eggs laid is the same in either case, their characteristics are similar, and both forms end in the production of a single winter egg.

In 1879 Dr. Hyde of Santa Rosa first succeeded in producing the winged form in this State, from root samples taken from the Sonoma district. Seven insects were developed into the complete winged form, showing clearly that, under proper conditions, our phylloxera would pass through that stage, which up to this time seemed to be missing. But strangely enough, they were all, or nearly all, of the infertile variety—a variety not abundantly found in European vineyards.

Since the above were developed, I believe none have been positively identified until last summer (1882), when they were found in very small numbers, in the pupa form, on the roots; and in one case a fully developed one on the vine itself. In only one or two cases was the winged form developed during the summer in bottles. Apparently, when all conditions are favorable they develop abundantly; for, while making some observations at the State University, I have taken at least fifty insects, in the pupa state, from a single small bottle. Soon after removing them, they developed into mature winged insects. All the insects, as far as noticed, were fertile; and very soon after they obtained their wings, each laid a solitary egg, and died. They were taken from the bottle one day, and in less than twenty-four hours some of the eggs were laid. Each of these insects should have laid from six to eight eggs, judging from the number laid by the corresponding form in France; but the conditions under which they were placed were so unfavorable that no doubt their lives were much shortened by the treatment. However, they have been frequently kept some time on a plate of glass without apparently suffering from the change from the roots. The exact time required in passing from the pupa state to the laying of the egg is uncertain; but it is presumably small, as the winged insects were removed from the “trap” as soon as discovered. They were supposed to have been entrapped by the moisture on the inside of the bottle soon after they became winged; and if this be so, the life of the winged insect must be short indeed.

I have said that there seemed to be “special conditions” necessary for their development. I was led to believe this from the
fact that out of twenty-five bottled speci- been very few; and many cases of rapid mens of roots, only two had the slightest in- spreading have been attributed to this dications of developing this form; and of form because they could be accounted for these two, upon one was found the partly in no other manner. Yet the sudden dedeveloped form as soon as the root was caying of several acres of vines, all possibly taken from the vineyard. As the specimens infested from the same spot, and on the leewere taken from all parts of the vineyard, it ward side of the decaying district, forces the is quite natural to conclude that only one or conclusion that the infection must be carried two vines had the special conditions neces- by the winds, and if so, the winged form sary. A thick bunch of young, tender, must have prevailed to a considerable extent. fibrous roots produce the form in greatest There are notable cases in which narrow abundance. The first supposition is again strips, extending in the direction of the presupported by the fact that the form has been vailing winds, have become infested and found in the vineyards in only four different completely and rapidly destroyed, while adplaces, and upon about as many different joining portions of the vineyard remained vines. A single vine will produce this form, untouched. In other cases, the whole vinewhile none will be found on the surrounding yard seems to collapse in the course of one vines. Diligent search was made last sum- or two years. Happily, these cases of such mer for this variety on a large number of rapid destruction are few, and are the exvines, while looking for the common form of ceptions rather than the general rule. If the the insect, with results as stated above. winged form prevailed in all the vineyards,
The pupa are found near the surface of the spread would be more sweeping, leaving the ground, and also to a depth of five or fewer vines in a healthy condition, as six inches. It is still doubtful whether they now find them. become fully developed winged insects be- Probably the most peculiar phase of the fore leaving the roots; but as the form has insect's workings is shown in some of the never been found on the roots, it is presum- vineyards of Napa County. In these places able that the transformation does not take the manner of spreading is entirely different place until they come to the surface of the from any thus far noticed; and if a typical ground. This may account for the unusual spread by the winged form is possible, and is activity of the pupa, for their existence in to be found anywhere in California, it would this form, at best, is short; so their upward seem that it is developing here. No other movements must be as rapid as possible. vineyards of the State have the appearance
At the time I took the winged specimen of being similiarly infested. Several vinefrom the trunk of the vine, I also bottled yards are included in the group. In two an active pupa, taken three to four inches notable cases only two or three vines in a below the surface of the ground. In less group have the characteristic short growth. than twenty-four hours this also became Surrounding these spots are from one to two a winged insect. Possibly the removing acres, dotted here and there with single inhastened the development; if not, it shows fested vines. The only indication of disease that their rate of locomotion is quite rapid, was a slight change in color ; otherwise, the considering the obstacles they meet in the foliage and fruitage was fully equal to that way of hard soil and other impediments. of any other part of the vineyard. It seems
Keeping in mind the small number of impossible that the vines could have beplaces in which the winged form has been come infested in any other way than by the found, we may consider the vineyards as winged form. The sickly vines were scatnearly exempt from this form, although tered in all directions from the original spot, there are spots which seem to show, by the mainly toward the valley ; cultivation could more rapid spread, its existence in appre not have distributed the pest so impartially; ciable numbers. But such examples have moreover, they were all in the same stage of decay. Both vineyards were affected in pre- yet no signs of rapid spread. In the older cisely the same manner, and had the same and more noted phylloxera districts, instances appearance throughout. It is also a notable of rapid spreading are becoming more nufact that surrounding vineyards were more merous, and anomalous cases are occurring or less similarly dotted with yellow vines, more frequently, indicating a possible designificant of phylloxera, although no original velopment of the new form. source could be located as a starting point. In studying the different phases in which Vineyards two years old were affected equally this insect is found, one cannot but notice with older ones. In several acres of a two- the striking changes which may be produced year-old vineyard single vines could be by accustoming the insect to varying condi“spotted” as infested. Cultivation in so tions. The gall louse may be entirely driven young a vineyard could scarcely have brought from a vineyard by replacing the vines with the pest from a distance. The choice, then, other varieties; the common root form may, lies between infested cuttings and winged after several generations, be persuaded to form.
live above ground upon the leaves, without The greater ease with which the winged assuming the characteristics of the gall louse; form is found of late, and the peculiar phase surrounding circumstances will, too, deterof its movements, naturally suggests the ques- mine the length of the life cycles. If the tion, Is not the original form developing into changes can be produced artificially, is there the more dreaded winged form? and may not not a possibility of the different forms being the insect, in time, accommodate itself to the reproduced in the open field? surrounding circumstances, and develop wing- In order to compare the rapidity of proed form as readily as in its native country? I duction of the winged form of California believe when the insect was first discovered with that of other countries, I would note in California no instances of rapid, sweep- what Professor Riley, says of their producing spread are recorded.
The spread was
tion in the Mississippi Valley, and comslow in all directions. Each separate locality pare it with the numbers found in California. where the root insect is found shows that He says: “An ordinary quart preserve jar several years have passed since their intro- filled with such roots (rootlets from vines in duction. Among these are the two districts proper season), and tightly closed, will furin the eastern extremity of the infested part nish daily, for two weeks, a dozen or more of the State. There has been sufficient time of the winged females.” If every vine in a since they became infested to enable the vineyard bears the winged form at this rate, pest to nearly destroy the original vine- it is easy to form an opinion of the vast yards. In one case, where French vines numbers that would thus be produced, and were freely imported, the vineyard has been to see the ease with which they could be almost entirely uprooted, with the exception carried into the air. of occasional solitary vines, which still re- Observation has not yet shown that Calimain, showing too plainly, with their scanty fornia produces vineyards in which all the growth, the cause of their decay. Slow but vines are infested with winged form, but ravery destructive inroads are being made into ther that the vines thus affected are very few the immediately surrounding vineyards. indeed. If this be the case, vineyards at a Still no signs of rapid spreading are visible, distance are not apt to become infested by the The other case spoken of is represented by blowing of the form, for the number which a single vineyard nearly destroyed, while all could be taken into the air must be exceedthe surrounding vineyards are in a healthy ingly few, and the possibility that any one condition. Traveling westward through sev- of these will ever find suitable condition eral districts, one or more vineyards in each for future action in a distant vineyard is will be found to contain well-developed spots, almost beyond calculation.
F. W. Morse.
Not the renowned philosopher, though he had been telescopically observed from no doubt it might be pleasant and profitable some neighboring planet, he would undoubtto consider him and his wisdom, unrivaled edly have been set down by a scientist as a while millenniums have rolled by. The curious specimen of wheeled animalcula.” Plato of my tale is by no means so notable The doctor was a bachelor, and had lived a personage; yet he, too, had more than for a score of years with a family who, his share of wit and wisdom, was quite a though not akin to him, yet made in every philosopher after his fashion, and well de- sense of the word a home for the homeless serveş such attention as his present biog- man. Hither he brought his new canine rapher can win for him.
responsibility, who speedily so ingratiated A small, bright-eyed, quick-eared dog is himself with the family, and was so thormy hero. Come forward and be introduced, oughly adopted by them, that the question O Plato! Hold up your head in your alert, of ownership was merged in common friendbold, little dog style. Now, up on your ship. It was a home so still, so peaceful, hind legs and salute the good folks who, ac- so well ordered, yet so kindly and cheerful, cording to certain savants, are themselves that Plato found its atmosphere wonderfully only just getting well used to that ticklish congenial-a veritable dogs' paradise. position. Now give us your paw in token Out of this pleasant home the little chilof good-fellowship. There, that will do; dren had gone one by one with shut eyenow back to your warm corner-nay, alas! lids and folded palms. There was not one to the land of shades, the unknown country left to be a playmate for the bright little where all good dogs go; for Plato is gone dog. The great, solitary house held only from hearth and home. His biography the master and mistress, the doctor, and must be written in the past tense. “ Ille " Aunt Judy”—“auntie” always and to all, fuit."
though the sweet young voices which had It was in the quiet old Dutch town of named her so mute forever. It Schenectady, on the famous Mohawk River, might have seemed a lonely spot to the that life first dawned upon Plato--unless, lively little dog, if he had lived there in its indeed, as was taught by the illustrious an- days of merry, romping, childish play, and cient Plato, he had pre-existed, and so did but then felt the solemn shadow and silence migrate into a new shape on this occasion. creep over it all; but as it was, he only It was the drowsy month of August, and knew it in its present stillness and serenity, Schenectady is not remote from Sleepy Hol- and was the happy recipient of such loving low; but Plato inherited no somnolent ten- kindness that to him there never was an dencies from birthplace or birthday. Very aching void. early in life he was taken from his native He was a little fellow, weighing only ten place back into a little country town ten or a dozen pounds, swift of foot and motion, miles distant, where he found good friends and showing plainly his terrier blood, though and good fare, and never changed homes not of the usual black-and-tan color. He again--great luck for dog or man !
was of a soft, bright chestnut hue, with a He was the property, nominally, of a coun- single white spot on his breast. try doctor, whose ministrations stretched were short and alert, his eyes clear and penover a wide circuit of country, and who con- etrating, and his tail-ah, what tales that sequently lived so perpetually in his "sulky," tail could tell! That which his beautiful, that, as the “Autocrat” quaintly remarks, “if speaking eyes, his quivering ears, and his | <urn:uuid:f1a06c65-01f9-4f30-9d6a-a952428fdf07> | CC-MAIN-2022-33 | https://books.google.ba/books?id=2Fw4AAAAIAAJ&pg=PA231&dq=editions:LCCN09019704&output=html_text | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573193.35/warc/CC-MAIN-20220818094131-20220818124131-00497.warc.gz | en | 0.978767 | 4,461 | 3.25 | 3 |
Summary in English of the Forgiveness Study Group held with the Puerto Vallarta Zen Group.
Recently in our meditation group in Puerto Vallarta we've had a study group on the topic of forgiveness, and that's what I'd like to talk about today. We found the study group format very useful, but what I propose to do today is not a study but rather a summary of some of the things we talked about. I'd kind of like to call it “Forgiveness Part I.” This is mostly to acknowledge that there's much more to say about forgiveness than I'll be able to say today. If I'm able to present a starting point, I'll be happy.
We have to start by asking a very big question. What is forgiveness? What does forgiveness include? What does it not include? I can't say that the way I see forgiveness is the only way to see it. So I think I need to be as clear as possible what I mean when I say forgiveness.
So this is what I mean: forgiveness means letting go of the anger, resentment and blaming that we feel concerning some action that has had an impact on us. Forgiveness does not change what happened, but it does change our way of relating to what happened. Forgiveness doesn't mean that we have to trust the offender not to re-offend. We can forgive an alcoholic spouse, but that doesn't mean we trust them with a bottle. Forgiveness does not require reconciliation. Reconciliation does require forgiveness though. So forgiveness may be the beginning of a larger process, but it doesn't have to be. I think this is important enough that I want to repeat it. Forgiveness does not require reconciliation.
Forgiveness doesn't relieve the offender of their responsibility for their action, nor does it turn a wrong into a right. A dictionary definition that I like comes from the Merriam-Webster Dictionary: “to cease to feel resentment.” “Resent” is an interesting word. It literally means to feel over again – to “re-sense”. Something happens that has a negative impact on us. We feel that impact as emotional or physical pain. Resentment happens later, after the actual event. Resentment means that we feel the pain again after the original impact has passed. Along with this resentment there is usually anger and blaming toward whoever or whatever it was that we identify as the cause of our suffering.
So I would identify some different steps or stages in this:
1. There is some action of body, speech or mind that has an impact.
2. There is emotional or physical pain (or both) arising out of this impact.
3. Then there is anger, resentment and culpability relating back to that pain, interpreting that pain.
These are the things that set up a situation in which forgiveness might be appropriate. If we are to move forward from this point into forgiveness, two important things must happen:
1. We have to acknowledge both the original pain and the resentment of it. We have to accept our own feelings about what happened, and we have to recognize the emotional reactivity that we have concerning the situation and the person or thing that we blame for it.
2. Then comes the intention to give up or let go of our resentment and anger. Forgiveness begins here.
If you think about this a little, you'll see that forgiveness, as I am talking about it, has nothing at all to do with “making nice” to the offender, or saying that what they did was really OK after all. It has to do with “making nice” to oneself. It has to do with realizing that holding on to anger and resentment hurts the so-called victim, and really does nothing to punish or to reform the offender. One analogy is that refusing to forgive is like holding a hot coal in your hands waiting for someone to come by so you can throw it at them. They might not come by, or maybe they do but by the time that has happened you've already burnt yourself pretty badly. Another descriptive metaphor is that holding on to resentment and anger, refusing to forgive, is like eating rat poison and then hoping for the rat to die.
The etymology of the word “forgive” comes from Old English. “For” means “completely” and “give” comes from giefan which means “to give, grant or allow.” So to forgive is to give completely. I thought it was interesting that the Spanish word, perdón has the same etymological meaning even though its source is Latin and not old English. “Per” is an intensifier meaning completely and “don” has the same source as our English word donation. So in Spanish also it means “to give completely.”
Who gives what? I would say that first of all, the person who was holding on to their resentment gives it up. They let it go and give themselves relief. So it's not so much a giving to as it is a giving up.
These words might make forgiving sound rather clear and simple, but as we all know, it can be very difficult to do. We have to allow ourselves to feel the hurt before we can let go of it. And we'd rather not feel that hurt. Habitually we try to ignore it or minimize it or cover it up. We close ourselves off from our own experience as if this could protect us.
My experience has been that the fear of feeling pain is much worse than the actual feeling of it. When we allow ourselves to be present to even just a little of our own pain, we can begin to free ourselves of that fear. Over time, little by little, we can continue this process of letting go of the fear and simply feeling what we feel. This is essential to the healing process that is forgiving.
Sometimes we try to jump over the pain right into blaming someone for having hurt us and then try to use that energy to avoid feeling what we feel. This just adds bitterness and anger and makes us feel worse. So over and over we may need to keep stepping back to ask ourselves where all this anger is coming from.
The practise of becoming familiar with our wounds is an essential part of the process. We must not try to rush it. It’s absolutely necessary that we allow ourselves to become familiar with the wounds, and with the feeling of having been wounded. It has to be included in our forgiveness. It’s not a preliminary; it’s an essential part of the whole. And if we are having difficulty forgiving, this practise is what we need to return to.
The therapist Robin Casarjian puts it this way in her book, Forgiveness:
After attempting to forgive we might wonder why we still feel angry or empty inside. If we are repressing anger and guilt, the forgiveness we extend can't be rooted in our being because the repressed feelings become a barrier to our core experience. The body and psyche that hold too many restricting and repressed emotions have little room to embody love and joy with much consistency and depth. We may experience the joy and relief that forgiveness offers from time to time, but it will remain on the surface. It's like trying to plant a magnificent flower garden with a very shallow root system. A brief drought or a passing wind can sweep it all away. But if we give our pain acceptance and, in a safe context, feel what may have been too unsafe and scary to feel in the past, then the pain can be released and transformed. The process of honoring our feelings is like tilling hardened or shallow topsoil so that it becomes rich and deep. Only then will our forgiveness and understanding have room to take root deep within us.
As we acknowledge our pain and recognize that we feel wronged, we can begin to acknowledge all the bitterness and anger that we feel in relation to that. At this point we can begin to bring awareness to all of our resentment, to take its measure, without judging ourselves for feeling what we feel. This is how we begin to forgive.
Often we do have judgements about these feelings. We may think it's somehow wrong to feel angry and try to stifle it or cover it up. Or we may have the opposite feeling and think that it's good and empowering to have what we might call righteous anger. It's not easy to let go of the judgements and just see what's there: blaming, anger and resentment. And then to see what's beneath that: some pain that we would like to relieve.
It's good to do this with a partner whom we trust to help us to keep our perspective, especially if the issues are particularly difficult.
Letting go of our resentment isn't easy and it doesn't happen all at once, but I think that little by little we can began to acknowledge it and then let it go, see a little more of it, and then let it go. And we may have to do this over and over and over again. We let something go but then the wound may open again in a different way, or something new might come up.
Another thing that's difficult to let go of is the conviction that “I was right.” and “They were wrong.” It's even possible that this is true, but holding on to it doesn't lead us anywhere. One thing that might help here is to recognize that at one time or another we have also acted in ways that created pain and suffering. Just as we have been hurt, we have caused others to hurt.
We might look at the person who we blame for hurting us and imagine them as a baby in their mother's arms. We can imagine them as a baby in our arms. We might imagine them on their first day of school, or learning to tie their shoes. And we might imagine the suffering that they have lived through. We try to look beyond our own reactivity and see them as simply another human being who lives and breathes just like us. Maybe we will come to feel some compassion for them, or at least come to recognize that they too have suffered, and that their hurtful behaviour is the result of their own pain and confusion.
We can also look at our own responsibility for why we feel resentful. Perhaps we feel hurt because someone did not treat us as we think they should have. But this idea of how we should be treated comes from our own expectations. If it weren't for our expectations, perhaps their behaviour wouldn't have bothered us so much. Perhaps it wouldn't have bothered us at all. So was it that person, or was it our own expectations that caused us to feel offended? It may help to see all this from a broader perspective.
And we can recognize that no matter what we do, what happened, happened. We can't change that with our righteous sense of what should have happened or by being angry or blaming someone. It has been said that forgiveness means giving up all hope for a better past.
We can include the person who hurt us in a lovingkindness meditation. We can begin the meditation by sending lovingkindness to ourselves and to those who are close to us and then go on to include the person who has hurt us. “May they be happy, safe and well, and free from suffering.” If this is hard to do at first, we can remind ourselves that if they had been able to enjoy those conditions, they probably never would have hurt us. And again, we can try to let go of our reactions and opinions about them and just see them as another human being.
So far I have been talking mostly about how we forgive when we feel we have been hurt or offended. But there are really three different relationships for the practise of forgiveness. In addition to forgiving others who have caused us harm, there is seeking forgiveness for having caused harm to others, and there is forgiving ourselves for the ways we have harmed ourselves. I'd like to talk a little about the other two situations now.
First I want to say a few things about seeking forgiveness. At the beginning of our Bodhisattva Precepts ceremony, we chant a verse of confession three times.
All my ancient twisted karma,
From beginningless greed, hate and delusion,
Born through body, speech and mind,
I now fully avow.
I think that this is an important practise, to acknowledge that each and every one of us has been the cause of suffering; we should be mindful of that as we chant. But if we just do that and feel it's enough, we are mistaken. It is difficult and disagreeable work to go right into the ugliness and shamefulness of what we have done and admit our errors. But it has been my experience that this is what we need to do in order to cleanse ourselves and in order to offer some relief to the ones that we have wronged. I think we have to honestly admit to the specific wrong or unskilful action and we have to commit ourselves to avoid repeating it.
Often we hear “apologies” where someone says, “If you feel bad because of something that I said, then I'm sorry.” I would call that a “no-apology.” The person is refusing to take responsibility for their own behaviour and acknowledge its harmful consequences. I think it’s a subtle way of saying, “It’s your fault if you feel offended.” Even if we are convinced that we had no choice but to act in a way that hurt someone, we can still say, “I know that what I did hurt you and I'm truly sorry.” And then we can honestly try to understand how we can avoid finding ourselves in a situation like that again.
The first Buddhist retreat that I went to after I was ordained as a priest was with a Chinese Ch'an teacher. In my interview with him I said, “I have only been ordained for a few months. What advice do you have for me as a new priest?” He told me, “Whenever you make a mistake, it's very important for you to confess it to another priest. If there are no other priests where you are, then you can offer incense and confess to the Buddha, but it's better if it's to another priest.”
It wasn't what I expected to hear. Maybe I was thinking, like in the 70's movie Love Story, that “Being a priest means never having to say you're sorry.” But I have learned that this is not the case. Sometimes the hard part has been realizing or admitting that I have caused suffering. Then it can be even harder to say so out loud to someone else. But I think it has to be done. It's not good enough to admit your mistake only to yourself. Someone else needs to hear it. This is not easy. When it's something small, maybe you can go right to the person you hurt and speak your confession. But if it's something big, or something that you're not clear about, maybe still feeling defensive or looking to justify your action, it's a good idea to talk this through with someone you trust first.
I have already recommended that when you are learning to forgive, it's good to have a forgiveness partner, someone that you can talk with as you go through the forgiving process. I think this is also true for us when we are seeking forgiveness. Someone who is trusted and yet somewhat neutral can make a huge difference to us. This is one of the functions of the sponsor in the 12 step program of Alcoholics Anonymous.
I have come to prefer the phrase “seeking forgiveness” over the phrase “asking for forgiveness.” To me it's closer to the way I understand forgiveness. Seeking forgiveness to me means looking deeply at ourselves and at our own behaviour first. Then, when it's possible and won't cause more suffering, we can honestly acknowledge our offence to the person we hurt. We need to do this to try to ease their suffering. We hope they will be able to forgive, not for our sake but for their own sake. To me, “asking for forgiveness” sounds as if on top of having hurt them, now we are going to ask them to give us something, and that just doesn't sound right to me. “Seeking forgiveness” sounds better.
When someone whom we have hurt does forgive, it means that they are able to let go of their resentment, anger and pain. Then because they feel some relief, we also can feel some relief. But this will not be complete until we can forgive ourselves. That's what I'd like to talk about next.
We are often told that the Golden Rule is that we should love and care for other people as much as we love and care for ourselves. Considering how judgemental and unforgiving we can be with ourselves, no wonder the world is such a mess! Maybe we should reverse that advice. We should show as much consideration for ourselves as we do for others.
It's not that difficult for us to see the value in treating others with compassion and understanding. There is some encouragement to do this in our culture. And yet, so often we feel that we should not extend this same compassion and understanding to ourselves. We have learned that it is selfish and bad to do so.
I think it's just as selfish to think that we should be treated worse than others as it is to think we should be treated better. In both cases we place ourselves as something separate from and opposed to everything else. When we vow to benefit all beings, we shouldn't forget that this includes us!
Why is it so difficult to forgive ourselves? I think it's partly because deep down we are aware of the insubstantiality of all our excuses. Even when we deny responsibility for our offence, at some level we still blame ourselves and feel ashamed and unworthy. We usually try to hide this from everyone. Most of all we try to hide this from ourselves, but that has consequences too. So the first step to forgiving oneself is to begin uncovering all that we have been denying, all of those things about ourselves that we fear might be true. You probably already know for yourself what some of these things are.
I can clearly remember some very hurtful things that I did and said to others even when I was a little boy. In some cases it's been over 50 years, and I am still ashamed of some things I did as that boy. But I have forgiven him. I know that I am no longer that same little boy. I know that in his immature mind he was just looking for love and acceptance. I understand how little he knew of right and wrong.
This is one way that we can begin to forgive ourselves. We can recognize that we are no longer the person who committed that offence. This is true even if the offence was committed yesterday or only 5 minutes ago. We are no longer that person. We can recognize how the suffering of that person resulted in their unskilful action. Then we can let go of blaming ourselves and we can vow to do our best not to repeat the offence. We may have to do that many times.
When we are the cause of someone's suffering, it's easy to see who was the offender and who the offended. But when we speak of forgiving ourselves, who is offended? Who is the offender? And who offers forgiveness? It's not so easy to answer that. Did I hurt me? I think that we suffer because we have caused suffering. We suffer because we know that we have done something that was un-virtuous, non-compassionate, un-wise and dis-harmonious. You could say that we have committed an offence against virtue, compassion, wisdom and harmony. Our offence was against the three treasures: Buddha, Dharma and Sangha.
And so our self-forgiveness needs to come by way of the three treasures. In the realm of the three treasures there's no possibility of resentment, no place for anger and blame to stick. In order to forgive ourselves we need to realize and accept that this is true.
I did a search for self-forgiveness on an internet site dedicated to answering questions about the bible. The answer that came back was that there is not one mention of forgiving oneself in the bible. There is no such concept in the bible. This surprised me at first, considering how important I think it is to forgive ourselves.
But, according to this website, the true issue is not that we should forgive ourselves. The true issue is that we need to accept that God forgives us. Self-forgiveness needs to come from somewhere much greater than our narrow view of who we think we are and what we have done. Whether we say it's God, or the three treasures, or Buddha, or the great mystery doesn't really matter.
Self-forgiveness can't be found by thinking about it or analyzing it. It can only be found somewhere beyond our thinking, beyond our feeling, beyond our small selves. Actually I believe that this is where all forgiveness comes from, not just self-forgiveness. I think that we accept forgiveness for ourselves and offer it to others through our mysterious connection with something greater than ourselves. To me this is what is meant in the Lord's Prayer “forgive us our debts as we forgive our debtors.” If we cannot find forgiveness for others, we can't find it for ourselves either. And we find forgiveness for ourselves and for others when we connect with the great mystery. So the Lord's Prayer reminds us that we find self-forgiveness in the same place that we find forgiveness for others. That's the only place that it can be found. But it's not just there on the surface. It takes time and practise. It takes opening our hearts, exposing our shames, and giving up our strong habit of believing we are unworthy.
To do this we need to go beyond our analysis and intellectual understanding. We need to go beyond our thinking minds. We definitely need to go beyond today's talk.
I'd like to tell you a story from a dharma talk given by Daigan Lueck, a teacher from Green Gulch Farm.
In his talk Daigan told us about how much anger he had when he came to Zen practise, and how his anger continued in his practise. He said he was angry with everyone and everything. He knew that his anger was eating him up, but knowing that didn't help him to get beyond it. He said that one day, in total desperation, feeling that he could not draw another breath, he went to his teacher and said, “What can I do?”
And his teacher said, “I have a practise for you, and I guarantee you that it will work.” He said, “Every day in your home you should do prostrations. Do 108 prostrations every day. You don't have to do them all at once, just do 108 prostrations in the course of each day. And with each prostration say, 'I forgive you.'”
Daigan said, “OK, but who am I forgiving?” and his teacher said, “You'll find out.”
So Daigan said he followed this practise, day after day, doing 108 prostrations and saying “I forgive you” 108 times. And then one day in the middle of doing this, he broke down in tears. He said that he finally realized who it was that he had to forgive. Then he said, “I don't need to tell you who that was. Everybody knows who that was.” Daigan said that he had a deep sense of gratitude for having found a practise that he could do.
The forgiveness that I have been talking about is not something that can be done quickly. It's not a single event, but rather a process. Sometimes we might enter into the process without even knowing that we have done so. Or we might realize that we have entered the process with absolutely no idea of where it will lead. Perhaps it's not even a process so much as a pilgrimage, a pilgrimage with no end.
There is still so much more to say about forgiveness. May the conversation continue.
Thank you for your practise.
This paper describes an extensive online project, Writingmatrix [http://writingmatrix.wikispaces.com], involving several key elements essential to collaboration in Web 2.0, such as aggregation, tagging, and social networking. Participant teachers in several different countries--Argentina, Venezuela, and Slovenia--had their adult students at various levels of English competency interact using blogs and other Web tools, including RSS feed readers and Technorati [http://technorati.com]. The teachers describe their respective settings, how they got their students started in communicating through blogging and social networking, and how in future they intend to expand the Writingmatrix experience.
A working understanding of aggregation, tagging, and RSS (Really Simple Syndication) is key to collaboration as well as to filtering and regulating the flow of information resources online. Tags allow people to organize the information available through their distributed networks in ways that are meaningful to them, and social networking enables nodes in these networks to interact with each other according to how these tags and other folksonomic (that is, socially intertwined and personally meaningful) data overlap. Once productive tags are identified, RSS (a machine-readable file that is constantly updated with changes to certain Web content, such as blogs, and so delivers a constant 'feed' of those changes) is used to monitor where Web artifacts containing those tags are accumulating, or being aggregated. Using an aggregator, or feed reader, users can then read the aggregated changes all in one place at any convenient time--and link directly back to those pages to make comments.
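To make the mechanics concrete, here is a minimal sketch of tag-based filtering on the reader's side. It assumes Python 3 with the third-party feedparser library, and the blog address is a placeholder rather than a real project blog; the same idea applies to any Atom or RSS feed whose entries carry category tags.

```python
# Minimal sketch: pull one blog's feed and keep only the posts tagged with
# the project tag. Assumes Python 3 and the third-party "feedparser" library
# (pip install feedparser). The feed URL is a placeholder, not a real blog.
import feedparser

FEED_URL = "https://example-student-blog.blogspot.com/feeds/posts/default"
PROJECT_TAG = "writingmatrix"

def tagged_entries(feed_url, tag):
    """Return feed entries whose categories/labels include the given tag."""
    feed = feedparser.parse(feed_url)
    matches = []
    for entry in feed.entries:
        # feedparser exposes RSS categories and Atom labels as entry.tags,
        # a list of dict-like objects with a "term" key; some entries have none.
        terms = [t.get("term", "").lower() for t in entry.get("tags", [])]
        if tag.lower() in terms:
            matches.append(entry)
    return matches

if __name__ == "__main__":
    for entry in tagged_entries(FEED_URL, PROJECT_TAG):
        print(entry.get("title", "(untitled)"), "->", entry.get("link", ""))
```

A feed reader performs essentially this check for every subscribed feed, while a tag search engine such as Technorati did the aggregation across all the blogs it crawled, so that one query gathered every tagged post regardless of where it was published.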
This article describes how the concepts associated with aggregating tagged content through tag search engines and RSS were applied in a worldwide collaboration project, Writingmatrix, involving bloggers in two countries in South America and one in Balkan Europe, who utilized tagging and social networking tools to enable their students to locate each other's blogs and then interact with individuals whom participants identified as being of similar age and interests. To achieve these communicative goals, students were encouraged to blog their interests and concerns, then tag posts with an identifier unique to the project. Technorati, a real-time search engine and organizer, was used to ferret out the posts of participants in other countries. This article takes the form of a narrative told from the perspective of the language teaching practitioners involved in the project. It describes how each became involved in the project and suggests where the participants are going with the project in the near future.
In early 2007, I was invited to give a series of lectures in Spain on teaching writing over the Internet, and I accepted the challenge although my career focus had moved from ESL/EFL to computing and educational technology. With regard to the educational technology aspect, I was curious how tags in blog posts could be utilized in promoting collaboration, but I needed a testing ground in order to be able to discuss my ideas in the lectures I planned to give. At about this time, I became aware, through the English Virtual Community [http://ar.groups.yahoo.com/group/inglesunlp/], a list managed by Nelba Quintana, that a group of students and teachers in South America were using the list to discuss their summer reading, and I suggested, as a participant in the list, that those concerned blog their reflections and find each other's posts through tagging. Although the list did not follow up on my suggestion, Nelba herself was intrigued, and a message was sent out to the Webheads in Action list [http://groups.yahoo.com/group/evonline2002_webheads/] asking if anyone else would be interested in getting their students to blog and find each other's posts through tagging. Three teachers, Doris Molero from Venezuela, Saša Sirk from Slovenia, and Rita Zeinstejer from Argentina, decided they would like to learn how to help their students use blogs in this way to create a social network and encourage their students to discover and use each other as an authentic audience for their writing.
This team of four teachers all started their students blogging in April 2007, following guidelines which I invented just in time, not knowing much more than the instructors did. The project got off to a slow start, with observations about students not being interested at first in going beyond the confines of their own clique and culture, but the fact that all were learning and having the opportunity to interact with students from different countries made the experience very enriching. Carla Arena and other participants from the Webheads group also joined in with their comments on participants' blogs, and she has added her reflections to this article.
One great benefit of this approach to tagging and social blogging is that anyone in the world can join the project with no prior arrangement whatsoever. All one needs to do in order to participate is to tag posts writingmatrix. A Technorati search can then be performed to find other participants. Or, even simpler, you can subscribe to the RSS feed for the Technorati page that has given you the desired results from a writingmatrix search, using a feed reader of your choice (e.g., Bloglines [http://www.bloglines.com] or Google Reader [http://www.google.com/intl/en/googlereader/tour.html]), and be automatically alerted when new posts are published. Figure 1 shows how Ctrl + click or a right click will allow the user to copy the 'Subscribe' link that can then be pasted into a feed reader.
Figure 1. This graphic shows how to Copy Link Location with Ctrl + click or right click. The link can then be pasted into Bloglines or Google Reader.
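For readers who want to see what the copied Subscribe link does behind the scenes, the sketch below imitates a feed reader in a few lines: it polls the search feed and reports only items it has not shown before. This is an illustrative sketch only; the feed address is a stand-in for whatever URL the Subscribe link yields (Technorati's original feed addresses are no longer live), and the polling interval is arbitrary.

```python
# Toy aggregator loop: poll a tag-search feed and announce new items once.
# Assumes Python 3 and feedparser. The feed URL is a stand-in for the address
# copied from the "Subscribe" link in Figure 1; Technorati's historical feed
# URLs no longer resolve.
import time
import feedparser

SEARCH_FEED = "https://example.com/feeds/tag/writingmatrix"  # placeholder
POLL_SECONDS = 1800  # check every 30 minutes; adjust to taste

def poll_forever(feed_url, interval):
    seen_links = set()
    while True:
        feed = feedparser.parse(feed_url)
        for entry in feed.entries:
            link = entry.get("link")
            if link and link not in seen_links:
                seen_links.add(link)
                print("New writingmatrix post:", entry.get("title", "(untitled)"), link)
        time.sleep(interval)

if __name__ == "__main__":
    poll_forever(SEARCH_FEED, POLL_SECONDS)
```

Dedicated readers such as Bloglines or Google Reader add persistence, accounts, and a reading interface on top of exactly this fetch-compare-display cycle.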
I work at a private institute, Asociación Rosarina de Cultura Inglesa in Rosario, Argentina, a local center for Cambridge exams, which students of English choose to attend to get their certification of achievement, and then sit for a final examination. The teachers here aim primarily at accuracy both in writing and in speaking, and the students fail to see the importance of communicating with the language they've been studying for around ten years, as on very few occasions do they have the chance to meet native or even non-native speakers of English to practice the language in authentic situations.
This being so, it is very hard to convince students of their need to poke their heads and minds out of their classroom windows to see beyond, to appreciate the invaluable chance the Internet offers to use tools that will enable them to meet like-minded people all over the world and communicate safely from their homes, in English, exchanging information about each other's cultures. This is what Vance had in mind when he started his Writingmatrix project involving teachers from different countries ready to pass their enthusiasm on to their face-to-face students. And tagging promotes this effect.
In my case, I get new groups of students every school year who want to sit for the CAE exam, and who are reluctant to deviate from what they perceive to be their limited aim. Yet, little by little they are coaxed into participating in a class blog, they start their own blogs, they post and see their writing out there in the real world, they get amazed at reading comments from other teachers and students from places they have never imagined they would reach. And they come to see the advantages of tagging, and they realize how through tagging they can connect with people sharing their interests, regardless of geographical and cultural distances.
Following this progression, my students enthusiastically wrote, read, recorded, listened, and tagged. They added writingmatrix to their list of tags at the end of each blog post, and they went to Technorati, where they located not only other teachers and students from our Writingmatrix project, but also other readers who had become interested in the project. We even ran our own project, which we called The Serendipity Project, as they had to record a serendipitous experience they had had or had heard about. They all enjoyed the experience, chronicled in our blog for May 18, 2007, caeb2007's podcast: The "Serendipity" Project [http://caeb2007.podomatic.com/entry/2007-05-18T19_12_16-07_00], and later posts through June, 2007.
At caeb2007's podcast for September 7, 2007, you will also find the podcast my students produced about tagging, Tagging...sth really fun [http://caeb2007.podomatic.com/entry/2007-09-07T17_39_46-07_00], and it can be seen how this concept opened their eyes and minds into a broader world--how they became aware of the fact that a final examination and a certification of achievement should not be the only target when taking up a new language. Rather, one should be open to the possibility of meeting like-minded people, of learning about different cultures, of making friends beyond and across geographical distances. My students became aware of a different way of learning, of the importance of using English to communicate, to exchange experiences, opinions, possible solutions to common problems related not only to learning a language, but also to those facets of their everyday life they have in common with the whole world. They have become aware of the need to learn through sharing, both through text and voice, which serves to enhance their proficiency in the four skills. In fact, through tagging, learners can develop heuristics for improving their reading, writing, listening and speaking abilities as they are led into connecting, communicating, and interacting in authentic environments and with authentic aims in mind.
However, there is still a long way ahead, since students themselves will not see these benefits until and unless they are guided into the process by those capable of showing them the way. The next steps will therefore entail raising teachers' consciousness and helping them gain the necessary knowledge to see for themselves how to integrate the use of tagging and social media into their teaching practices.
I teach English at the Vocational College of Tehniški šolski center Nova Gorica in Nova Gorica, Slovenia. My students are full-time and part-time students of different professional orientations (Informatics, Mechatronics, and Countryside Management). The full-time students are aged 19+ and generally are continuing their education straight after secondary school (so they have little or no working experience), whereas the part-time students are employed adults of different ages. Both full- and part-time groups are usually quite large (60+) and their level of English varies a lot (from group to group and within the group, ranging from lower intermediate to advanced). So far, two of my groups have been involved in the Writingmatrix project--nine students from the Spring 2007 group (part-time adult class) and most of the students from the Autumn 2007 group (full-time students).
I had started blogging in class for the first time just a few weeks before finding out about the Writingmatrix project. We had started blogging simply to continue class discussions beyond classroom walls and class hours. Our class blog was meant as an extension of our Moodle [http://www.moodle.org] forum, which is not public. At first I just wanted to show my students how simple it is to use a blog and how far it can reach. Also I wanted to show them what a wonderful tool blogs are, and how efficient they are for keeping in touch with the latest news and developments in various professional fields. I hoped that exposure to blogs would encourage students to get used to reading in English more regularly. Students today should know how to use this technology efficiently.
When I heard about the Writingmatrix project I joined it with some students from my adult class. Unfortunately our course was already winding up at that point. Some of my students chose to join in spite of this and blogged in their free time. We didn't cover all the tools from the Writingmatrix syllabus. We explored Blogger [https://www.blogger.com/start/] and learned how to use Bloglines and del.icio.us [http://del.icio.us]. We reached out into cyberspace, exchanged comments with participants from other countries, and got to know one another in a fun way.
My second group (full-time students) was younger (19+) and much bigger. Unlike the previous group, these students mostly had a general idea of what blogs are but did not read them much and were not familiar with aggregators and RSS. Seeing how much time and effort some students from my previous group had invested in their blogs, I decided to additionally encourage these students' participation by giving them a possibility to earn part of their written grade this way.
The students liked the idea behind the Writingmatrix project and most of them joined. To be able to follow their work more easily, I asked them to record all their weekly activities in a Google docs [http://docs.google.com] document and share this document with me. These personal reports were in English and clearly showed how much effort they invested in their work. In addition to Blogger, my students explored Google Reader, Technorati, del.icio.us and Flickr [http://flickr.com/]. Those new to the Internet needed some help and guidance, and had some problems getting used to switching between the many different applications. We tried to help each other by setting aside some time during the classes in our computer lab (one to two classes a week). Some students developed their blogs considerably; others just posted a few experimental posts. Some chose not to participate because they didn't like the idea of having their writing publicly displayed online.
Like the previous group, this group too exchanged some comments with other Writingmatrix students and teachers, but by and large the group mostly interacted within itself. We learned how simply and efficiently we could connect and aggregate content using tags and how invaluable tags are for organizing our posts, bookmarks, and photos. The tools we explored were new to the students. The Writingmatrix project gave them an opportunity to practice their English meaningfully while familiarizing themselves with tools useful for life.
I liked the way some topics spread across the Writingmatrix project; for example, the Internet meme started by Maria Lujan, Nelba's student (she had received it from a friend and passed it on to some participants, who passed it on to others). The meme (in this context, a question that shares cultural information and may spread or transmute "virally" around the Internet) is about describing your first Internet experience. I liked it and set it as one of my students' weekly tasks. My students were asked to write about their experience in their blogs (or Google docs, if they chose not to work on their blogs). Some of them dropped a comment letting Lujan know they did their homework (see for example, The Tics World: This is my first Meme-- [http://theticworld.blogspot.com/2007/09/this-is-my-first-meme.html] (September 29, 2007). It was fun to hear about Lujan's surprise when she later checked her blog statistics and found there a greater number of visitors from Slovenia than from her home country, Argentina! (See The Tics World: Writing Matrix project is global !! :) [http://theticworld.blogspot.com/2007/10/writing-matrix-project-is-global.html]).
Later, Nelba invited us to explore the trackback option with her, and we helped each other figure out the settings. (See English Virtual Community: My first meme (pingback and trackback) [http://englishvirtualcommunity.blogspot.com/2007/10/my-first-meme.html] in October 2007). Other memorable moments included my students' using Doris's fun posts in their blogs (e.g., the superhero quiz at Doris 3m EFL Center: I am Superman. What Super Hero are you? [http://doris3meflcenter.blogspot.com/search/label/superhero], October 15, 2007), and Doris and some others highlighting interesting picks from other blogs (for example, Doris 3m EFL Center: This is another personality test!!!! [http://doris3meflcenter.blogspot.com/search/label/persnality%20test], November 4, 2007).
After my first encounter with this project, I have created a short wish list for the Writingmatrix project in subsequent iterations. It would be nice if...
As of this writing, we have been blogging for only one year (three trimesters) here at URBE (Universidad Rafael Belloso Chacín, Maracaibo, Venezuela). In our first trimester, both teachers and students were new to this blogging idea. I had already started a blog to participate in a multiliteracies adventure to which Vance had previously invited me as part of a TESOL Multiliteracies course (Stevens, 2006), so blogging wasn't entirely unfamiliar to me.
I teach all levels and at many schools, so my classes are a mixture of tasks and skills. My students come from different backgrounds and from different schools and have very different interests and ways of learning, so my approach to teaching has to be very flexible and allow enough space for the students to be creative and responsible. The methodology used is task-based and combines different learning approaches. Our English program at URBE is based on Communicative English, so it requires a lot of interaction, like holding conversations and real life simulations. Most teachers in the traditional class just use traditional tools. But my classes are different. Technology is present in the classroom and in the lab, and social networking is strongly supported both in class and online. My students and I use cell phones, cameras, video, and PowerPoint (Microsoft) as everyday tools. Homework is assigned at the end of each class, and most of the time students have to produce something based on a model given in class. That way, working on their products becomes a gradually easier task.
As early as March 2006, I proposed my students start their blogs as journals to record their homework assignments However, not all of them were able do that, nor were they all able to do it the next trimester or even now. Only the ones who were really motivated or disciplined enough did it. Some of my students are just interested in passing the level, not in learning per se. They are goal-directed university students and they want only to be lawyers or engineers for example. It's up to the teachers to help them fall in love with education, but finding support from fellow teachers at school is really difficult since teachers consider technology to be too difficult and time consuming and most of them say they are too busy now to start working with something they are not familiar with. Despite this, my students have produced a large number of blogs.
During the September/ December 2007 trimester I taught 13 classes and two or three intensive courses. All but the level one students (the beginners) tried to get on with blogging. Also we experimented with Windows Movie Maker (2004), audio recording in PowerPoint (Microsoft, 2007) presentations, photo shows with Slide [http://www.slide.com], Google Images [http://images.google.com], tagging, aggregating, Google Reader [http://www.google.com/reader/view/#overview-page/], audio forums at Chinswing [http://www.chinswing.com], virtual pets, chat boxes, commenting, memes, and so on. Most of these things were new, but we had lots of fun. Also we did a free hugs activity and video (see Free Hugs at URBE Edward Garcia [http://www.youtube.com/watch?v=tktKIZ88p-I&e]. But the best was when we participated in Blog Action Day [http://blogactionday.org]. The students wrote about the environment and we watched some environment videos downloaded from YouTube [http://www.youtube.com]. The interesting thing is that when we tagged these artifacts with the label writingmatrix they immediately appeared in Technorati searches (see Figure 2). The hugs videos for example appeared for a long time right near the top of our Technorati searches once they were put online and tagged writingmatrix, as shown in Figure 2 below.
Figure 2. A Technorati search showing the Writingmatrix videos from Doris Moleras' class. " Free Hugs at URBE Edward Garcia" is at the far right and until recently could be linked directly from Technorati.
Something really important about this project is that it has given my students a way to dare to do things, to be more courageous, and to realize that what you can dream can be accomplished. Also, there's the awareness of multiliteracies that they are gaining by combining their EFL training with use of Web 2.0 tools like blogging, tagging, aggregating--and having fun with or getting frustrated by technology, but trying hard to overcome hurdles. Through grappling with such real life tasks, we are growing as humans and becoming better students and teachers as all concerned learn the value of working together through the use of Web 2.0 tools in a multiliteracies framework.
Since starting this project, I have noticed that my students now are a little bit different from my students from a year ago. I remember telling my students that they were going to use technology and learn English at the same time. Their reaction was "teacher is crazy!" The students were initially afraid of computers but as time passed and the Internet has become more popular thanks to MySpace [http://www.myspace.com] and Facebook [http://www.facebook.com], students are now more familiar and feel more comfortable with technology. We are on our way to becoming multiliterate, empowered, Web 2.0 users and creators, connected members of the Internet, and citizens of the world.
I work at the School of Languages [http://www.escueladelenguas.unlp.edu.ar/ingles/home.html] of the National University of La Plata, Argentina. In 2001, I opened a Yahoo! Group [http://groups.yahoo.com], used primarily as an electronic mailing list, called English Virtual Community (EVC) [http://ar.groups.yahoo.com/group/inglesunlp/]. It was established as an opportunity for my students to gain extra practice with English, but little by little it became opened for any person interested in the English language. When Vance launched the Writingmatrix project around April 2007, I sent an email inviting EVC members to participate. Some accepted the proposal and we started blogging.
My student groups are different from the others described here because I normally never meet my students face to face. Participants in these groups live in different parts of Argentina (and one lives in New Jersey), and not only do they live in different parts of the country, but also they are of different ages, have different occupations, and are at markedly different level of Internet knowledge though most of them are not greatly familiar with Internet tools.
My experience of the Writingmatrix project has been very challenging because contact with my students has been completely virtual, and took place either by synchronous chat or asynchronously by email. We would normally have chat sessions every fifteen days. Once the students started meeting one another through the Writingmatrix project, and due to our time zone being similar time to that of Venezuela, I invited Doris Molero and her students to join our chat sessions. It was really very pleasant to interact with them. In the month of July an even more enriching session was carried out when the EVC participants were joined by some students of professor Molero, and also by professor Stevens, who was at that time on summer vacation in Houston.
Constant feedback and focus on fluency in communication have been keys to success. All the participants were interested in learning to interact using chat and email. Our habitual method of work was as follows: I used email during the week to send instructions about the work to be performed in blogs, and in reply, the participants sent any questions they had to me; in addition, once every fifteen days we had meetings using synchronous online chat in which we talked about our accomplishments, visited the blogs of the other participants, and left comments.
From the Writingmatrix project, students also learned:
In general, most participants were very motivated to blog because their writings had a real audience and they were very happy when they received comments in their blogs from other countries. They also learned collaboratively because they were involved in each other's learning and progress.
For the teacher, it was a very satisfying experience from both the professional and personal points of view. Through working with the Writingmatrix project, I became more confident in leading a group online. The project motivated me to research topics pertaining to the social networking aspects of blogging and tagging, and I was able to transmit that knowledge to the participants. As we were all exploring these topics together, we had a very good relationship, a friendly yet professional one. And what is more, I am planning to repeat the experience this year (2008) with another group under similar conditions!
Although I didn't formally join the Writingmatrix group because I was not teaching at that time, I decided to apply its concepts to my own subsequent projects. I learned from the group how powerful tagging and RSS could be in aggregating content. The project has given me invaluable insights into how to tie bits of content together in one place just by tagging appropriately.
I have since chimed in with my contributions to the Writingmatrix group. I've given it a try to see what would happen just by applying the tag writingmatrix to any of my posts that I thought might interest the teachers and students involved in the project. What a surprise when Saa just "stopped by" on one of my blogs and joined the discussion (see the comments May 30, 2007, Brazil and Brazilians: The City of God [http://brazilandbrazilians.blogspot.com/2007/05/city-of-god.html]). It certainly added an interesting perspective to a very rich cultural exchange about the movie. Also, I replied to some of her students and was able to learn more about them and their lives in different parts of the world.
I learned that tagging is connecting, and RSS is the glue. It's about making stronger bonds that really make our world flat, where we become aware of many venues happening at once. Once we become cognizant and then familiar with these tools, it's just a matter of exploring them.
Some lessons learned:
I gained these insights thanks to the Writingmatrix group, who opened my eyes to the tools to broaden our online connectivity and helped me understand the dynamics of social networking through tagging and RSS.
A logical next step for those who decide to pursue their involvement with this project is for students to start forming friendships with one another that might result in writing partnerships. One way for this to happen is for them to browse the output of the Technorati tag search on writingmatrix, using Bloglines or Google Reader to aggregate RSS feeds and display results as they are updated (see Figure 3). As explained earlier this can be automated when students put the link to the RSS feed from the Technorati writingmatrix search results into one of these feed readers, and browse the output there. This technique saves a few mouse clicks and allows students to monitor the results of the desired Technorati tag search conveniently. Better yet, using an aggregator in this way might help some students more easily find others in the project whose blogs resonate with them and then follow postings from those particular peers regularly. Students might then comment more often in each other's blogs. Dialog might follow.
Figure 3. Blogs as well as Technorati feed results can be read directly in an aggregator such as Bloglines as shown here.
Whereas we see that tagging allows us to interact with others in a social network, and sift through and find each other's postings in an otherwise seemingly chaotic docuverse, this is not the only way we know that our writing is being read and interest shown in it. Blogs also allow comments to be made in them. As Carla mentions above, comments from unexpected sources can be very motivating. One can listen to the podcasts on the Worldbridges Network of Teachers Teaching Teachers [http://teachersteachingteachers.org], follow discussions about Youth Voices [http://youthvoices.net/elgg/], and note some of the quite remarkable outcomes from student bloggers whose writing has taken on a sort of cult quality, and who have found audiences neither they, nor their teachers, could have imagined. Paul Allison's (2006) videos on successful teaching practices in blogging are engaging presentations of how this kind of interchange between students happens in practice, the process of writing that goes into Paul's students' blog postings (freewriting, sentence starters, bubble cartoon devices, etc.), and how the students respond to one another and produce better writing by passing it through the crucible of feedback from peers. To paraphrase a student in one of Allison's videos , you can write something and think it is totally correct, but when someone else reads it they can find some aspect that the writer didn't think of. This kind of feedback, coming from another student, is much more meaningful to that student in some respects than a comment his teacher might have made. If Allison's videos are not evidence enough, he mentioned on a Women of Web 2.0 podcast in the summer of 2007 that he's finding that "students are beginning to write for students; imagine how exciting that is!" (confirmed in personal communication).
Stanley (2006) provides an excellent rationale for using blogs in writing and its counterpart, reading. A recent dissertation by Felix (2007) also documents in a systematic framework the rationale and many positive outcomes associated with blogging. As an illustration of such outcomes, Saa told us, though her classes had ended and she and her students were on summer holidays:
Another student of mine opened his blog and joined our project--2 months after our classes officially ended : - ). It's really nice to remain in touch through blogging--everyone working at his own pace without any pressure.
The Writingmatrix project, still ongoing as long as there are students who wish to try it out and respond to one another's postings, worked remarkably well considering that its participants acted as pioneers and didn't know what to expect from it. Next steps in the project include getting the students to tag the URLs of each other's posts in del.icio.us. Saa has already explored this possibility. In her words:
I showed the students how it works, entered my class blog in there, and to my surprise saw that it had already been entered 6 months earlier by Hala Fawzi, a Webhead from Sudan. Similarly I asked my students to add their blogs and one or two Writingmatrix blogs they liked in their Del.icio.us accounts and check who else had bookmarked the same sites and what else these people had bookmarked. Some of them found one another this way.
Once students find each other through appropriate use of Technorati and Bloglines or Google Reader, tagging each other's posts and exploring how others have tagged them through del.icio.us will be a mind-opening experience. It is further reinforcing and accordingly motivating to discover that others are tagging what you produce and place on the Internet, and to follow the links that these others have tagged to see what their interests are. In our view this is the crux of collaboration in such a way that students might be motivated to make discoveries of authentic interest so that a motivation to write can be nurtured.
In a recent recapitulation of what I had learned from this project (Stevens, 2008), Ronaldo Lima from Brazil corroborated our outcomes when he commented there:
----- The recording of this WiAOC [http://wiaoc.org ] presentation may be found at Learning Times [https://sas.elluminate.com/p.jnlp?psid=2007-05-18.0447.M.FF88B318415986DF118835E10F8CB7.vcr]. Further information about the presentation is here: http://webheadsinaction.org/node/174.
Just want to testify here that, around September or October. . . . I was conducting a blogging writing project with two classes here in Brazil. Well, I had my students tag their posts writingmatrix and later they were amazed to see some comments from students from totally different countries and backgrounds.
So, it surely works!
Vance Stevens is a computing lecturer at Petroleum Institute in Abu Dhabi and coordinator of Webheads in Action distributed learning network and online community of practice.
Rita Zeinstejer is a teacher in Argentina, and Self-Access, Laboratory and Multimedia Coordinator at the Asociación Rosarina de Cultura Inglesa, Rosario, Argentina.
Saa Sirk is an EFL teacher at Tehniki olski center Nova Gorica, Nova Gorica, Slovenia and a member of The Slovene Association of LSP Teachers.
Doris Molero is an EFL Professor at Universidad Dr. Rafel Belloso Chacín, Maracaibo, Venezuela.
Nelba Quintana is a teacher of English Language and Literature working as a web and blog content developer, and teacher trainer.
Carla Arena is a teacher from Brasilia now living in Key West who was recently co-ordinator of the Blogging for Educators EVOnline session associated with CALL-IS and TESOL, Inc.
Allison, P. (2006). New media in the classroom: Blogging. Blogging at East Side Community HS. Teachers Network video in New Journalism at ESCHS, Spring 2006. Immigration and Blogging unit. Available at http://www.veoh.com/videos/v522243QnycPP5m?c=paulallison.
Felix, J. (2007). Edublogging: Instruction for the digital age learner. Ph.D. dissertation. Available at: http://bonsall.schoolwires.com/1512109262125477/cwp/view.asp?A=3&Q=277315&C=55071 (Eight-page summary available at http://bonsall.schoolwires.com/1512109262125477/cwp/view.asp?A=3&Q=277322&C=55071),
Microsoft Office PowerPoint 2007. (2007). Redmond, WA: Microsoft, Corp.
Stanley, G. (2006). Redefining the blog: From composition class to flexible learning. In E. Hanson-Smith and S. Rilling (Eds.), Learning languages through technology (pp. 187-200). Alexandria, VA, USA: TESOL.
Stevens, V. (2006). PP 107: Multiliteracies for collaborative learning environments - 2006. Portal for online course conducted as part of TESOL Certificate Program: Principles and Practices of Online Teaching. Available at http://prosites-vstevens.homestead.com/files/efi/papers/tesol/ppot/portal2006.htm.
Stevens, V. (2008). adVanceEducation: All I know about Blogging and Microblogging. Available at http://advanceducation.blogspot.com/2008/02/all-i-know-about-blogging-and.html.
Windows Movie Maker, Ver 2.1. (2004). Redmond, WA: Microsoft.
© Copyright rests with authors. Please cite TESL-EJ appropriately.
Editor's Note: The HTML version contains no page numbers. Please use the PDF version of this article for citations. | <urn:uuid:9d632ba4-626e-4a48-a227-e29ebeab53d6> | CC-MAIN-2022-33 | http://www.tesl-ej.org/ej44/a7.html | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571989.67/warc/CC-MAIN-20220813232744-20220814022744-00498.warc.gz | en | 0.962546 | 7,453 | 2.84375 | 3 |
History of modern nutrition science—implications for current research, dietary guidelines, and food policy
BMJ 2018;361:k2392. doi: https://doi.org/10.1136/bmj.k2392 (Published 13 June 2018)
Although food and nutrition have been studied for centuries, modern nutritional science is surprisingly young. The first vitamin was isolated and chemically defined in 1926, less than 100 years ago, ushering in a half century of discovery focused on single nutrient deficiency diseases. Research on the role of nutrition in complex non-communicable chronic diseases, such as cardiovascular disease, diabetes, obesity, and cancers, is even more recent, accelerating over the past two or three decades and especially after 2000.
Historical summaries of nutrition science have been published, focusing on dietary guidelines, general scientific advances, or particular nutritional therapies.[1][2][3][4] Carl Sagan said, “You have to know the past to understand the present;” and Martin Luther King, Jr, “We are not makers of history. We are made by history.” This article describes key historical events in modern nutrition science that form the basis of our current understanding of diet and health and clarify contemporary priorities, new trends, and controversies in nutrition science and policy.
1910s to 1950s: era of vitamin discovery
The first half of the 20th century witnessed the identification and synthesis of many of the known essential vitamins and minerals and their use to prevent and treat nutritional deficiency related diseases including scurvy, beriberi, pellagra, rickets, xerophthalmia, and nutritional anaemias. Casimir Funk in 1913 came up with the idea of a “vital amine” in food, originating from the observation that the husk of unprocessed rice protected chickens against a beriberi-like condition.[5] This “vital amine” or vitamin was first isolated in 1926 and named thiamine, and subsequently synthesised in 1936 as vitamin B1. In 1932, vitamin C was isolated and definitively documented, for the first time, to protect against scurvy,[6] some 200 years after ship’s surgeon James Lind tested lemons for treating scurvy in sailors.[7]
By the mid-20th century all major vitamins had been isolated and synthesised (fig 1). Their identification in animal and human studies proved the nutritional basis of serious deficiency diseases and initially led to dietary strategies to tackle beriberi (vitamin B1), pellagra (vitamin B3), scurvy (vitamin C), pernicious anaemia (vitamin B12), rickets (vitamin D), and other deficiency conditions. However, the chemical synthesis of vitamins quickly led to food based strategies being supplanted by treatment with individual vitamin supplements. This presaged modern day use and marketing of individual and bundled multivitamins to guard against deficiency, launching an entire vitamin supplement industry.
This new science of single nutrient deficiency diseases also led to fortification of selected staple foods with micronutrients, such as iodine in salt and niacin (vitamin B3) and iron in wheat flour and bread.[8][9][10] These approaches proved to be effective at reducing the prevalence of many common deficiency diseases, including goitre (iodine), xerophthalmia (vitamin A), rickets (vitamin D), and anaemia (iron). Foods around the world have since been fortified with calcium, phosphorus, iron, and specific vitamins (A, B, C, D), depending on the composition of local staple foods.[10][11][12][13]
As one of the great accidents of nutrition history, this new science and focus on single nutrients and their deficiencies coincided with the Great Depression and second world war, a time of widespread fear of food shortages. This led to even further emphasis on preventing deficiency diseases. For example, the first recommended dietary allowances (RDAs) were a direct result of these concerns, when the League of Nations, British Medical Association, and the US government separately commissioned scientists to generate new minimum dietary requirements to be prepared for war.[14] In 1941, these first RDAs were announced at the National Nutrition Conference on Defence, providing new guidelines for total calories and selected nutrients including protein, calcium, phosphorus, iron, and specific vitamins.[15] These historical events established a precedent for nutrition research and policy recommendations to focus on single nutrients linked to specific disease states.
1950s to 1970s: fat versus sugar and the protein gap
During the next 20 to 30 years, calorie malnutrition and specific vitamin deficiencies fell sharply in high income countries because of economic development and large increases in low cost processing of staple foods fortified with minerals and vitamins. At the same time, the rising burdens of diet related non-communicable diseases began to be recognised, leading to new research directions. Attention centred on two areas: dietary fat and sugar.[16][17][18][19]
Early ecological studies and small, short term interventions, most prominently by Ancel Keys, Frederick Stare, and Mark Hegsted, contributed to the widespread belief that fat was a major contributor to heart disease. At the same time, work by John Yudkin and others implicated excess sugar in coronary disease, hypertriglyceridemia, cancer, and dental caries. Ultimately, the emphasis on fat won scientific and policy acceptance, embodied in the 1977 US Senate committee report Dietary Goals for the United States, which recommended low fat, low cholesterol diets for all. This was not without controversy: in 1980, the US National Academy of Sciences Food and Nutrition Board reviewed the data and concluded that insufficient evidence existed to limit total fat, saturated fat, and dietary cholesterol across the population.[20]
Some interpret these controversies as evidence of industry influence, and others as natural disagreement and evolution of early science.[16][17][18][19] More relevant is that both the dietary fat and sugar theories relied on a nutritional model developed to address deficiency diseases: identify and isolate the single relevant nutrient, assess its isolated physiological effect, and quantify its optimal intake level to prevent disease. Unfortunately, as subsequent research would establish, such reductionist models translated poorly to non-communicable diseases.
In less wealthy countries, the main objectives of nutrition policy and recommendations during this period remained focused on increasing calories and selected micronutrients. In many ways, foods became viewed as a delivery vehicle for essential nutrients and calories. Accordingly, agricultural science and technology emphasised production of low cost, shelf stable, and energy dense starchy staples such as wheat, rice, and corn, with corresponding breeding and processing to maximally extract and purify the starch. As in high income nations, these efforts were accompanied by fortification of staple foods[10][11][12][13] as well as food assistance programmes to promote survival and growth of infants and young children in vulnerable populations.
Scientists focused on malnutrition disagreed on the relative role of total calories and protein in infant and child diseases such as marasmus and kwashiorkor—also termed "the protein-calorie deficiency diseases."[21][22] Support for the "protein gap" concept led to extensive industrial development of protein enriched formulas and complementary foods for developing countries. Other scientists supported the primary role of calorie insufficiency and believed that protein enriched formulas and foods should not replace breast milk. As one prominent scientist wrote in 1966, "Millions of dollars and years of effort… into developing these [high protein] foods would have been better spent on efforts to preserve the practice of breast feeding... being abandoned everywhere."[22]
The debate essentially ended when in 1975 leading scientists in the US and London independently concluded from the scientific evidence that a lack of food was the main problem:[22] "The concept of a worldwide protein gap… is no longer tenable… the problem is mainly one of quantity rather than quality of food."[23]
This conclusion influenced subsequent efforts to tackle malnutrition in developing countries. For example, a formal UK advisory committee on international nutrition aid recommended that, "the primary attack on malnutrition should be through the alleviation of poverty… aid should be directed to projects that will generate income among the poor, even where such projects do not have any marked effect on the national income of the country concerned."[22]
However, the earlier decades of uncertainty had fostered a multinational industry that continued to promote formula and baby foods in low income countries based on their protein content and nutrient fortification. In addition, nutrient supplementation strategies remained effective at preventing or treating endemic deficiency diseases. Thus, despite the shift in scientific thinking to focus on economic development, substantial emphasis remained or even accelerated on providing sufficient calories, most often as starchy staples, plus vitamin fortification and supplementation.
1970s to 1990s: diet related chronic diseases and supplementation
Accelerating economic development and modernisation of agricultural, food processing, and food formulation techniques continued to reduce single nutrient deficiency diseases globally. Coronary mortality also began to fall in high income countries, but many other diet related chronic diseases were increasing, including obesity, type 2 diabetes, and several cancers.
In response, nutrition science and policy guidelines in high income nations shifted to try to deal with chronic disease. Building on the 1977 Senate report, the 1980 Dietary Guidelines for Americans was one of the earliest such national guidelines.[24] Many of the available data were derived from less robust types of evidence, such as from crude cross-country (ecological) comparisons and short term experiments using surrogate outcomes, mostly in healthy middle aged men. More importantly, these studies followed the deficiency disease model, largely considering isolated single nutrients. Accordingly, the 1980 dietary guidelines remained heavily nutrient focused: "avoid too much fat, saturated fat, and cholesterol; eat foods with adequate starch and fiber; avoid too much sugar; avoid too much sodium."[24] International guidelines were similarly nutrient focused.[25] This led to a proliferation of industrially crafted food products low in fat, saturated fat, and cholesterol and fortified with micronutrients, as well as expansion of other nutrient focused technologies to reduce saturated fat such as partial hydrogenation of vegetable oils.
At the same time the global community prioritised action to eliminate hunger and micronutrient deficiency in lower income nations. Major micronutrient targets during this period were iron, vitamin A, and iodine. Evidence was increasing that vitamin A supplements could prevent child mortality from infection, such as measles, as well as preventing night blindness and xerophthalmia.[26] Field trials provided a basis for WHO recommendations for widespread micronutrient supplementation, especially during pregnancy, with iron and vitamin A, and for fortification of salt with iodine to prevent goitre and developmental abnormalities such as congenital hypothyroidism and hearing loss.
Based on these priorities, the UN, national governments, and other international groups adopted portfolios for preventing micronutrient deficiencies through supplementation and fortification and integration of the growing relevant evidence. Scientific investigations further focused on other environmental factors that may interact with micronutrients and dietary protein, such as infection and related poor sanitation, leading to concepts such as subclinical enteritis or malabsorption called first "tropical enteritis," then "environmental enteropathy," and currently "environmental enteric dysfunction."[27][28][29]
Thus, in both lower and higher income nations, for partly overlapping reasons, a nutrient specific focus continued to shape both scientific inquiry and policy interventions.
1990s to the present: evidence debates, diet patterns, the double burden
Among the most important scientific developments of recent decades was the design and completion of multiple, complementary, large nutrition studies, including prospective observational cohorts, randomised clinical trials, and, more recently, genetic consortiums. Cohort studies provided, for the first time, individual level, multivariable adjusted findings on a range of nutrients, foods, and diet patterns and a diversity of health outcomes. Clinical trials allowed further testing of specific questions in targeted, often high risk populations, in particular effects of isolated vitamin supplements and, more recently, specific diet patterns. Genetic consortiums provided important evidence on genetic influences on dietary choices, gene-diet interactions affecting disease risk factors and endpoints, and Mendelian randomisation studies of causal effects of nutritional biomarkers.
These advances were not without controversy, in particular the general discordance of findings between cohort studies and those of supplement trials for specific vitamins on cardiovascular and cancer endpoints.[30][31] Some experts interpreted the discordance as evidence for irredeemable shortcomings of observational studies (inherent residual confounding). Others believed it showed the limitations of single nutrient approaches to chronic diseases as well as potentially reflecting the different methodological designs, with trials often focused on short term, supraphysiological doses of vitamin supplements in high risk patients, while observational studies often focused on habitual intake of vitamins from food in general populations.
In contrast to single nutrients, physiological intervention trials, large cohort studies, and randomised clinical trials provided more consistent evidence for diet patterns, such as low fat diets (few significant effects) or Mediterranean and similar food based patterns (consistent benefits).[32][33] This concordance was supported by advances in research methods and better understanding of the complementary strengths of different study designs.[34][35][36][37][38][39]
Together, these advances suggested that single nutrient theories were inadequate to explain many effects of diet on non-communicable diseases. This pushed the field beyond the RDA framework and other nutrient metrics designed to identify thresholds for nutrient deficiency diseases, and towards complex biological effects of foods and diet patterns.[40][41][42][43][44] Such factors were increasingly seen to reflect joint contributions and interactions between carbohydrate quality (eg, glycaemic index, fibre content), fatty acid profiles, protein types, micronutrients, phytochemicals, food structure, preparation and processing methods, and additives.
Prospective cohorts and dietary intervention trials showed that a focus on total fat, a mainstay of dietary guidelines since 1980, produced little measurable health benefit; conversely, nutrient based recommendations for specific foods such as eggs, red meats, and dairy products (eg, based on dietary cholesterol, saturated fat, calcium) belied the observed relations of these foods with health outcomes.[32][33] For weight loss and glycaemic control, decades of emphasis on low fat diets were questioned by the results of a series of prospective cohort studies, metabolic feeding studies, and randomised trials, which showed that foods rich in healthy fats produced benefit, while foods rich in starch and sugar caused harm.[33][45][46][47] This progress was extended to recognition of the relevance of diet patterns such as traditional Mediterranean or vegetarian diets that emphasised minimally processed foods such as fruits, vegetables, nuts, beans, whole grains, and plant oils and low amounts of highly processed foods rich in starch, sugar, salt, and additives.[32][33]
These recent scientific shifts help explain many uncertainties and controversies in nutrition today. After decades of focus on simple, reductionist metrics such as dietary fat, saturated fat, nutrient density, and energy density, the emerging true complexities of different foods and diet patterns create genuine challenges for understanding influences on health and wellbeing. For several categories of foods, meaningful numbers of prospective observational or interventional studies have become available only recently.[33][38] Growing realisation of the importance of overall diet patterns has stimulated not only scientific inquiry but also a deluge of empirical, commercial, and popular dietary patterns of varying origin and scientific backing.[48] These range, for example, from flexitarian, vegetarian, and vegan to low carb, paleo, and gluten-free. Many of these patterns have specific aims (eg, general health, weight loss, anti-inflammation) and are based on differing interpretations of current evidence.
In lower income countries, concerns about vitamin supplementation have emerged, such as harms associated with higher dose vitamin A supplements, risk of exacerbating infections such as malaria with iron, and safety concerns about folic acid fortification of flour, which might exacerbate neurological and cognitive deficits among people with low vitamin B12 levels.[49][50][51][52] In addition, a precipitous rise in non-communicable diseases in these countries has led to new focus on the "double burden"—both conventionally conceived malnutrition (insufficient calories and micronutrients) leading to poor maternal and child health and modern malnutrition (poor diet quality) leading to obesity, type 2 diabetes, cardiovascular diseases, and cancer. These dual global burdens are increasingly found within the same nation, community, household, and even person.[53][54][55]
Yet, after decades of focus in the international nutrition community on vitamin supplements, food fortification, and starchy staples to provide calories, the necessary shift towards diet quality is slowed by considerable inertia. This is seen, for example, in the reductionist, single nutrient focus of many of the UN sustainable development goals. Even when non-communicable diseases are considered, the predominant focus is on obesity rather than the diverse risk pathways and conditions affected by nutrition—facilitating a misleading concept of "overnutrition" rather than unhealthy dietary composition as the root problem.[55]
Future of nutrition science
Building on the evidence for multifaceted effects of different foods, processing methods, and diet patterns,[32][33] new priorities for research are emerging in nutrition science. These include optimal dietary composition to reduce weight gain and obesity; interactions between prebiotics and probiotics, fermented foods, and gut microbiota; effects of specific fatty acids, flavonoids, and other bioactives; personalised nutrition, especially for non-genetic lifestyle, sociocultural, and microbiome factors; and the powerful influences of place and social status on nutritional and disease disparities.[33][56][57][58][59][60]
For lower income nations and populations, rigorous investigation is required to understand the optimal dietary patterns to jointly tackle maternal health, child development, infection risk, and non-communicable diseases.
Our understanding of diet related biological pathways will continue to expand (fig 1),[33][57][61] highlighting the limitations of using single surrogate outcomes to determine the full health effects of any dietary factor. In addition, future conclusions about diets and health should be based on complementary evidence from controlled interventions of multiple surrogate endpoints, mechanistic studies, prospective observational studies, and, when available, clinical trials of disease outcomes.[35][36][37][38][39] This will require moving away from the current simplistic belief that reliable nutritional evidence can be derived only from large scale randomised trials.
Given the large and continuing global rise in agribusiness and manufactured foods, nutrition science must keep pace with and systematically assess the long term health effects of new food technologies. Relatively little rigorous evaluation has been done on potential long term health consequences of modern shifts in agricultural practices, livestock feeding, crop breeding, and food processing methods such as grain milling and processing; plant oil extraction, deodorisation, and interesterification; dairy fat homogenisation; and use of emulsifiers and thickeners.
Additional complexity may arise in nutritional recommendations for general wellbeing versus treatment of specific conditions. For example, dietary recommendations for treating obesity are now particularly controversial. Many scientists continue to support a basic "energy imbalance" concept of obesity, wherein calories from different foods are all considered equal.[62] Conversely, growing evidence suggests that, over longer periods, diet composition may be a more relevant focus than calories because of the varied influences of different foods on overlapping pathways for weight control such as satiety, brain reward, glycaemic responses, the microbiome, and liver function.[56][63][64][65] Over months to years, some foods may impair pathways of weight homeostasis, others may have relatively neutral effects, and others may promote integrity of weight regulation. These long term effects will be especially relevant as anti-obesity efforts shift from secondary prevention (weight loss in people with obesity) towards primary prevention (avoidance of long term weight gain in populations).
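To make the contrast concrete, the "energy imbalance" view can be summarised, purely as an illustrative sketch rather than a formula taken from the cited studies, as a simple energy balance in which the source of the calories never appears:

\[
% Schematic energy balance: change in stored body energy equals
% energy intake minus energy expenditure; food composition has no term.
\Delta E_{\text{stored}} = E_{\text{intake}} - E_{\text{expenditure}}
\]

The composition-focused view described above holds instead that different foods alter the right-hand terms themselves (for example through satiety, energy expenditure, and hormonal responses), so two diets matched for calories need not produce the same long term weight trajectory.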
Recognition of complexity is a key lesson of the past. This is common in scientific progress whether in nutrition, clinical medicine, physics, political science, or economics: initial observations lead to reasonable, simplified theories that achieve certain practical benefits, which are then inevitably advanced by new knowledge and recognition of ever-increasing complexity.[35]
Like nutrition science, policy needs to move from simplistic reductionist strategies to multifaceted approaches. Nutrition policy to reduce non-communicable diseases has so far generally relied on consumer knowledge—simply inform the public through education, dietary guidelines, product nutrition labels, etc, and people will make better choices. However, it is now clear that knowledge alone has relatively limited effects on behaviour, and that broader systems, policy, and environmental strategies are needed for effective change.[66][67]
Compounding these challenges, many current strategies remain focused on reductionist constructs such as total fat or total saturated fat,[41][68] overlooking the importance of food type and quality, processing methods, and diet patterns. Another example of policy lag involves energy balance. Policy makers continue to promote total calorie labelling laws for menus and packaging and other calorie reduction policies, rather than aiming to increase calories from healthy foods and reduce calories from unhealthy foods.
The public is understandably bewildered by these evolving dietary messages. Many food companies compound the confusion by marketing products rich in refined flours, sugar, salt, and industrial additives, exploiting added micronutrients or terms such as "organic," "local," or "natural" to supply a false aura of healthiness. Public uncertainty is amplified by competing nutritional messages from varied media sources, online and social networks, cultural thought leaders, and commercial outlets, whose messages vary depending on underlying goals, expertise, perspectives, and competing interests.[35]
Although reductionist policies may have some value to reduce specific additives—eg, trans fats, sodium, added sugar—whole food based policies will be crucial to fully address diet related illnesses. Most policy innovation has focused on sugar sweetened drinks, following the model of the WHO Framework Convention on Tobacco Control: tax, restrict places of sale, restrict marketing, use warning labels. This construct breaks down for incentivising consumption of healthy foods. Integrated policy, investment, and cultural strategies are needed to create change in food production and manufacturing, worksites, schools, healthcare systems, quality standards and labelling, food assistance programmes, research and innovation, and public-private partnerships.
To be effective, future nutrition policy must unite modern scientific advances on dietary priorities (specific foods, processing methods, additives, diet patterns) with trusted communication to the public and modern evidence on effective systems level change. This includes a shift from the global medicalisation of health towards addressing the interconnected personal, community, sociocultural, national, and global determinants of food environments and choices.[66][67] In both lower and higher income countries, interventions must consider the double burdens of food insecurity and chronic disease, and their links to disparities in education, income, and opportunity. This will require substantially more funding for research, both from government sources and through appropriately fashioned, transparent public-private partnerships.[69][70] Guided by knowledge of the past, creative new approaches are needed for accelerated scientific investigation, coordination, and translation of current and future advances.
● Modern nutrition science is young: it is less than one century since the first vitamin was isolated in 1926
● The first half of the 20th century focused on the discovery, isolation, and synthesis of essential micronutrients and their role in deficiency diseases
● This created a strong precedent for reductionist, nutrient focused approaches to dietary research, guidelines, and policy to address malnutrition
● This reductionist approach was extended to address the rise in diet related non-communicable diseases—eg, focusing on total fat, saturated fat, or sugar rather than overall diet quality
● Recent advances in nutrition science have shown that foods and diet patterns, rather than nutrient focused metrics, explain many effects of diet on non-communicable disease
● Lower income countries are recognising a growing "double burden" (combined undernutrition and non-communicable disease)
● Nutrition policy should prioritise food based dietary targets, public communication of trusted science, and integrated policy, investment, and cultural strategies to create systems level change across multiple organisations and environments
Contributors and sources: All three authors have widely studied, reported on, and served in policy advisory roles on nutrition and health issues. DM had the idea for the article and drafted it with IR. All authors contributed to revising the draft and approved the final version. The authors selected the literature for inclusion in this manuscript based on their own expertise and knowledge, discussions with colleagues, and editorial and reviewer comments.
Competing interests: We have read and understood BMJ policy on declaration of interests and declare the following interests: DM reports personal fees from Acasti Pharma, GOED, DSM, Nutrition Impact, Pollock Communications, Bunge, Indigo Agriculture, and Amarin; scientific advisory board, Omada Health, Elysium Health, and DayTwo; and chapter royalties from UpToDate; all outside the submitted work. This research was partly supported by the NIH, NHLBI (R01 HL130735). The funders had no role in the design or conduct of the study; collection, management, analysis, or interpretation of the data; preparation, review, or approval of the manuscript; or decision to submit the manuscript for publication.
Provenance and peer review: Commissioned; externally peer reviewed.
This article is one of a series commissioned by The BMJ. Open access fees for the series were funded by Swiss Re, which had no input into the commissioning or peer review of the articles.
This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/. | <urn:uuid:57cc470c-cc20-402c-a4aa-cad32241916f> | CC-MAIN-2022-33 | https://www.bmj.com/content/361/bmj.k2392?ijkey=3bc9918f3ea805e368032fd781d0fd7e83a376c8&keytype2=tf_ipsecsha | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572198.93/warc/CC-MAIN-20220815175725-20220815205725-00297.warc.gz | en | 0.93462 | 5,305 | 3.125 | 3 |
experienced an increase in the number of students from other German states -- especially the neighbouring states -- matriculating or transferring there. The SPD (Social Democratic Party of Germany), which is currently the governing party in Rhineland-Palatinate, does not at present plan to introduce tuition fees. Early years Anders began showing interest in music as a child, he did his first stage-performance at age of 7. Anders first studied music at Koblenz Eichendorff-Gymnasium
eventually finished last with 10 points, while the single entered a moderate number 29 on the German singles chart. Around the same time Sandy received an ECHO (Echo (music award)) nomination for "Female Artist National (2005)". # Georgswalde - a town of Bohemia, Austria Now Jiríkov Czech Republic. see Ger wiki # Gerolstein - Town of the Landkreis Daun (district
'''Rhineland-Palatinate''' (German (German language): ''Rheinland-Pfalz'', | <urn:uuid:4e427742-8dad-49b4-93ac-b55f254a70cc> | CC-MAIN-2022-33 | https://placesknownfor.com/place/Rhineland-Palatinate | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573623.4/warc/CC-MAIN-20220819035957-20220819065957-00697.warc.gz | en | 0.906913 | 5,134 | 2.8125 | 3 |
Progressive Party Documents
Progressive Party Documents
Excerpt from the Platform of the Progressive Party Excerpt from Address by Theodore Roosevelt before the Convention of the National Progressive Party in Chicago
August 6–7, 1912
"We progressives stand for the rights of the people."
In the summer of 1912, Theodore Roosevelt (1858–1919) was the most popular politician in America. As a Republican, he had been president for seven-and-a-half-years, from the assassination of William McKinley in September 1901 until March 1909. Near the end of his second term, he decided not to run for another term.
But Roosevelt was not happy with his Republican successor, William Howard Taft (1857–1930). The two disagreed particularly over the issue of conservation of natural resources. Both men were dedicated "trustbusters" who favored government lawsuits to break up large monopolies (companies exercising exclusive control of a particular area of commerce) in industries such as railroads and oil. Roosevelt also took a more aggressive approach to issues of social reform, such as child labor and minimum wages.
In February 1912 Roosevelt declared that he would again be a candidate for the presidency, challenging Taft for the Republican nomination. But President Taft influenced many Republican Party officials, and he defeated Roosevelt for the Republican nomination at the party's convention in June 1912. Frustrated by the Republicans, Roosevelt declared that he was as fit as a bull moose, giving his campaign a symbol (the bull moose) of robust energy. Roosevelt's party was called the Progressive Party, but it was more often called by its nickname: the Bull Moose Party.
The Progressive Party was organized to address many of the social problems that had arisen from the rapid rise of factories where workers tended to machinery that did the work formerly done by hand. These problems included long hours and low pay, bad housing, and lack of education. They also included dishonest dealings in the stock market and bribery of public officials. The Progressive Party sought to pass government regulations to protect workers, regulate financial dealings, and prosecute corrupt public officials, as well as to tax the income of wealthy business owners. The term "progressive movement" represented the idea that government should actively address social problems.
Another approach to the social problems of industrialization was represented on the ballot in 1912 by the Socialist Party, led by Eugene Debs (1855–1926). The Socialists favored government ownership of big corporations, which would then be controlled democratically. The Socialists seemed too drastic for most voters in 1912, and Debs received just under one million votes, or about 6 percent of the total.
Things to remember while reading excerpts from the Progressive Party documents:
- A political party platform is a list of ideas and promises. It is written to appeal to as many voters as possible. Once a political party gets into office, some of the promises made in its platform may prove impossible (or inconvenient) to keep. Party platforms are useful, however, in understanding the principles and ideals that politicians believe will succeed in an election. Nevertheless, the 1912 platform of the Progressive Party is a good catalogue of social problems that grew from the Industrial Revolution in an era when government regulation was at a minimum.
- The argument over the relationship between the federal government and business has continued into the twenty-first century. In the early years of the Industrial Revolution, it was widely believed that the government had no active role to play in business. Consequently, companies were free to follow the policies they thought would benefit them most. As time went on, it became obvious that many of those policies—low wages and hiring children for dangerous work, for example—were resulting in widespread human suffering. The Progressive Party represented people who believed that business owners would never voluntarily correct these abuses, and that the only possible answer was government regulations to require that factories pay a minimum wage and laws preventing the hiring of children.
- Every political platform engages in a certain amount of simplification and exaggeration. For example, the Progressive platform accused the Republican Party of the "deliberate betrayal of its trust" by becoming too close to business interests. In fact, President Taft had been at least as aggressive as Theodore Roosevelt in attacking the trusts, or monopolies, by going to court to enforce laws prohibiting such activities. All political platforms and speeches need to be read with the understanding that politicians are trying to win votes, not necessarily to speak the pure truth.
- Similarly, just because an item is on a party's platform does not mean other parties oppose it. For example, the Progressive Party favored creating a U.S. Department of Labor to attend to the problems and issues of working people. In fact, President Taft signed a law creating the Department of Labor in March 1913, only days before he left office.
Excerpts from the Platform of the Progressive Party, August 7, 1912
The conscience of the people, in a time of grave national problems, has called into being a new party, born of the Nation's awakened sense of justice. We of the Progressive Party here dedicate ourselves to the fulfillment of the duty laid upon us by our fathers to maintain that government of the people, by the people and for the people [mentioned in the Declaration of Independence] whose foundation they laid.
We hold with Thomas Jefferson and Abraham Lincoln that the people are the masters of their Constitution, to fulfill its purposes and to safeguard it from those who, byperversion of its intent, would convert it into an instrument of injustice. In accordance with the needs of each generation the people must use theirsovereign powers to establish and maintain equal opportunity and industrial justice, to secure which this Government was founded and without which no republic can endure.
- Willful destruction.
This country belongs to the people who inhabit it. Its resources, its business, its institutions and its laws should be utilized, maintained or altered in whatever manner will best promote the general interest.
It is time to set thepublic welfare in the first place.
- Public welfare:
The Old Parties
Political parties exist to secure responsible government and to execute the will of the people.
From these great tasks both of theold parties have turned aside. Instead of instruments to promote the general welfare, they have become the tools of corrupt interests which use themimpartially to serve their selfish purposes. Behind theostensible government sits enthroned an invisible government, owing no allegiance and acknowledging no responsibility to the people.
- Old parties:
- Republicans and Democrats.
- Without prejudice.
- Outward appearance.
To destroy this invisible government, to dissolve the unholy alliance between corrupt business and corrupt politics is the first task of the statesmanship of the day.
The deliberate betrayal of its trust by the Republican Party, and the fatal incapacity of the Democratic Party to deal with the new issues of the new time, have compelled the people to forge a new instrument of government through which to give effect to their will in laws and institutions.
Unhampered by tradition, uncorrupted by power, undismayed by the magnitude of the task, the new party offers itself as the instrument of the people to sweep away old abuses, to build a new and noblercommonwealth. …
Nation and State
- A political unit serving the greater good for the most people.
Up to the limit of the Constitution, and later by amendment of the Constitution, if found necessary, we advocate bringing under effective national jurisdiction those problems which have expanded beyond reach of the individual states.
It is asgrotesque as it is intolerable that the several States should by unequal laws in matters of common concern become competing commercial agencies,barter the lives of their children, the health of their women and the safety and well-being of their working people for the profit of their financial interests.
The extreme insistence on States' rights by the Democratic Party in the Baltimore platform demonstrates anew its inability to understand the world into which it has survived or to administer the affairs of a Union of States which have in all essential respects become one people.
Social and Industrial Strength
The supreme duty of the Nation is the conservation of human resources through an enlightened measure of social and industrial justice. We pledge ourselves to work unceasingly in State and Nation for:—
Effective legislation looking to the prevention of industrial accidents, occupational diseases, overwork, involuntary unemployment, and otherinjurious effects incident to modern industry;
The fixing of minimum safety and health standards for the various occupations, and the exercise of the public authority of State and Nation, including the Federal control over inter-State commerce and the taxing power, to maintain such standards;
The prohibition of child labor;
Minimum wage standards for working women, to provide a living scale in all industrial occupations;
The prohibition of night work for women and the establishment of an eight hour day for women and young persons;
One day's rest in seven for all wage-workers;
The abolition of theconvict contract labor system ; substituting a system of prison production for governmental consumption only; and the application of prisoners' earnings to the support of their dependent families;
- Convict contract labor system:
- Program in which prisoners were sent to factories and their wages paid to the state.
Publicity as to wages, hours and conditions and labor; full reports upon industrial accidents and diseases, and the opening to public inspection of alltallies , weights, measures and check systems on labor products;
Standards of compensation for death by industrial accident and injury and trade diseases which will transfer the burden of lost earnings from the families of working people to the industry, and thus to the community;
The protection of home life against the hazards of sickness, irregular employment and old age through the adoption of a system of social insurance adapted to American use;
The development of the creative labor power of America by lifting the last load ofilliteracy from American youth and establishingcontinuation schools for industrial education under public control and encouraging agricultural education and demonstration in rural schools;
- Inability to read.
The establishment of industrial research laboratories to put the methods and discoveries of science at the service of American producers.
We favor the organization of the workers, men and women as a means of protecting their interests and of promoting their progress.
We believe that true popular government, justice and prosperity go hand in hand, and so believing, it is our purpose to secure that large measure of general prosperity which is the fruit of legitimate and honest business,fostered by equal justice and by sound progressive laws.…
We therefore demand a strong National regulation of interState corporations. The corporation is an essential part of modern business. The concentration of modern business, in some degree, is both inevitable and necessary for National and international business efficiency, but the existing concentration of vast wealth under a corporate system, unguarded and uncontrolled by the Nation, has placed in the hands of a few men enormous, secret, irresponsible power over the daily life of the citizen—a powerinsufferable in a free government and certain of abuse.
This power has been abused, inmonopoly of National resources, instock watering , in unfair competition and unfair privileges, and finally insinister influences on the public agencies of State and Nation. We do not fear commercial power, but we insist that it shall be exercised openly, under publicity, supervision and regulation of the most efficient sort, which will preserve its good whileeradicating and preventing its evils.
- Complete control.
- Stock watering:
- Manipulation of the market.
- Wiping out.
To that end we urge the establishment of a strong Federal administrative commission of high standing, which shall maintain permanent active supervision over industrial corporations.…
- Right to vote.
The Progressive Party, believing that no people can justly claim to be a true democracy which denies political rights on account of sex, pledges itself to the task of securing equal suffrage to men and women alike.…
Department of Labor
We pledge our party to establish a Department of Labor with a seat in thecabinet , and with wide jurisdiction over matters affecting the conditions of labor and living.…
- Advisers to the president of the United States.
We favor the union of all the existing agencies of the Federal Government dealing with the public health into a single National health service.…
We pledge ourselves to the enactment of apatent law which will make it impossible for patents to be suppressed or used against the public welfare in the interests of injurious monopolies.
- Patent law:
- Exclusive right to earn money from an invention.
Inter-State Commerce Commission
We pledge our party to secure to the Inter-State Commerce Commission the power to value the physical property of railroads. In order that the power of the commission to protect the people may not be impaired or destroyed, we demand the abolition of the Commerce Court.
We recognize the vital importance of good roads and we pledge our party to foster their extension in every proper way, and we favor the early construction of National highways. We also favor the extension of therural free delivery service .…
- Rural free delivery service:
- Mail to farmers.
We favor theratification of the pending amendment to the Constitution giving the Government power tolevy an income tax.…
Through the establishment of industrial standards we propose to secure to theable-bodied immigrant and to his native fellow workers a larger share of American opportunity.
We denounce the fatal policy of indifference and neglect which has left our enormous immigrant population to become the prey of chance andcupidity.
- Excessive desire.
We favor governmental action to encourage the distribution of immigrants away from the congested cities, to rigidly supervise all private agencies dealing with them and to promote theirassimilation , education and advancement.…
- Inclusion into society.
Government Business Organization
We pledge our party to readjustment of the business methods of the National Government and a proper co-ordination of the Federal bureaus, which will increase the economy and efficiency of the Government service, prevent duplications and secure better results to the taxpayers for every dollar expended.
Government Supervision Over Investment
The people of the United States areswindled out of many millions of dollars every year, through worthless investments. The plain people, the wage-earner and the men and women with small savings, have no way of knowing the merit of concerns sending out highly coloredprospectuses offering stock for sale, prospectuses that make big returns seem certain and fortunes easily within grasp.
We hold it to be the duty of the Government to protect its people from this kind of piracy. We, therefore, demand wise carefully-thought-out legislation that will give us such Governmental supervision over this matter as will furnish to the people of the United States this much-needed protection, and we pledge ourselves thereto.
On these principles and on the recognized desirability of uniting the Progressive forces of the Nation into an organization which shallunequivocally represent the Progressive spirit and policy we appeal for the support of all American citizens without regard to previous politicalaffiliations .
- Without compromise.
Excerpt from Address by Theodore Roosevelt before the Convention of the National Progressive Party in Chicago, August 6, 1912
We Progressives stand for the rights of the people. When these rights can best be secured by insistence upon States's rights, then we are for States's rights; when they can best be secured by insistence upon National rights, then we are for National rights. Interstate commerce can be effectively controlled only by the Nation. The States cannot control it under the Constitution, and to amend the Constitution by giving them control of it would amount to adissolution of the Government. The worst of the big trusts have alwaysendeavored to keep alive the feeling in favor of having the States themselves, and not the Nation, attempt to do this work, because they know that in the long run such effort would be ineffective.There is no surer way to prevent all successful effort to deal with the trusts than to insist that they be dealt with in the States rather than by the Nation, or to create a conflict between the States and the Nation on the subject. The well-meaning ignorant man who advances such a proposition does as much damage as if he were hired by the trusts themselves, for he is playing the game of every big crooked corporation in the country. The only effective way in which to regulate the trusts is through the exercise of thecollective powerof our people as a whole through the Governmental agencies established by the Constitution for this very purpose.
- Breaking apart.
- Tried, attempted.
Grave injustice is done by the Congress when it fails to give the National Government complete power in this matter; and still graver injustice by the Federal courts when they endeavor in any way topare down the right of the people collectively to act in this matter as they deem wise; such conduct does itself tend to cause the creation of a twilight zone in which neither the Nation nor the States have power.…
- Pare down:
Theantitrust law should be kept on the statute books and strengthened so as to make it genuinely and thoroughly effective against every big concern tending to monopoly or guilty of antisocial practices.
- Antitrust law:
- A law that prohibits companies from avoiding competition by acquiring a monopoly on a particular industry.
At the same time, a National industrial commission should be created which should have complete power to regulate and control all the great industrial concerns engaged in inter-State business—which practically means all of them in this country.…
This commission should deal with all the abuses of the trust,—all the abuses such as those developed by the Government suit against the Standard Oil and Tobacco Trusts—as the Inter-State Commerce Commission now deals with rebates. It should have complete power to make thecapitalization absolutely honest and put a stop to all stock watering. Such supervision over the issuance of corporate securities would put a stop to exploitation of the people by dishonest capitalists desiring to declaredividends onwatered securities , and would open this kind of industrial property to ownership of the people at large. It should have free access to the books of each corporation and power to find out exactly how it treats its employees, its rivals, and the general public. It should have power to compel the unsparing publicity of all the acts of any corporation which goes wrong.…
- Watered securities:
- Unequal shares.
What happened next …
Roosevelt and Taft split the Republican vote, with Roosevelt receiving 27.4 percent of the popular vote to Taft's 23.2 percent. But the Democrat, Woodrow Wilson, received 41.8 percent of the popular vote and won the election with 435 electoral votes.
In 1916 Theodore Roosevelt decided not to run again and the Progressive Party dissolved. But the ideas behind his candidacy did not disappear. Twelve years later, Senator Robert La Follette Sr. of Wisconsin ran unsuccessfully for the presidency as an independent who was supported by many progressives. In 1934 his son Robert La Follette Jr. formed a Progressive Party in Wisconsin and achieved some success in the state. In 1948 Henry Wallace organized a new Progressive Party to run for the White House against Harry Truman.
In the meantime, many if not most of the policies advocated by Roosevelt and the Progressive Party in 1912 eventually became law. Women achieved the vote in 1919, minimum wage laws were enacted, and bans on child labor eventually eliminated many of the wretched social conditions created by the Industrial Revolution.
Did you know . . .
Just eight months before the election of 1912, a dramatic strike by textile workers in Lawrence, Massachusetts, had brought the problems of workers to national attention. The strike, which included violent confrontations between strikers and state police and militiamen, involved many young women and girls employed in the many textile mills in Lawrence. Congressional hearings in March 1912 (see Camella Teoli entry) had made public the stories of these girls, among whom were many immigrants. The widespread national publicity, including photos showing militiamen (similar to members of today's National Guard) pointing rifles with bayonets at the strikers, brought widespread sympathy for the strikers.
For more information
Duncan-Clark, S. J. The Progressive Movement: Its Principles and Its Programme (includes Platform of the Progressive Party). Boston: Small, Maynard & Co., 1913.
Gable, John A. The Bull Moose Years: Theodore Roosevelt and the ProgressiveParty. Port Washington, NY: Kennikat Press, 1978.
Kennedy, David M., ed. Progressivism: The Critical Issues. Boston: Little, Brown, 1971. | <urn:uuid:75eb6da3-7281-462b-827f-bb6773b15139> | CC-MAIN-2022-33 | https://www.encyclopedia.com/history/encyclopedias-almanacs-transcripts-and-maps/progressive-party-documents | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571190.0/warc/CC-MAIN-20220810131127-20220810161127-00094.warc.gz | en | 0.953068 | 4,516 | 3.234375 | 3 |
The Coronavirus is officially called SARS-CoV-2, which stands for Severe Acute Respiratory Syndrome Corona Virus 2. This virus causes Covid-19, the disease currently sickening millions and killing hundreds of thousands. Despite the warnings of government officials and health experts, some persons are questioning the severity of this disease. Is it really that bad? Here are some of their concerns:
Concern: “Most of the persons dying are elderly, who would have died of something else, if not from this.”
Response: The total number of persons dying is much higher than this time last year. So that means that most of those dying from the virus would not have died anyway. Some young adults are dying of Covid-19. Some children have usual and sometimes severe symptoms
Concern: “Some of the deaths reported as due to Covid-19 were actually from other causes.”
Response: Yes, there are always some errors in categorization in anything. But overall the deaths from Covid-19 were correctly reported. This is true because it is an unusual disease; it stands out from other causes of death.
Concern: “80% of cases are mild, so for most persons, it is not that bad.”
Response: The 80% figure comes from an early study out of China that categorized patients as mild, severe, or critical; the “mild” category included persons with pneumonia, vomiting or diarrhea, and/or severe muscle pain. More recent studies have used a fourth category: “moderate”. Also, persons who are categorized as mild or moderate could still have lasting bad consequences to their long-term health. The Coronavirus can cause lasting lung damage, in the form of fibrosis; clots in the brain, heart, and deep veins throughout the body; damage to the reproductive system; damage to the central nervous system.
Concern: “The pandemic is being exaggerated in order to disrupt society and give more power to certain persons or organizations.”
Response: This disease really is that bad. See the descriptions of cases and harm caused by the disease. And the disruptions to society, such as violent protests, are perhaps partially caused by fear of the disease (which can cause great suffering and death) and by frustration and anger that not enough is being done to solve this problem. It is not a plot by anyone, but a type of natural disaster that does occur in human history from time to time (e.g. Spanish Flu, Black Plague, and as I write this I’m hoping this pandemic doesn’t measure up to that standard).
How Bad Is It?
The Coronavirus, SARS-CoV-2, is spherical, like a soccer ball, but with hundreds of Spike proteins sticking out of it all around. The “spikes” are actually shaped more like a morel mushroom, or at least a spike with the pointy end down. The Spikes help the virus infect cells. It can only infect cells which have a certain protein on the surface of the cell, embedded in the cell wall. Human cells have many different types of proteins stuck into the cell wall, for various purposes. In this case, the protein is called ACE2.
When a virus Spike docks with an ACE2 receptor, the Spike breaks the receptor and begins merging with the cell. It then drops its RNA into the cell, along with its N-protein (which wraps and protects the viral RNA). Since the virus only infects cells with ACE2 proteins on the surface, which cells have that ACE2? Lungs, heart, kidney, intestines, blood vessels, reproductive organs of men and women, fat tissue, thyroid, esophagus (throat), breast, salivary glands, pancreas [1, 2, 3]. Yes, any and all of those cells can be infected by the Coronavirus. And that is bad. Infection of any organ by SARS-CoV-2 causes harm to that organ.
The Coronavirus versus the Immune System
In summary, the virus’ shell is made of the same stuff as the cell walls of the human cells, so the immune system can’t recognize it as foreign. The viral shell or membrane is coated with hundreds of Spikes, that the immune system should recognize and attack — except that the Spike is coated with sugar chains, called a glycan coat, that flail around and beat back antibodies and such. The Spike stalk is hinged in three places, which allows the Spike “to scan the host cell surface, shielded from antibodies by an extensive glycan coat.”
The virus infects cells by docking with the ACE2 receptor, and it breaks that receptor, throwing the regulation of the major organs out of balance. Then it drops its RNA into the infected cell, along with an “N-protein” which turns off one of the cell’s defense mechanisms (mRNA silencing system). The N-protein and two of the viral proteases then get to work cutting up and deactivating the human proteins that control the immune system. This makes sure the first line of defense, the innate arm of the immune system, does not react much to the infection. And they cut up other immune system proteins, resulting in a later over-reaction, with hyper-inflammation, called the cytokine storm. See How Covid-19 Attacks your Immune System for details and references.
The virus breaks these ACE2 receptors, irreparably, when it infects a cell. This causes a sudden decrease in the amount of ACE2 in your body, but the amount of ACE1 (actually just called “ACE”) remains the same. This imbalance causes havoc with regulation of blood pressure and blood vessels. Ordinarily, you have enough of both types of “ACE” to keep kidneys, lungs, heart, and blood vessels in balance. Wrecking most of your ACE2 throws this ACE system (technically the Renin-Angiotensin System) into a tail spin. One effect is the blood vessels in the lungs expand, and let fluid seep into the lungs, making it increasingly difficult to breath. This is why many Covid-19 patients need to be put on mechanical ventilation. They have to breath very deliberately, deep and fast, just to get barely enough oxygen, and they just can’t keep that up without mechanical assistance and extra oxygen. It’s like drowning from the inside.
After a patient recovers from Covid-19, they can have so much damage to their lungs that a type of lung scar tissue, fibrosis develops, making it permanently hard to breathe. The similar virus SARS-CoV-1, which causes “SARS”, patients had so much fibrosis in their lungs even a year or more later, that some of them were dying of this cause . Those who don’t die have a decreased quality of life. It’s not fun to have constant difficult breathing.
Damage to the heart has been found in many Covid-19 patients, “occurring in 20% to 30% of hospitalized patients and contributing to 40% of deaths” . This damage is caused in part because the Coronavirus infects heart muscle cells by means of their ACE2 on the surface of heart cells, thus destroying many cells of the heart. Usually, a heart attack is caused by a clogged artery, cutting off blood flow and therefore oxygen to the heart muscle. Here, the virus attacks the heart cells directly by infecting them. After forcing the heart cells to make many copies of the virus, the virus then destroys the cells.
A recent study found that liver damage “is more frequently occurring in severe COVID-19 cases compared with patients with mild disease. The underlying mechanism of hepatotoxicity in patients with COVID-19 could be due to systemic inflammation, drug-induced liver injury, or pre-existing chronic liver diseases.” It is not clear what is causing the liver damage. It might be the disease itself, or the medications used to treat the disease.
Massive Blood Clotting
Thrombosis occurs when a blood clot inside the blood vessels gets stuck in an artery or vein, blocking the blood vessel. Covid-19 can cause massive blood clotting. The SARS-CoV-2 virus infects the cells that line the blood vessels; damaging the lining causes clotting. Blood clots can add to the damage of the lungs and heart. They can also cause damage to the brain. Some of the blood clots lodge in the deep veins of the body. About half the time, blood clots mainly affect the lungs; but they can also affect the brain (causing a stroke), heart, or the arms/legs .
A physician [Roger Seheult, MedCram] who specializes in lungs (pulmonologist) described dealing with a blood clot in the brain of one of his patients. They used contrast so they could see the clogged blood vessel in the brain. They used a long catheter device to reach the vein in the brain. Then they watched on the monitor in real time as they cleared the clot — and another clot immediately formed. Clearing that clot led to another one forming right away again. They had never seen that before in any patient. Scary.
An ICU physician [Mike Hansen] had a patient with Covid whose kidneys shutdown. They put the patient on dialysis — but the machine kept clogging from the massive amount of blood clots! [video link here]
This CNN news report discusses a case series of autopsies showing extensive clotting in almost every organ of the body. The autopsies also found blood clots in the capillaries of the lungs.
One theory is that the virus can attach to the outside of Red Blood Cells (RBCs) and also attach to platelets. So when clotting occurs, the virus becomes part of the clot; it helps to bind the RBCs and platelets and fibrin altogether. It can also infect the platelets. This study — Platelet Gene Expression and Function in COVID-19 Patients — states that SARS-CoV-2 infects platelets, even though they don’t have ACE2, and causes the platelets to go into a state of hyperactivity, which is part of the reason for massive clotting in some patients.
Then, too, because the virus is attacking the cells that line the blood vessels, the opposite problem can occur, bleeding instead of clotting. These so-called hemorrhagic events can occur in the brain, liver, deep in the muscles, or in the gastro-intestinal system . In the brain, whether it is a clot or a hemorrhage, a stroke can occur as a result.
Attack on Hemoglobin
Liu, W., & Li, H. (2020). “COVID-19: attacks the 1-beta chain of hemoglobin and captures the porphyrin to inhibit human heme metabolism.” Preprint revised on, 10(04).
“These three domains were highly overlapping so that ORF3a could dissociate the iron of heme to form porphyrin. Heme linked sites of E protein may be relevant to the high infectivity, and the role of heme linked sites of N protein may link to the virus replication.”
“The study results showed orf1ab, ORF3a, and ORF10 proteins could coordinately attack 1-beta chain of hemoglobin.”
The Covid-19 virus evades the immune system, in various ways, so that it takes a while before your system can recognize that a viral infection is occurring and fight back. In the meantime, the virus is multiplying, without restraint. When the immune system finally recognizes that an infection is underway, it suddenly sees a vast amount of virus — and it over-reacts. It pours out a large amount of all its “troops” called cytokines. Cytokines include numerous different types of small proteins which cause inflammation and fight infection. But in excess, they wreck your body, especially your lungs. This is the cytokine storm. It kills many Covid-19 patients.
Covid-19 Attacks the Brain
A large percentage of severe Covid-19 cases develop delirium during the worst days of the disease.
“Delirium is a sudden, fluctuating, and usually reversible disturbance of mental function. It is characterized by an inability to pay attention, disorientation, an inability to think clearly, and fluctuations in the level of alertness (consciousness).” [Merck Manual]
In one study, “Delirium occurred in 73.6% (106/144) and delirium or coma occurred in 76.4% (110/144)” of ICU patients with Covid-19 . The delirium lasted about 5 days, cases with delirium and coma lasted about 7 days.
Yes, delirium and coma can result from Covid-19, and it is not rare. And there are other neurological problems caused by SARS-CoV-2, including a case of acute vision loss , Guillain-Barre Syndrome (sudden onset of progressive weakening of the muscles) , auditory hallucinations, and brain/spine demyelinating lesions (loss of insulating fat on the neurons) .
Covid-19 can cause strokes. This is caused either by thrombosis, clots that clog arteries in the brain, or by hemorrhaging, by bleeding in the brain.
Why does this happen? SARS-CoV-2 can infect neurons in the brain and the rest of the nervous system . “Reports indicate that 30-60% of patients with COVID-19 suffer from CNS symptoms” .
“Between 25 and 40% of the SARS-CoV-2 patients present neurological symptoms, these can go from mild symptoms including olfactory and gustatory disorders, dizziness, headache, confusion, to a more severe cerebrovascular disease, encephalitis, seizures or the Guillain-Barré syndrome (Asadi-Pooya and Simani, 2020; Mao et al., 2020).”
theguardian in the UK: Warning of serious brain disorders in people with mild coronavirus symptoms – “UK neurologists publish details of mildly affected or recovering Covid-19 patients with serious or potentially fatal brain conditions”
Winter Is Coming
Several studies have now concluded that Covid-19 is a seasonal disease, one which will be worse in winter than in summer. See the explanation in this article. If so, then the case rate and fatality rate will climb in October, and again in November, and again in December. And then things will remain bad until late spring, 2021.
Vaccines to the Rescue
Or maybe not. As the article on the immune system explains, this virus is good at evading and even outright attacking our immune system. This suggests that the first vaccine might have a low effectiveness. They might protect 25 to 33% of the population from infection, and then, for those who are vaccinated but also become sick with Covid-19, it may lessen the severity of the disease. But after spending almost every day for 4 months reading the research on this disease, I find it hard to believe that the first vaccine will work better than that. This virus is just better than almost any other in fighting the immune system.
Suffering and Death
Covid-19 is a disease that can be mild in some persons. It can also be very severe, causing much pain, making the person feel as if they cannot breath, can causing fear and anxiety. The disease can cause lasting damage to the human body. And then there is the death rate, which is currently at about 5%, comparing deaths to reported cases. Recently, the world surpassed 10 million reported cases and half a million deaths at about the same time (late June). That is 10 million persons who suffered a great deal, and half a million who died.
How bad is it? Millions have suffered, and millions more will suffer. Over half a million have died, and it is likely millions will die before it is all over. If it is all over. It’s possible that the virus may evolve, so that we need a new vaccine every year, as with the flu. It’s possible that an effective vaccine will not be found.
Please do not underestimate the threat posed by Covid-19.
Ronald L. Conte Jr.
Note: the author of this article is not a doctor, nurse, or healthcare provider.
1. Bastolla, Ugo. “The differential expression of the ACE2 receptor across ages and gender explains the differential lethality of SARS-Cov-2 and suggests possible therapy.” arXiv preprint arXiv:2004.07224 (2020).
2. Barros, Romulo O., et al. “Interaction of drugs candidates with various SARS-CoV-2 receptors: an in silico study to combat COVID-19.” (2020).
3. Basu, Anamika, Anasua Sarkar, and Ujjwal Maulik. “Computational approach for the design of potential spike protein binding natural compounds in SARS-CoV2.” (2020).
4. Hui, David S., et al. “The 1-year impact of severe acute respiratory syndrome on pulmonary function, exercise capacity, and quality of life in a cohort of survivors.” Chest 128.4 (2005): 2247-2261.
5. Akhmerov, Akbarshakh, and Eduardo Marbán. “COVID-19 and the heart.” Circulation research 126.10 (2020): 1443-1455.
6. Fraissé, Megan, et al. “Thrombotic and hemorrhagic events in critically ill COVID-19 patients: a French monocenter retrospective study.” Critical Care 24.1 (2020): 1-4.
7. Merkler, Alexander E., et al. “Risk of Ischemic Stroke in Patients with Covid-19 versus Patients with Influenza.” medRxiv (2020).
8. Turoňová, Beata, et al. “In situ structural analysis of SARS-CoV-2 spike reveals flexibility mediated by three hinges.” bioRxiv (2020).
9. Song, Eric, et al. “Neuroinvasive potential of SARS-CoV-2 revealed in a human brain organoid model.” bioRxiv (2020).
10. Selvaraj, Vijairam, et al. “ACUTE VISION LOSS IN A PATIENT WITH COVID-19.” medRxiv (2020).
11. Toscano, Gianpaolo, et al. “Guillain–Barré syndrome associated with SARS-CoV-2.” New England Journal of Medicine (2020).
12. Zanin, Luca, et al. “SARS-CoV-2 can induce brain and spine demyelinating lesions.” Acta Neurochirurgica (2020): 1-4.
13. Song, Eric, et al. “Neuroinvasive potential of SARS-CoV-2 revealed in a human brain organoid model.” bioRxiv (2020).
14. Khan, Sikandar H., et al. “Delirium Incidence, Duration and Severity in Critically Ill Patients with COVID-19.” medRxiv (2020).
15. Murta, Veronica, Alejandro Villarreal, and Alberto Javier Ramos. “SARS-CoV-2 Impact on the Central Nervous System: Are Astrocytes and Microglia Main Players or Merely Bystanders?.” (2020).
16. Scheim, David. “Antimalarials for COVID-19 Treatment: Rapid Reversal of Oxygen Status Decline with the Nobel Prize-Honored Macrocyclic Lactone Ivermectin.” Available at SSRN 3617911 (2020).
17. Farshidpour, Maham, et al. “A brief review of liver injury in patients with Corona Virus Disease-19 during the pandemic.” Indian Journal of Gastroenterology (2020): 1-4.
18. Rapkiewicz, Amy V., et al. “Megakaryocytes and platelet-fibrin thrombi characterize multi-organ thrombosis at autopsy in COVID-19: A case series.” EClinicalMedicine (2020): 100434. | <urn:uuid:e8c42c8e-dd98-496f-95b4-f71af01a479b> | CC-MAIN-2022-33 | https://covid.us.org/2020/06/29/yes-the-coronavirus-really-is-that-bad/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572212.96/warc/CC-MAIN-20220815205848-20220815235848-00497.warc.gz | en | 0.934593 | 4,484 | 3.171875 | 3 |
This article describes the role that media coverage plays in creating awareness of the psychosocial support available to people in Sweden who are affected by crises, accidents, and trauma. The connection between media coverage, psychosocial support, and traffic accidents has not been made clear in the literature or in previous research. Trauma and fatal injuries from road accidents in Sweden have decreased in recent decades. Developments in China and the European countries show that many people are still killed on the roads, and the resulting trauma is significant for the relatives and families who are affected. It is therefore important that those affected can get the support they need and request. The importance of insurance and insurance companies in China has been described by Dellien (Dellien, 2011). In Sweden, the municipal authorities operate POSOM-groups that provide psychological and social care in the event of a major community crisis. Knowledge and awareness of social support for trauma victims is also very important for insurance companies in Sweden, since preventing long-term social and economic consequences reduces costs for individuals and for society. A Swedish study shows that a system called "pay-as-you-speed" (PAYS) could save lives and create a safer road transport system (Stigson et al., 2014). This article describes how team leaders in the POSOM-groups experience accessibility, communication, and interactions with the media. The article focuses on how to increase awareness of emergency crisis support for one of today's major public health problems - traffic accidents in the community.
In China, there is a growing demand for knowledge and skills in providing psychosocial support in the context of crisis and trauma (Cuiling, 2010). Traffic injuries in China have been highlighted in several studies in recent years (Wu and Cheung, 2006; Hu et al., 2008; Wu et al., 2008; Zhao, 2009; Alcorn, 2011; Yuan et al., 2012). Both China and Sweden have a great deal of experience and knowledge about treatment for victims of major disasters and accidents (Kulling, 1994; Hagström, 1995; Kulling and Riddez, 2001; Broberg et al., 2005; Lundin and Jansson, 2007; Bergh Johannesson et al., 2009; Arnberg et al., 2012). In China, several studies have provided insights on major disasters such as earthquakes and their consequences for the inhabitants (Cuiling, 2010; Fan et al., 2011; Ma et al., 2011; Ya-Hong et al., 2012).
The consequences of traffic accidents can create significant social and economic burdens in China and Europe (Berg et al., 2005; Tierens et al., 2012). In light of the long-term social, psychological, and economic consequences of crises, trauma, and accidents in the community, there are several good reasons to try to reduce the damage and costs for individuals, insurance companies, and society.
China is one of the countries most affected by fatal and serious injuries as a result of road accidents. In Europe, more than 1.3 million traffic accidents occur annually, resulting in approximately 43,000 deaths and 1.7 million injuries (European Commission, 2013). In China, 619,351 people were killed in traffic accidents during the period 2000–2005, while 2,972,229 people were injured. In the same period, 3,183 people were killed in traffic accidents in Sweden, while 150,000 people were injured. During the period 2000–2010, a total of 1,434,194 people were killed in road traffic accidents in China and Europe. The death rate has declined in recent years but still remains at a high level. In Sweden, the death rate is now at the same level as it was in 1940 (Central Statistical Office, 2011). Table 1 presents a total of 17 randomly selected accidents and disasters in the two countries. In these accidents and disasters, a total of 3,761 people were killed and 996 injured.
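Because the figures above cover different time spans, a rough conversion to annual averages makes them easier to compare. The short Python sketch below is only an illustration based on the totals cited in this section; treating the periods as 6 and 11 inclusive calendar years is an assumption made here, not something stated in the article.

```python
# Minimal sketch: converts the period totals cited above into approximate
# annual averages so that figures covering different time spans can be
# compared side by side. Period lengths are assumed to be inclusive
# calendar years (2000-2005 = 6 years, 2000-2010 = 11 years).

period_totals = {
    # label: (deaths over the period, number of calendar years covered)
    "Sweden, road deaths 2000-2005": (3_183, 6),
    "China and Europe combined, road deaths 2000-2010": (1_434_194, 11),
}

for label, (deaths, years) in period_totals.items():
    print(f"{label}: about {deaths / years:,.0f} deaths per year")

# Europe alone is cited above as roughly 43,000 road deaths per year, so the
# Swedish annual average (about 530) corresponds to roughly one percent of that.
```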
In Sweden, nearly all of the 290 municipalities have a POSOM-group. The members include social workers, nurses, priests, deacons, police officers, rescue workers, and teachers.
Aim and Method
The aim of the study was to analyze the team leaders' experiences of meetings and interaction with the media before and after a crisis event. The study sought to discover what phenomena could be identified and the extent to which the leaders of the POSOM-groups participate in media coverage and communicate the opportunities for receiving crisis support. The four specific research questions were: 1. To what extent are crisis leaders available in relation to the media? 2. To what extent do the social workers experience good media reporting? 3. How can the POSOM-group become better known to victims and the public? 4. What risks can be identified when a POSOM-group becomes more accessible to the media? A questionnaire was sent out to the team leaders of the POSOM-groups. A total of 223 of the 290 POSOM-group leaders responded to the survey. The questionnaire consisted of 14 questions, and the informants also had the opportunity to write about their experiences of crisis support work in the municipalities. The study followed the research ethical principles of the Swedish Research Council.
Previous research in the field of emergency crisis support and psychosocial care in crises, emergencies, and disasters in the community has for many years focused on real events (Arnberg et al., 2011; Arnberg et al., 2012). Studies of disasters and major accidents have been of great importance for the development of crisis support; studies were done, for example, of a hotel fire in Borås on 10 June 1978 (Lorin, 1979; Lundin and Jansson, 2007). From a Scandinavian perspective, acute crisis support improved greatly because of lessons learned in connection with several disasters (Kulling, 1994). Studies have also been done on providing crisis aid in the context of natural disasters (Örtenwall et al., 2000; Kulling and Riddez, 2001; Nieminen Kristofferson, 2001; Nieminen Kristofferson, 2002; Broberg et al., 2005; Bergh Johannesson et al., 2009; Hanbert et al., 2011; Brake, 2012).
The team leaders of the POSOM-groups expressed satisfaction with their media education. However, 50 percent of the respondents did not have any practical experience of meeting and interacting with the media. Since media coverage could prevent negative effects and reduce stress for victims and POSOM-group members, the relationship between POSOM-groups and the media is important. Of the respondents, 53.4 percent had worked more than six years in the POSOM-group (N=119). The results showed that 40 percent of the POSOM-groups had between 2 and 5 assignments annually (N=90). About half of the respondents reported that they were satisfied with the media coverage of the work of the POSOM-groups (N=103).
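The reported shares can be checked against the stated N values if one assumes that each percentage is calculated on the 223 team leaders who answered the questionnaire; the sketch below only illustrates that assumption and also gives the overall response rate, it is not a reanalysis of the survey data.

```python
# Reproduces the reported survey shares under the assumption that each N value
# is counted against the 223 team leaders who answered the questionnaire.

invited, respondents = 290, 223
print(f"Response rate: {respondents}/{invited} = {100 * respondents / invited:.1f} %")

reported = {
    "more than six years in the POSOM-group": 119,  # reported as 53.4 %
    "2-5 assignments annually": 90,                 # reported as 40 %
    "satisfied with the media coverage": 103,       # reported as about half
}

for item, n in reported.items():
    print(f"{item}: {n}/{respondents} = {100 * n / respondents:.1f} %")

# 119/223 = 53.4 %, 90/223 = 40.4 %, 103/223 = 46.2 % - i.e. the group described
# as "about half" is closer to 46 percent of those who answered.
```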
The team leaders of the POSOM-groups said several factors were relevant to whether the relationship between a POSOM-group and the media is good. The respondents' stories have been categorized into five main themes. The first theme can be described as the importance of harmonious relations between the media and the POSOM-group. It is important that both the crisis support workers and the reporters have a good understanding of each other's professional roles, because the POSOM-group includes members with different professional backgrounds such as social work, health care, education, emergency services, policing, and churches. A second theme concerned the importance of balance and objectivity in the news coverage of a crisis event, of the psychosocial care, and of the POSOM-group and its function and role. One team leader said: "Balanced and objective media coverage increases people's knowledge of what has happened, what is being done, and the understanding of the correctness of the efforts made. This reduces both fantasies and anxieties". The third prominent theme relates to the degree of positive and neutral media coverage of the crisis event. Several informants had explained to reporters, in a constructive way, how they could avoid conveying rumors and prevent rumors about a crisis event from spreading. A fourth theme relates to the team leaders' experiences of media coverage of the provision of emergency psychosocial support. One informant said:
When a young man who was a member of the military lost his life, the POSOM-group was in the church, which was kept open in the evenings. On the second day, the magazine had a story about crises, how people react, and so on. The coverage described the importance of the POSOM-group and what POSOM did in this case.
When reporting an emergency incident or accident, it is important that the team leader knows what information may be disclosed. The fifth and final theme relates to the content of a good dialogue between the POSOM-group and the media.
The respondents had both positive and negative experiences of media reporting about psychosocial support. One informant said the following about their positive experiences: "My thoughts are that the media relations and media coverage give an accurate, informative, and factual account of the importance and content of POSOM-group services at various events. Little more is written than that we are active. In this way we also gain credibility with the public".
The POSOM-groups can be activated by several different kinds of trauma, crisis incidents, accidents, and disasters in the community. One type of crisis event that is relatively common is the traffic accident. Although several thousand people died in traffic accidents in Sweden during the last ten years, POSOM-groups were activated in only a minority of these cases. Strangely, the POSOM-groups in some municipalities provide people with emergency crisis support for trauma in road accidents, while other municipalities do not provide such emergency support through their POSOM-groups. This creates disparities between municipalities. Municipalities should be able to provide more acute crisis support in connection with this type of crisis. This study has shown that the team leaders of the POSOM-groups have limited practical experience of meeting the media during a crisis event. The study also shows that many team leaders believe that more education and practical media training is needed to bring important messages to the public. For future training of team leaders and members of a crisis team, it may be worthwhile to start by discussing different topics on the theme of risk. One simple proposed means of increasing risk awareness and mental preparedness for the duties of a team leader and a member of a POSOM-group is to provide a reminder consisting of the following eight questions: (a) What could happen? (b) What is the worst that could happen? (c) How likely is it? (d) What would you do then? (e) Can you prevent it from happening? (f) Can it be made less serious? (g) Can it be made less likely? (h) Can you handle the situation if it does occur?
Alcorn, T., 2011. Uncertainty clouds China’s road-traffic fatality data. World Report. The Lancet, 378 (9788), 305-306.
Arnberg, F., K., Rydelius, P-A., Lundin, T., 2011. A longitudinal follow-up of posttraumatic stress: from 9 months to 20 years after a major road traffic accident. Child & Adolescent Psychiatry & Mental Health, 5 (8), 1-8.
Arnberg, F., K., Eriksson, N-G., Hultman, C., M., Lundin, T., 2011. Traumatic Bereavement, Acute Dissociation and Posttraumatic Stress: 14 Years After the MS Estonia Disaster. Journal of Traumatic Stress, 24 (2), 183-190.
Arnberg, F., K., Hultman, C., M., Michel, P-O., Lundin, T., 2012. Social Support Moderates Posttraumatic Stress and General Distress After Disaster. Journal of Traumatic Stress 25 (6) 721-727.
Berg, J., Tagliaferri, F., Servadei, F., 2005. Cost of trauma in Europe. European Journal of Neurology 12 (Suppl. 1), 85-90.
Bergh Johannesson, K., Lundin, T., Hultman, C., M., Lindam, A., Dyster-Aas, J., Arnberg, F., Michel, P-O., 2009. The Effect of Traumatic Bereavement on Tsunami-Exposed Survivors. Journal of Traumatic Stress, 22 (6), 497-504.
Broberg, A., G., Dyregrov, A., Lilled, L., 2005. The Gothenburg discothèque fire: posttraumatic stress and school adjustments reported by primary victims 18 months later. Journal of Child Psychol Psychiatry, 46 (12), 1279-1286.
Broms, C., 2012. The tsunami – 26 December 2004 – experiences from one place of recovery, Stockholm, Sweden. Primary Health Care Research & Development.
Conrah, U., G., 2005. Förbättra krisstödet inom socialtjänsten. En genomgång av socialtjänstens roll och möjligheter. Rapport nr 2005:2. FoU Västernorrland, Kommunförbundet Västernorrland.
Cuiling, G., 2010. Helping the Helpers: A Community-based Psychosocial Support Model in China. November 2, 2010. Web: http://www.unfpa.org/public/home/news/pid/6837
Dellien, A., 2011. Försäkringsbranschen i Kina. Nordisk Försäkringstidskrift, No 1.
Diaz, J., O., P., Murthy, R., S., Lakshminarayana, R., (2006). Advances in disaster mental health and psychological support. New Delhi: VHAI – Voluntary Health Association of India Press.
European Commission., 2013. Health EU. Your gateway to trustworthy information on public health. Website: (http://ec.europa.eu/health-eu/my_environment/road_safety/index_sv.htm).
Fan, F., Zhang, Y., Yang, Y., Mo, L., Liu, X., 2011. Symptoms of Posttraumatic Stress Disorder, Depression, and Anxiety Among Adolescents Following the 2008 Wenchuan Earthquake in China. Journal of Traumatic Stress, 24 (1), 44-53.
Flannery, R.,B., 1990. Social Support and Psychological Trauma: A Methodological Review. Journal of Traumatic Stress, 3 (4), 593-611.
Gauthamadas, U., 2005. Disaster Psychosocial Response. Handbook for Community Counselor Trainers. Chennai, India: Academy for Disaster Management Education Planning & Training.
Hagström, R., 1995. The Acute Psychological Impact on Survivors Following a Train Accident. Journal of Traumatic Stress, 8 (3), 391-402.
Hanbert, A., Lundberg, L-Å., Rönnmark, L., 2011. Som att lägga ett pussel. Uppföljning tio efter Backabranden.
Hobfoll, S,E., 1998. Stress, Culture and Community: The Psychology and Philosophy of Stress. New York: Plenum.
Hu,G., Wen,M., Baker,TD., Baker,SP. 2008. Road-traffic deaths in China, 1985-2005: threat and opportunity. Injury Prevention, 14 (3), 149-153.
International Federation of Red Cross and Red Crescent Societies. 2003. Community-based psychological support. A training manual. 1st edition – January 2003. Geneva: International Federation of Red Cross and Red Crescent Societies.
International Transport Forum. 2012. Sharing Road Safety. Developing and International Framework for Crash Modification Functions. Research Report, November. OECD and ITF (International Transport Forum).
Lundin, T., Jansson, L., 2007. Traumatic impact of a fire disaster on survivors – a 25-year follow-up of the 1978 hotel fire in Borås, Sweden. Nordic Journal Psychiatry 61 (6), 479-485.
Lundälv, J., 2012. Akut krisstöd och kommunikation. Gävle: Meyers förlag.
Kulling, P., 1994. The Tram Accident in Gothenburg – March 12, 1992. Stockholm: National Board of Health and Welfare, SoS-rapport 1994:2.
Kulling, P., Riddez, L., 2001. Brandkatastrofen i Göteborg natten 29-30 oktober 1998. KAMEDO-report No. 75. Stockholm: National Board of Health and Welfare.
Ma, X., Liu, X., Hu, X., Qiu, C., Wang, Y., Huang, Y., Wang, Q., Zhang, W., Li, T., 2011. Risk indicators for post-traumatic stress disorder in adolescents exposed to the 5.12 Wenchuan earthquake in China. Psychiatry Research 189 (3), 385-391.
Ministry of Transport, Annual Statistical Reports, 2006-2009, China and Traffic Analysis, Stockholm).
Ministry of Public Security Traffic Management Bureau. 2011. Statistical Yearbook on Road Traffic Accidents in P.R.C 2010. Ministry of Public Security Traffic Management Bureau.
Nieminen Kristoffersson,T., 2001. ”På natten ringdes jag in”. Att lära sig av det oförutsebara i krisgruppernas arbete efter branden på Backaplan oktober 1998. Göteborg: FoU i Väst.
Nieminen Kristoffersson, T., 2002. Krisgrupper och spontant krisstöd. Om insatser efter branden i Göteborg 1998. Akademisk avhandling. Lund: Lunds universitet.
Sebag-Montefiore, C., 2012. Minst sex dör i Kinas gruvor – varje dag. – ”Arbetarna är utsatta för övergrepp.” ETC. 11 maj.
Stigson, H, Hagberg, J, Kullgren, A, Krafft, M. (2014). A one year pay-as-you-speed trial with economic incentives for not speeding. Traffic Inj Prev 15(6):612-618.
Tierens, M., Bal, S., Crombez, G., Loeys, T., Antrop, I., Deboutte, D., 2012. Differences in Posttraumatic Stress Reactions Between Witnesses and Direct Victims of Motor Vehicle Accidents. Journal of Traumatic Stress, 25 (3), 280-287.
Wu, X, Hu, J, Zhuo, L, Fu, C, Hui, G, Wang, Y, Yang, W, Teng, L, Lu, S., Xu, G., 2008. Epidemiology of traumatic brain injury in eastern China, 2004: a prospetice large case study. Journal of Trauma 64 (5), 1313-1319.
Ya-Hong-Li,. Zhi-Peng, Xu.. 2012. Psychological crisis intervention for the family members of patients in a vegetative state. Clinics 67 (4), 341-345.
Yuan, Q., Liu, H., Wu, X., Sun, Y., Yao, H., Zhou, L., Hu, J., 2012. Characteristics of acute treatment costs of traumatic brain injury in Eastern China – a multi-centre prospective observational study. Injury. International journal of the care of the injured. 43 (12), 2094-2099.
Zhao, S., 2009. Road traffic accidents in China. CHN, Paper, September 3, 2009.
Wessells, M.G., 1999. Culture, power and community: intercultural approaches to psychosocial assistance and healing. In K Nader, N Dubrow & B Stamm (eds) Honouring Differences: Cultural Issues in the treatment of trauma and loss. New York: Taylor and Francis.
World Bank., 2008. China Road Traffic Safety. The Achievements, the Challenges, and the Way Ahead. Working Paper. China and Mongolia Sustainable Development Unit (EASCS) East Asia and Pacific Region.
World Health Organization. ,2004. World Report on Road Traffic Injury Prevention. Geneva: World Health Organization.
Wu, K,K., Cheung, M.W.L., 2006. Posttraumatic Stress After a Motor Vehicle Accident: A Six-Month Follow-Up Study Utilizing Latent Growth Modeling. Journal of Traumatic Stress, 19 (6), 923-936.
Örtenwall, P., Sager-Lund, C., Nyström, J., Martinell, S., 2000. Katastrofmedicinska lärdomar kan dras av Göteborgsbranden. Läkartid | <urn:uuid:b6bf1590-4181-40d4-89b1-143b0ea909c5> | CC-MAIN-2022-33 | https://nft.nu/da/community-based-psychological-disaster-management-groups-and-psychosocial-support-trauma-victims | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572221.38/warc/CC-MAIN-20220816060335-20220816090335-00095.warc.gz | en | 0.8509 | 4,423 | 2.5625 | 3 |
Do you need to learn how much is 60.33 kg equal to lbs and how to convert 60.33 kg to lbs? Here it is. You will find in this article everything you need to make kilogram to pound conversion - both theoretical and practical. It is also needed/We also want to point out that all this article is devoted to only one amount of kilograms - this is one kilogram. So if you need to know more about 60.33 kg to pound conversion - keep reading.
Before we move on to the more practical part - this is 60.33 kg how much lbs conversion - we want to tell you few theoretical information about these two units - kilograms and pounds. So let’s move on.
We are going to begin with the kilogram. The kilogram is a unit of mass. It is a basic unit in a metric system, known also as International System of Units (in short form SI).
From time to time the kilogram is written as kilogramme. The symbol of this unit is kg.
The kilogram was defined first time in 1795. The kilogram was defined as the mass of one liter of water. This definition was simply but difficult to use.
Later, in 1889 the kilogram was described using the International Prototype of the Kilogram (in short form IPK). The International Prototype of the Kilogram was prepared of 90% platinum and 10 % iridium. The IPK was used until 2019, when it was switched by a new definition.
Today the definition of the kilogram is based on physical constants, especially Planck constant. The official definition is: “The kilogram, symbol kg, is the SI unit of mass. It is defined by taking the fixed numerical value of the Planck constant h to be 6.62607015×10−34 when expressed in the unit J⋅s, which is equal to kg⋅m2⋅s−1, where the metre and the second are defined in terms of c and ΔνCs.”
One kilogram is 0.001 tonne. It could be also divided to 100 decagrams and 1000 grams.
You learned a little bit about kilogram, so now let's go to the pound. The pound is also a unit of mass. It is needed to emphasize that there are not only one kind of pound. What are we talking about? For instance, there are also pound-force. In this article we want to concentrate only on pound-mass.
The pound is in use in the Imperial and United States customary systems of measurements. Naturally, this unit is used also in other systems. The symbol of the pound is lb or “.
There is no descriptive definition of the international avoirdupois pound. It is just equal 0.45359237 kilograms. One avoirdupois pound is divided into 16 avoirdupois ounces or 7000 grains.
The avoirdupois pound was implemented in the Weights and Measures Act 1963. The definition of the pound was given in first section of this act: “The yard or the metre shall be the unit of measurement of length and the pound or the kilogram shall be the unit of measurement of mass by reference to which any measurement involving a measurement of length or mass shall be made in the United Kingdom; and- (a) the yard shall be 0.9144 metre exactly; (b) the pound shall be 0.45359237 kilogram exactly.”
The most theoretical section is already behind us. In next part we want to tell you how much is 60.33 kg to lbs. Now you learned that 60.33 kg = x lbs. So it is high time to know the answer. Have a look:
60.33 kilogram = 133.0048826646 pounds.
That is an exact outcome of how much 60.33 kg to pound. You can also round it off. After rounding off your result will be as following: 60.33 kg = 132.726 lbs.
You learned 60.33 kg is how many lbs, so let’s see how many kg 60.33 lbs: 60.33 pound = 0.45359237 kilograms.
Of course, this time it is possible to also round off the result. After rounding off your result is exactly: 60.33 lb = 0.45 kgs.
We also want to show you 60.33 kg to how many pounds and 60.33 pound how many kg outcomes in tables. Let’s see:
We want to begin with a table for how much is 60.33 kg equal to pound.
|Kilograms (kg)||Pounds (lb)||Pounds (lbs) (rounded off to two decimal places)|
|Pounds||Kilograms||Kilograms (rounded off to two decimal places|
Now you learned how many 60.33 kg to lbs and how many kilograms 60.33 pound, so we can go to the 60.33 kg to lbs formula.
To convert 60.33 kg to us lbs you need a formula. We will show you a formula in two different versions. Let’s start with the first one:
Number of kilograms * 2.20462262 = the 133.0048826646 outcome in pounds
The first version of a formula give you the most accurate result. Sometimes even the smallest difference can be considerable. So if you want to get a correct outcome - first formula will be the best for you/option to know how many pounds are equivalent to 60.33 kilogram.
So go to the another version of a formula, which also enables conversions to learn how much 60.33 kilogram in pounds.
The shorter version of a formula is as following, have a look:
Number of kilograms * 2.2 = the result in pounds
As you see, this formula is simpler. It could be better option if you need to make a conversion of 60.33 kilogram to pounds in fast way, for instance, during shopping. Just remember that your result will be not so accurate.
Now we want to show you these two versions of a formula in practice. But before we will make a conversion of 60.33 kg to lbs we are going to show you another way to know 60.33 kg to how many lbs totally effortless.
An easier way to know what is 60.33 kilogram equal to in pounds is to use 60.33 kg lbs calculator. What is a kg to lb converter?
Calculator is an application. Converter is based on longer version of a formula which we gave you above. Due to 60.33 kg pound calculator you can quickly convert 60.33 kg to lbs. Just enter number of kilograms which you need to convert and click ‘calculate’ button. You will get the result in a second.
So let’s try to convert 60.33 kg into lbs with use of 60.33 kg vs pound converter. We entered 60.33 as an amount of kilograms. Here is the result: 60.33 kilogram = 133.0048826646 pounds.
As you can see, our 60.33 kg vs lbs converter is so simply to use.
Now let’s move on to our chief topic - how to convert 60.33 kilograms to pounds on your own.
We are going to begin 60.33 kilogram equals to how many pounds conversion with the first version of a formula to get the most accurate result. A quick reminder of a formula:
Amount of kilograms * 2.20462262 = 133.0048826646 the outcome in pounds
So what need you do to learn how many pounds equal to 60.33 kilogram? Just multiply number of kilograms, in this case 60.33, by 2.20462262. It is exactly 133.0048826646. So 60.33 kilogram is 133.0048826646.
It is also possible to round off this result, for instance, to two decimal places. It gives 2.20. So 60.33 kilogram = 132.7260 pounds.
It is high time for an example from everyday life. Let’s calculate 60.33 kg gold in pounds. So 60.33 kg equal to how many lbs? And again - multiply 60.33 by 2.20462262. It gives 133.0048826646. So equivalent of 60.33 kilograms to pounds, when it comes to gold, is 133.0048826646.
In this example it is also possible to round off the result. This is the outcome after rounding off, this time to one decimal place - 60.33 kilogram 132.726 pounds.
Now we can move on to examples calculated with short formula.
Before we show you an example - a quick reminder of shorter formula:
Number of kilograms * 2.2 = 132.726 the outcome in pounds
So 60.33 kg equal to how much lbs? As in the previous example you have to multiply number of kilogram, this time 60.33, by 2.2. See: 60.33 * 2.2 = 132.726. So 60.33 kilogram is 2.2 pounds.
Do another conversion using shorer version of a formula. Now convert something from everyday life, for example, 60.33 kg to lbs weight of strawberries.
So convert - 60.33 kilogram of strawberries * 2.2 = 132.726 pounds of strawberries. So 60.33 kg to pound mass is equal 132.726.
If you learned how much is 60.33 kilogram weight in pounds and can calculate it with use of two different versions of a formula, we can move on. Now we are going to show you all results in tables.
We are aware that results shown in charts are so much clearer for most of you. We understand it, so we gathered all these outcomes in tables for your convenience. Due to this you can quickly compare 60.33 kg equivalent to lbs results.
Begin with a 60.33 kg equals lbs table for the first formula:
|Kilograms||Pounds||Pounds (after rounding off to two decimal places)|
And now have a look at 60.33 kg equal pound table for the second version of a formula:
As you can see, after rounding off, when it comes to how much 60.33 kilogram equals pounds, the results are the same. The bigger amount the more considerable difference. Please note it when you want to make bigger number than 60.33 kilograms pounds conversion.
Now you know how to convert 60.33 kilograms how much pounds but we will show you something more. Are you interested what it is? What about 60.33 kilogram to pounds and ounces calculation?
We are going to show you how you can convert it little by little. Begin. How much is 60.33 kg in lbs and oz?
First things first - you need to multiply amount of kilograms, in this case 60.33, by 2.20462262. So 60.33 * 2.20462262 = 133.0048826646. One kilogram is equal 2.20462262 pounds.
The integer part is number of pounds. So in this case there are 2 pounds.
To convert how much 60.33 kilogram is equal to pounds and ounces you have to multiply fraction part by 16. So multiply 20462262 by 16. It is exactly 327396192 ounces.
So final outcome is exactly 2 pounds and 327396192 ounces. It is also possible to round off ounces, for instance, to two places. Then your result will be exactly 2 pounds and 33 ounces.
As you can see, calculation 60.33 kilogram in pounds and ounces easy.
The last conversion which we want to show you is conversion of 60.33 foot pounds to kilograms meters. Both of them are units of work.
To convert it it is needed another formula. Before we give you this formula, let’s see:
Now let’s see a formula:
Amount.RandomElement()) of foot pounds * 0.13825495 = the result in kilograms meters
So to calculate 60.33 foot pounds to kilograms meters you have to multiply 60.33 by 0.13825495. It is exactly 0.13825495. So 60.33 foot pounds is equal 0.13825495 kilogram meters.
You can also round off this result, for instance, to two decimal places. Then 60.33 foot pounds will be equal 0.14 kilogram meters.
We hope that this calculation was as easy as 60.33 kilogram into pounds calculations.
This article is a huge compendium about kilogram, pound and 60.33 kg to lbs in calculation. Due to this calculation you learned 60.33 kilogram is equivalent to how many pounds.
We showed you not only how to make a conversion 60.33 kilogram to metric pounds but also two other calculations - to check how many 60.33 kg in pounds and ounces and how many 60.33 foot pounds to kilograms meters.
We showed you also other solution to do 60.33 kilogram how many pounds calculations, this is using 60.33 kg en pound calculator. It is the best option for those of you who do not like calculating on your own at all or this time do not want to make @baseAmountStr kg how lbs conversions on your own.
We hope that now all of you are able to make 60.33 kilogram equal to how many pounds calculation - on your own or with use of our 60.33 kgs to pounds calculator.
So what are you waiting for? Let’s calculate 60.33 kilogram mass to pounds in the best way for you.
Do you need to make other than 60.33 kilogram as pounds conversion? For example, for 10 kilograms? Check our other articles! We guarantee that conversions for other amounts of kilograms are so easy as for 60.33 kilogram equal many pounds.
We want to sum up this topic, that is how much is 60.33 kg in pounds , we prepared for you an additional section. Here you can see all you need to remember about how much is 60.33 kg equal to lbs and how to convert 60.33 kg to lbs . It is down below.
What is the kilogram to pound conversion? To make the kg to lb conversion it is needed to multiply 2 numbers. How does 60.33 kg to pound conversion formula look? . It is down below:
The number of kilograms * 2.20462262 = the result in pounds
So what is the result of the conversion of 60.33 kilogram to pounds? The exact result is 133.0048826646 lb.
There is also another way to calculate how much 60.33 kilogram is equal to pounds with another, easier type of the formula. Check it down below.
The number of kilograms * 2.2 = the result in pounds
So now, 60.33 kg equal to how much lbs ? The result is 133.0048826646 lbs.
How to convert 60.33 kg to lbs in a few seconds? You can also use the 60.33 kg to lbs converter , which will make the rest for you and give you a correct result .
|60.01 kg to lbs||=||132.29940|
|60.02 kg to lbs||=||132.32145|
|60.03 kg to lbs||=||132.34350|
|60.04 kg to lbs||=||132.36554|
|60.05 kg to lbs||=||132.38759|
|60.06 kg to lbs||=||132.40963|
|60.07 kg to lbs||=||132.43168|
|60.08 kg to lbs||=||132.45373|
|60.09 kg to lbs||=||132.47577|
|60.1 kg to lbs||=||132.49782|
|60.11 kg to lbs||=||132.51987|
|60.12 kg to lbs||=||132.54191|
|60.13 kg to lbs||=||132.56396|
|60.14 kg to lbs||=||132.58600|
|60.15 kg to lbs||=||132.60805|
|60.16 kg to lbs||=||132.63010|
|60.17 kg to lbs||=||132.65214|
|60.18 kg to lbs||=||132.67419|
|60.19 kg to lbs||=||132.69624|
|60.2 kg to lbs||=||132.71828|
|60.21 kg to lbs||=||132.74033|
|60.22 kg to lbs||=||132.76237|
|60.23 kg to lbs||=||132.78442|
|60.24 kg to lbs||=||132.80647|
|60.25 kg to lbs||=||132.82851|
|60.26 kg to lbs||=||132.85056|
|60.27 kg to lbs||=||132.87261|
|60.28 kg to lbs||=||132.89465|
|60.29 kg to lbs||=||132.91670|
|60.3 kg to lbs||=||132.93874|
|60.31 kg to lbs||=||132.96079|
|60.32 kg to lbs||=||132.98284|
|60.33 kg to lbs||=||133.00488|
|60.34 kg to lbs||=||133.02693|
|60.35 kg to lbs||=||133.04898|
|60.36 kg to lbs||=||133.07102|
|60.37 kg to lbs||=||133.09307|
|60.38 kg to lbs||=||133.11511|
|60.39 kg to lbs||=||133.13716|
|60.4 kg to lbs||=||133.15921|
|60.41 kg to lbs||=||133.18125|
|60.42 kg to lbs||=||133.20330|
|60.43 kg to lbs||=||133.22534|
|60.44 kg to lbs||=||133.24739|
|60.45 kg to lbs||=||133.26944|
|60.46 kg to lbs||=||133.29148|
|60.47 kg to lbs||=||133.31353|
|60.48 kg to lbs||=||133.33558|
|60.49 kg to lbs||=||133.35762|
|60.5 kg to lbs||=||133.37967|
|60.51 kg to lbs||=||133.40171|
|60.52 kg to lbs||=||133.42376|
|60.53 kg to lbs||=||133.44581|
|60.54 kg to lbs||=||133.46785|
|60.55 kg to lbs||=||133.48990|
|60.56 kg to lbs||=||133.51195|
|60.57 kg to lbs||=||133.53399|
|60.58 kg to lbs||=||133.55604|
|60.59 kg to lbs||=||133.57808|
|60.6 kg to lbs||=||133.60013|
|60.61 kg to lbs||=||133.62218|
|60.62 kg to lbs||=||133.64422|
|60.63 kg to lbs||=||133.66627|
|60.64 kg to lbs||=||133.68832|
|60.65 kg to lbs||=||133.71036|
|60.66 kg to lbs||=||133.73241|
|60.67 kg to lbs||=||133.75445|
|60.68 kg to lbs||=||133.77650|
|60.69 kg to lbs||=||133.79855|
|60.7 kg to lbs||=||133.82059|
|60.71 kg to lbs||=||133.84264|
|60.72 kg to lbs||=||133.86469|
|60.73 kg to lbs||=||133.88673|
|60.74 kg to lbs||=||133.90878|
|60.75 kg to lbs||=||133.93082|
|60.76 kg to lbs||=||133.95287|
|60.77 kg to lbs||=||133.97492|
|60.78 kg to lbs||=||133.99696|
|60.79 kg to lbs||=||134.01901|
|60.8 kg to lbs||=||134.04106|
|60.81 kg to lbs||=||134.06310|
|60.82 kg to lbs||=||134.08515|
|60.83 kg to lbs||=||134.10719|
|60.84 kg to lbs||=||134.12924|
|60.85 kg to lbs||=||134.15129|
|60.86 kg to lbs||=||134.17333|
|60.87 kg to lbs||=||134.19538|
|60.88 kg to lbs||=||134.21743|
|60.89 kg to lbs||=||134.23947|
|60.9 kg to lbs||=||134.26152|
|60.91 kg to lbs||=||134.28356|
|60.92 kg to lbs||=||134.30561|
|60.93 kg to lbs||=||134.32766|
|60.94 kg to lbs||=||134.34970|
|60.95 kg to lbs||=||134.37175|
|60.96 kg to lbs||=||134.39379|
|60.97 kg to lbs||=||134.41584|
|60.98 kg to lbs||=||134.43789|
|60.99 kg to lbs||=||134.45993|
|61 kg to lbs||=||134.48198| | <urn:uuid:bda23452-2c6a-4192-a99f-feb3295a2c79> | CC-MAIN-2022-33 | https://howkgtolbs.com/convert/60.33-kg-to-lbs | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570977.50/warc/CC-MAIN-20220809124724-20220809154724-00497.warc.gz | en | 0.882185 | 4,870 | 3.1875 | 3 |
The International Atomic Energy Agency (IAEA) defined cogeneration as: the integration of nuclear power plants with other systems and applications.
The heat generated by the nuclear power plants can be used to produce a vast range of products such as cooling, heating, process heat, desalination and hydrogen. The use of nuclear energy for cogeneration provides many economic, environmental and efficiency-related benefits. Cogeneration options may be different; depending on the technology, reactor type, fuel type and temperature level.
The use of nuclear energy for cogeneration also provides the benefit of using nuclear fuel in more efficient and eco-friendly manner. Energy and exergy analyses show that the performance of a nuclear power plant may be increased if it is used in a cogeneration mode. The use of nuclear energy for cogeneration applications can also lead to a drastic reduction in the environmental impact. However, integrating nuclear power plant with any other sub-system for cogeneration can greatly be affected by the performance parameters of the nuclear power plant and the site where it is situated.
Electricity is at the heart of modern economies and it is providing a rising share of energy services. Demand for electricity is set to increase further as a result of rising household incomes, with the electrification of transport and heat, and growing demand for digital connected devices and air conditioning.
Rising electricity demand was one of the key reasons why global CO2 emissions from the power sector reached a record high in 2018, yet the commercial availability of a diverse suite of low emissions generation technologies also puts electricity at the vanguard of efforts to combat climate change and pollution. Decarbonised electricity, in addition, could provide a platform for reducing CO2 emissions in other sectors through electricity-based fuels such as hydrogen or synthetic liquid fuels. Renewable energy also has a major role to play in providing access to electricity for all.
A report published by IAEA on the subject of World Nuclear Electricity Production, Energy Electricity and Nuclear Power Estimates:
- Total electricity production grew by 3.9 percent in 2018, while the growth in nuclear electricity production was 2.4 percent;
- Among the various sources for electricity production, coal remained dominant despite the significant growth of renewable and natural gas based generation;
- The share of electricity production from natural gas remained at about 23 percent of total electricity production;
- The contribution of hydropower and renewable energy sources continued to increase significantly, reaching 25.8 percent in 2018, while the share of nuclear electricity production remained at about 10.2 percent of the total electricity production;
- Electricity generation from operational nuclear reactors increased about 2.4 percent in 2018, reaching 2563 TW∙h.; and
- Nuclear power accounted for about 10 percent of total electricity production in 2018.
Nuclear reactors generated a total of 2563 TWh of electricity in 2018, up from 2502 TWh in 2017. This is the sixth successive year that nuclear generation has risen, with output 217 TWh higher than in 2012:
In 2018 the peak total net capacity of nuclear power in operation reached 402 GWe, up from 394 GWe in 2017. The end of year capacity for 2018 was 397 GWe, up from 393 GWe in 2017.
Usually only a small fraction of operable nuclear capacity does not generate electricity in a calendar year. However, since 2011, the majority of the Japanese reactor fleet has been awaiting restart. Four Japanese reactors were restarted in 2018, joining the five reactors that had restarted in previous years:
Energy and Electricity Projections:
- World energy consumption is expected to increase by 16 percent by 2030 and by 38 percent by 2050, at an annual growth rate of about 1 percent;
- Electricity consumption will grow at a higher rate of about 2.2 percent per year up to 2030 and around 2 percent per year thereafter; and
- The share of electricity in total final energy consumption will thus increase from 18.8 percent in 2018 to 21 percent by 2030 and to 26 percent by the middle of the century.
In addition to generating electricity, nuclear reactors can provide heat for multiple uses including heating homes as well as commercial and industrial buildings in a safe manner and with minimum environmental impacts.
Nuclear Process Heat for Industry. Nuclear energy is an excellent source of process heat for various industrial applications including desalination, synthetic and unconventional oil production, oil refining, biomass-based ethanol production, and in the future: hydrogen production.
According to the World Nuclear Association (Updated April 2020):
- For most major industrial heat applications, nuclear energy is the only credible non-carbon option.
- Light water reactors produce heat at relatively low temperatures in relation to many industrial needs, hence the technology focus has been on high-temperature gas-cooled reactors (HTR) and more recently on molten salt reactors (MSR) producing heat at over 700°C; and
- In 2019 there were 79 nuclear reactors used for desalination, district heating, or process heat, with 750 reactor-years of experience in these, mostly in Russia and Ukraine.
In general, heat consumption can be divided into the following two temperature levels:
- Low-temperature heat, which includes hot water or low-quality steam for district heat, desalination, and other purposes; and
- High-temperature process heat that includes process steam for various industrial applications (aluminum production, chemicals) or high temperature heat for conversion of fossil fuels, hydrogen production, and so on.
The direct use of nuclear heat in homes and industries is nothing new. There are, however, substantial differences between the properties and applications of electricity and of heat, as well as between the markets for these different forms of energy. These differences as well as the intrinsic characteristics of nuclear reactors are the reasons why nuclear power has predominantly penetrated the electricity market and had relatively minor applications as a direct heat source.
When the first nuclear power reactor at Calder Hall in the United Kingdom came into commercial operation in October 1956, it provided electricity to the grid and heat to a neighbouring fuel reprocessing plant. After more than 40 years, the four 50 megawatt-electric (MWe) Calder Hall units are still in operation. In Sweden, the Agesta reactor provided hot water for district heating to a suburb of Stockholm for a decade, starting in 1963.
Since these early days of nuclear power development, the direct use of heat generated in reactors has been expanding. Countries such as Bulgaria, Canada, China, the Czech Republic, Germany, Hungary, India, Japan, Kazakstan, Russia, Slovakia, Sweden, Switzerland, and Ukraine have found it convenient to apply nuclear heat for district heating or for industrial processes, or for both, in addition to electricity generation. Though less than 1 percent of the heat generated in nuclear reactors worldwide is at present used for district and process heating, there are signs of increasing interest in these applications.
About 33 percent of the world’s total energy consumption is currently used for electricity generation. This share is steadily increasing and is expected to reach 40 percent by the year 2015. Of the rest, heat consumed for residential and industrial purposes and the transport sector constitute the major components, with the residential and industrial sectors having a somewhat larger share. Practically the entire heat market is supplied by burning coal, oil, gas, or wood. Overall energy consumption is steadily increasing and this trend is expected to continue well into the next century. Conservation and efficiency improvement measures have in general reduced the rate of increase of energy consumption, but their effect is not large enough to stabilize consumption at current values.
It is important to understand that the transportation of heat is difficult and relatively expensive. The need for a pipeline, thermal isolation, pumping, and the corresponding investments, heat losses, maintenance, and pumping energy requirements make it impractical to transport heat beyond distances of a few kilometers or, at most, some tens of kilometers. There is also a strong size effect. The specific costs of transporting heat increase sharply as the amount of heat to be transported diminishes. Compared to heat, the transport of electricity from where it is generated to the end-user is easy and cheap, even to large distances measured in hundreds of kilometers.
All industrial users who require heat also consume electricity. The proportions vary according to the type of process, where either heat or electricity might have a predominant role. The demand for electricity can be supplied either from an electrical grid, or by a dedicated electricity generating plant. Co-generating electricity and heat is an attractive option. It increases overall energy efficiency and provides corresponding economic benefits. Co-generation plants, when forming part of large industrial complexes, can be readily integrated into an electrical grid system to which they supply any surplus electricity generated. In turn, they would serve as a backup for assurance of electricity supply. Such arrangements are often found to be desirable.
From the technical point of view, nuclear reactors are heat-generating devices. There is plenty of experience of using nuclear heat in both district heating and in industrial processes, so the technical aspects can be considered well proven. There are no technical impediments to the application of nuclear reactors as heat sources for district or process heating. In principle, any type and size of nuclear reactor can be used for these purposes. Potential radioactive contamination of the district heating networks or of the products obtained by the industrial processes is avoided by appropriate measures, such as intermediate heat exchanger circuits with pressure gradients which act as effective barriers. No incident involving radioactive contamination has ever been reported for any of the reactors used for these purposes.
The residential and the industrial sectors constitute the two major components of the overall heat market. Within the residential sector, while heat for cooking has to be produced directly where it is used, the demand for space heating can be and is often supplied from a reasonable distance by a centralized heating system through a district heating transmission and distribution network serving a relatively large number of customers.
Here are two major applications:
1. District Heating:
District heating is a system for distributing heat generated in a centralized location for residential and commercial heating requirements such as space heating and water heating. District heating networks generally have installed capacities in the range of 600 to 1200 megawatt-thermal (MWth) in large cities, decreasing to approximately 10 to 50 MWth in towns and small communities.
Exceptionally, capacities of 3000 to 4000 MWth can be found. Obviously, a potential market for district heating only appears in climatic zones with relatively long and cold winters. In western Europe, for example, Finland, Sweden, and Denmark are countries where district heating is widely used, and this approach is also applied in Austria, Belgium, Germany, France, Italy, Switzerland, Norway, and the Netherlands, though to a much lesser degree. The annual load factors of district heating systems depend on the length of the cold season when space heating is required, and can reach up to about 50 percent, which is still way below what is needed for base load operation of plants.
In addition, to assure a reliable supply of heat to the residences served by the district heating network, adequate back-up heat generating capacity must be provided. This implies the need for redundancy and generating unit sizes corresponding to only a fraction of the overall peak load. The temperature range required by district heating systems is around 100 to 150° C. In general, the district heating market is expected to expand substantially. Not only because it can compete economically in densely populated areas with individual heating arrangements, but also because it offers the possibility of reducing air pollution in urban areas. While emissions resulting from the burning of fuel can be controlled and reduced up to a point in relatively large centralized plants, this is not practical in small individual heating installations fuelled by gas, oil, coal, or wood.
For the district heating market, co-generation nuclear power plants are one of the supply options. In the case of medium to large nuclear reactors, due to the limited power requirements of the heat market and the relatively low load factors, electricity would be the main product, with district heating accounting for only a small fraction of the overall energy produced. These reactors, including their sitting, would be optimized for the conditions pertaining to the electricity market, district-heating being, in practice, a by-product. Should such power plants be located close enough to population centers in cold climatic regions, they could also serve district heating needs. This has been done in Russia, Ukraine, the Czech Republic, Slovakia, Hungary, Bulgaria, and Switzerland, using up to about 100 MWth per power station. Similar applications can be expected for the future wherever similar boundary conditions exist.
For small co-generation reactors corresponding to power ranges of up to 300 MWe and 150 MWe, respectively, the share of heat energy for district heating would be larger. Nevertheless, electricity would still be expected to constitute the main product, assuming base-load operation, for economic reasons. The field of application of these reactors would be similar to the case of medium or large co-generation reactors. Additionally, however, they could also address specific objectives, such as the energy supply of concentrated loads in remote and cold regions of the world.
Heat-only reactors for district heating are another option. Such applications have been implemented on a very small scale (a few MWth) as experimental or demonstration projects. Construction of two units of 500 MWth was initiated in Russia in 1983-85, but later interrupted. There are several designs being pursued, and it is planned to start construction of a 200 MWth unit soon in China. Clearly, the potential applications of heat-only reactors for district heating are limited to reactors in the very small size range. These reactors are designed for sitting within or very close to population centers so that heat transmission costs can be minimal. Even so, economic competitiveness is difficult to achieve due to the relatively low load factors required, except in certain remote locations where fossil fuel costs are very high and the winter is very cold and long.
In summary, the prospects for nuclear district heating are real, but limited to applications where specific conditions pertaining to both the district heating market and to the nuclear reactors can effectively be met. The prospects for co-generation reactors, especially in the SMR range, seem better than for heat-only reactors, mainly because of economic reasons.
2. Industrial Processes:
Within the industrial sector, process heat is used for a very large variety of applications with different heat requirements and with temperature ranges covering a wide spectrum. While in energy intensive industries the energy input represents a considerable fraction of the final product cost, in most other processes it contributes only a few percent. Nevertheless, the supply of energy has an essential character. Without energy, production would stop. This means that a common feature of practically all industrial users is the need for assurance of energy supply with a very high degree of reliability and availability, approaching 100 percent in particular for large industrial installations and energy intensive processes.
Regarding the power ranges of the heat sources required, similar patterns are found in most industrialized countries. In general, about half of the users require less than 10 MWth and another 40 percent between 10 and 50 MWth. There is a steady decrease in the number of users as the power requirements become higher. About 99 percent of the users are included in the 1 to 300 MWth ranges, which accounts for about 80 percent of the total energy consumed. Individual large users with energy intensive industrial processes cover the remaining portion of the industrial heat market with requirements up to 1000 MWth, and exceptionally even more. This shows the highly fragmented nature of the industrial heat market.
The possibility of large-scale introduction of heat distribution systems supplied from a centralized heat source, which would serve several users concentrated in so-called industrial parks — seems rather remote at present, but could be the trend on a long term. Contrary to district heating, the load factors of industrial users do not depend on climatic conditions. The demands of large industrial users usually have base load characteristics.
The temperature requirements depend on the type of industry, covering a wide range up to around 1500° C. The upper range above 1000° C is dominated by the iron/steel industry. The lower range up to about 200 to 300° C includes industries such as seawater desalination, pulp and paper, or textiles. Chemical industry, oil refining, oil shale and sand processing, and coal gasification are examples of industries with temperature requirements of up to the 500 to 600° C level. Non-ferrous metals, refinement of coal and lignite, and hydrogen production by water splitting are among applications that require temperatures between 600 and 1000° C.
The characteristics of the market for process heat are quite different from district heating, though there are some common features, particularly regarding the need for minimal heat transport distance. Industrial process heat users, however, do not have to be located within highly populated areas, which by definition constitute the district heating market. Many of the process heat users; in particular, the large ones, can be and usually are located outside urban areas, often at considerable distances. This makes joint sitting of nuclear reactors and industrial users of process heat not only viable, but also desirable in order to drastically reduce or even eliminate the heat transport costs.
For large size reactors, the usual approach is to build multiple unit stations. When used in the co-generation mode, electricity would always constitute the main product. Such plants, therefore, have to be integrated into the electrical grid system and optimized for electricity production. For reactors in the SMR size range, and in particular for small and very small reactors, the share of process heat generation would be larger, and heat could even be the predominant product. This would affect the plant optimization criteria, and could present much more attractive conditions to the potential process heat user. Consequently, the prospects of SMRs as co-generation plants supplying electricity and process heat are considerably better than those of large reactors.
Several co-generation nuclear power plants in operation already supply process heat to industrial users. The largest projects implemented are in Canada (Bruce, heavy-water production and other industrial/agricultural users) and in Kazakstan (Aktau, desalination). Other power reactors that currently produce only electricity could be converted to co-generation. Should there be a large process heat user close to the plant interested in receiving this product; the corresponding conversion to co-generation would be technically feasible. It would, however, involve additional costs, which would have to be justified by a cost/benefit analysis. Some such conversion projects could be implemented but, in general, prospects for this option seem rather low.
Installing a new nuclear co-generation plant close to an existing and interested industrial user has better prospects. Even better would be a joint project whereby both the nuclear co-generation plant and the industrial installation requiring process heat are planned, designed, built, and finally operated together as an integrated complex.
Current and advanced light or heavy-water reactors offer heat in the low temperature range, which corresponds to the requirements of several industrial processes. Among these, seawater desalination is presently seen as the most attractive application. Other types of reactors, such as liquid metal-cooled fast reactors and high temperature gas-cooled reactors can also offer low temperature process heat, but in addition, they can cover higher temperature ranges. This extends their potential field of application. These reactors still require substantial development in order to achieve commercial maturity. Should they achieve economic competitiveness as expected, their prospects seem to be promising in the medium to long term, especially for high temperature industrial applications.
Heat-only reactors have not yet been applied on an industrial/commercial scale for the supply of process heat. Several designs have been developed and some demonstration reactors have been built. Economic competitiveness seems to be an achievable goal according to many studies, which have been performed, but this is something yet to be proven in practice. The potential market for such heat-only reactors would be limited to the very small size range, i.e. below about 500 MWth.
The prospects for applying nuclear energy to district and process heating are closely tied to the prospects of deploying SMRs. A recent market assessment for SMRs found that 70 to 80 new units are planned in about 30 countries up to the year 2015. It was also found that about a third of these units are expected to be applied specifically to nuclear desalination. Of the rest, a substantial share could very well supply heat in addition to electric energy, while a few are expected to be heat-only reactors.
- US Energy Information Administration: Independent Statistics and Analysis;
- IAEA: Nuclear Technology Outlook Review 2010;
- World Nuclear Association: World Energy Needs and Nuclear Power;
- US Energy Information Administration: International Energy Outlook 2010; and
- IAEA Bulletin: Nuclear Power Applications – Supply of heat for homes and industries.
- This chapter was published on “Inuitech – Intuitech Technologies for Sustainability”
on January 15, 2012; and
- This chapter was updated on 14 June 2020 | <urn:uuid:a9d60d63-a99a-4b67-9c16-1411587eb5ca> | CC-MAIN-2022-33 | https://mirfali.com/book/chapter13/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570871.10/warc/CC-MAIN-20220808183040-20220808213040-00296.warc.gz | en | 0.948418 | 4,289 | 3.96875 | 4 |
- Coordinates: 39°37′N 2°59′E
- Major islands: Balearic Islands
- Area: 3,640.11 km2 (1,405.45 sq mi)
- Highest elevation: 1,445 m (4,741 ft)
- Highest point: Puig Major
- Capital and largest city: Palma (pop. 404,681)
- Population density: 240.45/km2 (622.76/sq mi)
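The infobox figures above lend themselves to a quick arithmetic cross-check. The sketch below is illustrative only and not part of the source article: it converts the quoted coordinates to decimal degrees, re-derives the imperial equivalents, and multiplies area by population density. The resulting island-wide population (roughly 875,000) is an implied estimate, since the article itself states only Palma's population.

```python
# Illustrative cross-check of the infobox figures (values copied from the list above).
KM2_PER_SQ_MI = 2.589988    # square kilometres per square mile
FEET_PER_METRE = 3.28084

area_km2 = 3640.11           # island area
density_per_km2 = 240.45     # population density
puig_major_m = 1445          # highest elevation

# Degrees and minutes -> decimal degrees for 39°37'N, 2°59'E
lat_deg = 39 + 37 / 60       # ~39.62° N
lon_deg = 2 + 59 / 60        # ~2.98° E

area_sq_mi = area_km2 / KM2_PER_SQ_MI            # ~1,405.45 sq mi (matches the infobox)
puig_major_ft = puig_major_m * FEET_PER_METRE    # ~4,741 ft (matches the infobox)
implied_population = area_km2 * density_per_km2  # ~875,000 (derived, not stated in the article)

print(f"Coordinates: {lat_deg:.4f} N, {lon_deg:.4f} E")
print(f"Area: {area_sq_mi:,.2f} sq mi; Puig Major: {puig_major_ft:,.0f} ft")
print(f"Implied total population: {implied_population:,.0f}")
```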
Mallorca or Majorca is the largest island in the Balearic Islands, which are part of Spain and located in the Mediterranean. The native language, as on the rest of the Balearic Islands, is Catalan, which is co-official with Spanish.
The capital of the island, Palma, is also the capital of the autonomous community of the Balearic Islands. The Balearic Islands have been an autonomous region of Spain since 1983. There are two small islands off the coast of Mallorca: Cabrera (southeast of Palma) and Dragonera (west of Palma). The anthem of Mallorca is “La Balanguera”.
Like the other Balearic Islands of Menorca, Ibiza and Formentera, the island is an extremely popular holiday destination, particularly for tourists from Germany and the United Kingdom. The international airport, Palma de Mallorca Airport, is one of the busiest in Spain; it was used by 28.0 million passengers in 2017, with passenger numbers increasing every year since 2012.
The name derives from Classical Latin insula maior, “larger island”. Later, in Medieval Latin, this became Maiorica, “the larger one”, in comparison to Menorca, “the smaller one”.
Prehistory

Little is recorded of the earliest inhabitants of the island. Burial chambers and traces of habitation from the Neolithic period (6000–4000 BC) have been discovered, particularly the prehistoric settlements called talaiots, or talayots. The islanders raised Bronze Age megaliths as part of their Talaiotic culture. A non-exhaustive list of sites follows:
- Capocorb Vell (Llucmajor municipality)
- Necròpoli de Son Real (east of Can Picafort, Santa Margalida municipality)
- Novetiforme Alemany (Magaluf, Calvià)
- Poblat Talaiòtic de S’Illot (S’Illot, Sant Llorenç des Cardassar municipality)
- Poblat Talaiòtic de Son Fornés (Montuïri municipality)
- Sa Canova de Morell (road to Colònia de Sant Pere, Artà municipality)
- Ses Païsses (Artà municipality)
- Ses Talaies de Can Jordi (Santanyí municipality)
- S’Hospitalet Vell (road to Cales de Mallorca, Manacor municipality)
Phoenicians, Romans, and Late Antiquity
The Phoenicians, a seafaring people from the Levant, arrived around the eighth century BC and established numerous colonies. The island eventually came under the control of Carthage in North Africa, which had become the principal Phoenician city. After the Second Punic War, Carthage lost all of its overseas possessions and the Romans took over.
The island was occupied by the Romans in 123 BC under Quintus Caecilius Metellus Balearicus. It flourished under Roman rule, during which time the towns of Pollentia (Alcúdia) and Palmaria (Palma) were founded. In addition, the northern town of Bocchoris, dating back to pre-Roman times, was a federated city to Rome. The local economy was largely driven by olive cultivation, viticulture, and salt mining. Mallorcan soldiers were valued within the Roman legions for their skill with the sling.
In 427, Gunderic and the Vandals captured the island. Geiseric, Gunderic's half-brother and successor, governed Mallorca and used it as his base to loot and plunder settlements around the Mediterranean, until Roman rule was restored in 465.
Middle Ages and Modern history
Late Antiquity and Early Middle Ages
In 534, Mallorca was recaptured by the Eastern Roman Empire, led by Apollinarius. Under Roman rule, Christianity thrived and numerous churches were built.
From 707, the island was increasingly attacked by Muslim raiders from North Africa. Recurrent invasions led the islanders to ask Charlemagne for help.
In 902, Issam al-Khawlan conquered the Balearic Islands, ushering in a new period of prosperity under the Emirate of Córdoba. The town of Palma was reshaped and expanded, and became known as Medina Mayurqa. Later on, with the Caliphate of Córdoba at its height, the Moors improved agriculture with irrigation and developed local industries.
The caliphate was dismembered in 1015. Mallorca came under rule by the Taifa of Dénia, and from 1087 to 1114, was an independent Taifa. During that period, the island was visited by Ibn Hazm. However, an expedition of Pisans and Catalans in 1114–15, led by Ramon Berenguer III, Count of Barcelona, overran the island, laying siege to Palma for eight months. After the city fell, the invaders retreated due to problems in their own lands. They were replaced by the Almoravides from North Africa, who ruled until 1176. The Almoravides were replaced by the Almohad dynasty until 1229. Abú Yahya was the last Moorish leader of Mallorca.
In the ensuing confusion and unrest, King James I of Aragon, also known as James the Conqueror, launched an invasion which landed at Santa Ponça, Mallorca, on 8–9 September 1229 with 15,000 men and 1,500 horses. His forces entered the city of Medina Mayurqa on 31 December 1229. In 1230 he annexed the island to his Crown of Aragon under the name Regnum Maioricae.
From 1479, the Crown of Aragon was in dynastic union with that of Castile. The Barbary corsairs of North Africa often attacked the Balearic Islands, and in response the people built coastal watchtowers and fortified churches. In 1570, King Philip II of Spain and his advisors were considering complete evacuation of the Balearic Islands.
In the early 18th century, the War of the Spanish Succession resulted in the replacement of that dynastic union with a unified Spanish monarchy under the rule of the new Bourbon Dynasty. The last episode of the war was the conquest of the island of Mallorca: on 2 July 1715 the island capitulated to the arrival of a Bourbon fleet. In 1716, the Nueva Planta decrees made Mallorca part of the Spanish province of Baleares, roughly equivalent to the present-day Illes Balears autonomous community.
20th century and today
A Nationalist stronghold at the start of the Spanish Civil War, Mallorca was subjected to an amphibious landing, on 16 August 1936, aimed at driving the Nationalists from Mallorca and reclaiming the island for the Republic. Although the Republicans heavily outnumbered their opponents and managed to push 12 km (7.5 mi) inland, superior Nationalist air power, provided mainly by Fascist Italy as part of the Italian occupation of Majorca, forced the Republicans to retreat and to leave the island completely by 12 September. Those events became known as the Battle of Majorca.
Since the 1950s, the advent of mass tourism has transformed the island into a destination for foreign visitors and attracted many service workers from mainland Spain. The boom in tourism caused Palma to grow significantly.
In the 21st century, urban redevelopment, under the so-called Pla Mirall (English “Mirror Plan”), attracted groups of immigrant workers from outside the European Union, especially from Africa and South America.
The capital of Mallorca, Palma, was founded as a Roman camp called Palmaria upon the remains of a Talaiotic settlement. The city's turbulent history saw it subjected to several Vandal sackings during the fall of the Western Roman Empire. It was later reconquered by the Byzantines, established by the Moors (who called it Medina Mayurqa), and finally occupied by James I of Aragon. In 1983, Palma became the capital of the autonomous region of the Balearic Islands.
The climate of Mallorca is a Mediterranean climate, with mild and stormy winters and hot, bright, dry summers. Precipitation in the Serra de Tramuntana is markedly higher. Summers are hot in the plains and winters mild, getting colder in the Tramuntana range, where brief episodes of snow during the winter are not unusual. The two wettest months in Mallorca are October and December.
Mallorca is the largest island of Spain by area and the second most populated (after Tenerife in the Canary Islands). Mallorca has two mountainous regions, the Serra de Tramuntana and the Serres de Llevant. Each is about 70 km (43 mi) in length, and they occupy the northwestern and eastern parts of the island respectively.
The highest peak on Mallorca is Puig Major at 1,445 m (4,741 ft) in the Serra de Tramuntana. As this is a military zone, the neighbouring peak at Puig de Massanella is the highest accessible peak, at 1,364 m (4,475 ft). The northeast coast comprises two bays: the Badia de Pollença and the larger Badia d'Alcúdia.
The northern coast is rugged and has many cliffs. The central zone, extending from Palma, is a generally flat, fertile plain known as Es Pla. The island has a variety of caves both above and below sea level; two of the above-sea-level caves, the Coves dels Hams and the Coves del Drach, also contain underground lakes and are open to tours. Both are located near the eastern coastal town of Porto Cristo. Small uninhabited islands lie off the southern and western coasts; the Cabrera Archipelago is administratively grouped with Mallorca, while Dragonera is administratively included in the municipality of Andratx.
The Cultural Landscape of the Serra de Tramuntana was registered as a UNESCO World Heritage Site in 2011.
Ludwig Salvator of Austria
Archduke Ludwig Salvator of Austria (Catalan: Arxiduc Lluís Salvador) was the architect of tourism in the Balearic Islands. He first arrived on the island in 1867, travelling under his title “Count of Neuendorf”. He later settled on Mallorca, buying up wild areas of land in order to preserve and enjoy them. Nowadays, a number of trekking routes are named after him.
Ludwig Salvator loved the island of Mallorca. He became fluent in Catalan and carried out research into the island's flora and fauna, history, and culture to produce his main work, Die Balearen, an extremely comprehensive collection of books about the Balearic Islands consisting of seven volumes. It took him 22 years to complete.
The Polish composer and pianist Frédéric Chopin, together with French writer Amantine Lucile Aurore Dupin (pseudonym: George Sand), resided in Valldemossa in the winter of 1838–39. Apparently, Chopin’s health had already deteriorated and his doctor recommended that he go to the Balearic Islands to recuperate, where he still spent a rather miserable winter.
Nonetheless, his time in
Mallorca was a productive period for Chopin. He managed to finish
the Preludes, Op. 28, that he started writing in 1835. He was also able to
undertake work on his Ballade No. 2, Op. 38; two Polonaises, Op. 40; and
the Scherzo No. 3, Op. 39.
French writer Amantine Lucile
Aurore Dupin, at that time in a relationship with Chopin, described her stay in
Mallorca in A Winter in Majorca, published in 1855. Other famous
writers used Mallorca as the setting for their works: While on the island, the
Nicaraguan poet Rubén Darío started writing the novel El oro
de Mallorca, and wrote several poems, such as La isla de oro. Many
of the works of Baltasar Porcel take place in Mallorca. Ira
Levin set part of his dystopian novel This Perfect Day in
Mallorca, making the island a centre of resistance in a world otherwise
dominated by a computer.
Agatha Christie visited
the island in the early 20th century and stayed in Palma and Port de Pollença. She
would later write the book Problem at Pollensa Bay and Other Stories,
a collection of short stories, of which the first one takes place in Port
de Pollença, starring Parker Pyne.
Jorge Luis Borges visited
Mallorca twice, accompanied by his family. He published his poems La
estrella (1920) and Catedral (1921) in the regional
magazine Baleares. The latter poem shows his admiration for the
monumental Cathedral of Palma.
The Nobel Prize winner Camilo José Cela came to Mallorca in 1954, first
visiting Pollença and then moving to Palma, where he settled
permanently. In 1956, Cela founded the magazine Papeles de Son Armadans.
He is also credited as a founder of the publishing house Alfaguara.
The English poet Robert Graves moved to Mallorca with his family in 1946,
settling in Deià. His house there is now a museum. He died in 1985 and was
buried in the small churchyard on a hill at Deià.
The Ball dels
Cossiers is the island’s traditional dance. It is believed to have been
imported from Catalonia in the 13th or 14th century, after
the Aragonese conquest of the island under King Jaime I. In the
dance, three pairs of dancers, who are typically male, defend a
“Lady,” who is played by a man or a woman, from
a demon or devil. Another Mallorcan dance is Correfoc, an
elaborate festival of dance and pyrotechnics that is also of Catalan origin.
The island’s folk music strongly resembles that of Catalonia, and is
centered around traditional instruments like the xeremia (bagpipes)
and guitarra de canya (a reed or bone xylophone-like instrument
suspended from the neck). While folk music is still played and enjoyed by many
on the island, a number of other musical traditions have become popular in
Mallorca in the 21st century, including electronic dance music, classical
music, and jazz, all of which have annual festivals on the island.
Joan Miró, a Spanish painter, sculptor, and ceramicist, had close ties to the
island throughout his life; he married Pilar Juncosa in Palma in 1929 and
settled permanently in Mallorca in 1954. The Fundació Pilar i Joan Miró in
Mallorca has a collection of his works. Es Baluard in Palma is a museum of
modern and contemporary art which exhibits the work of Balearic artists and
of artists connected with the Balearic Islands.
The Evolution Mallorca
International Film Festival is the fastest-growing Mediterranean film festival
and has been held every November since 2011, attracting filmmakers,
producers, and directors globally. It is hosted at the Teatro Principal in
Palma de Mallorca.
Mallorca has a long history of
seafaring. The Majorcan cartographic school or the
“Catalan school” refers to a collection
of cartographers, cosmographers, and navigational
instrument makers that flourished in Mallorca and partly in
mainland Catalonia in the 13th, 14th and 15th centuries. Mallorcan
cosmographers and cartographers developed breakthroughs in cartographic
techniques, namely the “normal portolan chart”, which was
fine-tuned for navigational use and the plotting by compass of navigational
routes, prerequisites for the discovery of the New World.
In 2005, there were over 2,400
restaurants on the island of Mallorca according to the Mallorcan Tourist Board,
ranging from small bars to full restaurants. Olives and almonds are typical of
the Mallorcan diet. Among the foods that are typical from Mallorca are sobrassada, arròs
brut (saffron rice cooked with chicken, pork and vegetables), and the
sweet pastry ensaïmada. Pa amb oli is also a popular dish. Herbs de Majorca is
a traditional herbal liqueur made on the island.
The main language spoken on
the island is Catalan. The two official languages of Mallorca are
Catalan and Spanish. The local dialect of Catalan spoken on the
island is mallorquín, with slightly different variants in most
villages. Education is bilingual in Catalan and Spanish, with some
instruction in English.
In 2012, the then-governing People's Party announced its intention to end preferential treatment for Catalan in the island's schools so as to bring parity to the island's two languages. Critics argued that this could lead Mallorcan Catalan to become extinct in the fairly near future, since it already existed in a situation of diglossia that favoured Spanish. As of 2016, after the May 2015 election swept a pro-Catalan party and president into power, the People's Party's trilingualism policy had been dismantled, making this outcome unlikely.
Mallorca is the most populous
island in the Balearic Islands and the second most populous island in Spain,
after Tenerife, in the Canary Islands, being also the fourth most
populous island in the Mediterranean. It has a census population of
859,289 inhabitants (2015).
Since the 1950s, Mallorca has
become a major tourist destination, and the tourism business has become the
main source of revenue for the island. In 2001, the island received
millions of tourists, and the boom in the tourism industry has provided
significant growth in the island's economy.
The island’s popularity as a
tourist destination has steadily grown since the 1950s, with many artists and
academics choosing to visit and live on the island. Visitors to Mallorca
continued to increase with holiday makers in the 1970s approaching
3 million a year. In 2010 over 6 million visitors came to Mallorca. In
2013, Mallorca was visited by nearly 9.5 million tourists, and
the Balearic Islands as a whole reached 13 million tourists.
Mallorca has been jokingly referred to as the 17th Federal State of Germany,
due to the high number of German tourists.
With thousands of rooms available, Mallorca's economy is largely dependent on
its tourism industry.
Holiday makers are attracted by the large number of beaches, warm weather, and
high-quality tourist amenities.
Attempts to build illegally caused a scandal in 2006 in Port Andratx that the
newspaper El País named “caso Andratx”. A main reason for illegal building
permits, corruption and black-market construction is that municipalities have
few ways to finance themselves other than through permits. The former mayor
has been incarcerated since 2009 after being prosecuted for taking bribes to
permit illegal construction.
The Balearic Islands, of which
Mallorca forms part, are one of the autonomous communities of Spain. As a
whole, they are currently governed by the Balearic Islands Socialist
Party (PSIB-PSOE), with Francina Armengol as their President.
The autonomous government for
the island, called Consell Insular de Mallorca (Mallorca Insular Council), is
responsible for culture, roads, railways (see Serveis Ferroviaris de
Mallorca) and municipal administration. The current president (as of June 2015)
is Miquel Ensenyat, of More for Mallorca.
The members of
the Spanish Royal Family spend their summer holidays in Mallorca
where the Marivent Palace is located. The Marivent Palace is the royal
family’s summer residence. While most royal residences are administered
by Patrimonio Nacional, the Marivent Palace, in Palma de Mallorca, one of
many Spanish royal sites, is under the care of the Government of
the Balearic Islands. As a private residence it is rarely used for
official business. Typically, the whole family meets there and on the Fortuna
yacht, where they take part in sailing competitions. The Marivent Palace is
used for some unofficial business, as when President Hugo Chávez of
Venezuela visited King Juan Carlos in 2008 to mend their
relationship and normalize diplomatic relations after the King
famously said to him, “Why don’t you shut up?” during
the Ibero-American Summit in November 2007.
From Wikipedia, the free encyclopaedia
In South Africa, 10.2 million people (approximately 20.2% of the population) lived below the breadline of R321.00 per month in 2011, and an estimated 11.4% of the population was HIV positive in 2002 (HSRC 2003:46). South Africa is an upper-middle-income country of stark contrasts: it is often described as the most unequal country in the world as measured by the Gini index and the Palma ratio (Barr, 2017), though it is not the poorest. Around 23 million South Africans live in poverty, 13 million (1 in 4) are unable to afford the most basic food and needs, and the unemployment rate, about 26% in 2004, has continued to climb towards 30%. In his 2014 State of the Nation Address, President Jacob Zuma conceded that, despite the achievements of the democratic government, South Africa still faces the "triple challenge" of poverty, inequality, and unemployment.
These patterns have deep historical roots. Colonists took over African land, including the best farmland, and forced many Africans to work under harmful conditions, while apartheid (1948-1994) entrenched racial segregation in every sphere of life. The Bantu Education Act led to the closure of most missionary schools, and black schools taught little mathematics or science; by withholding the skills that led to higher-paying jobs, the government ensured that South Africa retained a supply of cheap black labour. As a result, poverty and inequality in South Africa have racial, gender, spatial, and age dimensions, falling disproportionately on black Africans, women, rural areas, and the youth (Leibbrandt, Woolard, Finn & Argent, 2010). The World Bank (2013) likewise reports that South Africa remains a dual economy with one of the highest inequality rates in the world, with poverty deeply entrenched in many parts of the country and widespread exclusion and unemployment persisting.
The post-apartheid government inherited the burden of eradicating this widespread poverty, inequality, and unemployment. Its responses include land restitution (compensation for land lost through apartheid-era dispossession), redistribution of white-owned commercial farms to black South Africans, land tenure reform, social grants, and the National Development Plan, viewed as a policy blueprint for eliminating poverty and reducing inequality by 2030. Progress has nonetheless been slow: many social inequalities stem from poor or absent service delivery, crime remains high (the South African media reported that murder cases increased to 52 per day in the 2016/2017 financial year), and Amnesty International has reported that an education system characterised by crumbling infrastructure, overcrowded classrooms, and relatively poor outcomes is perpetuating inequality, with the poor hardest hit.
HIV/AIDS is both a cause and an effect of this poverty. South Africa counts around 5.5 million people infected with HIV, with a million waiting for antiretroviral therapy, representing over a quarter of all the people in sub-Saharan Africa in need of treatment. Persistent gender roles and gender inequality add to the burden: women who depend economically on partners or clients often feel they have no choice but to accept sex without condoms. Globally, extreme poverty (living on less than $1.90 a day) has fallen from thirty-five percent of the world's population in 1990 to about 10 percent in 2013, yet roughly 776 million people, a disproportionate share of them women and children, still live in extreme poverty, much of it concentrated in Africa.
Background: University students aged 18-30 years are a population group reporting low access to health care services, with high rates of avoidance and delay of medical care. This group also reports not having appropriate information about available health care services. However, university students are at risk for several health problems, and regular medical consultations are recommended in this period of life. New digital devices are popular among the young, and Web-apps can be used to facilitate easy access to information regarding health care services. A small number of electronic health (eHealth) tools have been developed with the purpose of displaying real-world health care services, and little is known about how such eHealth tools can improve access to care.
Objective: This paper describes the processes of co-creating and evaluating the beta version of a Web-app aimed at mapping and describing free or low-cost real-world health care services available in the Bordeaux area of France, which is specifically targeted to university students.
Methods: The co-creation process involves: (1) exploring the needs of students to know and access real-world health care services; (2) identifying the real-world health care services of interest for students; and (3) deciding on a user interface, and developing the beta version of the Web-app. Finally, the evaluation process involves: (1) testing the beta version of the Web-app with the target audience (university students aged 18-30 years); (2) collecting their feedback via a satisfaction survey; and (3) planning a long-term evaluation.
Results: The co-creation process of the beta version of the Web-app was completed in August 2016 and is described in this paper. The evaluation process started on September 7, 2016. The project was completed in December 2016 and implementation of the Web-app is ongoing.
Conclusions: Web-apps are an innovative way to increase the health literacy of young people in terms of delivery of and access to health care. The creation of Web-apps benefits from the involvement of stakeholders (eg, students and health care providers) to correctly identify the real-world health care services to be displayed.
The years spent in university are a time of increasing independence and growth for young people. During this period, students actively make decisions about their health care and the healthy (or unhealthy) behaviors that they wish to adopt. To prevent the risk of several diseases, young people aged 18-30 years are encouraged to regularly consult a general practitioner (and a gynecologist for females) in addition to consultants for particular health conditions (eg, dentists or ophthalmologists). Notwithstanding national recommendations and health promotion programs, French university students underuse health care services, with 20% not consulting a health professional (general practitioner or specialist) during their university years. A small number of international studies have examined why young people avoid and delay medical care, providing a conceptual categorization of three main barriers: low perceived need to seek medical care; traditional barriers to medical care such as high cost, absence of health insurance, and time constraints; and lack of knowledge concerning the organization of the health care system and its services. A study conducted on 41,000 French university students reported that 23% of the participants did not feel the need to seek medical care, 13% did not have time for medical consultations, and 12% had economic difficulties in accessing and paying for health care services. Another study, examining 2000 French young adults (not all students) aged 15-30 years, reported that lack of knowledge concerning the organization of the health care system and its services is a significant factor hindering utilization of health care services for 30% of young participants. Our study took these three barriers into account, with a specific focus on lack of knowledge as one component of students' health literacy, namely the lack of acquired and assimilated information on how to access health care.
In parallel, university students are a technologically capable generation, having been born and raised in the age of home computers and portable electronic devices. Using new technologies to obtain information on the availability of health care services could therefore be an appealing way of increasing students' health literacy.
However, notwithstanding their high use of new technological devices, young people have expressed concerns about the quality and utility of existing electronic health (eHealth) tools. Similarly, professionals from real-world health care services sometimes perceive eHealth solutions as complicating the health care provider-patient relationship and as an unreliable source of medical advice.
A small number of eHealth devices have been evaluated to date. Most of these devices have been produced by Web-developers who have little experience of health care and have not taken stakeholders' opinions into consideration. More specifically, eHealth devices proposed as a bridge between eHealth and real-world health care are still scarce, and very little is known about the possible association between the use of eHealth tools and the use of real-world health care services, especially among young people.
We embarked on co-creating and evaluating a Web-app, available on laptops, personal computers, smartphones, and tablets, to show university students of the Bordeaux area of France the low-cost or free health care services at their disposal and where these services are located. This Bordeaux Web-app is a local example that could later be extended to other French universities at the national level. The iterative processes of co-creating and evaluating the Web-app, called servi-Share (services for the Internet-Based Students Health Research Enterprise [i-Share] students' cohort), are described in this paper.
The servi-Share Project
The servi-Share project is nested in the larger i-Share cohort study, which is a nationwide online survey on the health and well-being of French-speaking university students. The i-Share cohort study started in 2013 from the collaboration of the University of Bordeaux and the University of Versailles Saint-Quentin (France), and is still ongoing across France. To be eligible to participate, students must be officially registered at a university or higher education institute, be at least 18 years of age, be able to read and understand French, and provide online consent for participation. The i-Share study was approved by the Commission Nationale de l'Informatique et des Libertés (CNIL; National Commission of Informatics and Liberties; DR-2013-019).
Preliminary analyses were conducted in the third year of the cohort study, on a total of 8770 students, 6578 of whom were females (75.01%). Results showed relatively high percentages of avoidance and delay of medical care: 3251 students (37.07%) declared having gone without recommended care, notwithstanding the need to see a doctor (general physician, consultant, eye-specialist); 1316 students (15.01%) declared having not seen a dentist, notwithstanding the need for a consultation; and 1325 students (15.11%) reported having gone without complementary health exams (ie, blood sample, radiography) prescribed by a doctor. Given these high rates, we explored the opportunity to put into practice a Web-based intervention aimed at facilitating students’ access to real-world health care services. The servi-Share project was then implemented. First, a beta version of the servi-Share Web-app was developed and tested by students. Second, considering the results of the beta version tests, the Web-app will be corrected and implemented to be openly and largely diffused to university students within the Bordeaux area.
The two main hypotheses underlying the servi-Share project are that: (1) co-creating and evaluating a health Web-app with stakeholders may contribute to the production of an effective quality eHealth device, and (2) a better-quality eHealth device mapping real-world health care services should increase young people’s health literacy in terms of knowledge of and access to health care.
The production of the Web-app consisted of two main processes: (1) co-creation, and (2) evaluation. Each process consisted of further operational stages involving academic staff and industry Web-developers, together with two target stakeholder groups (university students and real-world health care service providers). The two processes used both qualitative and quantitative methods. We opted for a co-creation process in order to involve stakeholders from the very beginning of the project, and not as mere testers of the finished Web-app. The goal of this approach was to produce a Web-app that met the real needs of students and corresponded to the precise choices of real-world health care service providers.
Process 1: The Co-creation
Exploring the Need of Students to Know and Access Real-World Health Care Services
A mixed-method field survey was conducted on the campuses of the University of Bordeaux. Participants were selected randomly, following quota sampling for the quantitative phase (paper questionnaire) and a snowball approach for the qualitative phase (semistructured face-to-face interviews). Our sampling strategy and rationale for the number of participants were based on previous project experience with university students, who declared that they were often unavailable, given their workload. At least 100 respondents to the questionnaire and 15 participants in the qualitative phase were considered sufficient to obtain a saturated sample. Finally, 126 students (72 females, 57.1%; mean age 22.1 years) answered the paper questionnaire and 16 students (11 females, 69%; mean age 22.3 years) underwent the semistructured face-to-face interview. The survey was coordinated by a junior full-time researcher and conducted by a group of four public health students (1 male and 3 females; mean age 23.7 years), constituting the stakeholder group of university students for the project. The results of this survey showed that students felt that accessing real-world health care services is expensive (59/126, 46.8%) and takes time (49/126, 38.9%). The qualitative phase allowed for the identification of a third overarching reason for students not accessing care: lack of knowledge of the health insurance system and the services offered. Two thirds of the students from both phases of the survey reported a strong interest in receiving a list of free or low-cost health care services available near their home and campus. The French health insurance system reimburses a large portion of medical consultations, but students expressed the need to be informed of the presence of totally free health care services adapted to their young age. Furthermore, receiving this list from a trusted source, such as a university research team, was felt to be reassuring. Complete results of the mixed-method field survey are available elsewhere (personal communication by Montagni et al, 2016).
Identifying the Real-World Health Care Services of Interest for Students
Based on the results of the survey described above, we established the following inclusion criteria for the real-world health care services to be displayed in the Web-app: being located in the Bordeaux metropolitan area (surface area 579.27 km2; 28 municipalities in the Aquitaine-Limousin-Poitou-Charentes region, France); being free or low-cost (ie, costing a maximum of €15 per consultation); being addressed, either exclusively or among other population groups, to young people aged 18-30 years; and being outpatient. All health domains were taken into consideration without any exception (from general health to sexual health, gynecology, and dentistry). Emergency services were excluded, because the focus of the Web-app related to recommended general consultations.
At this stage, both target stakeholder groups selected by the servi-Share project were involved. For university students, the four public health students of the first stage performed a qualitative search consisting of a preliminary review of existing documents (fliers, informative booklets) and the consultation of official Websites of the University of Bordeaux and local health services. These students produced an initial list of services in a prestructured Excel table and contacted each one by email and/or phone. This phase served to produce a final list of 95 services distinguished according to their field of expertise (eg, addictions, contraception), their offer (eg, consultations, delivery of information, medical activities), and the type of professionals involved (eg, medical doctors, nurses, social carers). For each service, contact information and addresses were provided.
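As an illustration of how such an inventory can be screened programmatically, the sketch below models one row of the prestructured table as a typed record and applies the inclusion criteria listed above (Bordeaux metropolitan area, free or at most €15 per consultation, open to 18-30 year olds, outpatient, non-emergency). This is a minimal sketch only: in the project the screening was done by hand in the Excel table and then verified by the provider panel, and all type, field, and function names here are hypothetical.

```typescript
// Hypothetical record for one entry of the service inventory compiled in the
// prestructured table (names and fields are illustrative only).
interface HealthService {
  name: string;
  fieldOfExpertise: string;          // eg, "addictions", "contraception"
  offer: string[];                   // eg, ["consultations", "delivery of information"]
  professionals: string[];           // eg, ["medical doctor", "nurse", "social carer"]
  address: string;
  phone?: string;
  email?: string;
  inBordeauxMetroArea: boolean;
  costPerConsultationEuros: number;  // 0 for free services
  minAgeServed: number;
  maxAgeServed: number;
  outpatient: boolean;
  emergencyService: boolean;
}

// Inclusion criteria from this section: located in the Bordeaux metropolitan area,
// free or low-cost (at most €15 per consultation), addressed (exclusively or not)
// to people aged 18-30 years, outpatient, and not an emergency service.
function meetsInclusionCriteria(s: HealthService): boolean {
  // The service qualifies if its target age range overlaps 18-30 years.
  const servesYoungAdults = s.minAgeServed <= 30 && s.maxAgeServed >= 18;
  return (
    s.inBordeauxMetroArea &&
    s.costPerConsultationEuros <= 15 &&
    servesYoungAdults &&
    s.outpatient &&
    !s.emergencyService
  );
}

// Keep only the services that satisfy every criterion.
function selectServices(candidates: HealthService[]): HealthService[] {
  return candidates.filter(meetsInclusionCriteria);
}
```

In the project itself, the equivalent narrowing from 95 candidate services to the final 88 was achieved through the providers' verification rather than through code; the sketch only shows how the stated criteria compose.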
The target stakeholder group of real-world health care service providers was composed of seven health care professionals based in Bordeaux (1 health center director, 1 health center codirector, 1 psychiatrist, 1 general practitioner, 1 social worker, 1 administrative secretary, and 1 nurse). This group verified the list and counter-checked the details with respect to the inclusion criteria. After face-to-face meetings between the two target stakeholder groups, a final list of 88 health care services was established. Assuming that the Web-app will be maintained over the long term, we plan to contact the panel of seven health care professionals based in Bordeaux once per year to review and keep the list of health care services up-to-date. These professionals are also meant to inform our research group whenever new health care services open or existing ones close.
Deciding on User Interface, and Developing the Beta Version of the Web-App
Academic staff and industry Web-developers involved in the project engaged the stakeholder group of four university students in the development of the Web-app. The four students participated in three 2-hour meetings with the industry Web-developers. Sessions were documented and students were encouraged to write on material provided, comment on the color templates, and suggest design decorations. Subsequent exchanges were facilitated by emails and phone calls.
The intervention of the stakeholder group of real-world health care service providers at this stage was limited to verifying the information to be displayed in the Web-app. This group checked the information for clarity and comprehension, and corrected the descriptions of the real-world health care services. The corrected contents were sent by email to the Web-developers and integrated into the beta version of the servi-Share Web-app, in which each service is presented with a short description.
We finally opted for a youth-friendly approach, balancing trusted informative content with a fresh design. Feedback at this stage was specifically sought regarding the colors, size, readability, and comprehension of the text and design elements (eg, logo, symbols). Students primarily had suggestions regarding specific design constructs, recommending that the graphic design be gender neutral, with positive images and a simple interface. The beta version of the servi-Share Web-app was then produced. A detailed view of the beta design is shown in the accompanying figure.
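The paper does not specify which mapping technology the industry Web-developers used. Purely as an illustration of the kind of interface described in this section (geolocated services shown on a map with short, verified descriptions), the following sketch uses the open-source Leaflet library with OpenStreetMap tiles; the coordinates, data structure, and function names are placeholders and not the project's actual code.

```typescript
import * as L from 'leaflet';

// Hypothetical, simplified service entry; in the real Web-app the short
// descriptions were written and verified by the provider stakeholder group.
interface MappedService {
  name: string;
  description: string; // expertise, offer, professionals, contact details
  lat: number;
  lng: number;
}

// Centre the map on Bordeaux (approximate city-centre coordinates).
// Assumes an HTML element with id="map" exists in the page.
const map = L.map('map').setView([44.8378, -0.5792], 12);

// Free OpenStreetMap tiles as the base layer.
L.tileLayer('https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png', {
  attribution: '&copy; OpenStreetMap contributors',
}).addTo(map);

// One marker per free or low-cost service, with its description shown in a
// popup when the student taps or clicks the marker.
function renderServices(services: MappedService[]): void {
  for (const s of services) {
    L.marker([s.lat, s.lng])
      .addTo(map)
      .bindPopup(`<strong>${s.name}</strong><br>${s.description}`);
  }
}
```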
Process 2: The Evaluation
Testing the Beta Version of the Web-App with Target Audience
An initial one-week test was performed by research staff from the i-Share study: 7 people tested the Web-app to report technical problems (eg, slow page loading), misspellings, and display issues. The second test of the beta version of the Web-app took two months and consisted of emailing the Web-app link to a sample of students belonging to the i-Share cohort study, in which the servi-Share project is nested. Students participating in the i-Share cohort study had voluntarily given their email addresses, and consented to be contacted following the regulations of the national board CNIL (DR-2013-019). We chose a convenience sample of 1300 students based on the following criteria: being a student in one of the universities or post-secondary schools of the Bordeaux area, being aged 18-30 years, and having completed both the i-Share baseline questionnaire and first follow-up questionnaire. Students responding to the follow-up questionnaire were thought to be actively involved in research, thus increasing the likelihood of rapid feedback. According to the general response rate of i-Share participants to other substudy surveys (approximately 20%), we expected at least 300 students to use the Web-app and answer a 7-item questionnaire on access to real-world health care services. This sample did not include students from the co-creation process, to avoid biased answers and to obtain fresh feedback on the Web-app.
The beta test was intended to develop a better understanding of how the Web-app was being used, and what issues arose during implementation. Student testers were asked to independently interact with the interface and search for real-world health care services on the Web-app. For ethical reasons, participants’ usage of the Web-app could not be registered and health care services that were searched could not be tracked. The beta version of the Web-app is secured and password protected for students having received the link in the invitation email. Of the 1300 invited students, a final sample of 319 students tested the Web-app (24.54%).
Collecting Testers’ Feedback Via a Satisfaction Survey
At the end of the test of the beta version (early November 2016), a satisfaction survey was sent by email to all 319 student testers. The survey was carried out using a Google form and included 10 items on six thematic issues: feasibility (item 1, “Have you encountered any difficulties in using this Web-app?”); appreciation of the Web-app (item 2, “Do you like the Web-app design?”, and item 10, “How would you rate this Web-app?”); increased knowledge and perceived benefits (item 3, “Have you discovered through this Web-app some health care services you had never heard about before?”, and item 4, “Have you found in the Web-app some new health care services you will have access to for the future?”); general interest (item 5, “Will you use this Web-app in the future instead of other geolocalization search engines if you need to contact a health care service?”, and item 8, “Will you recommend this Web-app to your friends?”); diffusion channels (item 6, “Where would you like to see this Web-app being promoted and diffused?”, and item 7, “Through which channels do you think that students should be informed of the existence of this Web-app?”), and suggestions for improvement (item 9, “What would you like to add to this Web-app?”).
The satisfaction survey represented the first step of the evaluation of the servi-Share Web-app. Using a participatory research methodology consisting of an iterative approach, we involved stakeholders in the short-term evaluation of the Web-app. In total, 73 of 319 students (22.9%, no missing values) answered the satisfaction questionnaire. Results are shown in.
|1. Feasibility||Item 1 - Have you encountered any difficulties in using this Web-app?||6 (8%)||67 (92%)|
|2. Appreciation||Item 2 - Do you like the Web-app design?||55 (75%)||18 (25%)|
|3. Increased knowledge and perceived benefits|
|Item 3 - Have you discovered through this Web-app some health care services you had never heard about before?||62 (85%)||11 (15%)|
|Item 4 - Have you found in the Web-app some new health care services you will have access to for the future?||61 (84%)||12 (16%)|
|4. General interest|
|Item 5 - Will you use this Web-app in the future instead of other geolocalization search engines if you need to contact a health care service?||50 (68%)||23 (32%)|
|Item 8 - Will you recommend this Web-app to your friends?||51 (70%)||22 (30%)|
Concerning the diffusion channels (items 6 and 7), 54 of 73 students (74%) answered that they would like the Web-app to be displayed on the official website and social network pages of their university, 40 students (40/73, 55%) would not mind finding the Web-app on GooglePlay and/or AppStore, and 70 students (70/73, 96%) underlined the importance of diffusing the Web-app via the support of official institutions (eg, university and town hall).
Concerning suggestions for improvement (item 9), 65 of 73 students (89%) said the Web-app should display supplementary health care services, such as general practitioners and pharmacies, and 55 students (55/73, 75%) also reported that they would like to make an appointment online using the Web-app. Finally, when asked to rate the Web-app (item 10) on a scale from 0 to 10 points, 61 students (61/73, 84%) attributed a score of >7 points.
Results confirmed the interest of developing and diffusing a Web-based support informing students on the availability of free or low-cost health care services. Particularly positive results on items 3 and 4 confirmed students’ acquired knowledge (health literacy) of the real-world health care services in the Bordeaux area, providing a proximal outcome of the utilization of our Web-app.
Planning a Long-Term Evaluation
For the second step of the evaluation, the real impact of the Web-app on users’ health behaviors and practices in the long-term will be measured via the analysis of the i-Share cohort data. Each year, participants in the i-Share cohort must respond to a new yearly follow-up questionnaire. We plan to insert ad hoc items on at least two upcoming questionnaires to verify whether students have ever used the servi-Share Web-app and what impact it has had on their consultations and hospitalizations. Statistics on the number of participants accessing the Web-app will also be available and will give an approximative indication of the popularity of the Web-app. To corroborate our results, in the two years following the launch of the Web-app, the providers of the real-world health care services displayed in the Web-app will be asked (by means of a questionnaire) whether young people accessing their services have used our interactive map before consultations. All measures coming directly from stakeholders should provide a complete evaluation of the utility and quality of the servi-Share Web-app.
The co-creation process took a total of 8 months (January-August 2016). The first step of the evaluation process has taken 4 months (September-December 2016). The second step of the evaluation (ie, long-term impact) is planned to take two years. Results are expected to contribute to the evidence-based development of a strategy of cooperation and collaboration among researchers, stakeholders (students and health care providers), and industry to produce eHealth tools of good and certified quality. The project was financed from January-December 2016 by the National Alliance for Life and Health Sciences (Alliance Nationale pour les Sciences de la Vie et de la Santé, AVIESAN) through two research financing Thematic Multi-Organisms Institutes (Instituts thématiques multi-organismes, ITMO) for Public Health (ITMO Santé Publique) and Health Technologies (ITMO Technologies pour la Santé).
Here we have outlined the co-creation and evaluation processes used during the development of a Web-app mapping real-world health care services. The methodologies of the singular stages of these two processes have also been described, and we have underlined the utility of including stakeholders in both processes. The philosophy underpinning the servi-Share project is one of collaboration, empowerment, and participation, moving towards research with rather than on stakeholders.
For co-creation, the participatory approach with stakeholders was effective for informing design and development processes to help ensure our project is relevant, connects with young people, is grounded in the real-world, and can respond to the new social realities in which students live .
For evaluation, the two steps of this process permit us to assess: (1) in the short-term, if the Web-app is of interest to students and increases their health literacy in terms of knowledge of real-world health care services (proximal outcomes); and (2) in the long-term, if the Web-app will imply behavioral changes that make students more frequently contact and access real-world health care services (distal outcomes). The second step of the evaluation process will allow us to test whether the servi-Share Web-app represents a valid bridge between eHealth and real-world services, and whether other functions (eg, making appointments online) should be added to the Web-app.
Existing literature on health literacy strongly suggests that young people’s health empowerment is induced by knowledge improvement [, ]. However, the transition from knowledge to action is a debatable question. The longitudinal results issued from the i-Share cohort will help us understand whether the use of the geolocalizing Web-app servi-Share is positively related to the access to real-world health care services (ie, an increased number of accesses to real-world health care services). Using our Web-app, we hypothesize that students will better understand the organizational structure and offerings of real-world health care services, thus identifying the health care services to promptly contact and attend. Avoidance and delay of medical care could then be reduced.
Strengths and Limitations
The servi-Share Web-app is different from other Web-mapping tools (eg, Google Map, Waze Map, Bing Map) that display any type of service without specific quality criteria, as it maps and describes preselected real-world health care services. This preselection should facilitate the choice by users, who can feel reassured by the fact that the health care services that are displayed are specifically addressed to them, are free or low-cost, and are advised by an expert scientific team.
However, a limitation of this study is that one may argue that a Web-app showing real-world health care services could increase the workload of these services, and overwhelm them with contacts that do not correspond to legitimate health needs. The aim of the servi-Share Web-app is not to increase undue consultations, but to guide students to a well-conceived selection of the services to access. Conversely, the servi-Share Web-app has not been conceived as a substitute for real-world health care services. The young population we are addressing is at risk for medical care avoidance. Most students do not know who to contact when they fall ill, and consequently do not seek care and tend to self-medicate, thus worsening their health conditions .
We have described a novel approach using a Web-app linking real-world health care services and eHealth. Our preliminary findings concerning the co-creation process suggest that this participatory approach was both feasible and welcomed by both groups of stakeholders (university students and real-world health care service providers). Findings from the evaluation process will assess the long-term impact of the Web-app on real-world health care access by university students. Our Web-app is expected to be beneficial to young people, health care providers, policymakers, and health system managers.
We would like to thank Thomas Vias and Mathieu Zimmer from the society deux degrés who produced the servi-Share app. We also thank all students involved in the co-creation process (mainly Béatrice Famin, Mélodie Garcès, Jason Koman, and Margaux Petropoulos), and the entire i-Share team for their operational support.
IM drafted the manuscript. EL, JW, and CT revised the manuscript. All authors read and approved the final manuscript.
Conflicts of Interest
Multimedia Appendix 1
The web-application 7 item questionnaire on the access to real-world healthcare services .PDF File (Adobe PDF File), 589KB
- Schweitzer A, Ross J, Klein C, Lei K, Mackey E. An electronic wellness program to improve diet and exercise in college students: a pilot study. JMIR Res Protoc 2016 Feb 29;5(1):e29 [FREE Full text] [CrossRef] [Medline]
- Ministère des Affaires sociales et de la Santé. Présentation du Plan - Santé des jeunes. 2008 Feb 27. URL: http://social-sante.gouv.fr/IMG/pdf/Presentation_du_Plan_version_final.pdf [accessed 2017-02-01] [WebCite Cache]
- Observatoire national de la Vie Etudiante. Enquête nationale Conditions de vie des étudiants 2013. Paris; 2014. URL: http://www.ove-national.education.fr/ [accessed 2017-02-01] [WebCite Cache]
- Tylee A, Haller DM, Graham T, Churchill R, Sanci LA. Youth-friendly primary-care services: how are we doing and what more needs to be done? Lancet 2007 May 05;369(9572):1565-1573. [CrossRef] [Medline]
- Taber JM, Leyva B, Persoskie A. Why do people avoid medical care? A qualitative study using national data. J Gen Intern Med 2015 Mar;30(3):290-297 [FREE Full text] [CrossRef] [Medline]
- Beck F, Richard JB. Les Comportements de Santé Des Jeunes. In: Analyses Du Baromètre Santé 2010. Saint-Denis, Inpes. Paris: Inpes; 2013.
- Sørensen K, Van den Broucke S, Fullam J, Doyle G, Pelikan J, Slonska Z, (HLS-EU) Consortium Health Literacy Project European. Health literacy and public health: a systematic review and integration of definitions and models. BMC Public Health 2012 Jan 25;12:80 [FREE Full text] [CrossRef] [Medline]
- Andreassen H, Bujnowska-Fedak M, Chronaki C, Dumitru R, Pudule I, Santana S, et al. European citizens' use of E-health services: a study of seven countries. BMC Public Health 2007 Apr 10;7(1):a. [CrossRef] [Medline]
- Pelletier SG. AAMC Reporter. 2012. Explosive growth in health care apps raises oversight questions URL: https://news.aamc.org/ [accessed 2017-02-01] [WebCite Cache]
- Boulos MN, Brewer AC, Karimkhani C, Buller DB, Dellavalle RP. Mobile medical and health apps: state of the art, concerns, regulatory control and certification. Online J Public Health Inform 2014;5(3):229 [FREE Full text] [CrossRef] [Medline]
- Wickramasinghe N, Fadlalla AM, Geisler E, Schaffer J. A framework for assessing e-health preparedness. Int J Electron Healthc 2005;1(3):316-334. [CrossRef] [Medline]
- Boonstra A, Broekhuis M. Barriers to the acceptance of electronic medical records by physicians from systematic review to taxonomy and interventions. BMC Health Serv Res 2010 Aug 06;10:231 [FREE Full text] [CrossRef] [Medline]
- Stoyanov S, Hides L, Kavanagh D, Zelenko O, Tjondronegoro D, Mani M. Mobile app rating scale: a new tool for assessing the quality of health mobile apps. JMIR Mhealth Uhealth 2015 Mar 11;3(1):e27 [FREE Full text] [CrossRef] [Medline]
- Wattanasoontorn V, Hernández R, Sbert M. Simulations, Serious Games and Their Applications. In: Cay Y, Goei SL, editors. Serious Games for e-Health Care. Singapore: Springer; 2014.
- Witell L, Kristensson P, Gustafsson A, Löfgren M. Idea generation: customer co‐creation versus traditional market research techniques. J Serv Manage 2011 Apr 26;22(2):140-159. [CrossRef]
- Le portail du service public de la Sécurité Sociale. Missions, organisation et prestations de la branche maladie. 2017. URL: http://www.securite-sociale.fr/ [accessed 2017-02-01] [WebCite Cache]
- Department of Health. You're welcome quality criteria: making health services young people friendly. London: Department of Health London; 2005. URL: https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/216350/dh_127632.pdf [accessed 2017-02-08] [WebCite Cache]
- Whitehouse SR, Lam P, Balka E, McLellan S, Deevska M, Penn D, et al. Co-creation with TickiT: designing and evaluating a clinical eHealth platform for youth. JMIR Res Protoc 2013 Oct 18;2(2):e42 [FREE Full text] [CrossRef] [Medline]
- Kickbusch I. Health literacy: engaging in a political debate. Int J Public Health 2009;54(3):131-132. [CrossRef] [Medline]
- Parker R, Ratzan S, Lurie N. Health literacy: a policy challenge for advancing high-quality health care. Health Aff (Millwood) 2003;22(4):147-153 [FREE Full text] [Medline]
- Rickwood D, Bradford S. The role of self-help in the treatment of mild anxiety disorders in young people: an evidence-based review. Psychol Res Behav Manag 2012;5:25-36 [FREE Full text] [CrossRef] [Medline]
|CNIL: National Commission of Informatics and Liberties|
|eHealth: electronic health|
|i-Share: Internet-Based Students Health Research Enterprise|
|ITMO: Thematic Multi-Organisms Institute|
|servi-Share: services for the Internet-Based Students Health Research Enterprise students’ cohort|
Edited by G Eysenbach; submitted 19.10.16; peer-reviewed by S Whitehouse, M Nahum, J Apolinário-Hagen; comments to author 04.01.17; revised version received 09.01.17; accepted 11.01.17; published 16.02.17Copyright
©Ilaria Montagni, Emmanuel Langlois, Jérôme Wittwer, Christophe Tzourio. Originally published in JMIR Research Protocols (http://www.researchprotocols.org), 16.02.2017.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Research Protocols, is properly cited. The complete bibliographic information, a link to the original publication on http://www.researchprotocols.org, as well as this copyright and license information must be included. | <urn:uuid:e1a3b07a-767f-40e5-a277-db15d9e17b7c> | CC-MAIN-2022-33 | https://researchprotocols.org/2017/2/e24/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573533.87/warc/CC-MAIN-20220818215509-20220819005509-00295.warc.gz | en | 0.922857 | 7,549 | 3.0625 | 3 |
The Definition as well as Meaning of Hanakotoba Consisting Of 35 Popular Sorts Of Japanese Blossoms
This overview checks out the background, beginnings, as well as contemporary definition of Hanakotoba, plus we’ll share 35 of one of the most preferred kinds of blossoms as well as their importance in the Japanese Language of Flowers.
When you choose an arrangement of blossoms to offer as a present, pick setups to embellish your office or home, or pick blossoms to commemorate a wedding celebration, you most likely concentrate on layout facets like the shade, dimension, form, as well as look of the blossoms that are consisted of. It’s feasible you have actually never ever also considered what definitions the blossoms may stand for symbolically.
All over the globe, nevertheless, various kinds of blossoms are abundant with importance, as well as each has its very own definition that can vary relying on where you remain in the globe as well as the social context within which the blossoms exist. In Japan, as an example, blossoms take their symbolic definition from the art of hanakotoba.
What Is Hanakotoba?
Straight converted, hanakotoba implies “blossom words.” It’s likewise occasionally described as the Japanese language of blossoms. Hanakotoba designates symbolic definitions to various varieties of blossoms as well as also various shades of blossoms in order to develop a distinct language that allows interaction via blossoms.
Numerous societies have comparable techniques of designating symbolic definitions to various blossoms. The western matching of hanakotoba is the
Language of Flowers
(floriography) which was created throughout the Victorian age.
The Background as well as Beginnings of Hanakotoba
Although blossoms have actually had a symbolic location in folklore, faith, as well as society for nearly as lengthy as people have actually had faiths, folklores, as well as societies, the custom of hanakotoba is not also near old.
As a matter of fact, it was believed to have actually begun in Japan throughout the Meiji Duration in between 1868 as well as 1912. When the Language of Flowers actually took hold in the western globe throughout the Victorian age, this was simply a couple of years after the time.
The beginnings of hanakotoba are not specifically clear, however it is believed to have actually perhaps been presented to Japan from China. The technique of sharing beliefs via the language of blossoms came to be preferred as well as quickly took on to account for Japanese background, faith, as well as customizeds. Hanakotoba as well as Ikebana Ikebana is the Japanese art of blossom setup. This technique goes back much even more than hanakotoba with the earliest states of it in verse from the
(794 to 1185).
The term, Ikebana, actually converts to “preparing blossoms” or “enlivening blossoms.” Ikebana looks for to copy nature as well as life while sharing particular feelings via flower layout.
The art kind utilizes numerous natural environments such as blossoms, branches, rocks, water, as well as deliberately made containers to develop flower scapes that catch motion as well as share a large range of sensations. Ikebana setups are utilized in a wide range of setups consisting of for layout as well as design, as church offerings, as well as throughout conventional tea events.
Normally, ikebana as well as hanakotoba go together. With the art of ikebana looking for to share feeling as well as sensation, specialists (called kadō) can resort to the symbolic definitions of blossoms appointed by hanakotoba in order to even more plainly connect messages, feelings, as well as sensations by instilling added definition right into their extremely aesthetic layouts by picking the blossoms that stand for those feelings in hanakotoba.
Where Can You Discover Hanakotoba in Japan?
Because it is, at its heart, a kind of interaction, the technique of hanakotoba shows up in a number of areas within Japanese society, as well as the definitions of blossoms can in fact be analyzed or converted anywhere that blossoms are utilized deliberately, such as locating symbolic definitions of blossoms that are consisted of in literary works.
As specified over, hanakotoba can most typically be seen being used within the technique of ikebana. Ikebana usually commemorates the periods by making use of blossoms that are presently in period, these blossoms are chosen with excellent discernment to select the ones that share a proper message. Bathrobe The fabric patterns located on
are unquestionably gorgeous, however the function of these layouts goes a lot even more than merely looking rather; the layouts in fact have substantial symbolic definition. Given that bathrobe layouts frequently include blossoms as well as various other natural environments, hanakotoba penetrates the layouts.
Bathrobe layouts are chosen based upon the period, the user’s age, as well as the event, so the kinds of blossoms that are highlighted adjustment relying on the context. In addition, the blossoms picked stand for eagerly anticipating that blossom entering into blossom, which implies these robes must not be put on when the blossoms they include are currently thriving. Kanzashi In a comparable means, that blossom selection is symbolically crucial for robes, they are likewise particularly integrated as well as chosen for
(conventional Japanese hair accessories) based upon the period as well as definition of the blossoms.
Typical Occasions as well as Events
Normal blossom setups, arrangements, garlands, wreaths, as well as ikebana layouts are typically utilized to embellish as well as decorate unique celebrations, events, events, as well as conventional occasions such as wedding celebrations, funeral services, events, births, birthday celebrations, or Seijin no Hi (a maturing event for young adults that are formally ending up being grownups).
With the custom of hanakotoba being rather popular throughout Japan, individuals take care to choose one of the most proper blossoms to utilize in layouts or to offer as presents for these celebrations. This makes certain individuals prevent the artificial of commemorating a wedding celebration with blossoms that stand for fatality or desertion or sharing compassion at the loss of an enjoyed one with blossoms that stand for joy.
Selecting the incorrect kind or shade of blossoms is not simply inadequate preference, however it might likewise be thought about offending, rude, or believed to misbehave good luck. Anime as well as Various Other Pop Culture Hanakotoba likewise has a considerable visibility in Japanese songs, comics ( manga), as well as computer animation (
). The musicians that generate these artforms typically utilize hanakotoba as a means to note the flow of time in their verses as well as tales, making use of seasonal blossoms to represent particular times of the year.
In addition, hanakotoba is typically utilized in these types of pop culture in a comparable means to just how the Japanese language of blossoms is utilized in blossom providing– as a means to share a specific state of mind, to establish a particular ambience in a scene, to portray the sensations of personalities, as well as it’s also utilized as a device for foreshadowing upcoming story spins.
Cherry blooms are amongst one of the most typically utilized blossoms in Japanese pop culture.
35 Popular Sorts Of Japanese Blossoms as well as Their Hanakotoba 1. Amaryllis (Amaririsu) Amaryllis is a category of round blossoms that bloom in white as well as red, however they are most kept in mind for their spectacular, cherry-red blooms. In hanakotoba, amaryllis blossoms suggest reluctant. In the
, amaryllis blossoms are frequently connected with Aries (March 20 to April 21). This is many thanks to the blossom’s fiery-red color as well as the god of battle’s intense character. 2. Ambrosia (Amuburoshiā) According to hanakotoba, ambrosia implies pious. In Japan, ambrosia is utilized to make an unique beverage that, in folklore, was stated to give eternal life, as well as ambrosia tea is made as an offering to the gods. In the west, plants of the
are typically called ragweed, as well as they’re frequently considered with a pejorative undertone, as lots of people experience ragweed allergic reactions. 3. Polyp (Polyp) White
suggest genuineness in hanakotoba. They are frequently connected with the Sagittarius (November 21 to December 21) zodiac indicator. 4. Aster (Shion) In hanakotoba,
implies rememberance. Especially, the Aster tataricus (Tatarinow’s aster) has a symbolic definition that converts to “I will certainly not neglect you.” As a result of their definition connected with memory as well as remembrance, these rather purple blooms are frequently utilized for memorials as well as recognizing the dead. 5. Azalea (Tsutsuji)
mean client or moderate in hanakotoba. They are an usual blossom located growing in Japanese yards. Several of the very best instances of azaleas in Japanese yards can be located at the Nezu Temple as well as Shiofune Kannon-ji Holy Place in Tokyo, as well as Mount Yamato Katsuragi in Nara. 6. Bluebell (Burūberu) In hanakotoba,
bluebell implies happy
The blossom likewise has the exact same definition in western society, as well as this commonness is most likely because of the “bowing” look of the plant’s dangling bell-shaped blooms. In Japan, bluebells can be talented as a “note” of many thanks after getting a present or support from a person. 7. Camellia (Tsubaki) In hanakotoba,
- camellia blossoms have various definitions
- based upon the shade of the blossom.
- Red– Normally, red camellias suggest “crazy” or “diing with poise.” For warriors, nevertheless, they signify a worthy fatality.
White– White camellias suggest waiting.
Pink or Yellow– Pink as well as yellow camellias suggest hoping for or missing out on a person.
It’s great to keep in mind, nevertheless, that camellias are not generally talented to anybody while they are damaged or sick because, as they shrivel, the blossoms “behead” themselves. For a person that is recovery, this does not send out a favorable message.
A very early or late-winter springtime blossom, the camellia is likewise thought about to be an advantageous icon throughout the Lunar New Year as well as is frequently integrated right into the celebrations. 8. Carnation (Kānēshon) In hanakotoba,
carnation blossoms suggest attraction, difference, as well as love
In Japan, red carnations are mostly connected with domestic love, making them an usual present for Mom’s Day. Carnations are thought about to be a conventional present for both guys as well as ladies. They are usually offered as presents to partners as well as close member of the family. 9. Cherry Bloom (Sakura)
run deep in Japanese society. In hanakotoba, they have a number of definitions, consisting of mild as well as kind. They likewise suggest “transcience of life” due to the fact that the yearly cycle of budding, thriving, as well as winter season inactivity are viewed as a depiction of life, fatality, as well as renewal via the periods.
Additionally, cherry blooms signify elegance as well as physical violence in Japan. The lifecycle of the cherry bloom has actually been contrasted to the brilliant yet vibrant life of samurai warriors. Cherry blooms were likewise utilized to embellish kamikaze pilots’ airplanes throughout the 2nd globe battle. 10. Chrysanthemum (Kigiku as well as Shiragiku)
- are an additional crucial blossom in Japanese society. Given that 9 is believed to be a fortunate number, on the 9th day of the 9th month every year (September 9th), Japan commemorates Kiku no Sekku (National Chrysanthemum Day or the Celebration of Joy).
- Although there are a number of various shades of chrysanthemums, in hanakotoba, 2 shades have actually formally assigned definitions:
Yellow (Kigiku)– Yellow chrysanthemum implies royal in hanakotoba.
White (Shiragiku)– White chrysanthemum implies fact in hanakotoba. 11. Daffodil (Suisen) In hanakotoba, the
daffodil implies regard
, making them a great present for senior citizens, superiors at the workplace, or anybody you desire to whom you desire to share sensations of regard. The blossom’s Japanese name, suisen, actually converts to “hermit by the water.” 12. Dahlia (Tenjikubotan) In hanakotoba,
dahlia blossoms suggest taste
They are the ideal present for anybody whose design you desire to enhance. This makes them an outstanding present for a housewarming celebration or anybody that operates in the layout, design, design, or perhaps cooking careers. 13. Edelweiss (Ēderuwaisu) In hanakotoba,
mean power or guts. This is likely a recommendation to the blossom’s capacity to grow as well as linger in its severe towering environment which lies in between the snowy elevations of concerning 6,000 as well as nearly 10,000 feet up. In Japan, edelweiss blossoms expand normally in the hilly locations of Hokkaidō. 14. Erica (Erika) Erica implies privacy in hanakotoba. This consists of plants from the
, typically described as health or heather plants in the western globe. They have wonderful branches of pink blooms that are generated in expansion, providing one a feeling of calmness as well as privacy. 15. Freesia (Furījia) In hanakotoba, freesia blossoms suggest premature or juvenile. Although they have a charming scent that is valued for usage in fragrances, candle lights, as well as soaps, providing them might be taken as a disrespect. They are likewise a sign of the start of springtime in Japan, as well as there is a whole celebration, the Hachijojima Freesia Celebration, which takes location in between March 20th as well as April 5th in their honor.
16. Gardenia (Kuchinashi)
In hanakotoba, gardenia
has fairly an attractive definition, secret love. This makes it the ideal present from a secret admirer or for a companion with whom your partnership is not yet public. In addition, gardenia is utilized in Taoist as well as Buddhist practices for the relaxing, tranquil scent the white blooms give off.
17. Hibiscus (Haibīsukasu) Hibiscus
implies mild in hanakotoba. These plants as well as blossoms are likewise frequently offered as a means to recognize site visitors in Japan, standing for the inviting as well as pleasant customizeds of the society.
18. Hydrangeas (Ajisai) Hydrangeas suggest satisfaction in hanakotoba. Japan is believed to be the beginning of hydrangea blossoms, therefore these wonderful collections of blossoms penetrate the society as well as the location. They usually begin thriving in June as well as stand for the stormy period.
19. Japanese Primrose (Sakurasou)
In hanakotoba, Japnese primrose
implies puppy love, chastity, as well as adoration. In Japanese society, these blossoms are viewed as totally favorable as well as are also believed to perhaps enhance an individual’s luckiness crazy, so this makes them a proper present for anyone on nearly any type of satisfied event.
20. Jasmine (Jasumin) Jasmine blooms suggest pleasant or elegant in hanakotoba. This definition as well as their wonderful scent make the blossoms or hedges great, free of charge presents for anybody you discover to be pleasant or elegant. In the zodiac, Jasmine is connected with Cancer cells (June 21 to July 22) 21. Lavender (Rabendā) In hanakotoba,
implies loyalty, as well as they are the ideal blossoms to offer anybody to whom you desire to share your loyalty or to request a person’s trust fund. Lavender is extra highly connected with the Mediterranean, however huge areas of lavender can be appreciated in Japan on the island of Hokkaidō
- 22. Lily Flowers (Shirayuri, Sayuri, as well as Oniyuri)
- Like specific various other blossoms, in hanakotoba,
- various shades of lilies have various definitions
, as well as these definitions are really various. It’s crucial that you pick the appropriate shade of lily if you’re providing them as a present, to guarantee you do not unintentionally share the incorrect message to your recipient.
White (Shirayuri)– In hanakotoba, white lilies suggest pureness as well as chastity.
Orange (Sayuri)– In hanakotoba, orange lilies suggest disgust as well as retribution. Tiger Lily (Oniyuri)– In hanakotoba, tiger lilies suggest riches.
In Japan, orange lilies are, possibly, the ideal present for a real opponent.
23. Magnolia (Magunoria) Magnolias suggest all-natural in hanakotoba. Magnolia trees generate huge, bowl-shaped, white blooms with a magnificent scent. In Japanese society, they are likewise signs of a love for nature, self-respect, the aristocracy, as well as determination. Magnolias likewise utilized to be typically consisted of in wedding arrangements in Japan, as a sign of the toughness of love.
24. Early Morning Magnificence (Asagao)
In hanakotoba, early morning splendor blossoms
suggest unyielding assurances. In Japanese society, the deep-blue, trumpet-shaped blooms as well as the leafy creeping plants of early morning magnificences are preferred layout concepts. The earliest depictions show up on concepts on scrolls, combs, followers, robes, as well as towels.
25. Peony (Botan) Peonies suggest fearlessness in hanakotoba, as well as they are typically connected with honor, fearlessness, as well as good luck. In Japan, the peony is frequently described as the king or queen of blossoms which may be a recommendation to the excellent look of huge, full-double peony ranges.
26. Plum Bloom (Ume)
In hanakotoba, plum blooms suggest commitment as well as sophistication. Plum trees (occasionally called Japanese apricots) bloom in the late springtime as well as very early winter season. Given that they occasionally also grow throughout severe, winter, they’re viewed as a sign of hope as well as an indication of winter season’s end.
27. Red Crawler Lily (Higanbana or Manjushage)
In hanakotoba, the red crawler lily has a couple of definitions, as well as all of them are rather depressing. They consist of desertion, never ever to reunite, as well as shed memory. The name manjushage converts to “blossom from the paradises,” as well as it is assumed to be an indication that a commemorative event will certainly happen quickly.
- 28. Roses (Benibara, Bara, Kiiroibara, as well as Kiiroibara)
- All over the globe, roses are blossoms of symbolic value. Equally as
- every shade of rose has its very own distinct definition
- in the western Language of Flowers, various shades of roses likewise have various definitions in Japanese hanakotoba. In typical in between these blossom languages created in contrary hemispheres is that roses are connected with various kinds or components of love.
Red (Benibara)– Love or crazy
implies chastity, pureness, or “much from the one he enjoys.” In Japanese society, the lotus’s importance penetrates much past hanakotoba. As it is, possibly, one of the most typical icon of pureness as well as likewise knowledge in regards to the Buddhist custom This symbolic definition recommendations the truth that the lotus rises from filthy, sloppy water in order to generate a strikingly gorgeous, white flower bloom. 30. Sunflower (Himawari) In hanakotoba, sunflowers suggest brilliance, regard, as well as enthusiastic love. In Japan,
sunflowers are likewise a sign of healing as well as hope
, as well as they are well known yearly throughout the Himawari Matsuri (Sunflower Celebration) in Kitanakagusuku, Okinawa.
31. Pleasant Pea (Suītopī)
In hanakotoba, pleasant pea blossoms suggest bye-bye. Pleasant peas are foreign to Japan, as well as in fact weren’t presented till the very early component of the 20th century, so the symbolic definition does not run unfathomable. While they do make a thoughtful bye-bye or going-away existing, they have a charming fragrance as well as are proper to consist of in practically any type of flower setup.
32. Tulip (Chūrippu)
In hanakotoba, red tulips suggest charity, trust fund, as well as appeal or popularity, as well as yellow tulips suggest discriminatory love. These definitions vary considerably from the western tulip importance
of everlasting love (red) as well as happiness (yellow). In Japan, providing red tulips coincides as desiring a person popularity as well as appeal, as well as a present of yellow tulips might be utilized as a thoughtful means– as long as they recognize with hanakotoba blossom definitions– of allowing a person whom you enjoy recognize that you comprehend they do not really feel the exact same (or vice versa).
33. Violet (Sumire) Violets suggest sincerity in hanakotoba. In Japan, they are expanded typically in yards as well as along wall surfaces, as well as violets are frequently offered as a present of recognition or love due to the fact that they share one’s sincere genuineness.
34. Wisteria (Fuji)
In hanakotoba, wisteria implies unfaltering as well as welcome. Wisteria creeping plants generate huge collections of purple blossoms, as well as they are an usual concept on kanzashi as well as robes. As soon as highly connected with the aristocracy in Japan given that just the top course were enabled to (or might manage to use the shade purple), Wisteria blossoms were likewise.
35. Zinnia (Hyakunichisou)
suggest commitment. In western societies, their symbolic definitions are likewise connected with commitment, as they stand for relationship, charming love, as well as get-togethers in between buddies– as well as you can not have relationship or love without commitment.
Hanakotoba Frequently Asked Questions:
What does hanakotoba signify?
Hanakotoba itself does not signify anything. Instead, it is the name of the Japanese language of blossoms, a custom that designates symbolic definitions to blossoms to ensure that they can be utilized to share feelings as well as messages.
What blossom stands for fatality in Japan?
The red crawler lily (higanbana or manjushage in Japanese) signifies fatality in Japan.
What does a black increased mean in Japan?
In Japan, black roses signify fatality. Black roses are globally connected with fatality as well as grieving.
What shades stink in Japan?
Black (kuro) is highly connected with fatality, grieving, devastation, as well as wickedness in Japan. Consequently, the shade black can be thought about offending in Japan– specifically if it is put on or talented throughout commemorative as well as wondrous celebrations.
What blossom signifies love in Japan?
In Japan as well as hanakotoba, red camellias signify charming love as well as red carnations signify domestic love. Red roses are likewise a sign of love or remaining in love.
What do blue roses suggest in Japan?
Blue roses have a symbolic definition that is global worldwide. Given that blue roses do not exist in nature as well as are unnaturally grown, in Japan as well as in other places, blue roses stand for attaining the difficult.
What does a yellow increased mean in Japan?
In hanakotoba, yellow roses signify envy. The shade yellow is likewise connected with joy, nature, as well as sunlight in Japan. Blossoms of this shade can likewise lug these definitions.
What do yellow tulips suggest in Japan?
In Japan as well as hanakotoba, yellow tulips signify unrequited love, helpless love, or discriminatory love.(*) Hanakotoba– The Last Word(*) Hanakotoba is an abundant as well as enjoyable custom to exercise as well as examine, as well as it includes an extra component of satisfaction as well as suggesting to the art of flower layout. When picking blossoms to commemorate a crucial occasion, to lionize throughout a sad event, or to embellish your office or home, it’s vital to think about the social context as well as to pick blossoms that are proper for the event or that can bring favorable symbolic definitions right into your life.(*) In Japan, individuals are typically most acquainted with the symbolic definitions appointed within the practices of hanakotoba. In the western globe, individuals are usually extra acquainted with the conventional symbolic definitions appointed by the Victorian language of blossoms. Prior to picking blossoms or a flower layout, put in the time to search for the symbolic definitions connected with the blooms that are consisted of to make sure you’re selecting a layout that sends out the appropriate message to your recipient or shares a proper view for the event.(*) | <urn:uuid:40c1bf03-be4b-44e4-ac78-f6c60e0d15b6> | CC-MAIN-2022-33 | https://my-plant.org/ultimate-overview-to-hanakotoba-the-japanese-language-of-flowers/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570879.1/warc/CC-MAIN-20220808213349-20220809003349-00496.warc.gz | en | 0.965558 | 5,731 | 2.953125 | 3 |
Classics in the History of Psychology
An internet resource developed by
Christopher D. Green
York University, Toronto, Ontario
THE RELATION OF STRENGTH OF STIMULUS TO RAPIDITY OF HABIT-FORMATION
Robert M. Yerkes and John D. Dodson (1908)
First published in Journal of Comparative Neurology and Psychology, 18, 459-482.
In connection with a study of various aspects of the modifiability of behavior in the dancing mouse a need for definite knowledge concerning the relation of strength of stimulus to rate of learning arose. It was for the purpose of obtaining this knowledge that we planned and executed the experiments which are now to be described. Our work was greatly facilitated by the advice and assistance of Doctor E. G. MARTIN, Professor G. W. PIERCE, and Professor A. E. KENNELLY, and we desire to express here both our indebtedness and our thanks for their generous services.
The habit whose formation we attempted to study quantitatively, with respect to the strength of the stimulus which favored its formation, may be described as the white-black discrimination habit. Of the mice which served as subjects in the investigation it was demanded that they choose and enter one of two boxes or passage-ways. One of the boxes was white; the other black. No matter what their relative positions, the subject was required to choose the white one. Attempts to enter the black box resulted in the receipt of a disagreeable electric shock. It was our task to discover (1) whether the strength of this electric stimulus influences the rapidity with which dancers acquire the habit of avoiding the black passage-way, and if so, (2) what particular strength of stimulus is most favorable to the acquisition of this habit.
As a detailed account of the important features of the white-black visual discrimination habit in the dancer has already been published, a brief description of our method of experimentation [p. 460] will suffice for the purposes of this paper. A sketch of the experiment box used by us in this investigation appears as fig. 1, and a ground plan of the box with its electric attachments, as fig. 2.
This apparatus consisted of a wooden box 94 cm. long; 30 cm. wide; and 11.5 cm. deep (inside measurements), which was divided into a nest-box, A, (fig. 2) an entrance chamber, B, and two electric boxes, W, W, together with alleys which connected these boxes with the nest-box. The doorways between the electric boxes and the alleys were 5 by 5 cm. On the floor of each electric box, as is shown in the figures, were the wires of an interrupted circuit [p. 461] which could be completed by the experimenter, by closing the key K, whenever the feet of a mouse rested upon any two adjacent wires in either of the boxes. In this circuit were an electric battery and a Porter inductorium. One of these electric boxes bore black cards, and the other white cards similarly arranged. Each box bore two cards. One was at the entrance on the outside of the box and the other on the inside, as fig. 1 indicates.
The latter consisted of three sections of which two constituted linings for the sides of the box and the third a cover for a portion of the open top of the box. In no case did these inside cards extend the entire length of the electric boxes. The white and black cards were readily interchangeable, and they never were left on the same electric box for more than four consecutive tests. The [p. 462] order in which they were shifted during twenty-five series of ten tests each, in addition to the preference series A and B, is given in table 1. In case a mouse required more than twenty-five series of tests (250 tests), the same set of changes was repeated, beginning with series 1. In the table the letters r and l refer to the position of the white cards; r indicates that they marked the electric box which was on the right of the mouse as it approached the entrances of the electric boxes from the nest-box; l indicates that it marked the left electric box.
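Table 1 itself is not reproduced in this transcription, but the constraint it embodies is stated above: the white cards never remained on the same electric box for more than four consecutive tests. The short sketch below is our own illustration, in Python, with a made-up schedule fragment rather than the actual table 1 values; it shows only how such a shift schedule can be checked against that constraint.

```python
# Illustrative sketch only: the schedule fragment below is hypothetical,
# not the actual sequence of table 1. It merely checks the constraint
# stated in the text, namely that the white cards were never left on the
# same electric box for more than four consecutive tests.

def longest_run(positions):
    """Length of the longest run of identical entries ('r' or 'l')."""
    if not positions:
        return 0
    longest = current = 1
    for previous, following in zip(positions, positions[1:]):
        current = current + 1 if following == previous else 1
        longest = max(longest, current)
    return longest

example_schedule = ['r', 'r', 'l', 'l', 'l', 'r', 'l', 'r', 'r', 'l']
assert longest_run(example_schedule) <= 4
```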
The way in which this apparatus was used may be indicated by a brief description of our experimental procedure. A dancer was placed in the nest-box by the experimenter, and thence it was permitted to pass into the entrance chamber, B. The experimenter then placed a piece of cardboard between it and the door-way between A and B and gradually narrowed the space in which the animal could move about freely by moving the cardboard toward the electric boxes. This, without in any undesirable way interfering with the dancer's attempts to discriminate and choose correctly, greatly lessened the amount of random activity which preceded choice. When thus brought face to face with the entrances to the boxes the mouse soon attempted to enter one of them. If it happened to select the white box it was permitted to enter, pass through, and return to the nest-box; but if, instead, it started to enter the black box the experimenter by closing the key, upon which his finger constantly rested during the tests, caused it to receive an electric shock which as a rule forced a hasty retreat from the black passage-way and the renewal of attempts to discover by comparison which box should be entered.
Each of the forty mice experimented with was given ten tests every morning until it succeeded in choosing the white box correctly on three consecutive days, that is, for thirty tests. A choice was recorded as wrong if the mouse started to enter the black box and received a shock; as right if, either directly or after running from one entrance to the other a number of times, it entered the white box. Whether it entered the white electric box or the black one, it was permitted to return to the nest-box by way of the white box before another test was given. Escape to the nest-box by way of the black box was not permitted. A male and a female, which were housed in the same cage between experiments, were placed in the experiment box together and given their tests turn about.
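Read as a protocol, the daily procedure and the stopping rule just described can be summarized in a brief sketch. This is our paraphrase in Python, not the authors' formulation; the names, and the gradually improving choice behavior used to stand in for learning, are our own assumptions.

```python
# A minimal paraphrase of the training procedure described above.
# The mouse's choice behavior is simulated only so that the loop runs;
# the steadily improving accuracy is a hypothetical stand-in for learning.

import random

TESTS_PER_SERIES = 10   # ten tests every morning
CRITERION_SERIES = 3    # training ends after three consecutive errorless series

def one_test(chooses_white):
    """Return True if an error is recorded, i.e. the mouse starts to enter
    the black box and receives the shock; entering the white box is right."""
    return not chooses_white

def train(initial_accuracy=0.5, improvement_per_series=0.05):
    """Run daily series of ten tests until three consecutive perfect series."""
    errors_per_series = []
    accuracy = initial_accuracy
    while True:
        errors = sum(one_test(random.random() < accuracy)
                     for _ in range(TESTS_PER_SERIES))
        errors_per_series.append(errors)
        accuracy = min(1.0, accuracy + improvement_per_series)
        if (len(errors_per_series) >= CRITERION_SERIES
                and all(e == 0 for e in errors_per_series[-CRITERION_SERIES:])):
            return errors_per_series
```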
[p. 463] Almost all of the mice used were between six and eight weeks old at the beginning of their training. The exact age of each, together with its number, is stated in table 2.
This table shows also the general classification of our experiments. They naturally fall into three sets. These are designated by the roman numerals I, II, and III in the table, and will throughout the paper be referred to as the experiments of set I, set II and set III. As is suggested by the heading "condition of discrimination," at the top of the first vertical column of table 2, these sets of experiments differ from one another first of all as to condition of visual discrimination or, more explicitly stated, in the amount by which the two electric [p. 464] boxes differed from one another in brightness. For set I this difference was medium, in comparison with later conditions, and discrimination was therefore of medium difficultness. For set II the difference was great, and discrimination was easy. For set III the difference was slight, and discrimination was difficult. It is clear, then, that the series of words, medium, great, slight, in the table refers to the amount by which the electric boxes differed in brightness, and the series medium, easy, difficult, to the demand made upon the visual discriminating ability of the mice.
For the sake of obtaining results in this investigation which should be directly comparable with those of experiments on the modifiability of behavior in the dancer which have been conducted during the past three years, it was necessary for us to use the same general method of controlling the visual conditions of the experiment that had previously been used. This we decided to do, notwithstanding the fact that we had before us methods which were vastly superior to the old one with respect to the describability of conditions and the accuracy and ease of their control. To any experimenter who wishes to repeat this investigation with other animals we should recommend that, before recourse is had to the use of cardboards for the purpose of rendering the boxes distinguishable, thorough tests be made of the ability of the animal to discriminate when the boxes are rendered different in brightness by the use of a screen which excludes a measurable amount of light from one of them. We have discovered that the simplest and best method of arranging the conditions for such experiments with the dancer as are now to be described is to use two electric boxes which are alike in all respects and to control the amount of light which enters one of them from the top. It is easy to obtain satisfactory screens and to measure their transmitting capacity. We regret that the first use which we wished to make of our results in this investigation forced us to employ conditions which are relatively complicated and difficult to describe.
For the sake of the scientific completeness of our paper, however, and not because we wish to encourage anyone to make use of the same conditions, we shall now describe as accurately as we may the conditions of visual discrimination in the several sets of experiments.
The cards at the entrances to the electric boxes were the same in all of the experiments. Each card (the black and the white) [p. 465] was 11.5 cm in height and 5.4 cm. in width, with a hole 3.5 by 3.5 cm. in the middle of its lower edge as is shown in fig. 1. These entrance cards were held in place by small metal carriers at the edges of the electric boxes. The area of white surface exposed to the view of a mouse as it approached the entrances to the electric boxes was 49.85 sq. cm. and the same amount of black surface was exposed. The white cardboard reflected 10.5 times as much light as the black cardboard.
Special conditions of set I. The inside length of each electric box was 28.5 cm., the width 7 cm., and the depth 11.5 cm. The inside cards extended from the inner edge of the front of each box a distance of 13.5 cm. toward the back of the box. Consequently there was exposed to the view of the mouse a surface 13.5 cm. by 11.5 cm. (the depth of the box and of the cardboard as well) on each side of the box. The section of cardboard at the top measured 13.5 cm. in length by 6.5 cm. in width. The total area of the white (or black) cardboard exposed on the inside of an electric box was therefore 13.5 X 11.5 X 2 (the sides) + 13.5 X 6.5 (the top) = 398.25 sq. cm. If to this we add the area of the entrance card we obtain 448.10 sq. cm. as the amount of surface of cardboard carried by each electric box.
But another condition, in connection with the amount of cardboard present, determined the difference in the brightness of the boxes, namely, the amount of open space between the end of the inner cardboards and the end of the experiment box. The larger this opening the more light entered each box. In the case of the experiments of set I this uncovered portion of each electric box was 15 cm. long by 7 cm. wide; its area, therefore, was 105 sq. cm.
Special conditions of set II. Both the outer and the inner cardboards were precisely the same in form and arrangement as in the case of set I, but in order that discrimination might be rendered easier, and the time required for the acquisition of the habit thus shortened, a hole 8.7 cm. long by 3.9 cm. wide was cut in the middle or top section of the white cardboard. This greatly increased the amount of light in the white electric box. The difference in the brightness of the boxes was still further increased by a reduction of the space between the end of the cardboard and the end of the box from 15 cm. to 2 cm. or, in terms of area, from 105 sq. cm. to 14 sq. cm. This was accomplished by cutting 13 cm. from the rear end of the experiment box. For the experiments of set [p. 466] II the black box was much darker than it was for those of set I, whereas the white box was not markedly different in appearance.
Special conditions of set III. The experiments of this set were conducted with the visual conditions the same as in set II, except that there was no hole in the white cardboard over the electric box. This rendered the white box much darker than it was in the experiments of set II, consequently the two boxes differed less in brightness than in the case of set II, and discrimination was much more difficult than in the experiments of either of the other sets.
In the second column of table 2 the values of the several strengths of electrical stimuli used in the investigation are stated. To obtain our stimulus we used a storage cell, in connection with gravity batteries, and with the current from this operated a PORTER inductorium. The induced current from the secondary coil of this apparatus was carried by the wires which constituted an interrupted circuit on the floor of the electric boxes. For the experiments of set I the strengths of the stimuli used were not accurately determined, for we had not at that time discovered a satisfactory means of measuring the induced current. These experiments therefore served as a preliminary investigation whose chief value lay in the suggestions which it furnished for the planning of later experiments. The experiments of sets II and III were made with a PORTER inductorium which we had calibrated, with the help of Dr. E. G. MARTIN of the Harvard Medical School, by a method which he has recently devised and described.
On the basis of the calibration measurements which we made by MARTIN'S method the curve of fig. 3 was plotted. From this curve it is possible to read directly in "units of stimulation" the value of the induced current which is yielded by a primary current of one ampere for any given position of the secondary coil. With the secondary coil at 0, for example, the value of the induced current is 350 units; with the secondary at 5.2 centimeters on the scale of the inductorium, its value is 155 units; and with the secondary at 10, its value is 12 units. The value of the induced current for a primary current greater or less than unity is obtained by multiplying the reading from the calibration curve by the value [p. 467] of the primary current. The primary current used for the experiments of sets II and III measured 1.2 amperes, hence the value of the stimulating current which was obtained when the secondary coil stood at 0 was 350 X 1.2 = 420 units of stimulation.
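The rule given in this paragraph can be written out compactly. The notation below is ours, not the authors': $C(d)$ stands for the calibration-curve reading (units of stimulation per ampere of primary current) at secondary-coil position $d$, and $I$ for the primary current in amperes.

\[
S = I \times C(d), \qquad \text{so that, for sets II and III,} \qquad S = 1.2 \times C(0) = 1.2 \times 350 = 420 \ \text{units of stimulation.}
\]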
As conditions for the experiments of set I, we chose three strengths of stimuli which we designated as weak, medium, and strong. The weak stimulus was slightly above the threshold of stimulation for the dancers. Comparison of the results which it yielded with those obtained by the use of our calibrated inductorium enabled us to state with a fair degree of certainty that its value was 125 ± 10 units of stimulation. The strong stimulus was decidedly [p. 468] disagreeable to the experimenters and the mice reacted to it vigorously. Its value was subsequently ascertained to be 500 ± 50 units. For the medium stimulus we tried to select a value which should be about midway between these extremes. In this we succeeded better than we could have expected to, for comparison indicated that the value was 300 ± 25 units. Fortunately for the interpretation of this set of results, the exact value of the stimuli is not important.
By the use of our calibrated inductorium and the measurement of our primary current, we were able to determine satisfactorily the stimulating values of the several currents which were used in the experiments of sets II and III. The primary current of 1.2 amperes, which was employed, served to actuate the interrupter of the inductorium as well as to provide the stimulating current. The interruptions occurred at the rate of 65 ± 5 per second. We discovered at the outset of the work that it was not worth while to attempt to train the dancers with a stimulus whose value was much less than 135 units. We therefore selected this as our weakest stimulus. At the other extreme a stimulus of 420 units was as strong as we deemed it safe to employ. Between these two, three intermediate strengths were used in the case of set II, and two in the case of set III. Originally it had been our intention to make use of stimuli which varied from one another in value by 60 units of stimulation, beginning with 135 and increasing by steps of 60 through 195, 255, 315, 375 to as nearly 425 as possible. It proved to be needless to make tests with all of these.
We may now turn to the results of the experiments and the interpretation thereof. Before the beginning of its training each mouse was given two series of tests in which the electric shock was not used and return to the nest-box through either the white or the black box was permitted. These twenty tests (ten in series A and ten in series B) have been termed preference tests, for they served to reveal whatever initial tendency a dancer possessed to choose the white or the black box. On the day following preference series B, the regular daily training series were begun and they were continued without interruption until the dancer had succeeded in choosing correctly in every test on three consecutive days.
Results of the experiments of set I. The tests with the weak stimulus of set I were continued for twenty days, and up to that time only one of the four individuals in training (no. 128) had [p. 469] acquired a perfect habit. On the twentieth day it was evident that the stimulus was too weak to furnish an adequate motive for the avoidance of the black box and the experiments were discontinued.
A few words in explanation of the tables are needed at this point. In all of the tables of detailed results the method of arrangement which is illustrated by table 3 was employed. At the top of the table are the numbers of the mice which were trained under the conditions of stimulation named in the heading of the table.
The first vertical column gives the series numbers, beginning with the preference series A and B and continuing from 1 to the last series demanded by the experiment. In additional columns appear the number of errors made in each series of ten tests, day by day, by the several subjects of the experiments; the average number of errors made by the males in each series; the average number of errors made by the females; and, finally, the general [p. 470] average for both males and females. In table 3, for example, it appears that male no. 128 chose the black box in preference to the white 6 times in series A, 5 times in series B, 3 times in series 1, 6 times in series 2. After series 15 he made no errors during three consecutive series. His training was completed, therefore, on the eighteenth day, as the result of 180 tests. We may say, however, that only 150 tests were necessary for the establishment of a perfect habit, for the additional thirty tests, given after the fifteenth series, served merely to reveal the fact that he already possessed a perfect habit. In view of this consideration, we shall take as a measure of the rapidity of learning in these experiments the number of tests received by a mouse up to the point at which errors ceased for at least three consecutive series.
Precisely as the individuals of table 3 had been trained by the use of a weak stimulus, four other dancers were trained with a medium stimulus. The results appear in table 4. All of the subjects acquired a habit quickly. Comparison of these results with those obtained with the weak stimulus clearly indicates that the medium stimulus was much more favorable to the acquirement of the white-black visual discrimination habit.
In its results the strong stimulus proved to be similar to the weak stimulus. All of the mice in this case learned more slowly [p. 471] than did those which were trained with the medium strength of stimulus.
The general result of this preliminary set of experiments with three roughly measured strengths of stimulation was to indicate that neither a weak nor a strong electrical stimulus is as favorable to the acquisition of the white-black habit as is a medium stimulus.
Contrary to our expectations, this set of experiments did not prove that the rate of habit-formation increases with increase in the strength of the electric stimulus up to the point at which the shock becomes positively injurious. Instead an intermediate range of intensity of stimulation proved to be most favorable to the acquisition of a habit under the conditions of visual discrimination of this set of experiments.
[p. 472] In the light of these preliminary results we were able to plan a more exact and thoroughgoing examination of the relation of strength of stimulus to rapidity of learning. Inasmuch as the training under the conditions of set I required a great deal of time, we decided to shorten the necessary period of training by making the two electric boxes very different in brightness, and the discrimination correspondingly easy. This we did, as has already been explained, by decreasing the amount of light which entered the black box, while leaving the white box about the same. The influence of this change on the time of learning was very marked indeed.
With each of the five strengths of stimuli which were used in set II two pairs of mice were trained, as in the case of set I. The detailed results of these five groups of experiments are presented in tables 6 to 10. Casual examination of these tables reveals the fact that in general the rapidity of learning in this set of experiments increased as the strength of the stimulus increased. The
weakest stimulus (135 units) gave the slowest rate of learning; the strongest stimulus (420 units), the most rapid.
The results of the second set of experiments contradict those of the first set. What does this mean? It occurred to us that the apparent contradiction might be due to the fact that discrimination was much easier in the experiments of set II than in those of set I. To test this matter we planned to use in our third set of experiments a condition of visual discrimination which should be extremely difficult for the mice. The reader will bear in mind that for set [p. 475] II the difference in brightness of the electric boxes was great; that for set III it was slight; and for set I, intermediate or medium.
For the experiments of set III only one pair of dancers was trained with any given strength of stimulus. The results, however, are not less conclusive than those of the other sets of experiments because of the smaller number of individuals used. The data of tables 11 to 14 prove conclusively that our supposition was correct. The varying results of the three sets of experiments are explicable in terms of the conditions of visual discrimination.
In [p. 476] set III both the weak and the strong stimuli were less favorable to the acquirement of the habit than the intermediate stimulus of 195 units. It should be noted that our three sets of experiments indicate that the greater the brightness difference of the electric boxes the stronger the stimulus which is most favorable to habit-formation (within limits which have not been determined). Further discussion of the results and attempts to interpret them may be postponed until certain interesting general features of the work have been mentioned.
The behavior of the dancers varied with the strength of the stimulus to which they were subjected. They chose no less quickly in the case of the strong stimuli than in the case of the weak, but they were less careful in the former case and chose with less delib- [p. 477]
[p. 478] eration and certainty. Fig. 4 exhibits the characteristic differences in the curves of learning yielded by weak, medium, and strong stimuli. These three curves were plotted on the basis of the average number of errors for the mice which were trained in the experiments of set I. Curve W is based upon the data of the last column of table 3, curve M, upon the data in the last column of table 4; and curve S upon the data of the last column of table 5. In addition to exhibiting the fact that the medium stimulus yielded a perfect habit much more quickly than did either of the other stimuli, fig. 4 shows a noteworthy difference in the forms of the curves for the weak and the strong stimuli. Curve W (weak stimulus) is higher throughout its course than is curve S (strong stimulus). This means that fewer errors are made from the start under the condition of strong stimulation than under the condition of weak stimulation.
Although by actual measurement we have demonstrated marked difference in sensitiveness to the electric shock among our mice, we are convinced that these differences do not invalidate the conclusions which we are about to formulate in the light of the results that have been presented. Determination of the threshold electric stimulus for twenty male and twenty female dancers proved that the males respond to a stimulus which is about 10 per cent less than the smallest stimulus to which the females respond.
Table 15 contains the condensed results of our experiments. It gives, for each visual condition and strength of stimulus, the number of tests required by the various individuals for the acquisition of a perfect habit; the average number of tests required by the males, for any given visual and electrical conditions; the same for the females; and the general averages. Although the numbers of the mice are not inserted in the table they may readily be learned if anyone wishes to identify a particular individual, by referring to the tables of detailed results. Under set I, weak stimulus, for example, table 15 gives as the records of the two males used 150 and 200+ tests. By referring to table 3, we discover that male no. 128 acquired his habit as a result of 150 tests, whereas male no. 134 was imperfect at the end of 200 tests. To indicate the latter fact the plus sign is added in table 15. Of primary importance for the solution of the problem which we set out to study are the general averages in the last column of the table. From this series of averages we have constructed the curves of fig. 5. This figure [p. 479]
[p. 480] very clearly and briefly presents the chiefly significant results of our investigation of the relation of strength of electrical stimulus to rate of habit-formation, and it offers perfectly definite answers to the questions which were proposed for solution.
In this figure the ordinates represent stimulus values, and the abscissæ number of tests. The roman numerals I, II, III, designate, respectively, the curves for the results of set I, set II, and set III. Dots on the curves indicate the strengths of stimuli which were employed. Curve I for example, shows that a strength of stimulus of 300 units under the visual conditions of set I, yielded a perfect habit with 80 tests.
From the data of the various tables we draw the following conclusions:
1. In the case of the particular habit which we have studied, the rapidity of learning increases as the amount of difference in the brightness of the electric boxes between which the mouse is required to discriminate is increased. The limits within which this statement holds have not been determined. The higher the curves of fig. 5 stand from the base line, the larger the number of tests represented by them. Curve II is lowest, curve I comes next, and curve III is highest. It is to be noted that this is the order of increasing difficultness of discrimination in the three sets of experiments.
[p. 481] 2. The relation of the strength of electrical stimulus to rapidity of learning or habit-formation depends upon the difficultness of the habit, or, in the case of our experiments, upon the conditions of visual discrimination.
3. When the boxes which are to be discriminated between differ very greatly in brightness, and discrimination is easy, the rapidity of learning increases as the strength of the electrical stimulus is increased from the threshold of stimulation to the point of harmful intensity. This is indicated by curve II. Our results do not represent, in this instance, the point at which the rapidity of learning begins to decrease, for we did not care to subject our animals to injurious stimulation. We therefore present this conclusion tentatively, subject to correction in the light of future research. Of its correctness we feel confident because of the results which the other sets of experiments gave. The irregularity of curve II, in that it rises slightly for the strength 375, is due, doubtless, to the small numbers of animals used in the experiments. Had we trained ten mice with each strength of stimulus instead of four the curve probably would have fallen regularly.
4. When the boxes differ only slightly in brightness and discrimination is extremely difficult the rapidity of learning at first rapidly increases as the strength of the stimulus is increased from the threshold, but, beyond an intensity of stimulation which is soon reached, it begins to decrease. Both weak stimuli and strong stimuli result in slow habit-formation. A stimulus whose strength is nearer to the threshold than to the point of harmful stimulation is most favorable to the acquisition of a habit. Curve III verifies these statements. It shows that when discrimination was extremely difficult a stimulus of 195 units was more favorable than the weaker or the stronger stimuli which were used in this set of experiments.
5. As the difficultness of discrimination is increased the strength of that stimulus which is most favorable to habit-formation approaches the threshold. Curve II, curve I, curve III is the order of increasing difficultness of discrimination for our results, for it will be remembered that the experiments of set III were given under difficult conditions of discrimination; those of set I under medium conditions; and those of set II under easy conditions. As thus arranged the most favorable stimuli, so far as we may judge from our results, are 420, 300, and 195. This leads us to infer that an easily acquired habit, that is one which does not [p. 482] demand difficult sense discriminations or complex associations, may readily be formed under strong stimulation, whereas a difficult habit may be acquired readily only under relatively weak stimulation. That this fact is of great importance to students of animal behavior and animal psychology is obvious.
Attention should be called to the fact that since only three strengths of stimulus were used for the experiments of set I, it is possible that the most favorable strength of stimulation was not discovered. We freely admit this possibility, and we furthermore wish to emphasize the fact that our fifth conclusion is weakened slightly by this uncertainty. But it is only fair to add that previous experience with many conditions of discrimination and of stimulation, in connection with which more than two hundred dancers were trained, together with the results of comparison of this set of experiments with the other two sets, convinces us that the dancers would not be likely to learn much more rapidly under any other condition of stimulation than they did with a strength of 300 ± 25 units of stimulation.
Naturally we do not propose to rest the conclusions which have just been formulated upon our study of the mouse alone. We shall now repeat our experiments, in the light of the experience which has been gained, with other animals.
Yerkes, Robert M. The dancing mouse. New York: The Macmillan Company. See especially p. 92, et seq. 1908.
Martin, E. G. A quantitative study of faradic stimulation. I. The variable factors involved. Amer. Jour. of Physiol., vol. 22, pp. 61-74. 1908. II. The calibration of the inductorium for break shocks. Ibid., pp. 116-132. | <urn:uuid:b7aaefd6-7bbb-4ca9-b2d4-9bdc0c2c7346> | CC-MAIN-2022-33 | https://psychclassics.yorku.ca/Yerkes/Law/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572021.17/warc/CC-MAIN-20220814083156-20220814113156-00098.warc.gz | en | 0.96876 | 6,541 | 2.921875 | 3 |
Title IX History
“No person in the United States shall, on the basis of sex, be excluded from participation in, be denied the benefits of, or be subjected to discrimination under any educational program or activity receiving Federal financial assistance (Title IX of 20 U.S.C.A §168).”
What Is Title IX?
Enacted in 1972, Title IX is a federal civil rights law that applies to educational institutions that receive federal funding, including K-12 and post-secondary public, private, and charter schools in the United States. There are some exceptions, such as military schools and private religious schools.
Title IX prohibits these institutions from denying benefits or treating students differently based on their sex. It was passed to ensure all students had the same rights and abilities to learn and participate in educational programming, regardless of gender or sex.
Many people think Title IX is only about athletics, but this law prohibits discrimination in all educational programs and activities—athletics, academics, clubs, classes, and other school-sponsored activities and programs.
Title IX is clear about its goals, but speaks very little to the specific steps schools must take to prevent and address sex and gender discrimination. Therefore, since its passage, the U.S. Department of Education has issued numerous documents to explain to schools how to implement and enforce this important law. The Department of Education’s Office of Civil Rights is the official body that investigates schools to make sure that they comply with Title IX. The Office of Civil Rights can also make schools change their policies or procedures if they are not in compliance.
Title IX guidance and enforcement have been influenced by the larger social and political climates spanning close to five decades and ten U.S. presidents. Especially over the last ten years, presidential administrations have frequently updated or changed the guidance, interpretations, and formal rules of Title IX.
In 2011, the Obama Administration published a “Dear Colleague” letter providing non-binding guidance to institutions of higher education. This letter described institutions’ responsibilities in order comply with Title IX. The “Dear Colleague” document was important, because it helped unite many different regulations and policies that the Department of Education had issued over the last 40 years into one cogent set of procedures, which most U.S. colleges and universities adopted.
After taking office in 2017, the Trump Administration wanted to change some of the guidance given in the “Dear Colleague” letter. The Trump Administration, however, wanted their rules to be more binding. They pursued a formal rule-making process, a more time-consuming and complicated course of action, so that their new rules would be more legally binding than the guidance given in the “Dear Colleague” letter. These new rules were issued in May 2020, and took effect in mid-August 2020.
Some of these changes were more positive, and others more negative. Additionally, some of the Trump Administration’s new, legally binding rules are more flexible than others. However, the new grievance and adjudication process issued under the Trump Administration is very specific. In fact, some of the new regulations associated with this process may interfere with existing state law. This will make Title IX more complicated to enforce.
Several lawsuits were against the Trump Administration’s Title IX regulations. These challenges were brought by organizations including the American Civil Liberties Union, National Women’s Law Center, Victims Rights Law Center, and 18 states’ Attorneys General (including Pennsylvania) and D.C. The Pennsylvania Coalition Against Rape submitted a sworn statement as part of Pennsylvania Attorney General Josh Shapiro’s lawsuit.
The Victim Rights Law Center case resulted in a change to the Trump Administration’s regulations in July 2021. The federal judge in this case found one part of the Trump Administration’s regulations unconstitutional and sent this provision back to the Department of Education so it could be reconsidered. Judges have also upheld the regulations in court.
Since taking office, the Biden Administration has issued non-binding Title IX guidance and interpretation. However, if the Biden Administration’s guidance conflicts with the Trump Administration’s formal and legally binding regulations, the regulations are more legally powerful.
The Biden Administration is beginning the formal rules-making process, so that they can create legally binding regulations that would override the Trump Administration’s rules. However, this process takes time, and the proposed changes will not likely be announced until at least 2022, and may not take effect until 2023 or later. Until then, the Trump Administration’s Title IX regulations remain in effect.
In the meantime, the Biden Administration has created new guidance in areas of Title IX where the Trump Administration did not create formal rules. For instance, the Department of Education’s Office of Civil Rights issued a Notice of Interpretation stating that LGBTQ students will be protected from discrimination on the basis of gender identity or presentation and discrimination on the basis of sexual orientation under Title IX.
Title IX FAQ’s for Higher Ed
How do I know if my school is violating Title IX?
Your school has to do a lot of things in order to comply with Title IX. This includes, but isn’t limited to:
- Publicize a non-discrimination policy, the Title IX Coordinator’s contact information, and how to file a formal Title IX complaint
- Process cases in a reasonably prompt time frame, and with equitable treatment of the respondent and complainant
- Offer supportive measures to all victims who have made formal complaints, including those that choose not to pursue a resolution process
- Training staff as is necessary to maintain an effective Title IX Program
- Have unbiased and impartial Title IX staff with no conflicts of interest
- Ensure that Title IX regulations apply equally to both parties, including by taking action against selective enforcement of or biased outcomes associated with the Title IX Program
- Take action against policies or programs that are explicitly or in effect discriminatory, or that substantially heighten the risk of sexual harassment
If you think your school is violating Title IX, we’d suggest reaching out to an attorney. PCAR offers free and confidential legal services on Title IX through the Sexual Violence Legal Assistance Project. Please be aware that some time limits may apply when creating Title IX violation claims against your school.
What are climate reports under Title IX? What are good and bad signs in a campus climate report?
According to the Clery Act, schools must regularly conduct anonymous climate reports. These reports give a more accurate picture of sexual misconduct occurring on-campus than a tally of cases reported to the Title IX Office. That’s because only one in seven cases of sexual misconduct that occur on-campus are reported to the Title IX Office. These climate reports are important, because they give a better picture of the scale and type of problems a school community is facing. It’s hard to fix problems that you aren’t aware of or don’t understand.
Many people think that low levels of sexual misconduct in climate reports are a good sign. In fact, the opposite is usually true. Many schools consistently report that 0 instances of misconduct occur on their campus; this is never the case, and these reports instead show that the school is interested in ignoring sexual misconduct when it occurs. In fact, schools with higher levels of reported sexual misconduct may actually be safer, because students have a higher level of trust in the institution and are therefore willing to reveal instances of sexual misconduct, and because the school is highly motivated to obtain accurate measurements of sexual misconduct on campus.
What counts as sexual misconduct under Title IX?
Title IX protects again multiple kinds of discrimination. This includes:
- Sex and gender discrimination
- Gender identity and presentation discrimination
- Sexual orientation discrimination
- Discrimination based on marital, family, or parental status
- Retaliation for attempting to enforce one’s rights against discrimination through the Title IX process
Title IX also protects against sexual harassment. Sexual harassment can include:
- A hostile environment, meaning severe, pervasive, and objectively offensive conduct. Sometimes a single act of violence can qualify as a hostile environment, because the effect is pervasive.
- Quid pro quo (trading sexual activity for academic or other benefits provided by someone in authority)
- Gender-based crimes defined by the Clery Act.
Clery Act Offenses include:
- Sexual assault- includes the state’s definitions of rape, fondling, incest, and statutory rape.
- Stalking- which may also be considered sexual harassment, and covers a wide range of behaviors like hacking, revenge porn, etc.
- Domestic violence- based on the state’s definition. May include violence perpetrated by current or former spouses, intimate partners, co-parents, etc.
- Dating violence- explicit threats or acts of physical violence, including against a third party or against the perpetrator themselves (threats to commit suicide or self-harm, etc).
How do I report misconduct to the Title IX office?
The first thing you should do is review your school’s policy online, and follow their reporting procedure. If you’re not sure what to do, contact your school’s Title IX Coordinator to find out how to report. Your school probably has forms that they would like for you to fill out, and reporting according to their procedure will likely make the process more prompt and efficient.
Reporting sexual misconduct to your school’s Title IX Office is not supposed to be overly complicated. If you feel like your school is making you jump through lots of hoops in order to file your initial misconduct report, or that the process they are making you go through is unnecessarily difficult and time-consuming, you should consider contacting an attorney. Write down the basic who/what/when/ where of the incident in a letter addressed to the Coordinator of your school and sign it (emails and email signatures count!). The Sexual Violence Legal Assistance Project provides free, confidential legal counsel of Title IX cases. Your attorney can help make sure that your complaint is heard.
How will the Title IX office help me after I report misconduct?
Your school has to provide you with supportive measures once you report that you are a victim of assault, even if you choose not to pursue a formal resolution process. That’s true even if your school or classes are virtual.
These supportive measures are designed to make it possible for you to continue your education. This is true whether or not you continue to pursue a formal complaint process. Some supportive measures may even continue after a decision is made on your case.
Supportive measures aren’t allowed to unreasonably burden either you or the respondent (the person accused of sexual misconduct). That means that these measures can’t cost either party any fees or charges (including for procedures like changing dorms or switching classes), or have a punitive or disciplinary affect.
Supportive measures have to include “reasonably available” accommodations. That means schools can’t refuse to provide a supportive measure simply because they don’t want to set a new precedent for future cases. If the measure is reasonably available for your case, they have to offer it, regardless of the consequences for future cases.
Supportive measures have to ensure the safety of all parties. This means that the school should be trying to keep you and the respondent away from each other, prevent further harassment of you or the respondent, and protect you and the respondent from other events that would prevent you from accessing an education. This aspect of supportive measures can be very difficult for schools to uphold, because they are not allowed to violate student’s rights to freedom of speech, movement, or assembly.
The Title IX Coordinator has to consider your wishes when offering supportive measures. Sometimes, it won’t be possible to comply with all of your requests, but the office should try their best to work with you.
If the Title IX Coordinator finds that the respondent in your case poses an immediate and substantial risk to your and/or other’s safety, they may be able to put that person on administrative leave if they are a staff member, or on emergency removal/interim suspension if they are a student. However, the respondent has to be allowed to immediately appeal that decision.
The Title IX Coordinator will also explain to you how to undergo a formal investigation and adjudication process.
If your school’s Title IX office is not offering you supportive measures in line with these principles, you should contact a lawyer.
If I report misconduct, will the school tell anyone?
The school will investigate your report, which may mean that they will talk to some people who already knew about the misconduct or have relevant information to the case. However, all of the people who file or receive your report have to protect your confidentiality, as well as the confidentiality of the respondent and witnesses.
What happens after I report misconduct? What does a Title IX process look like?
After the Title IX Coordinator receives your formal complaint, they will work with you to offer supportive measures. They will also explain how the complaint process will look at your school. Many schools offer both an informal resolution process and a formal resolution process. Your coordinator may ask whether you’d rather pursue a formal or informal process (if you are a student and the respondent is a staff member, you have to pursue a formal process).
Informal processes look different at every school. If you choose to pursue an informal process, you can switch back to a formal process at any time before the informal process is complete. You can also usually defer to an informal resolution process at any point in the formal resolution process before a final decision is made.
If you choose to pursue a formal process, you are allowed to have two people present at every meeting: an advisor (like an attorney) and an emotional support person. If you can’t afford an advisor, the school has to pay for one for you. The Sexual Violence Legal Assistance Project also provides free, confidential legal assistance in Title IX cases.
First, the school will investigate your claim. Then, the school will adjudicate your claim; that means that they will look at the evidence found during the investigation, and decide whether the respondent was responsible for the conduct in your report. Finally, the school will determine any outcomes, like punishments or remedy measures that will continue once the complaint process is over.
In general, this process should be prompt. If you feel that your investigation or adjudication process is moving extremely slowly (significantly longer than two to three months), you may want to contact a lawyer, particularly if these delays are interfering with your education.
My school wants me to sign a non-disclosure agreement (NDA). Should I do that?
Some schools may ask you to sign a non-disclosure agreement as a part of the investigation or adjudication process. If your school asks you to do that, contact a lawyer before you decide whether or not to sign. The Family Educational Rights and Privacy Act (FERPA) already provides some privacy protections around Title IX cases; a non-disclosure agreement is not always necessary, and sometimes, not a good idea.
What does a Title IX investigation look like?
The Title IX Office will conduct a thorough investigation into the misconduct, which might take some time.
The investigation will involve finding evidence; to maintain impartiality, investigators are required to assume that the misconduct occurred, but that the respondent is not necessarily responsible. Please remember that your school’s Title IX office is responsible for conducting a thorough investigation; if it seems that they are making you responsible for investigating your case, contact a lawyer.
Once the investigator has collected all of the evidence, both you and the respondent have the opportunity to review all of this evidence and respond to it. If the investigation shows that your case if part of a pattern, your complaint may be consolidated with other, similar complaints. The investigator will then complete an investigation report.
I’ve asked for help from a religious leader, doctor, licensed therapist, or other privileged person to deal with my assault. Can the Title IX Office talk to that person as a part of their investigation?
If you have accessed help from religious leaders, doctors, licensed therapists, or other people that have a legal duty to maintain your confidence, the investigator is only allowed to talk to those people if you say that it’s ok. If you’re in a position where you need to make that decision, contact a lawyer.
What does a Title IX adjudication look like?
Adjudication takes place after the investigation, and looks different at every school. At most schools, the adjudication phase involves a live hearing. Live hearings can look very different at different schools, and sometimes take place through virtual technology, like Zoom. In many schools, during the hearing the school can present witnesses, evidence, etc. to an adjudicator(s) (an impartial third party, not the Title IX Coordinator). It’s also common for the Title IX Coordinator, your advisor, or the respondent’s advisor to ask you, the respondent, and relevant witnesses to answer questions. It’s important to note that the respondent, Title IX Coordinator, your advisor, and the respondent’s advisor are not allowed to ask about your past sexual behavior or predisposition (with a few small exceptions). Your school will probably have other rules about how hearings are conducted, and hearings at your school may look different than what’s described here.
What happens after my hearing?
Once the hearing is complete, two things will happen: the adjudicator(s) will make a decision and write a report. These processes can happen in a few minutes or over a few weeks, and one may happen before or after the other.
Generally, the adjudicator(s) will create a report summarizing the evidence that was introduced during adjudication.
The adjudicator(s) will also make a decision about your case. They will decide based on either “the preponderance of the evidence” or “clear and convincing evidence” standard whether the respondent is at fault for the misconduct alleged. The adjudicator will announce their decision and any outcomes (like possible remedies or disciplinary or supportive measures) to both you and the respondent simultaneously.
After the decision is made, the adjudicator(s) will write an outcome report that includes a summary of the allegations, procedural history of your case, facts that support their decision, how the school’s code of conduct applies, the rationale for their decision, and how to appeal.
Your school may want to schedule a follow-up appointment with you after the outcome report is published.
Who is responsible for carrying out the decisions made in Title IX Cases?
The Title IX Coordinator is responsible for implementing the outcomes of decisions of Title IX cases. They have to make sure to carry out any remedies that the adjudicator said you were to receive, in order to restore and preserve your access to your education and any related programs and activities. This can also make the Title IX Coordinator responsible for upholding disciplinary or punitive measures against the respondent.
Can I quit the Title IX complaint process after I start?
Most of the time, victims can decide not to pursue the Title IX complaint process once they’ve started, and the case is dismissed. However, a Title IX Coordinator may sometimes choose to continue a case over the victim’s wishes. This is especially likely if the respondent is a suspected serial perpetrator, or is likely to victimize other students. Coordinators can do this because Title IX exists to protect all students from discrimination and harassment, not just those students that have filed formal complaints.
Is my school’s Title IX Office allowed to dismiss my case?
Title IX Complaints can be dismissed for a few reasons, including:
- the incident in your report happened outside the US (for instance, while studying abroad)
- the victim is no longer in the school’s educational environment or activity. There are exceptions for this if the student wants to return, was a prospective student, or would like to pursue post-graduation educational programming or activities (like school-endorsed alumni events or associations).
- the respondent is no longer in the school’s educational environment or activity. Schools can dismiss a case for this reason, but don’t have to.
- the conduct described in the report does not fall under the purview of Title IX. If this is a case, there may be other ways to address the misconduct the victim experienced.
- the incident happened outside the educational environment. If this is the reason given, you may want to consult a lawyer. Title IX defines the educational environment very broadly, and includes incidents that occur in honors housing, informal activities on-campus (like pickup basketball), and many other situations.
- there is an inability to recover evidence about the event. This is very rare, and if this is the reason given, you may want to consult a lawyer.
When am I allowed to appeal the decision or dismissal of my case?
Appeals processes are different at different schools. However, most schools do have a window for filing appeals. In many institutions, once this window passes, you can no longer file an appeal. Your school should describe the appeals process to you and the respondent during the final decision report.
Both you and the respondent can appeal a dismissal or decision if you think that a procedural irregularity affected the institution’s decision or dismissal, if new evidence comes up that wasn’t available at the time of the decision or dismissal, or if the Title IX personnel had a conflict of interest or bias that affected the decision or dismissal of the case. If you’d like to pursue an appeal, you should contact a lawyer.
If you or the respondent decide to appeal, the other party will be notified in writing when the appeal is filed. If the respondent requests an appeal, make sure to ask for a copy of all related paperwork. A new decisionmaker will be chosen for the appeals process. You and the respondent will both be allowed to submit a written statement to this new decisionmaker supporting or challenging the original outcome. The appellate decisionmaker will then issue a written appellate decision simultaneously to both you and the respondent.
What does the school do with records related to Title IX cases?
Schools have to maintain all records related to a Title IX proceedings according to state record-keeping laws for seven years after the decision is reached. This includes all physical, electronic, video, and audio records, as well as transcripts. These records continue to be confidential.
Records will generally discuss supportive measures, informal resolution process if applicable, hearing and formal outcome if applicable, and appeals if applicable.
This guide should not be used as legal counsel or advice. Every school is different, and every case is unique. If you are undergoing the Title IX process and have questions or need help, free and confidential legal assistance is available through the Sexual Violence Legal Assistance Project. Please call (717)901-6784 or 1-800-692-7445 ext 190 or visit https://pcar.org/help-pa/sexual-violence-legal-assistance-project to learn more. | <urn:uuid:7517b0eb-37f6-43f4-b453-a218c8d91d08> | CC-MAIN-2022-33 | https://pcar.org/what-know-about-title-ix | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570741.21/warc/CC-MAIN-20220808001418-20220808031418-00098.warc.gz | en | 0.95412 | 4,773 | 3.859375 | 4 |
Alcohol and The Teenage Brain: Safest to keep them apart

An Opinion Piece prepared by
Professor Ian Hickie AM MD FRANZCP FASSA
NHMRC Australian Medical Research Fellow
Brain & Mind Research Institute, University of Sydney

Financially supported by DrinkWise Australia
Brain & Mind Research Institute Monograph

Financial support for development of this report was provided by DrinkWise Australia.

Published by the Brain & Mind Research Institute at the University of Sydney
Building F, 94 Mallett Street
Camperdown NSW Australia 2050

First published August 2009
ISBN:

The correct citation for this publication is:
Hickie, I.B., Whitwell, B.G. (2009). Alcohol and The Teenage Brain: Safest to keep them apart. BMRI Monograph, Sydney: Brain & Mind Research Institute.

Front cover: Gogtay, N. et al (2004). Dynamic mapping of human cortical development during childhood through early adulthood. Proceedings of the National Academy of Sciences of the USA, 101(21), May 2004 [published online May 2004].
Executive Summary

Traditionally, the major components of brain development were believed to occur before birth and in early childhood. Consequently, there has always been a strong view that exposure to alcohol and other substances that are toxic to brain cells should be minimized during these periods. The most recent NHMRC guidelines (2009) have significantly reinforced this perspective.

With the onset of puberty, most cultures have recognized that individuals move rapidly towards sexual maturity and associated adult responsibilities. Consistent with that major change in social roles, and its associated rites of passage, consumption of alcohol and other substances is encouraged or at least widely tolerated.

Following the discovery of new highly sensitive brain imaging techniques in the 1990s, as well as key findings about the ways in which nerve cell connections are radically reshaped in the post-pubertal period, these traditional views are now undergoing significant re-evaluation. At this time, it is rapidly becoming clearer that alcohol and the teenage brain don't mix and that exposure to alcohol should be postponed, and preferably avoided, at least until the late adolescent or early adult years.

Much of the clinical, neuroimaging and neuropsychological literature demonstrating the adverse effects of alcohol on the brain is based on adult rather than teenage subjects. The inferences concerning the likely toxic effects of alcohol on the adolescent brain also rely strongly on findings in developing animals rather than direct observations in human studies. Those animal studies have tended to emphasise the long-term adverse cognitive and behavioural effects of alcohol and other drug exposures during the relevant adolescent periods of brain development.

Traditionally, the more conservative academic position has highlighted the lack of a large number of long-term human studies and, hence, concluded that the potential adverse effects of early exposure to alcohol amongst teenagers and young adults should not be overstated. While this perspective is understandable, it needs to be balanced first by the emerging findings in human neuropsychological and neuroimaging studies. On balance, the available studies suggest that the adolescent brain is particularly sensitive to the negative effects of excessive or prolonged alcohol exposure, including the adverse effects of binge drinking.
Additionally, one needs to consider the large body of evidence of the degree of direct harm due to injury (including significant head injuries) that results from excessive risk-taking in young people who consume alcohol. This degree of risk-taking while intoxicated is likely to reflect the combination of the disinhibitory effects of alcohol (which are present at all ages due to dampening down of frontal lobe function) and the relative lack of development of the frontal lobes in adolescents. From this perspective, the risk of accidental injury due to excessive risk-taking and poor impulse control is particularly likely to be evident in younger teenagers who use alcohol.

If one weighs up the available evidence concerning direct risks to brain development, short and long-term effects on cognitive and emotional development, and risks of associated injury due to poor judgement and lack of inhibition, on balance, two conclusions now appear to be justified:

1. Alcohol should not be consumed by teenagers under the age of 18 years; and
2. Alcohol use is best postponed for as long as possible in the late teenage and early adult years.

The key emerging scientific issues that support this view are:

- The frontal lobes of the brain underpin those major adult functions related to complex thought, decision-making and the inhibition of more childlike or impulsive behaviours. These parts of the brain undergo their final critical phase of development throughout adolescence and the early adult period. While there is considerable individual variation in this process, it appears to continue well into the third decade of life and may be particularly prolonged in young men;

- Key parts of the temporal lobe, including the amygdala and hippocampus, continue to undergo development during the adolescent period. The amygdala underpins the normal fear response while the hippocampus is an essential part of normal memory function;
- The final phase of frontal lobe development occurs at the same time as the onset of all of the common and serious mental health problems. Seventy-five per cent of adult-type anxiety, depressive, psychotic and substance abuse related disorders commence before the age of 25 years;

- Alcohol has significant toxic effects on the cells of the central nervous system and, depending on dose and duration of exposure, is likely to result in serious short-term and long-term harm. Those harmful effects are most likely to be evident in areas in which the brain is still undergoing rapid development (i.e. frontal and temporal lobe structures);

- Alcohol, even in small doses, is associated with reduction in activity of the normal inhibitory brain processes. Given that such processes are less developed in teenagers and young adults, alcohol use is likely to be associated with greater levels of risk-taking behaviour than that seen in adults;

- Alcohol normally results in sedative effects as the level of consumption rises. It appears that teenagers and young adults are less sensitive to these sedating effects (due to higher levels of arousal) and are, therefore, likely to continue with risk-taking behaviours. As they also experience loss of control of fine motor skills, the chances of sustaining serious injuries (including head injuries) are increased;

- Exposure to significant levels of alcohol during the early and mid-adolescent period appears to be associated with increased rates of alcohol-related problems as an adult as well as a higher rate of common mental health problems such as anxiety and depression;

- Young people with first lifetime episodes of anxiety, depression or psychotic disorders who also consume significant amounts of alcohol are at increased risk of self-harm, attempted suicide, accidental injury as well as persistence or recurrence of their primary mental health problem.
Professor Ian Hickie AM, MD, FRANZCP, FASSA
Executive Director, Brain & Mind Research Institute (BMRI), University of Sydney

In October 2006, the Australian Financial Review included Professor Hickie in its list of the top 10 cultural influences. The specific comments noted his role as a long-term campaigner, the person who orchestrated the campaign that led to the COAG announcements ($4 billion over five years).

In October 2000 he was appointed as the inaugural CEO of beyondblue: the national depression initiative, and subsequently served as its Clinical Advisor. In 2003, he was appointed as the inaugural executive director of the flagship Brain and Mind Research Institute at the University of Sydney. In 2006, Professor Hickie received the Australian Honours Award of Member (AM) in the General Division, for services to medicine in the development of key national mental health initiatives and general practice services in both the public and non-government sectors.

In 2007, he was appointed to the Prime Minister's Australian National Council on Drugs and has led the BMRI as a founding member of the new National Youth Mental Health Foundation (headspace). In 2007, Professor Hickie was elected as a Fellow of the Academy of the Social Sciences in Australia. Professor Hickie is one of the first round of new NHMRC 2008 Australian Fellows, recognising excellence in Australian Medical Research. His research, clinical and health services development work focuses on expansion of population-based mental health research and development of international mental health strategies. In July 2008 he was appointed to the Federal Health Minister's new National Advisory Council on Mental Health. In May 2009 he became a member of the Common Approach to Assessment Referral and System Taskforce.

Brain & Mind Research Institute

Diseases of the brain and mind, including substance abuse, clinical depression and dementia, now account for more than 40 percent of all illness. These diseases are devastating for those affected, their families, and for society, costing the Australian economy an estimated $30 billion each year. The BMRI brings together patients, support groups and front-line carers with scientists and clinicians working in neurosciences and brain research, providing hope for those affected.

The Brain & Mind Research Institute has a fundamental commitment to the long-neglected areas of brain and mind disorders, pursuing genuine partnerships with the wider community as a vital part of our research and activities. The synergies that arise from the sharing of unique facilities and the active intermixing of senior researchers from the basic and clinical neurosciences promote new research discoveries into disorders of the brain and mind, providing hope for those individuals and families whose lives are devastated by these conditions.
Alcohol and The Teenage Brain: Safest to keep them apart

Executive Summary
Prof Ian Hickie
Brain & Mind Research Institute
1.0 Adolescent Brain Development and Related Changes in Cognitive Function and Social Behaviour
    Critical Changes in Brain Structure
    Critical changes in Cognition and Social Behaviour
    A critical period of heightened vulnerability
    Onset of mental health problems during the adolescent period
Damaging Effects of Alcohol on the Teenage Brain
    Toxic effects of alcohol on the brain
    Alcohol exposure during the teenage period
    Longer-term brain effects of teenage alcohol exposure
    Avoidance of Alcohol following overuse
    Behavioural effects of early alcohol exposure
References
1.0 Adolescent Brain Development and Related Changes in Cognitive Function and Social Behaviour

1.1 Critical Changes in Brain Structure

When most scientists talk about brain development, they usually emphasise the importance of the fetal period and early childhood. This is understandable as the basic brain structure is laid out in utero and then the most intense period of growth in connections between brain cells occurs in early childhood (see Paus et al 2008; Bennett 2008). This process of basic wiring underpins the acquisition of simple motor and sensory functions and language, as well as those aspects of behavioural and emotional control that are central to normal development in the pre-pubertal years.

Figure: Gogtay, N. et al (2004). Dynamic mapping of human cortical development during childhood through early adulthood.
Brain development then undergoes a critical final phase after puberty. During this later phase, the brain shifts from simply acquiring new connections between nerve cells to pruning those same connections. This occurs largely on the basis of learning and experience and leads to the establishment of the most efficient pathways for performing those more complex forms of thought and behaviour that characterize adulthood (see Gogtay et al. 2004). This process means that the thinking part of the brain (the grey matter) actually shrinks in size but increases its productivity.

Figure: Paus, T. et al (2008). Why do many psychiatric disorders emerge during adolescence?
Figure: Bennett, M. (2008). Dual constraints on synapse formation and regression in schizophrenia: neuregulin, neuroligin, dysbindin, DISC1, MuSK and agrin.

At week 20 all neurons in the foetal brain are present. For the remaining weeks of gestation the foetal brain develops synaptic connections at a rate of over 50 million connections per second. At birth this rate reduces to 1 million synapses per second.

As part of this process of improving the efficiency of the brain, the cabling system (the white matter) also undergoes its final phase of development (i.e. myelination). This results in enhanced communication between the key thinking regions of the cerebral cortex (see Fryer et al 2008). Of key relevance is the notion that better cabling is part of the way that the more adult parts of the brain (frontal and temporal lobes) increasingly exert their influence over the more primitive, instinctual or impulsive parts of the brain.

This last phase of brain development extends well beyond the early teenage years and is now known to continue into the third decade of life. This developmental period may start later (along with puberty) in boys and still be active into the mid-20s. Interestingly, we are sexually mature (i.e. able to reproduce) long before our brain reaches its fully mature state!

While the whole brain is affected by these various grey and white matter processes, some regions are changing in more fundamental ways than others. Two critical areas, the frontal lobes and the temporal lobes (the latter includes the amygdala and hippocampus), are profoundly remodeled at this time (see Gogtay et al. 2004).
The frontal lobes of the brain underpin those major adult functions related to complex thought and decision-making. They are also critical to the progressive inhibition of more child-like or impulsive behaviours (see Table 2: Brown et al. 2009; Casey et al. 2008). Therefore, for humans to function in complex interpersonal or information-rich environments, well-developed frontal lobes are essential. It is the size and sophistication of our frontal lobes that most differentiates humans from other primates.

Figure: Components of Executive functions and sample behaviours. From Brown, S. et al (2008). A developmental perspective on alcohol and youths 16 to 20 years of age.

The circuitry of the frontal part of the brain has major links to other regions (subcortical nuclei; temporal lobe structures) related to emotion, memory, complex motor behaviours and goal-directed learning (see Circuit refs). Our understanding of other behaviours linked to impulsivity, addictive behaviours and other abnormal patterns of learning has advanced alongside the discovery of these critical circuits (see Crews & Boettiger, 2009; Balleine & O'Doherty, in press). Disruption of these fronto-striatal circuits is likely to be critical to the onset and maintenance of addictive disorders.
Figure: Crews, F., Boettiger, C. (2009). Impulsivity, frontal lobes and risk for addiction.

Key parts of the temporal lobe, including the amygdala and hippocampus, continue to undergo development during the adolescent period. The amygdala underpins the normal fear response, while the hippocampus is an essential part of normal memory function. The hippocampus typically shrinks in size in later life alongside illnesses like dementia (such as Alzheimer's disease) or late-life depression (see Hickie et al. 2005). It is also extremely sensitive to alcohol and is reduced in size in adults with established alcohol-related disorders.

1.2 Critical changes in Cognition and Social Behaviour

The teenage years are often seen to be very challenging from an emotional and behavioural perspective. In reality, the development of more adult-like moods, thought processes, identity and interpersonal relationships is a continuous process across the adolescent period. The general direction is from more immature and child-like thought processes, and relative lack of impulse control, to more mature, considered and thoughtful actions. This movement in behaviour is dependent on the continuing and active development of the underlying brain structures, particularly those in the frontal and temporal lobes (as described above).
There are, however, real challenges in behaviour to be considered. Immediately following puberty, more rapid changes in mood and more intense negative feelings may become apparent. Greater sensitivity to negative interactions with peers and a more fragile sense of self may be evident. Issues related to body image, gender identity and sexual orientation may emerge. An individual's capacity to utilize new emotional and cognitive processes will be affected by a range of internal (rate of brain development, particularly frontal lobe development) and external processes (e.g. positive or negative social and educational experiences).

A key aspect of adolescent behaviour is the movement away from safe family environments and close kin relationships. The drive to greater novelty-seeking and a broader social network is a normal and desirable aspect of development. Experimenting with new environments, new relationships and first-ever sexual experiences all pose new challenges. While these challenges are associated with increased anxiety, they are also associated with an increased sense of mastery (when transacted successfully) and personal pleasure.

The key cognitive and behavioural challenges throughout this period are the capacity for thoughtful reflection, learning from experience and planning future events with a view to judging the likely consequences of new actions. In each case, weighing up the potential risks is critical. Taking time to integrate information, relative to the desire to act impulsively, is a key consideration.

Figure: Silveri, M. et al (2008). Relationship between white matter volume and cognitive performance during adolescence: effects of age, sex and risk for drug use.
Throughout the adolescent period, very specific advances in key cognitive functions affecting attention, working memory, visuospatial capacity, motor speed and coordination, abstract reasoning, decision-making, planning for the future and the capacity to make accurate social judgements become evident. However, these complex functions do not develop at the same rate in all people. There are also significant differences between young men and young women. Consequently, some of these very important adult capacities may be relatively underdeveloped in some individuals in their late teens or early 20s. Other aspects of normal physiology, notably the 24-hour circadian cycle that determines the sleep-wake cycle, are strongly influenced by frontal lobe systems. During adolescence this system is shifted, resulting generally in increased wakefulness in the evening, delayed onset of sleep and later morning wakening. This increased alertness during the evening may offset the sedative effects of alcohol and other drugs.
1.3 A critical period of heightened vulnerability
Given the profound and sensitive nature of the processes related to brain, cognitive and emotional development taking place during adolescence, the human and animal research literature has begun to explore the concepts that: adolescence is a period of heightened vulnerability to any environmental insult; and brain insults during the adolescent period may have more profound long-term effects than insults that occur earlier in development or later in adult life. The concept of heightened vulnerability rests primarily on the new biological evidence of the rapidity and extent of brain change taking place at this time. Therefore any injury (e.g. brain trauma or hypoxia) or other toxic insult (e.g. alcohol, other drugs, infection) at this time is likely to disrupt a wide range of key brain functions. The long-term ramifications of those changes are likely to be profound, as some of the brain's most important integrative functions may be permanently disrupted. Further, this view integrates new knowledge of the underlying biological processes with current evidence about the nature of the environmental risks that young people are likely to face. For example, as noted above, the change in sleep-wake cycle during this period (i.e. being more awake at night) increases the chance that teenagers will ingest larger amounts of alcohol in the evening before becoming sleepy. Compared to adults, this interaction increases their chance of dose-dependent alcohol-related damage. Similarly, current patterns of binge drinking result in very high blood alcohol levels that appear (from comparable animal experiments) to be particularly likely to cause damage to sensitive regions of the brain (notably the hippocampus in the temporal lobe and the white matter connecting tracts).
In essence:
- The teenage years are those associated with the final critical period of normal brain development and, when transacted appropriately, result in the development of adult cognitive functions including mood regulation, reduced impulsiveness, accurate social judgements and complex planning capacities;
- From a behavioural perspective, an increase in novelty-seeking and social exploration outside prior family and kin relationships is expected but is also associated with a relative lack of development of impulse control, mood regulation and consideration of the long-term implications of risk-taking behaviour;
- Given the scope and extent of reorganization of brain structure and functions during this period, the brain may be more sensitive to specific insults than at earlier childhood or later adult periods;
- Damage to the brain during this critical developmental period appears to have long-lasting consequences for those higher order cognitive and emotional functions that are essential for maximum occupational and social function as an adult.
2.0 Onset of mental health problems during the adolescent period
Seventy-five per cent of adult-type anxiety, depressive, psychotic and substance abuse-related disorders commence before the age of 25 years. While prior to puberty a range of neurodevelopmental or other emotional problems are evident (childhood anxiety, conduct disorders, specific learning difficulties, attention-deficit hyperactivity and autistic-spectrum disorders), the onset of adolescence is associated with a sharp increase in the rate of common forms of anxiety and depression (see Victorian Disease Burden Study; Paus et al. 2008). Additionally, the more severe forms of psychotic disorder (first-episode psychosis, schizophrenia, bipolar disorder) often show their first signs during the mid and later adolescent periods. Those individuals who had developed child-onset disorders with associated social skill or educational difficulties also tend to face a new range of challenges as teenagers.
Figure: Victorian Burden of Disease Study: Incident YLD rates per 1000 population by mental disorder
Figure: Paus T, Keshavan M, Giedd J (2008) Why do many psychiatric disorders emerge during adolescence? [Age of onset (years) for impulse-control, substance-use, anxiety and mood disorders, and schizophrenia.]
Many of the more adult-like mood disorders emerge at the same time as the frontal and temporal lobes are progressing rapidly through their own program of reorganization (driven by synaptic pruning and myelination of white matter tracts). What has also become clear in recent years is that severe mood disorders and psychotic disorders also have the potential to cause damage to critical brain structures in those same frontal and temporal lobes (see Lorenzetti et al. 2009; Wood et al. 2009). Of particular note is the reduction in hippocampal size in depression (see Hickie et al. 2005) and the changes in the frontal lobe regions that regulate mood. The hippocampus is located in the temporal lobe and is responsible for many aspects of short-term memory function. It is the structure which shrinks in dementias such as Alzheimer's disease and is known to be very sensitive to damage from high or persistent levels of alcohol in adults.
Figure: Brain structural abnormalities in major depression
The mechanisms underpinning such adverse effects are the subject of very active research and appear to include a reduction in nerve growth factors (particularly brain-derived neurotrophic factor [BDNF]) and possibly impaired neurogenesis (i.e. generation of new nerve cells, particularly in the hippocampus). The degree of damage appears to reflect both the severity and duration of the mental disorder as well as the length of time the disorder remains untreated. Many of the current medical treatments for depression and more severe psychotic illnesses actually result in increased levels of BDNF and may, therefore, reverse the loss of brain tissue seen in these illnesses (see Hickie et al. 2005).
Figure: Hickie I.B. et al (2005) Reduced hippocampal volumes and memory loss in patients with early- and late-onset depression.
The early onset of alcohol and other substance misuse problems in the teenage years, before the onset of other anxiety or mood problems, appears to increase the chances that a young person will go on to develop another major mental health problem in the later adolescent or early adult period. This could be because they share common genetic or environmental risk factors or because the earlier use of substances is a direct cause of later difficulties. It is likely that use of brain-toxic substances early in the adolescent period has the potential to interfere with normal frontal and temporal lobe development and, thereby, put an individual at increased risk of later anxiety or mood-related mental health problems.
Unfortunately, use of formal health services for management of common mental health or alcohol- or substance-abuse-related problems by young people is unusual. Only 13% of young men and 30% of young women with mental health problems access mental health care in any 12-month period (Australian National Survey, 2007; see table). When faced with tough times, it is evident that young people with mental health problems are more likely to use alcohol and other drugs as part of the way they cope with everyday problems. These maladaptive coping strategies not only increase the chance of poor outcomes, including self-harm and injury, but may increase the chances of causing further damage to critical brain structures during this critical developmental period.
Figures above & below: National survey of mental health and wellbeing: summary of results
In essence:
- Young people with first lifetime episodes of anxiety, depression or psychotic disorders who also consume significant amounts of alcohol are at increased risk of self-harm, attempted suicide and accidental injury, as well as persistence or recurrence of their primary mental health problem;
- Young people who consume brain-toxic substances early in their teenage years are at increased risk of developing major mental health problems in the later adolescent or early adult years; and
- Young people with major mental health problems and alcohol or other drug-related disorders have two sets of problems that are likely to have long-term adverse effects on their brain development.
3.0 Damaging Effects of Alcohol on the Teenage Brain
3.1 Toxic effects of alcohol on the brain
The damaging effects of alcohol on brain structure and function have been studied for many years in humans and animals. The typical considerations include short-term effects of intoxication (which are strongly associated with changes in function but may not, in adults, be as strongly associated with changes in brain structure) as well as the longer-term damaging effects of alcohol on specific brain structures.
Short-term effects of intoxication
Alcohol has immediate effects on brain function soon after ingestion. With increasing dose, one will see predictable effects on arousal, motor coordination, impulsivity and judgement. These effects can be correlated with alcohol's known effects on key brain regions. Importantly, alcohol dampens down the inhibitory effects of the frontal lobes and the fear responses generated by the fronto-temporal lobe circuits. Consequently, with increasing levels of intoxication one will see increased impulsiveness and risk-taking. As blood alcohol rises, increasingly poor judgement and lack of consideration for the likely consequences of one's actions become apparent. Alcohol normally causes a predictable slowing of reaction time and impairment of motor coordination. As blood alcohol rises there is also an increase in the level of sedation. In adults, alcohol will tend to cause drowsiness with associated loss of attention and motor coordination. Higher levels of alcohol intoxication can lead to unconsciousness and depress even very basic physiological processes such as the drive to breathe. Of particular relevance in the alcohol field is the phenomenon of blackouts, in which a person cannot recall key aspects of their behaviour in the period immediately following a bout of intoxication. In essence, it indicates that the brain processes that code short-term memory (located in the hippocampus and related temporal lobe structures) were seriously disrupted when the person was intoxicated. It is believed that this phenomenon is indicative of at least short-term and, potentially, longer-term damage to the hippocampus.
Longer-term changes in brain structure and function
Human brain imaging and neuropsychological studies of adults with alcoholism have clearly shown that key brain structures are reduced in size and impaired in function (see NIAAA website for further information and images). The frontal lobes (controlling complex planning and impulse control), temporal lobes (regulating memory and fear responses) and the cerebellum (regulating motor coordination) are particularly sensitive to the adverse effects of alcohol.
Figure: Comparison of two female subjects who had volumetric MRIs created in a GE 1.5 Tesla MRI machine
In teenage and adult humans, it can be difficult to tease out those adverse effects directly related to alcohol from those related to other drugs, concurrent medical health problems, related nutritional deficiencies or earlier injuries. Consequently, animal experiments have been very important in demonstrating the direct toxic effects of alcohol. These toxic effects can be grouped as those that directly impact on: i) the integrity of brain cells, causing cell shrinkage or death (neurodegeneration); ii) brain cell connections, causing a direct reduction in synapses; and iii) the normal regeneration of brain cells, leading to impaired neurogenesis.
A History of the County of Middlesex: Volume 10, Hackney. Originally published by Victoria County History, London, 1995.
In 1294 the bishop of London claimed view of frankpledge, infangthief, outfangthief, the assize of bread and of ale, fugitives' goods, tumbril, pillory, gallows, and fines in Hackney, as part of his manor of Stepney. (fn. 1) Separate bailiffs accounted for Hackney and Stepney by the 1380s (fn. 2) but courts for the whole manor, sometimes with pleas for Hackney entered separately, were still held at Stepney in the 16th century. (fn. 3) The bishop was paid 16s. as the common fine from Hackney in 1349. (fn. 4)
Proceedings for Hackney are recorded on Stepney court rolls for 1349, 1442, 1509, and 1581-2 and in books for Stepney for 1654-64. (fn. 5) Two officers, perhaps constables, were elected for Hackney in 1509, when two chief pledges were elected for Shacklewell and one each for Clapton and Homerton. (fn. 6) A general court baron for Hackney, held immediately after a Stepney court in December 1581, was concerned largely with copyholds which had changed hands during the previous year; it also proposed two names for the choice of bailiff or collector. (fn. 7) The next court for Hackney, after a view of frankpledge at Stepney in April 1582, chose 2 constables, 2 aletasters, and 6 chief pledges or headboroughs: one and a deputy for Clapton, 2 for Mare Street, Well Street, and Grove Street, and 2 for Kingsland, Newington, Shacklewell, and Dalston. Three common drivers were chosen two days later. (fn. 8) In 1641 the steward of Hackney, who was also steward of Stepney, summoned the copyholders to meet in Hackney (fn. 9) and in 1642 courts at Stepney were held for Hackney alone in April and October. The first dealt with ditches, the assize of bread, and the elections of a constable and 2 aletasters for the parish and of a headborough for each of 7 wards: Kingsland and Dalston, Newington and Shacklewell, Church Street and part of Clapton, Clapton, the upper end of Homerton, Mare Street, and the lower end of Homerton. (fn. 10) By 1654, although the manors still had the same lord, Hackney courts were held at Homerton; much less frequent than those for Stepney, they consisted of a general court baron in April and December and a view in April, with one special court in 1655 and three in 1656. (fn. 11)
Separate courts were held for the Hackney manors of Lordshold, Kingshold, and Grumbolds, despite the acquisition of all three by the Tyssen family. Court books or draft court books exist for Lordshold for 1658-1940 (fn. 12) and for Kingshold for 1666-1936, (fn. 13) with minutes and extracts. (fn. 14) It had been claimed in 1331 that the Templars had possessed pleas and perquisites of court for what became Kingshold manor; the Hospitallers, fined in 1511 for default at the bishop's law day, had held a court at Hackney in the early 16th century, when rolls had allegedly been lost. (fn. 15) For Grumbolds there are extracts for 1486-1741; later records include a minute book to 1925. (fn. 16) In 1711 the lord received 83 quitrents for Lordshold, 21 for Kingshold, and 5 for Grumbolds. (fn. 17) Uncertainty was such that a Kingshold transaction of 1798 was wrongly entered under Lordshold. (fn. 18)
The busiest court, that of Lordshold, consisted of a view of frankpledge, followed by a court baron, in April and sometimes special courts. (fn. 19) The view was held at first usually at Homerton, where the Coach and Horses was the meeting place in 1752, and later at Kingsland, at the King's Arms by 1753 (fn. 20) and at the Tyssen Arms from 1815. (fn. 21) After 1845, no longer called a view, the court met at the Manor rooms until 1885 or later; (fn. 22) enfranchisements (fn. 23) and property transactions were done in lawyers' chambers until 1924. For Kingshold a view was likewise followed by a court baron, in Church Street and perhaps from 1666 at the Green Man, named as the usual meeting place in 1723; it was held at the Cock in the late 18th century, later at the Mermaid, the Tyssen Arms, the Manor rooms, and finally in London. (fn. 24) Grumbolds courts, annual c. 1500 but later less regular, also met latterly at the Manor rooms. (fn. 25) Probably all three manors had a single steward, normally a lawyer. Stewards exploited the family's absence before and after the term of J. R. Daniel-Tyssen from 1829 until 1852: Thomas Tebbutt and his son were involved in William Rhodes's building schemes (fn. 26) and Charles Cheston, son and successor of Chester Cheston, ruined Lord Amherst of Hackney by embezzlement. (fn. 27)
Until 1840 or later (fn. 28) the Lordshold court appointed 2 constables, 2 aletasters, and normally 8 headboroughs; it also suggested 2 names for the choice of a reeve and appointed 6 or 7 common drivers. A magistrate was excused serving as headborough in 1718, partly because the headborough's was an inferior office. (fn. 29) Aletasters were active in the late 17th century and continued to present the use of false weights and measures in 1740. (fn. 30) Reeves and drivers were substantial landowners and were still being appointed in 1885. (fn. 31) Kingshold courts appointed a constable, an aletaster, and 2 headboroughs until 1841 or later but no officers by 1845. (fn. 32)
Manorial and parochial authority overlapped. The common drivers reported in 1605 to a large meeting of inhabitants, which then passed a resolution on the commons, as did the vestry in 1614 and later. Parishioners claimed to be upholding ancient customs, which were set out by agreement between the lord and the copyholders in 1617. (fn. 33) The vestry instructed the constables and headboroughs about the poor in 1618, before it had its own beadle, (fn. 34) and again in 1701 and, after offering payment, in 1712; it barred manorial officers from serving as beadle in 1781. (fn. 35) In addition to safeguarding the commons, the Lordshold court in its turn gave orders about the stocks and whipping post, which the parish had failed to repair, in 1744. (fn. 36) The steward denied intentional infringement of parochial privileges in 1804, after the vestry's protest at not having been informed of inclosures, and the vestry promised in 1806 to keep better records, after the court's complaint that their inadequacy made it difficult to appoint officers. The vestry disclaimed any connexion with manorial officers when asked to meet parliamentary election expenses in 1833. (fn. 37)
PARISH GOVERNMENT TO 1837.
Hackney, where parish meetings were recorded from 1581, (fn. 38) had an unusual dual form of government from 1613, when a select vestry was instituted by a faculty from the bishop of London. In most parishes a select vestry tolerated open meetings only to add occasional weight to its own decisions. In Hackney, perhaps because it attracted so many rich merchants, the parish officers 'and other inhabitants' continued to meet every few months and shared power with the 'gentlemen of the vestry', for whom the faculty was reissued in 1679. Both bodies were merged in an open vestry in 1833, (fn. 39) by which time they had surrendered responsibility for the poor and for lighting and watching to trustees under Acts of 1763 and 1810; separate vestries for South and West Hackney had also been created by the subdivision of the rectory in 1831. Hackney vestry's continued influence through the election of parish officers and others as trustees was diminished by the establishment of the poor-law union in 1837. (fn. 40)
In 1547 Church House of c. 1520 was said to have been built for meetings on the king's, the church's, or parochial business. Presumably it was so used until taken for the free school c. 1616. (fn. 41) Officers recorded from 1554 were 2 churchwardens and 4 laici or sidesmen, (fn. 42) and 2 surveyors of the highways, 4 surveyors of the poor, and 2 collectors for the poor from 1581. One churchwarden was elected in 1583, the second one presumably being named by the vicar. (fn. 43) A steady source of income which came to form the 'unappropriated funds' was foreshadowed in 1590, when the vicar and 15 others excused Thomas Audley from parish offices in return for money towards repair of the church. (fn. 44) Inhabitants were first listed by district for the collection of church rates in 1605. (fn. 45)
The faculty of 1613 was requested by the vicar and others after trouble from 'the meanest sort being greater in number'. It appointed the rector, vicar, assistant curate, churchwardens, and 32 named parishioners, or any 10 of them, to meet as vestrymen at the church. (fn. 46) Vacancies thereafter were filled by co-option. In the 1620s the vestry usually appointed annually 2 churchwardens, 4 sidesmen, 2 surveyors, and 2 collectors; later in the century overseers, rather than collectors, and 2 sidesmen were chosen. At least 4 of the original vestrymen and in 1628 both churchwardens signed with a cross. (fn. 47) There was a parish clerk before 1625, when the vicar's installation of his own nominee led to an action in King's Bench which upheld the traditional right of election claimed for the parish and exercised by the vestry. (fn. 48) In 1711 the parish clerk was also given the office of vestry clerk, apparently a new post whose duties were again defined in 1756. (fn. 49) A sexton was paid an increased salary in 1632 and one was succeeded by his wife as sextoness in 1690; (fn. 50) the office was lucrative enough to be shared in 1744 and entailed the employment of pew openers in 1759. (fn. 51) A beadle was to be appointed in 1657 'for the preventing of multiplying of the poor' and again in 1671; his duties were reviewed in 1694, when all new lodgers were to be reported. (fn. 52) Two beadles were paid in 1732-3 and again, as home and out beadles, with partly differing functions until 1771, in 1753; (fn. 53) there were three beadles by 1810. (fn. 54) Searchers, to examine corpses, were originally appointed by magistrates but from c. 1727 by the vestry until they were discontinued in 1836. (fn. 55) A verger was first appointed in 1799. (fn. 56) Most salaried offices, like those of the early schoolmasters and of lecturers and others connected with the church, (fn. 57) were renewed annually; in 1730 they were those of vestry clerk, beadle, organist, sextoness, clock minder, organ minder, organ bellows blower, churchyard keeper, and midwife. Holders of all the offices save that of organ minder were reappointed in 1760, by which date 6 bearers were also chosen. (fn. 58) Records include a summary minute book for 1581-1613, vestry minutes from 1613, (fn. 59) parishioners' meetings minutes for 1762-1824, (fn. 60) churchwardens' accounts, some with overseers' accounts, from 1732, (fn. 61) poor rate books from 1716, (fn. 62) churchwardens' rate books from 1743 and statute labour books from 1720, with gaps, (fn. 63) and lamp and watch rate books from 1764. (fn. 64)
The vestry met at Easter, for appointments and audits, (fn. 65) and also irregularly: 4 times in all in 1620, 9 in 1660, 16 in 1700, 4 in 1740 and 1770, 8 in 1810, and 3 in 1820. Attendances ranged from 7 to 22 in 1660, with an average of nearly 16 which varied little thereafter. In 1712 absentees were to be asked to attend and in 1719 it was agreed that if numbers should fall below 13 a churchwarden and 4 others might prepare proposals for the next vestry; (fn. 66) nine meetings were dissolved between 1729 and 1753 for lack of a quorum. It was planned in 1732 and 1759 to summon one at least every two months and in 1790, without success, to observe fixed dates in June and August. (fn. 67) The chair was normally taken by the vicar or his curate. A suggestion that the vestry should meet at the 'parish house' (Church House) rather than its room at the church was rejected in 1781. (fn. 68) Church House, in use in 1795, was replaced in 1802 by the building later called the old town hall. (fn. 69)
Wider parish meetings obtruded, as in 1723 when the vestry insisted on its right to choose a lecturer, although the general public might afterwards voice its opinion. On legal advice, the right was conceded to all who paid poor rates. (fn. 70) Such parishioners were sometimes present in the vestry, as in 1700 when 'others' were noted after the named attenders. (fn. 71) In 1725 a separate book was reserved for general meetings and in 1739 the select vestry forced several outsiders to withdraw. (fn. 72) The vicar and parish officers attended the parish meetings, of which there were 11 in 1763 and 7 in 1770. The parish meetings submitted names to the magistrates for appointment as highway surveyors and were concerned particularly with the poor, although all matters of parish interest were discussed. (fn. 73) Petitioners for an inquiry into the leasing of Lammas lands were accused of treating a session of the select vestry as a public meeting in 1804. A new local Act, to create more vestrymen, was sought in 1813. The vestry claimed that parochial rates and expenditure had always been effectually controlled by parish meetings, when it finally admitted an additional 49 inhabitants in 1833. (fn. 74) The merger resulted from legal opinions that the bishop's faculty, a copy of which had been withheld by the clerk, was an unsafe foundation for a select vestry. (fn. 75)
In 1581 the collectors for the poor raised money to bring up a fatherless child and in 1598 they made 37 payments, including one to the 'poor house'. (fn. 76) Pensioners were to attend church twice a week in 1620 and were to number not more than 15 in 1628, when a separate book for poor rates was to be bought. (fn. 77) Some pensions were paid for looking after the young or the sick. (fn. 78) The poor's stock was separated in 1628 from the church stock and consisted of the income from parish lands which had been acquired through charitable gifts and which were leased out by the vestry; money in the church box was added. (fn. 79) When the magistrates decided that Hackney could afford to contribute to relief in Stepney in 1676, the vestry claimed that it was already burdened with extraordinary poor. In 1708 bread was distributed to up to 74 people, 'as was usual in this parish', whether or not the amount was covered by gifts. (fn. 80) In 1710 badging was to be strictly enforced on all paupers except Henry Rowe. (fn. 81)
Responsibility by 1741 had devolved upon a workhouse committee, which fixed the poor rate and was answerable to the parish meetings rather than the vestry. (fn. 82) An Act of 1763 committed the poor to a board of trustees, being the vicar, parish officers, and anyone eligible for office, including those who had compounded; any five of them could fix the poor rate. (fn. 83) The early meetings of the trustees, rarely numbering more than 12, were held weekly in the vestry room. Separate rates were introduced, for the poor and for lighting and watching, and five collectors were appointed in 1764; an initial sum was raised by the promise of annuities secured on the rates. (fn. 84) An Act of 1810 allowed all householders rated for the poor at £40 a year to act as co-vestrymen, sharing the vestry's responsibility for relief although not in other fields. It also increased the number of trustees, who might be vestrymen or co-vestrymen, from 53 to 72, and provided for them to form 12 committees of six, which would meet weekly in rotation, and to delimit six districts: Clapton, Homerton, Church Street, Mare Street, Kingsland, and Newington. (fn. 85) The enlarged board of trustees in 1811 met 17 times at the parish house, with an average attendance of 28. (fn. 86) It was elected annually after the opening of the vestry in 1833. (fn. 87)
Revenue for the poor in 1628 was £14 10s. from charities' lands and £2 10s. from £50 stock. (fn. 88) The poor rate, increasingly important as the payers multiplied, raised £120 in 1669 and £326 in 1710. (fn. 89) In 1720 nearly two thirds of expenditure was on monthly payments to 52 pensioners, some with children, a tenth was on nursing, and the rest on children's clothing. (fn. 90) The cost of maintaining the poor was £1,725 17s. 5d. in 1775-6, (fn. 91) when the rate was 2s. in the £, (fn. 92) and an average of £2,376 8s. 5d. for the three years to Easter 1785. (fn. 93) It was £5,158 in 1803, over £13,000 in 1813 and 1821, and slightly less in 1831; the rise, more uneven than that of the population, produced an expenditure per head of 15s. 8d. in 1813 and less than half of that amount in 1831. (fn. 94) Nearly £14,349 was levied but only £8,849 spent on the poor in 1834-5. (fn. 95)
A workhouse where a child was to be sent in 1709 was presumably outside the parish. (fn. 96) In 1732 rented premises were repaired as a workhouse. (fn. 97) A house on the south side of Homerton's high street was leased from the Milborne family in 1741 and in 1761; the parish officers assigned the lease in 1764 to the new trustees for the poor (fn. 98) and in 1769 lent them money to buy the site. (fn. 99) The workhouse management committee met weekly in the 1740s and 1750s, (fn. 100) when the number of inmates ranged from 41 to 74. (fn. 101) At first the committee arranged quarterly contracts for supplies but the poor were farmed by 1755 and in 1764; (fn. 102) direct management was resumed in 1765. (fn. 103) One of the six overseers was to attend on every weekday at the workhouse under the Act of 1810. (fn. 104) Accommodation was for 220 in 1775-6 (fn. 105) and expensive enlargement was carried out in 1810-11 and again in 1813. (fn. 106) Stricter discipline and more profitable work were sought in 1811 but many rules were not kept by 1822. (fn. 107) The parish claimed to manage a model workhouse in 1831, when it held 102 men and 153 women, housed separately, 80 boys, and 60 girls; work was provided there and a few inmates were farmed out. In addition outdoor relief was paid to 398 pensioners and for 35 children to be nursed. (fn. 108) The buildings apparently had no special accommodation for the religious services which were held and the schooling which was recommended in 1815. (fn. 109)
LOCAL GOVERNMENT AFTER 1837.
Although the body of trustees continued until 1899, (fn. 110) the Poor Law Amendment Act, 1834, vested practical responsibility for the poor from 1837 in Hackney union until the London Government Act, 1929, substituted the L.C.C. in 1930. The union combined the old parishes of Hackney and Stoke Newington; (fn. 111) initially, Hackney contributed seven eighths of the annual cost (fn. 112) and elected 13 guardians (20 by 1872) to Stoke Newington's 5. (fn. 113) Weekly meetings were held at the parish house and later at the town hall and at the workhouse. (fn. 114) The old workhouse was replaced by a building begun in 1838 and finished by 1842, (fn. 115) which the trustees sold to the guardians in 1845, (fn. 116) when further building had to be done. (fn. 117) The premises in 1849 included a range along the high street, in front of women's wards and a small infirmary to the west and men's wards, with a stone yard, to the east; farther south stood a chapel of 1848 seating 500 and schools, behind which the grounds stretched to beyond the new railway line. (fn. 118) The schools were criticized in 1854, when attended by 45 boys and 79 girls, but had improved by 1857, when the numbers had risen by 50. (fn. 119) The union maintained 459 indoor and 2,034 outdoor poor, 42.6 for every 1,000 inhabitants, in 1850; the proportion fell to 28.9 for every 1,000 inhabitants in 1860 but was 55 in 1870. (fn. 120) Although some buildings were adapted and others added for the infirmary, later Hackney hospital, the workhouse continued to receive both the able bodied and the infirm and was certified for 1,090 inmates in 1885. (fn. 121) As Homerton central institution it was certified for 1,404 in 1930, when the guardians derived most of their income from Hackney M.B. as overseers and when they also had nearby branch homes, besides one for children at Ongar (Essex). (fn. 122)
From 1833 the trustees and the enlarged vestry (fn. 123) were still seen as unrepresentative by the Hackney Magazine, which publicized their proceedings. (fn. 124) The vestry in 1836 set up a 20-member highways board, soon renewed under a new Act, (fn. 125) and in 1837 accepted the continuance of the trustees' lamp board. (fn. 126) It also protested at the high property qualification for election as guardian and in 1841 opposed the rates sought by the Tower Hamlets commissioners of sewers. (fn. 127) Meetings were normally chaired by the rector or a churchwarden and spent much time over repeated and rising demands to abandon church rates.
A new administrative vestry, for the whole parish but with more limited responsibilities, was installed under the Metropolis Local Management Act, 1855. The Act replaced the metropolitan commissioners of sewers, successors to the Tower Hamlets commissioners, with the Hackney district of the Metropolitan Board of Works (M.B.W.); the district, which included Stoke Newington, returned one member to the M.B.W. (fn. 128) The new vestry superseded the three church vestries for all but church purposes. It met, erratically, less than once a month. In addition to the rector and churchwardens, it consisted of 119 vestrymen, of whom one third was elected annually, representing the seven wards of Stamford Hill, Homerton, Dalston, De Beauvoir Town, Hackney, South Hackney, and West Hackney. (fn. 129) It chose the district board and, after some doubts about their continued existence, the trustees of the poor. (fn. 130) Resenting the link with Stoke Newington and the division and vagueness of its own powers, the vestry criticized the bookkeeping of the former highway and lighting boards and unsuccessfully sought to control those parochial charities which had been apportioned to South and West Hackney. (fn. 131) It appointed a fire engine committee, as did the trustees, and a finance committee. (fn. 132) Through a joint committee also representing the trustees and the district board, it was responsible for building a new town hall. (fn. 133)
Hackney district board, meeting weekly from 1855 at the town hall, consisted of 51 members for the eight Hackney wards and 5 for Stoke Newington. At first it was often chaired by J. R. Daniel-Tyssen and represented on the M.B.W. by George Offor, an earlier opponent of church rates. The board appointed general purposes and finance committees and superseded the highway and lighting boards. Officers included a clerk, a medical officer of health, a surveyor, and an inspector of nuisances. (fn. 134) From 1856 the trustees of the poor met twice a year to make a parish or poor trust rate, chiefly for the guardians, the Metropolitan Police, and the fire engines, and separate general, lighting, and sewers' rates for the district board of works. Many other meetings dealt with appeals against assessments. The trustees' delays in meeting financial calls forced the guardians to postpone the settling of bills in 1856. (fn. 135)
The district board of works was dissolved in 1894. (fn. 136) No longer linked with Stoke Newington except in the poor-law union, Hackney was administered again by the vestry, which maintained the district board's officers and worked, as the board had done, through committees; (fn. 137) it called itself a corporate body and was quick to seek a transfer of powers from the trustees of the poor, since many vestrymen were also trustees. (fn. 138) Both vestry and trustees were superseded by Hackney metropolitan borough council under the London Government Act, 1899, which also introduced a single rate. (fn. 139)
Hackney metropolitan borough council was first elected in 1900 and consisted of a mayor, 10 aldermen, and 60 councillors representing 8 wards which remained unchanged until 1936: Stamford Hill, Clapton Park, Homerton, the Downs, Kingsland, Hackney, South Hackney, and West Hackney. (fn. 140) In 1903 the town hall was the meeting place of the council twice monthly; in addition to the town clerk, treasurer, and solicitor, there were departments for the accountant, the engineer and surveyor, public health, electricity, and libraries. (fn. 141) The borough received a grant of arms in 1924. (fn. 142) From 1936 there were 8 aldermen and 48 councillors for 16 wards. Most of the wards were altered and renamed in 1955 but there were still 16 in 1965 when, under the London Government Act, 1963, the metropolitan borough was joined with those of Stoke Newington and Shoreditch to form the London Borough of Hackney. (fn. 143) The new borough had 20 wards in 1971 and 23, of which 15 lay in Hackney, by 1978. (fn. 144)
The first town hall, which in 1802 had replaced Church House, (fn. 145) remained in use until 1866. Rooms were then leased to the M.B.W., the guardians, who disputed ownership with the vestry, and several provident societies, and public meetings might still be held there. (fn. 146) A plain two-storeyed block of four bays, the central two slightly projecting, it was given a stone cladding in 1900, with a pediment, balustrades, and more elaborate doorway. Part was occupied from 1899 by the London City & Midland Bank, which remained there as the Midland Bank in 1991. (fn. 147) The second town hall, begun in 1864, was opened in 1866 in the centre of the rectangular space called Hackney Grove. (fn. 148) Designed in the 'French-Italian' style by Hammack & Lambert and faced with Portland stone, it was of two storeys over a basement and consisted of a five-bayed central block, balustraded and with a Doric porch, projecting beyond single-bay wings. (fn. 149) The estimated building costs were greatly exceeded. (fn. 150) Extensive alterations by Gordon, Lowther, & Gunton, opened in 1898, included wider two-storeyed wings producing an ornate frontage of 11 bays. (fn. 151) A third town hall was begun in 1934, finished in 1936, and opened in 1937, replacing houses behind the second one, of which the site thereafter formed a garden. 'Conventional but not showy', the building was designed by Lanchester & Lodge and faced in Portland stone; it was flat-roofed and four-storeyed, with a front of nine bays, the central five slightly projecting. From 1965 it was the municipal centre of Hackney L.B. (fn. 152) The building in 1991 retained its unaltered interiors in the Art Deco style. (fn. 153)
Conservatives outnumbered Liberals on the first borough council, elected in 1900. As Municipal Reformers they averted Progressive control in 1906, by allying with Independents (Ratepayers' Association), and took overall control in 1912. (fn. 154) Labour, which in 1900 had unsuccessfully run 9 candidates in Homerton, the most radical ward, narrowly took control in 1919 but lost every seat to an alliance of Municipal Reformers and Progressives in 1922 and 1925. (fn. 155) It regained a majority only in 1934 but kept it thereafter, both on the metropolitan borough council and, except in 1968, on its successor. One Communist was elected in 1945 and two were elected in 1949. (fn. 156) Apart from Springfield, all the wards in the former borough elected Labour members to Hackney L.B. in 1990. (fn. 157) The turnout in municipal elections was close to London's average until the Second World War but lower thereafter. (fn. 158)
Two parliamentary seats were allotted to Hackney by the Representation of the People Act, 1867. (fn. 159) Liberals were always returned until the constituencies of North, Central, and South Hackney were created in 1885. (fn. 160) Hackney North returned a Conservative or Unionist until 1945, except in 1906. Hackney Central voted Conservative until 1900, then Liberal until 1923, Conservative again in 1924 and 1931, and Labour in 1929 and 1935. Hackney South generally returned a Liberal, with Conservatives only in 1895, 1900, at a by-election in 1922, and 1931; it was the first to vote Labour, in 1923, as it did again in 1929 and 1935. All M.P.s were elected as Labour from 1945, the boundaries being redrawn to form the two seats of Hackney Central and of Hackney North and Stoke Newington in 1955. Hackney South and Shoreditch formed a third constituency in the 1970s but Hackney Central was divided between the other two Hackney seats in the 1980s. The M.P. for Hackney South, who had joined the Social Democrat party, was defeated in 1983. (fn. 161) Members included Sir Charles Reed (d. 1881), chairman of the London school board, and his successor Henry Fawcett (d. 1884), the 'member for India', and for South Hackney Sir Charles Russell from 1885 until 1894, when he became Lord Russell of Killowen and lord chief justice. (fn. 162) The financier Horatio Bottomley (d. 1933) represented South Hackney from 1906, despite local opposition from his own party, and as an independent from 1918 until his imprisonment in 1922. (fn. 163) Herbert Stanley Morrison (d. 1965), later Lord Morrison of Lambeth, was co-opted as mayor of Hackney in 1919 and began his parliamentary career as M.P. for Hackney South in 1923. (fn. 164)
From 1889 Hackney's three parliamentary seats each returned two members to the L.C.C. (fn. 165) | <urn:uuid:3cf3e45d-7cac-4f14-9a07-c9772d82e43a> | CC-MAIN-2022-33 | https://www.british-history.ac.uk/vch/middx/vol10/pp101-107 | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571982.99/warc/CC-MAIN-20220813172349-20220813202349-00097.warc.gz | en | 0.97922 | 7,173 | 2.59375 | 3 |
Volume 18, Number 11—November 2012
Lack of Evidence for Zoonotic Transmission of Schmallenberg Virus
The emergence of Schmallenberg virus (SBV), a novel orthobunyavirus, in ruminants in Europe triggered a joint veterinary and public health response to address the possible consequences to human health. Use of a risk profiling algorithm enabled the conclusion that the risk for zoonotic transmission of SBV could not be excluded completely. Self-reported health problems were monitored, and a serologic study was initiated among persons living and/or working on SBV-affected farms. In the study set-up, we addressed the vector and direct transmission routes for putative zoonotic transfer. In total, 69 sheep farms, 4 goat farms, and 50 cattle farms were included. No evidence for SBV-neutralizing antibodies was found in serum of 301 participants. The lack of evidence for zoonotic transmission from either syndromic illness monitoring or serologic testing of presumably highly exposed persons suggests that the public health risk for SBV, given the current situation, is absent or extremely low.
In November 2011, scientists in Germany identified novel viral sequences in serum from cattle affected by a febrile syndrome that was reported during August–September 2011 in Germany and the Netherlands. Clinical signs included decreased milk production and diarrhea. The virus, named Schmallenberg virus (SBV), was isolated from blood of affected cattle, and similar clinical manifestations were observed in experimentally infected calves (1). In the Netherlands, SBV was detected retrospectively in serum from affected cattle in December 2011 (2).
Since the end of November 2011, an unusually high number of ovine and bovine congenital malformations were reported in the Netherlands. The main macroscopic findings included arthrogryposis; torticollis; scoliosis; brachygnathia inferior; hydranencephaly; and hypoplasia of cerebrum, cerebellum, and spinal cord. SBV genome was detected in the brain of malformed lambs and calves (3–5). These findings, together with detection of SBV RNA in multiple types of samples, e.g., amniotic fluid, meconium, and placenta remains from diseased lambs and calves, strongly pointed to SBV as the causative agent of the clinical manifestations (6). The teratogenic effects in ruminants are hypothesized to reflect virus circulation in late summer/early autumn 2011, leading to intrauterine infection with SBV during a specific period of gestation (4).
In June 2012, seven additional European countries (Belgium, Denmark, France, Italy, Luxemburg, Spain, and the United Kingdom) confirmed SBV in ruminants, accumulating to a total of 3,745 PCR-confirmed infected animal holdings (4,7). In the Netherlands 1,670 holdings were suspected to be affected by SBV on the basis of births of animals with malformations typical of SBV infection, of which 350 were confirmed by PCR as of June 12, 2012. The holdings with confirmed SBV comprise 237 cattle, 107 sheep, and 6 goat farms (8).
SBV has been identified as most related to Sathuperi virus, and for the small and large segments, Shamonda virus segments show the highest sequence identity. All those viruses are members of the Simbu serogroup, family Bunyaviridae, genus Orthobunyavirus, and known as arthropod-borne viruses that can cause illness in ruminants (9). The orthobunyaviruses comprise ≈170 virus isolates, assigned to 48 distinct species, arranged in 18 serogroups, including the Simbu serogroup. Serogroups within the genus are based on cross–hemagglutination-inhibition and antibody neutralization relationships. Phylogenetic relationships are consistent with the results of serologic relationships (10–12).
Because the family Bunyaviridae contains several medically relevant zoonotic viruses, of which Crimean-Congo hemorrhagic fever virus, Rift Valley fever virus, Sin Nombre virus, and sandfly fever Naples virus are examples, the emergence of SBV triggered a joint veterinary and public health response in the Netherlands to address the possible consequences to human health. We present the public health risk ascertainment of the emergence of SBV in ruminants in the Netherlands and most likely other European countries were SBV has emerged.
Profiling Risks to Humans
We used a standard in-house checklist for profiling the risk to human health of novel emerging viruses to assess the public health risks for SBV. This checklist comprised 10 items: 1) situation assessment; 2) review of taxonomic position of the newly identified virus; 3) review of human health risks associated with closely related viruses; 4) review of epidemiology of related viruses (transmission cycle, reservoirs, and vectors); 5) review of clinical manifestations in humans of related viruses (including kinetics of immune response and shedding); 6) assessment of potential for human exposure and identification of related risk factors; 7) assessment of human diagnostics; 8) design of a literature/evidence-based testing algorithm; and 10) conclusions and recommendations.
Virus and Validation Serum
An SBV strain, isolated from SBV reverse transcription PCR–positive, homogenized brain tissue of a malformed lamb in the Netherlands, was obtained from the Central Veterinary Institute (Lelystad, the Netherlands). Putative cross-reacting orthobunyaviruses circulating in Europe, Batai virus (13), Tahyna virus (14), and Inkoo virus (15), were obtained from the Bernhard Nocht Institute for Tropical Medicine (Hamburg, Germany). All viruses were propagated and titrated (50% tissue culture infectious dose [TCID50]) in continuous African green monkey kidney cells (Vero E6, ATCC CRL-1586). SBV-positive control serum from a ewe that had given birth to an SBV PCR-positive lamb was obtained from the Animal Health Service (AHS), and a positive serum sample from an experimentally infected ewe was obtained from the Central Veterinary Institute.
Well-defined negative and positive human serum cohorts were not available because SBV is a novel emerging virus with unknown zoonotic potential. Therefore, we validated the virus neutralization test (VNT) using presumed seronegative serum from 1) 56 patients without travel history submitted to the National Institute for Public Health and the Environment during February 28, 2007–February 25, 2008, for routine diagnostic testing for Bordetella pertussis; 2) 73 inhabitants of municipalities with known SBV activity in 2011 that had been collected during August 15, 2010–October 15, 2010, for routine screening; and 3) 93 veterinary students collected in 2006 and 2008. Serum from 92 veterinary students sampled during 2011 and from 73 inhabitants of municipalities with known SBV activity collected during August 15, 2011–October 15, 2011, for routine screening were considered to represent community samples from possibly exposed populations and were added to the validation panel. Anonymized use of serum from the National Institute for Public Health and the Environment was covered by the rules of the code of conduct for proper use of human tissue of the Dutch Federation of Medical Scientific Associations. The cohort study of the veterinary students included screening for zoonotic infections and was approved by the Medical Ethical Committee of the University Medical Centre Utrecht.
For VNT, Vero E6 cells were seeded in 96-well plates and incubated overnight at 37°C with 5% CO2 until the cells were ≈80%–90% confluent. Serum was heated for 30 min at 56°C to inactivate complement before use. Serum was serially diluted in 2-fold steps in minimum essential medium (GIBCO/Life Technologies, Bleiswijk, the Netherlands). We added 100 TCID50 of virus to the diluted serum (volume of 60 µL each). To rule out the presence of other cytopathic effect–inducing factors, serum dilutions also were added to control wells to which no virus was added. After incubation at 37°C in 5% CO2 for 1 h, 100 µL of the virus-plus-serum mixture, no virus-serum controls, and a virus dilution control were added to the Vero E6 cells and incubated for 3 d at 37°C. Assays were performed in duplicate. Cells were monitored for cytopathic effect after 3 days.
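To make the VNT readout concrete, the sketch below (Python) shows one way an endpoint neutralization titer could be derived from duplicate cytopathic-effect (CPE) readings across a 2-fold serum dilution series. The starting dilution, data layout, and function name are illustrative assumptions for this example and are not taken from the study protocol.

```python
# Hypothetical example: derive a virus neutralization titer from CPE readings.
# Each serum is tested in duplicate across 2-fold dilutions; a dilution is
# scored "neutralizing" only if neither replicate shows cytopathic effect (CPE).
# The titer is reported as the reciprocal of the highest such dilution.

def neutralization_titer(cpe_readings, start_dilution=8):
    """cpe_readings: list of (replicate1_cpe, replicate2_cpe) booleans,
    ordered from the lowest (start) dilution to the highest dilution tested."""
    titer = 0
    dilution = start_dilution
    for rep1_cpe, rep2_cpe in cpe_readings:
        if not rep1_cpe and not rep2_cpe:   # virus fully neutralized in both wells
            titer = dilution                # keep the highest neutralizing dilution
        dilution *= 2                       # next 2-fold dilution step
    return titer                            # 0 means no neutralization detected

# Example: complete neutralization up to 1:512, CPE at 1:1024 and beyond
readings = [(False, False)] * 7 + [(True, True), (True, True)]
print(neutralization_titer(readings))  # prints 512
```

In this convention, a reported titer of, for example, 512 corresponds to complete neutralization of the 100 TCID50 challenge at a 1:512 serum dilution but not beyond.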
Monitoring of Health Symptoms
Persons in close contact with affected animals or their birth materials in whom fever developed (>38°C) within 2 weeks after exposure were asked to contact the regional public health service (PHS) for evaluation and assessment of the need for follow-up. This request was made through an email-based alert system hosted by the AHS and farmers' association to veterinarians. The alert system prompted veterinarians to inform farmers on SBV-affected holdings. When a relation between reported fever and SBV was considered possible, a short questionnaire was filled in by study participants, and serum was tested by real-time PCR (as described in ) and VNT to diagnose a possible SBV infection.
Design of Serologic Study in Persons with High Probability of Exposure
A serologic survey was designed to determine the presence of SBV antibodies in serum from persons living and working on farms where SBV had been highly suspected on the basis of pathologic findings consistent with typical SBV-induced malformations in calves or lambs, most confirmed by PCR and/or serology. The target cohort, consisting of adult (>18 years of age) farmers, farm residents, farm employees, and veterinarians who had been exposed to affected herds, were invited to participate by donating a serum sample and filling in a questionnaire. A total of 240 affected animal holdings were approached through direct mailing by the AHS. Employees of the regional PHS visited the affected farms and collected serum samples and questionnaires. The veterinarians were collectively contacted to be sampled at a national conference after a preannouncement of the purpose of the study.
The questionnaire addressed demographics, the animal species involved, the type and level of exposure (birth materials, feces, milk or other products, insects), protective equipment used during work, general health, (recent) health complaints, and presence of wounds on hands. The study protocol, information material, and questionnaires were assessed by the Medical Ethical Committee of the University Medical Centre Utrecht and approved (METC no. 12–106).
On the basis of a literature review of seroprevalence studies in regions with known orthobunyavirus outbreaks, a seroprevalence of 2% was established as the lower bound in an affected human population (N. Cleton, unpub. data; 16–19). In this scenario with 2% seroprevalence, testing of, for example, 200 exposed persons would give a probability of 98.24% to detect >1 seropositive persons (Table 1).
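The detection probabilities quoted here and in the Results follow from the standard "at least one positive" binomial calculation, 1 − (1 − p)^n, assuming independent sampling at a true seroprevalence p. The short Python sketch below is illustrative only (it is not part of the study's analysis) and reproduces the figures for 200, 301, and 192 tested persons at p = 2%.

```python
# Probability of detecting at least one seropositive person when sampling n
# exposed persons from a population with true seroprevalence p, assuming
# independent sampling: P(detect >= 1) = 1 - (1 - p)**n
def detection_probability(n, p=0.02):
    return 1.0 - (1.0 - p) ** n

if __name__ == "__main__":
    for n in (200, 301, 192):
        print(f"n = {n:3d}: P(>=1 positive) = {detection_probability(n):.2%}")
    # Expected output (matches the figures quoted in the text):
    # n = 200: P(>=1 positive) = 98.24%
    # n = 301: P(>=1 positive) = 99.77%
    # n = 192: P(>=1 positive) = 97.93%
```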
Profiling the Human Risks for SBV
Human Disease in Related Viruses
The literature indicates that zoonotic transmission of SBV could not be completely ruled out. The taxonomic position of SBV had been identified as family Bunyaviridae, genus Orthobunyavirus, Simbu serogroup (1). At least 30 orthobunyaviruses have been associated with human disease. Virologic or serologic evidence for zoonotic infection has been found for several viruses within the Simbu serogroup, including viruses considered to be primarily livestock pathogens (Aino and Shuni virus; Table 2). Among the many reasons for vigilance was the lack of full characterization of SBV. Genetic reassortment between orthobunyaviruses within the same serogroups has led to emergence of new viruses, occasionally with increased pathogenicity and potentially with changes in host range (21,36–40).
Modes of Transmission
The related Shamonda, Sathuperi, Aino, and Akabane viruses are transmitted mainly by biting midges (23; 41 in Technical Appendix), and the epidemiology of the infection in animals and the first detections of SBV genome in Culicoides spp. midges in Belgium, Denmark, and Italy suggested vector-borne spread as a mode of transmission for SBV as well (1,2; 42–44 in Technical Appendix). In addition, the birth defects in lambs and calves increased the need for assistance from veterinarians during parturition, and high loads of viral RNA were detected in birth materials of sheep and cattle (6). Therefore, if SBV is zoonotic, transmission could have occurred to persons who could have been exposed to infected vectors (residents, farmers, veterinarians) and/or through direct contact with animals that had congenital malformations or with birth material, e.g., during assistance at deliveries (farmers, veterinarians). A testing algorithm was designed (Figure). Professionals were advised to respect common hygiene measures for veterinarian-assisted deliveries and handling of affected newborn ruminants. Pregnant women were advised not to assist at ruminant deliveries.
Validation of VNT
Because the viremic phase in orthobunyavirus infections typically is short, we chose to use serologic testing by VNT to evaluate an immunologic response in exposed persons (1,21). For assay validation, possible cross-reacting zoonotic viruses circulating in Europe were identified. Zoonotic viruses in the Simbu serogroup are not known to circulate in Europe, but related orthobunyaviruses that may infect humans are Batai virus (BATV), Tahyna virus (TAHV), and Inkoo virus (INKV) (Table 2). No cross-neutralization was observed when the SBV-positive control serum was tested against 100 TCID50 of BATV, INKV, and TAHV, whereas the homologous titer was 512 (data not shown). The reverse experiment could not be conducted because of a lack of reference reagents. A control cohort of 222 serum samples, presumed negative on the basis of collection data before 2011, were all negative in the VNT (data not shown). Another validation cohort of 165 serum samples, possibly positive on the basis of collection data in 2011 and putative exposure through residence and professional activities, were all negative as well (data not shown).
Monitoring of Symptoms
Symptoms that could be attributed to a putative infection with SBV were determined on the basis of an inventory made of syndromes related to human infection with closely related viruses of the Simbu group, i.e., Oropouche virus and Iquitos virus (Table 2). These viruses typically cause a febrile illness accompanied by chills, general malaise, headache, anorexia, muscle and joint pain, muscle weakness, and vomiting. Symptoms of meningitis or a rash occasionally develop. The reported diseases generally are self-limiting (20,21).
Because the range of symptoms described was diverse, we decided to monitor patients who met our case definition: febrile disease >38°C within 2 weeks after contact with malformed calves or lambs or their birthing materials (in the absence of the supposed vector during the winter season). The 2-week period was based on the known incubation period for Oropouche virus in humans, typically 4–8 days (20). Eight cases were reported by the PHS during January 1–April 15, 2012. Four of these were excluded because they did not meet the case definition. The remaining 4 cases were tested by PCR and by VNT (the latter for only 3 cases, because only vesicle fluid was available for 1 study participant). None of the tested suspected case-persons showed evidence of an SBV infection.
In addition, no unusual trends were noted during or since summer 2011 in the existing routine surveillances for neurologic illness, gastroenteritis, and influenza-like illness at the Netherlands Centre for Infectious Disease Control (H. van der Avoort, E. Duizer, and A. Meijer, pers. comm.).
Serology in High-Exposure Groups
To enable evidence-based risk profiling, serologic surveillance was initiated among persons residing at locations with proven SBV circulation and professionals in close contact with infected animals and their birth materials. In this study set-up, we addressed both the vector-borne and the direct-contact routes of putative zoonotic transfer.
The study comprised 301 participants. Of these, 192 worked or lived on farms with laboratory-confirmed SBV circulation in animals, 42 persons worked or lived on farms where animals were being raised and where SBV infection was highly suspected, and 67 were veterinarians who had been in contact with malformed animals (Table 3, Table 4). These 123 farms consisted of 69 sheep, 4 goat, and 50 cattle farms that had animals with typical SBV malformations (no other pathogens that cause congenital malformations, including arthrogryposis, were circulating in the Netherlands), and most were PCR- and/or VNT-confirmed (83%; Table 4). SBV-specific antibodies were detected in livestock serum at 97.7% (83/85) of the farms for which serum was available (Table 4). Overall, 229 participants specifically reported direct exposure to newborn calves, lambs, and/or birth materials from SBV-infected herds; these participants comprised 179 farmers and 50 veterinarians (39 of whom were exposed while assisting with deliveries at farms and 11 during postmortem examination of malformed newborns at the AHS). A total of 150 participants reported insect bites on SBV-infected farm(s), potentially exposing them to SBV during the vector season (Table 3).
None of the 301 participants showed serologic evidence of SBV infection in the VNT, whereas the titer of neutralizing antibodies in the ovine control serum was high. In a scenario of 2% seroprevalence, testing 301 persons would have had a 99.77% probability of detecting >1 seropositive person (97.93% on the basis of the 192 persons with laboratory-confirmed exposure; Table 1). Nevertheless, sporadic infections cannot be excluded entirely.
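The detection probabilities quoted in this section follow from a simple binomial calculation. The short Python sketch below is illustrative only (it is not part of the original article) and assumes perfect test specificity; it reproduces the 99.77%, 97.93%, and 99.02% figures cited here.

# Probability of detecting at least 1 seropositive person among n tested persons,
# given a true seroprevalence and an assay sensitivity (specificity assumed 100%).
def detection_probability(n: int, seroprevalence: float, sensitivity: float = 1.0) -> float:
    per_person_hit = seroprevalence * sensitivity       # chance that a tested person scores positive
    return 1.0 - (1.0 - per_person_hit) ** n             # 1 minus the chance of missing everyone

print(f"{detection_probability(301, 0.02):.2%}")  # ~99.77% (all 301 participants)
print(f"{detection_probability(192, 0.02):.2%}")  # ~97.93% (192 with laboratory-confirmed exposure)
print(f"{detection_probability(229, 0.02):.2%}")  # ~99.02% (229 with direct-contact exposure)

Lowering the assumed sensitivity simply scales the per-person detection probability, which is how the 90% sensitivity scenario discussed below can be explored.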
The Netherlands has an integrated structure for human–animal risk analysis and response to zoonoses, established after the massive Q fever outbreak in 2007–2010. The continuous emergence of zoonotic viruses from livestock reservoirs, exemplified by Nipah virus, Japanese encephalitis virus, highly pathogenic avian influenza A (H7N7) and A (H5N1) viruses, and coronaviruses, underscores the relevance of the One Health approach in assessing the risks for novel emerging pathogens such as SBV (45–49 in Technical Appendix). The emergence of SBV in 2011 was a test case for this collaborative approach to risk assessment. Information, protocols, and samples were shared rapidly, facilitating a quick public health response.
On the basis of the findings of an in-house risk-assessment algorithm, we concluded that zoonotic transmission of the virus could not be excluded, triggering the study described here. We found no evidence for infection by serology, but ruling out zoonotic infections with high certainty is not simple, particularly in a complex situation with >1 possible mode of transmission.
If zoonotic, transmission of SBV could have occurred through vector-borne transmission during the period of high vector density in summer and fall 2011. The level of exposure to SBV by arthropods depends on the vector capacity of the residing vectors. Vector capacity is a measure of the efficiency of vector-borne disease transmission comprising vector competence, susceptible host density, vector host feeding preferences, vector survival rate, vector density, and vector feeding rates (50 in Technical Appendix). In this study, we found no evidence for human SBV infection, despite the high infection rate of sheep and cattle in the same localities (up to 100% within-herd seroprevalence; 51 in Technical Appendix) and the high level of reported insect bites during work on SBV-infected farms. From the high infection rates in ruminants, we conclude that the capacity of the residing vectors to transmit SBV to cattle and sheep was high, indicating that vector competence, vector densities, and vector survival rates were sufficient for SBV transmission. Therefore, the absence of SBV antibodies in humans would imply that humans are not susceptible to SBV infection, but only under the assumption that the vectors of SBV also feed readily on humans. Research into the host preferences of the identified SBV vector species and, if they prove anthropophilic, their feeding rates could clarify this issue.
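For readers unfamiliar with the concept, vectorial capacity is often formalized with the classic Garrett-Jones expression. The sketch below is a generic illustration, not taken from this article or its Technical Appendix, and the parameter values are purely hypothetical.

import math

def vectorial_capacity(m: float, a: float, b: float, p: float, n: float) -> float:
    # m: vector density per host; a: daily host-biting rate (reflects host feeding preference)
    # b: vector competence (probability of transmission per infective bite)
    # p: daily vector survival probability; n: extrinsic incubation period in days
    # Returns the expected number of potentially infective bites arising per case per day.
    return m * a ** 2 * b * p ** n / (-math.log(p))

# Purely hypothetical midge-like parameters, for illustration only:
print(vectorial_capacity(m=50, a=0.25, b=0.1, p=0.8, n=10))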
If vector transmission had been a route for zoonotic transmission leading to 2% seroprevalence in exposed persons, i.e., persons reporting insect bites on SBV-infected farms, the probability of detecting at least 1 such seropositive person in this study would have been 99.77%. However, this calculation is based on an assumed test specificity and sensitivity of 100%. A high specificity was justified on the basis of the negative results for the 387 control serum samples and the absence of neutralizing capacity of an SBV-positive ovine serum sample against INKV, BATV, and TAHV. Because SBV is a novel pathogen, no well-defined seropositive human serum cohorts were available to assay the analytical sensitivity of our test. However, even with a sensitivity as low as 90%, the probability of detecting at least 1 seropositive person still would have been 99.69% (data not shown).
The second possible exposure route is contact with affected animals and/or birth materials. The congenital malformations in lambs and calves with SBV infection are such that increased assistance during delivery was needed from farmers and veterinarians. Direct exposure to newborn ruminants and/or birth materials was reported by 76% of the study participants. If contact during delivery had been an active route for zoonotic transmission, leading to 2% seroprevalence in exposed persons, the probability of detecting at least 1 such seropositive person would have been 99.02%.
A third option is that exposure to newborns and their birth materials poses a higher risk for infection if exposed persons have blood contact with the affected materials (e.g., through hand wounds). Sixty percent of participants reported small wounds on their hands; thus, the probability of detecting such seropositive persons would have been high (i.e., 97.37% with 2% seroprevalence). In addition, 2 persons in the syndromic monitoring reported needlestick incidents, again without any evidence for infection on antibody testing.
The absence of evidence for direct transmission of SBV from ruminants to humans is in line with observations for other Simbu serogroup viruses (Akabane and Shamonda) infecting livestock (Table 2). Moreover, a serologic survey of 60 sheep farmers in the SBV epizootic area in Germany yielded no evidence for human SBV infection. However, of these farmers, only 48 had contact with lambs with SBV-characteristic malformations, and SBV was laboratory confirmed in the livestock of only 36 participants (52 in Technical Appendix); furthermore, the level of exposure through contact with affected animals and/or birth material is difficult to quantify (4). In the Netherlands, SBV RNA has been detected in the brains of malformed animals on 18.6% of reported cattle farms and on 30.6% of reported sheep farms (8), and high loads of viral RNA have been detected in some placentas and in birth fluids.
Current data suggest that infections might have been cleared by the time of delivery, particularly in cattle, which have longer gestations. Furthermore, finding RNA in birth materials does not give any information about the actual presence of infectious virus particles in these materials. Attempts to isolate viruses from such specimens have met with little success, and further research is needed to address the issue of infectivity of birth materials. This lack of virus isolation implies that the number of persons in this study directly exposed to infectious virus particles from affected animals and/or birth material might be lower than assumed on the basis of the number of participants reporting this exposure. Nevertheless, the lack of seropositive samples indicates that the risk for infection through contact with contaminated materials, regardless of whether they contain infectious virus particles, is minimal. Therefore, given the high seroprevalence of SBV in affected herds (51 in Technical Appendix), the lack of any evidence for zoonotic transmission from either the syndromic monitoring or this serologic study suggests that the public health risk for SBV given the current situation is absent or extremely low.
Dr Reusken is a virologist working as an investigator of vector-borne and zoonotic viral diseases at the Netherlands Center for Infectious Disease Control. Her research interests include the role of wildlife and arthropods in the epidemiology of emerging infectious diseases.
We thank Jet Mars, Petra Kock, Marieta Braks, Natalie Cleton, Yvonne van Duynhoven, Kitty Maassen, Barbara Schimmer, Annelies Albrecht, Mohamed Uaftouh, and employees of the regional PHSs for their contributions to the work in this article.
- Hoffmann B, Scheuch M, Hoper D, Jungblut R, Holsteg M, Schirrmeier H, Novel orthobunyavirus in cattle, Europe, 2011. Emerg Infect Dis. 2012;18:469–72.
- Muskens J, Smolenaars AJ, van der Poel WH, Mars MH, van Wuijckhuise L, Holzhauer M, Diarrhea and loss of production on Dutch dairy farms caused by the Schmallenberg virus [in Dutch]. Tijdschr Diergeneeskd. 2012;137:112–5.
- van den Brom R, Luttikholt SJ, Lievaart-Peterson K, Peperkamp NH, Mars MH, van der Poel WH, Epizootic of ovine congenital malformations associated with Schmallenberg virus infection. Tijdschr Diergeneeskd. 2012;137:106–11.
- European Food Safety Authority. Scientific report of EFSA. “Schmallenberg” virus: analysis of the epidemiological data and assessment of impact. EFSA Journal. 2012;10:2768–857 [cited 2012 Aug 22]. http://www.efsa.europa.eu/en/efsajournal/doc/2768.pdf
- Garigliany MM, Hoffmann B, Dive M, Sartelet A, Bayrou C, Cassart D, Schmallenberg virus in calf born at term with porencephaly, Belgium. Emerg Infect Dis. 2012;18:1005–6.
- Bilk S, Schulze C, Fischer M, Beer M, Hlinak A, Hoffmann B, Organ distribution of Schmallenberg virus RNA in malformed newborns. Vet Microbiol. 2012;159:236–8.
- ProMED-Mail. Schmallenberg virus–Europe (45): Denmark, serological evidence. 2012 [cited 2012 Jun 16]. http://www.promedmail.org/, archive no. 20120605.1157269.
- Food and Product Safety Authority (nVWA). Aantallen meldingen Schmallenbergvirus per provincie. 2012 [cited 2012 Jun 16]. http://www.vwa.nl/onderwerpen/dierziekten/dossier/schmallenbergvirus
- Yanase T, Kato T, Aizawa M, Shuto Y, Shirafuji H, Yamakawa M, Genetic reassortment between Sathuperi and Shamonda viruses of the genus Orthobunyavirus in nature: implications for their genetic relationship to Schmallenberg virus. Arch Virol. 2012;157:1611–6.
- Calisher CH, editor. History, classification and taxonomy of viruses in the family Bunyaviridae. New York: Plenum Press; 1996.
- Kinney RM, Calisher CH. Antigenic relationships among Simbu serogroup (Bunyaviridae) viruses. Am J Trop Med Hyg. 1981;30:1307–18.
- Saeed MF, Li L, Wang H, Weaver SC, Barrett AD. Phylogeny of the Simbu serogroup of the genus Bunyavirus. J Gen Virol. 2001;82:2173–81.
- Jöst H, Bialonski A, Schmetz C, Günther S, Becker N, Schmidt-Chanasit J. Isolation and phylogenetic analysis of Batai virus, Germany. Am J Trop Med Hyg. 2011;84:241–3.
- Bardos V, Danielova V. The Tahyna virus—a virus isolated from mosquitoes in Czechoslovakia. J Hyg Epidemiol Microbiol Immunol. 1959;3:264–76.
- Brummer-Korvenkontio M, Saikku P, Korhonen P, Ulmanen I, Reunala T, Karvonen J. Arboviruses in Finland. IV. Isolation and characterization of Inkoo virus, a Finnish representative of the California group. Am J Trop Med Hyg. 1973;22:404–13.
- Grimstad PR, Barrett CL, Humphrey RL, Sinsko MJ. Serologic evidence for widespread infection with La Crosse and St. Louis encephalitis viruses in the Indiana human population. Am J Epidemiol. 1984;119:913–30.
- Azevedo RS, Nunes MR, Chiang JO, Bensabath G, Vasconcelos HB, Pinto AY, Reemergence of Oropouche fever, northern Brazil. Emerg Infect Dis. 2007;13:912–5.
- Vasconcelos HB, Azevedo RS, Casseb SM, Nunes-Neto JP, Chiang JO, Cantuaria PC, Oropouche fever epidemic in northern Brazil: epidemiology and molecular characterization of isolates. J Clin Virol. 2009;44:129–33.
- Kinney RM. Bwamba and Pongola virus. In: The encyclopedia of arthropod-transmitted infections. Service MW, editor. Wallingford (UK): CAB International; 2001.
- Pinheiro FP, Travassos da Rosa AP, Travassos da Rosa JF, Ishak R, Freitas RB, Gomes ML, Oropouche virus. I. A review of clinical, epidemiological, and ecological findings. Am J Trop Med Hyg. 1981;30:149–60.
- Aguilar PV, Barrett AD, Saeed MF, Watts DM, Russell K, Guevara C, Iquitos virus: a novel reassortant Orthobunyavirus associated with human illness in Peru. PLoS Negl Trop Dis. 2011;5:e1315.
- Boughton CR, Hawkes RA, Naim HM. Arbovirus infection in humans in NSW: seroprevalence and pathogenicity of certain Australian bunyaviruses. Aust N Z J Med. 1990;20:51–5.
- Ali H, Ali AA, Atta MS, Cepica A. Common, emerging, vector-borne and infrequent abortogenic virus infections of cattle. Transbound Emerg Dis. 2012;59:11–25.
- Shamonda virus (SHAV); Arbocat virus ID 436. 2012 [cited 2012 Jun 13]. http://wwwn.cdc.gov/arbocat/catalog-listing.asp?VirusID=436&SI=1
- Yanase T, Maeda K, Kato T, Nyuta S, Kamata H, Yamakawa M, The resurgence of Shamonda virus, an African Simbu group virus of the genus Orthobunyavirus, in Japan. Arch Virol. 2005;150:361–9.
- Fukuyoshi S, Takehara Y, Takahashi K, Mori R. The incidence of antibody to Aino virus in animals and humans in Fukuoka. Jpn J Med Sci Biol. 1981;34:41–3.
- Causey OR, Kemp GE, Causey CE, Lee VH. Isolations of Simbu-group viruses in Ibadan, Nigeria 1964–69, including the new types Sango, Shamonda, Sabo and Shuni. Ann Trop Med Parasitol. 1972;66:357–62.
- Moore DL, Causey OR, Carey DE, Reddy S, Cooke AR, Akinkugbe FM, Arthropod-borne viral infections of man in Nigeria, 1964–1970. Ann Trop Med Parasitol. 1975;69:49–64.
- van Eeden C, Williams JH, Gerdes TG, van Wilpe E, Viljoen A, Swanepoel R, Shuni virus as cause of neurologic disease in horses. Emerg Infect Dis. 2012;18:318–21.
- Yanase T, Fukutomi T, Yoshida K, Kato T, Ohashi S, Yamakawa M, The emergence in Japan of Sathuperi virus, a tropical Simbu serogroup virus of the genus Orthobunyavirus. Arch Virol. 2004;149:1007–13.
- Calisher CH. Medically important arboviruses of the United States and Canada. Clin Microbiol Rev. 1994;7:89–116.
- Hubálek Z. Mosquito-borne viruses in Europe. Parasitol Res. 2008;103(Suppl 1):S29–43.
- Elliott RM. Emerging viruses: the Bunyaviridae. Mol Med. 1997;3:572–7.
- Calisher CH, Sever JL. Are North American Bunyamwera serogroup viruses etiologic agents of human congenital defects of the central nervous system? Emerg Infect Dis. 1995;1:147–51.
- Campbell GL, Mataczynski JD, Reisdorf ES, Powell JW, Martin DA, Lambert AJ, Second human case of Cache Valley virus disease. Emerg Infect Dis. 2006;12:854–6.
- Bowen MD, Trappier SG, Sanchez AJ, Meyer RF, Goldsmith CS, Zaki SR, A reassortant bunyavirus isolated from acute hemorrhagic fever cases in Kenya and Somalia. Virology. 2001;291:185–90.
- Briese T, Bird B, Kapoor V, Nichol ST, Lipkin WI. Batai and Ngari viruses: M segment reassortment and association with severe febrile disease outbreaks in East Africa. J Virol. 2006;80:5627–30.
- Gerrard SR, Li L, Barrett AD, Nichol ST. Ngari virus is a Bunyamwera virus reassortant that can be associated with large outbreaks of hemorrhagic fever in Africa. J Virol. 2004;78:8922–6.
- Saeed MF, Wang H, Suderman M, Beasley DW, Travassos da Rosa A, Li L, Jatobal virus is a reassortant containing the small RNA of Oropouche virus. Virus Res. 2001;77:25–30.
- Yanase T, Aizawa M, Kato T, Yamakawa M, Shirafuji H, Tsuda T. Genetic characterization of Aino and Peaton virus field isolates reveals a genetic reassortment between these viruses in nature. Virus Res. 2010;153:1–7.
1These authors contributed equally to this article.
How do you know that this is a new species?
The unusual combination of characters that we see in the Homo naledi skulls and skeletons is unlike anything that we have seen in any other early hominin species. It shares some features with australopiths (like Sediba, Lucy, Mrs. Ples and the Taung Child), some features with Homo (the genus that includes humans, Neanderthals and some other extinct species such as H. erectus), and shows some features that are unique to it; thus it represents something entirely new to science.
How do you know it belongs in the genus Homo?
The brain of H. naledi is small; similar to what we see in australopiths, but the shape of the skull is most similar to specimens of Homo. For instance, it has distinct brow ridges, weak postorbital constriction (narrowing of the cranium behind the orbits), widely spaced temporal lines (attachments for chewing muscles), and a gracile set of jaws with small teeth, alongside a whole host of other anatomical details that make it appear most similar to specimens of Homo. Also, the legs, feet and hands have several features that are similar to Homo.
Where does H. naledi fit within the human lineage?
This is a more difficult question, since our understanding of the human lineage has changed in recent years owing to a large number of new fossil discoveries like sediba and Ardipithecus ramidus. However, given that H. naledi shares some characters with australopiths, and other characters with species of early Homo such as H. habilis and H. erectus, it is possible that this new species may be rooted in the initial origin and diversification of the genus Homo. At the same time, H. naledi shares characters that are otherwise encountered only in H. sapiens. As a result, our team has proposed the testable hypothesis that the common ancestor of H. naledi, H. erectus, and H. sapiens shared humanlike manipulative capabilities and terrestrial bipedality, with hands and feet like H. naledi, an australopith-like pelvis and the H. erectus-like aspects of cranial morphology that are found in H. naledi. Future fossil discoveries in the Dinaledi Chamber and elsewhere will certainly help us to test this hypothesis.
Why is the combination of features in H. naledi unusual or unexpected?
Until recently, most anthropologists believed that brain size and tool use emerged together with smaller tooth size, higher-quality diet, larger body size and long legs. In this view, transformations in the body in early Homo were tied to changes in behaviour that influenced diet and the brain. H. naledi shows that these relationships are not what anthropologists expected. It has small teeth and hands that seem to have been effective for tool making but also a small brain. It has long legs and humanlike feet but also a shoulder and fingers that seem effective for climbing. The features that were supposed to go together are not found together in H. naledi, and that creates an interesting puzzle that will force us to review our present models of the origins of our genus.
How can we be sure that the different features are not just variation among different individuals?
So far, the team has recovered remains of at least 15 individuals from the Dinaledi Chamber. This number is determined from the repetition of teeth from individuals of the same and different ages. Across the skeleton, nearly all body parts have been recovered from multiple individuals. Surprisingly, these are extremely similar to each other in almost every case. The distinctive features of H. naledi are found in every part that the team has found in the chamber, often multiple times.
Could H. naledi be a pathological modern human?
Unlike the often contested “hobbits”, Homo floresiensis from the island of Flores in Indonesia, we have discovered many individuals that all share the same unusual features. This is not what one would expect to find in pathological individuals, who would vary from individual to individual.
What ages of individuals are represented in the Dinaledi Chamber?
Approximate ages for each of the individuals can be established from the teeth found in the collection. The youngest individual died near or at the time of birth, the oldest was an old adult individual with extremely worn teeth. Out of the 15 individuals found so far, eight were children of various ages and five definitely adults, with two either young adults or older adolescents.
What should we make of recent claims of an especially early appearance for the genus Homo?
Given the unusual combination of australopith-like and Homo-like characters encountered in H. naledi, and also in other species such as Au. sediba, it is becoming increasingly apparent that fragmentary and/or isolated fossil remains are unlikely to be accurate guides to the identity of early Homo. Without knowing what the rest of the skull and skeleton looked like, attempts to identify something like the earliest Homo based on fragments of jaws, or in some cases isolated teeth, are likely to be unrealistic. Had we found only small pieces of H. naledi, instead of relatively complete skeletons, we might have erroneously attributed them to one or another species instead of correctly seeing the overall pattern.
What happens if H. naledi is very old? Or very young?
If it turns out that H. naledi is old, say older than around 2 million years, it would represent the earliest appearance of Homo that is based on more than just an isolated fragment. On the other hand, if it turns out that H. naledi is young, say less than 1 million years old, it would demonstrate that several different types of ancient humans all existed at the same time in southern Africa, including an especially small-brained form like H. naledi. Given its primitive skeletal adaptations, this might have profound implications for the development of the African archaeological record. It would also have profound implications for our understanding of the origins of complex behaviours previously thought to arise only with hominins not very different from our own species, as recently as 350,000 years ago.
Can H. naledi shed any light on that other recent, controversial fossil species, H. floresiensis?
Although at present we cannot speculate on any evolutionary linkage to H. floresiensis, what H. naledi does demonstrate is that there were indeed other species of small-brained, primitive-looking humans in existence in the past that nonetheless shared some quite humanlike features. If nothing else, H. floresiensis no longer stands out as such an anomaly.
Do these fossils prove that humans originated in South Africa?
While a South African origin for humans is certainly possible, we may never be able to prove where humans originated. We are limited to finding fossils where they were preserved, not necessarily across the whole area that human ancestors existed when they were alive. Here’s what we know: H. naledi existed in South Africa, and left some spectacular traces of their passing that we are only now coming to grips with.
What are some of the broader implications of H. naledi?
It is clear that we have missed some key transitional forms in the fossil record, as H. naledi represents an unexpected combination of australopith-like and human-like features that, until now, was entirely unknown to science. This serves to highlight our ignorance about our own genus across the span of the African continent. There are obviously many unknown fossil species yet to be discovered. In addition, we must recognise that some species of ancient humans exhibited very human-like behaviors, which in turn will have profound implications for the archaeological record.
Why is it so difficult to date the fossils?
Unlike other cave deposits in the Cradle, the fossils are not found in direct association with fossils from other animals, making it impossible to provide a faunal age. There are also few flowstones that can be directly linked to the fossils, and those that exist are contaminated with clays, making them hard to date. On top of that, the fossils are contained in soft sediments and are partly re-worked and re-deposited, making it difficult to establish their primary stratigraphic position. Taken together, this makes it hard to obtain a definitive date for the fossils.
How old are the fossils? And why have they not yet been dated?
We have tried three approaches that have failed to give dates for the actual fossils, and are currently working on further attempts. Because of the uniqueness of the fossils and the situation in which they are found, we only want to publish age limits for them when we are absolutely sure that they are right. We do not want to cause confusion over the age.
How were the fossils found?
Two cavers, Rick Hunter and Steven Tucker, found the entrance into the Dinaledi Chamber and discovered the fossils while probing a narrow fracture system at the back of the cave system. When they showed pictures of the fossils to Pedro Boshoff, another caver and geologist, he recognised the fossils as potentially significant and alerted Professor Lee Berger to the find. Further detailed investigations followed, which quickly demonstrated the significance of the find.
How many Hominin fossils are there in the Dinaledi Chamber?
We don’t know. So far we have recovered 1550 separate bones and bone fragments from the floor surface of the cave chamber, and from one small excavation near a bone concentration at the back of the chamber. Based on duplicate bone elements it can be shown that these bones belong to at least 15 separate individuals. Shallow probes in other parts of the Dinaledi Chamber suggest that there are bones across the chamber floor, and we therefore expect to find many more bones from many more individuals as excavations continue.
Why are there no other fossils apart from the hominins?
The cave chamber in which the fossils occur is very inaccessible now and has always been very inaccessible. Getting into the chamber involves a steep climb up a sharp limestone block called “the Dragon’s Back” and a drop down a narrow crack. All this has to be done deep inside the cave, in the dark zone, in the total absence of light. No other large animals, apart from H. naledi, ever found their way this deep into the cave.
Apart from the current complex route into the Dinaledi Chamber, has there ever been a more direct route into the cave?
Our mapping indicates that the roof of the Dinaledi Chamber consists of an unbroken chert horizon. Likewise, on the surface above the cave there is no indication of a direct vertical entranceway into the chamber. In other words, we have found no evidence that there is, or ever was, a more direct entrance into the Dinaledi Chamber.
How do you know that there were no other entranceways into the Dinaledi Chamber?
The sediments in the Dinaledi Chamber are different from the sediments in other chambers in the cave in a number of important ways: they are fine-grained and mud-rich, and contain no coarse-grained clastic components; they are relatively poor in quartz; and they are chemically distinct and derived almost completely from the cave itself. These sedimentary characteristics indicate that the Dinaledi Chamber was isolated from the earth’s surface and from other chambers in the Rising Star cave. The way the sediments are distributed in the Dinaledi Chamber, with fossil-bearing units accumulating below the current entry point, indicates that this was always the entry point into the chamber, even at the time the fossils entered it.
Is the environment of the Dinaledi Chamber special in some way?
The hominin-bearing sediments in the Dinaledi Chamber are very different from hominin-bearing deposits in Sterkfontein, Swartkrans or Malapa for instance. Unlike these other well-known deposits the Dinaledi deposits occur in largely unconsolidated soft clays. The hominin bones were never fully fossilised within hard ‘breccia’. Instead they are embedded in soft, rubbly deposits that largely consist of mud-clasts that have not been fully lithified. This is an unusual geological setting, and is very distinct.
Do the hominin fossils occur as complete skeletons?
Remains are currently found in partly articulated, disarticulated and fragmentary states. This includes delicately articulated remains of hands and feet. This suggests that bodies entered the cave whole but disarticulated after deposition as a result of reworking of the sediments in which the fossils were originally deposited. Sedimentological evidence suggests that bodies were brought into or dropped into the cave chamber, landing in muddy sediment on the floor. Whole bodies probably ended up within the muddy sediments of the Dinaledi Chamber, but over time, sediments drained out of the chamber through holes in the chamber floor. As a result, some of the fossils were redistributed across the chamber floor, and ended up lying as dispersed fragments across the floor.
Did all the fossil hominins die at the same time, and was there some sort of catastrophe?
Fossil parts are found in different parts of the stratigraphy, suggesting that the fossils entered the cave over an extended period of time, and therefore we think that they did not die during a single catastrophic event.
Why can’t the fossils have been brought in from surface by flowing water or mud, like in the case of Malapa Site?
The sediment matter in the Dinaledi Chamber (primarily very fine clay) is (so far) unique in the cave as it has been derived almost completely from the cave itself, and there is no evidence of any water or mud flowing into the cave with enough force to transport bones. Apart from this, some parts of the hominin skeletons were found with the bones either in partial articulation or in close anatomical association, which suggests that parts of the bodies were only partially decomposed at the time of deposition; i.e. bodies entered the chamber whole.
Why can’t the fossils have been brought in from surface by predators or scavengers?
None of the bones that have been recovered from the cave show evidence of bite marks made by predators or other large animals. We also have found no fossil remains of any predator species in association with the hominins. It is also unlikely that predators would have taken their prey all the way into the Dinaledi Chamber, deep in the dark zone of the cave, and that they would have taken in only cadavers of H. naledi.
How did the hominins find their way into the Dinaledi Chamber?
This is a puzzling question. Our geological investigation indicates that the Dinaledi Chamber was always in the dark zone, and the route to get there was probably very complex involving navigating difficult terrain. This suggests that they may have used fire to guide them into the cave.
Why are there so many hominins in the Dinaledi Chamber?
This is the big question. Our investigations show that the bodies came in whole. They were probably deposited over a period of time and entered the chamber through the same entrance used today. So far we have found no evidence on any of the bones for any form of trauma as a result of a fall or due to predators. We have also found no evidence of cannibalism, such as cut-marks, like those on some of the hominin assemblages in European caves. All this is very hard to explain and suggests that at some point H. naledi entered the cave on purpose to deposit bodies in the Dinaledi Chamber. This hypothesis is hard to prove definitively and will require further work. As excavations proceed, the stratigraphy of the deposit becomes better exposed and more bones are recovered, we will be able to answer this question better.
Why does the team include such a large number of early-career scientists?
Training and involving a new generation of scientists is an integral part of the project. The Rising Star Expedition began this tradition with the involvement of extraordinarily skilled excavators underground and young scientists aboveground. With so many hominin fossil specimens, the Dinaledi fossil collection presented a unique opportunity to involve a broad array of specialists with experience studying different parts of the anatomy of fossil hominins. Bringing this group of scientists together to do the work at Wits here in South Africa created many exciting synergies between people who might not have worked together otherwise.
When will more new research on H. naledi appear?
Describing such a large collection of fossils is a huge job, and our team of experts has been examining every body part. Detailed descriptions of the articulated hand and foot remains, the most complete ever discovered, will appear shortly. Additional papers on other parts of the skeleton and the biology of H. naledi have been written by our team and are now in the process of peer review.
Why is the Homo naledi foot so important?
It’s important for two reasons. First, H. naledi is a brand-new species of fossil hominin, and its foot is a crucial part of the discovery, as it tells us something about how it moved around. Second, walking upright is one of the defining features of the human lineage, and as feet are the only structures that make contact with the ground in bipeds, they can tell us a lot about our ancestors’ way of moving.
What parts of the foot and ankle have been recovered from the Dinaledi hominins?
The excavation team was able to recover at least one specimen of almost every single bone in the foot. There are more than 100 foot bones in the current H. naledi sample; including a nearly complete foot that is missing only a few bones. It is one of the most complete feet known in the hominin fossil record (there are also partial feet from at least 2 other adults and 2 children).
Does the Dinaledi foot look more like a human foot or a chimpanzee foot?
Overall, they look much more like human than chimpanzee feet. The joints between the bones looked like ours, and they likely had similar ranges of motion. The middle part of their foot was likely stiff while walking, whereas in chimps it is far more flexible. Their big toes were in-line with the rest of the foot, unlike the grasping, opposable big toe in chimps. H. naledi’s toes could also bend up (dorsiflex) as much as ours can, which is critical to the “toe-off” phase of the human walking cycle. However, the Dinaledi feet were not entirely human-like. They likely had minimally developed longitudinal foot arches (i.e., flatter feet), which is uncommon (but not unknown) in living people. Their toes were also slightly curved, not as much as a chimp’s toes, but more than in humans.
Did Homo naledi walk like we do?
Mainly, but not exactly. This is for two reasons. Firstly, although very like our own, the H. naledi foot does have a few features that are not entirely human-like. Its toes would have been slightly more curved and it may have had a lower arch than the average human. The second reason is that the way a creature moves does not just depend on its foot. One has to look at the rest of the skeleton as well. When we look at the whole skeleton of H. naledi, we see a creature that walked upright, but was also comfortable climbing in the trees a little bit.
How much smaller were the Dinaledi hominins’ feet?
We have a very good idea of how small the most complete foot is because we have its entire length, from heel to the tip of the big toe. We suspect it’s a female foot because we have two size groups –a bigger group (presumably male) and a smaller group (presumably female) –and the most complete foot is in the smaller group. If this Dinaledi lady went shopping for shoes in South Africa, the United Kingdom, or the United States, then she’d look for the smallest adult size possible.
Why is the H. naledi hand skeleton an important find?
This fossil human hand is important for several reasons, but especially because it is so complete. There are 27 bones in each human hand, and this fossil hand preserves all of the bones of the right hand except for one wrist bone called the pisiform. These bones were found partially articulated (i.e., joined together), an extremely rare event in the hominin fossil record, meaning that they belonged to one individual. Within this single hand, there is a mix of features that has never been seen before in any other hominin species. The wrist bones, particularly those on the thumb side of the hand, show several adaptive features for tool-related behaviours that are consistently found only in modern humans and Neanderthals, whereas the finger bones are more curved than those of most australopiths. Thus, this one hand suggests that even after the hominin hand had become well-adapted for complex manipulation, some hominins were still spending large amounts of their time climbing.
What are the implications of the H. naledi hand for human evolution?
One of the most contentious debates in human evolution is whether early hominins spent significant amounts of time climbing or were strictly walking on two feet on the ground all the time. H. naledi sheds important new light on this debate. The presence of such strongly curved fingers (rather than the straight fingers of humans and Neanderthals) suggests that its fingers were curved for a reason: H. naledi regularly used its hands for climbing. Furthermore, depending on how old (geologically) the H. naledi remains turn out to be, there will be important implications for interpreting the South African archaeological record, who made the various stone tools that have been found, and what anatomical adaptations were necessary to craft these implements.
Are other H. naledi hand bones found at the site in addition to this one hand?
Yes, over 150 hand bones have been found in the Dinaledi Chamber so far. These include bones from both adults and juveniles. They vary slightly in size, with some being a little smaller (presumably female) and others being slightly larger (presumably male), but they all show the same anatomical features found in Hand 1.
How common are relatively complete hand skeletons on the hominin fossil record?
Nearly complete hand skeletons are rare in the hominin fossil record but are known for Neanderthals (~60,000 years ago), Australopithecus sediba (1.98 million years ago), and Ardipithecus ramidus (4.4 million years ago). There is also a “composite” hand of Australopithecus afarensis (Lucy’s species) that is composed of bones from multiple individuals and localities. Much of the early hominin fossil record (i.e., australopiths and early Homo) consists of isolated hand bones that cannot be associated with a particular individual or species. The OH7 hand of H. habilis (the “handy man”) (1.75 million years old), discovered in the early 1960s, is well-known as it was described as that of the first “tool-maker”; however, only a few wrist and finger bones are preserved.
How does the hand of H. naledi compare with the Homo habilis “handy man” hand fossils?
The H. habilis hand preserves three of the eight wrist bones in the human hand, but these fossils are fragmentary. However, based on what is preserved, the anatomy is different from that of H. naledi, suggesting that H. habilis was more like australopiths and did not have the same suite of features related to tool use or tool-making that is shared among H. naledi, Neanderthals and humans. However, the fingers of the H. habilis hand are also curved like those of H. naledi, suggesting that both species spent a substantial amount of time climbing.
Have stone tools been found in the Dinaledi Chamber of the Rising Star cave system?
No, stone tools have not yet been found in association with the H. naledi fossils. Future discoveries will hopefully help answer this question.
Most people did not even know what melatonin was when we told them about our project, so how could they be expected to know about the diseases correlated with it?
Our human practices goal was to learn more about it and to share our newly gained knowledge with the world.
We also met other science enthusiasts to spread the word of synthetic biology and the importance of science in society.
Please click on the icons to read the respective article.
Integrated Human Practices
Without the advice of helpful experts, our project would not have taken shape the way it did.
In this section, the influences of other scientists on the development of Melasense are retraced.
Please click on one of the buttons to read the respective article.
European Drug Approval
Interview with Leon Bongers and Taina Mattila about melatonin as medication and its approval.
We talked with Leon Bongers and Taina Mattila from the Dutch Medicines Evaluation Board (MEB) about the process of approval
for melatonin in the Netherlands and in the European Union.
We learned that the availability of melatonin could only change through pressure from pharmaceutical companies. They also discussed with us the chances and risks of melatonin supplementation.
We were very surprised that there is such strict regulation of the availability of melatonin in our country. Circadin is a prescription medication containing melatonin, which has been approved throughout the European Union for patients aged 55 and older.
In Germany, in contrast to other countries, melatonin products sold as dietary supplements are strictly regulated, while in our neighbouring country, the Netherlands, they are freely available.
We learned from the IGJ (Inspectie Gezondheidszorg en Jeugd), the institution responsible for the approval of dietary supplements in the Netherlands, that a melatonin-containing product is only considered a medicine if it contains at least 0.3 mg of melatonin and claims to cure a disease.
Our team member Katrin interviewed clinical assessor Taina Mattila and Leon Bongers, the Senior Regulatory Project Leader of Pharmacotherapeutic Group I, both from the MEB. Leon Bongers is responsible for the approval of melatonin as a medicine in the Netherlands.
He and Ms. Mattila explained to us that in the Netherlands it is assumed that people only take melatonin after consulting a doctor, and that the number of studies on short-term use of melatonin is sufficient, since there are no worrying side effects when it is taken correctly; however, there are not enough studies on melatonin taken over a longer period. In addition, we learned that raising general awareness of the positive effects of melatonin in society is apparently not enough to change the situation.
An amendment to the law in Germany could only be achieved through pressure from the pharmaceutical industry. Only then could melatonin become available in Germany for people under fifty-five as well.
To learn more about the process of drug approval in the EU and to inform yourself about the risks and when it is reasonable to supplement melatonin, please read our interview below.
Katrin: Have you heard about the correlation between melatonin and neurological disorders, like depression, Parkinson's,
Alzheimer's disease and fibromyalgia?
Leon Bongers: We know that a melatonin disbalance can play a role in many disorders like the ones that you mentioned. But I think there is too little data available concerning the effects of giving melatonin to correct the imbalance in these disorders, and there haven't been specific claims for melatonin in these disorders. So it hasn't played a part in the authorization applications that we have had so far.
Katrin: We think it would be interesting to know more about the process of medical authorization at the CBG and also how you work together with the EU.
Leon Bongers: The MEB and most regulatory authorities in Europe work only on paper. The tests are done by a pharmaceutical company, which often relies completely on papers and literature. A company submits to us a dossier with details on the manufacturing, indication, and safety of the product. Then the application is assessed by different specialized assessment groups, for example clinical assessors like Taina, but there is also a non-clinical assessment. The assessors write reports which are scheduled for a board meeting. The MEB consists mainly of two parts: the board, with about 17 highly educated professors who make the final decision on an application, and the drafters of the reports, regulators like me and others, who do the work and prepare the reports for the board. There the assessment is discussed and we usually draw up a list of questions for the company, which they have to answer. After their reply, we do a reassessment and a final decision is taken.
But a lot of decisions are made in the EU. I just talked about the national procedure; that's when a company decides to make an application in only one member state, as is the case for melatonin supplements. But Circadin, for example, as a product that is registered in all member states of the European Union, went through a centralized procedure. It means that an application is made in all member states at the same time and the assessment is done by one member state. That state draws up a report which is discussed at the European level in a specific meeting at the EMA [European Medicines Agency]. A company also has the option to make applications in only a few member states; then one member state will write reports which are circulated to the other member states, which are asked what they think of the assessments.
Katrin: You told us that the EMA and, for example, the MEB have different medicines that they evaluate, so they don't interfere. But if the MEB authorized a medicine, could the EMA object to it?
Leon Bongers: Normally the EMA or another member state is not even aware of such a decision. We issue marketing authorizations to a company. But it might happen, for example, that a member state in Europe considers that a product does more harm than good. Then the member state has the opportunity to refer the matter to a European committee, the CHMP, the Committee for Medicinal Products for Human Use. The CHMP advises the European Commission, which can take the final decision on whether a medicinal product can be used for a certain indication.
Katrin: Could you say something about the risks of taking melatonin as a supplement without a physician's advice, or are there also benefits?
Taina Mattila: In principle, there is no specific risk in short-term use. But the long-term risks in children, for example for maturation, are not completely known. And I think that when you take melatonin in the wrong way or at the wrong time, it can also influence your own circadian rhythm. In that way, it can be harmful.
Taking melatonin for a sleep problem might also deprive you of an effective treatment that a physician could give you. Soft measures for improving sleep could be nightly rituals and other non-medicinal measures. Basically, in my personal opinion, the benefits of melatonin have so far been shown conclusively only for jet lag and, to some extent, for the current indication of Circadin. So if you ask about short-term safety risks, we are not that worried, but we are worried about using it for a long time and for indications where it might not have any kind of effect.
Katrin: I read that a side effect of taking melatonin is blood in the urine. Do you know where that comes from?
Taina Mattila: Unfortunately, I don't know. But of course melatonin is a hormone that is involved in a lot of different systems in the body and has the potential to affect them.
Katrin: Circadin is only approved from the age of 55 years onwards. When the company applied for approval, they only asked for approval from the age of 55 onwards. Can you imagine why they chose this limit and did not, for example, apply for approval for all adults?
Taina Mattila: The main large trial was performed in patients over 55 years of age, so data for lower age groups were not available to an extent that would have been approvable. There was a lot of discussion in the CHMP, and in the end it was approved by a majority decision. You should also know that the indication is specifically for insomnia characterized by poor sleep quality. So it is not an overall insomnia indication, and it is limited by age.
Katrin: Is there a discussion about lowering the age limit?
Leon Bongers: In general, one can say that the age limit will not shift as long as the company is not asking for it, because a change or an application has to be made by a company. As long as a company does nothing, the age limit will not change. They would have to substantiate why the age limit could be lower, but I think they have no chance of lowering it for normal healthy people.
Taina Mattila: They would be asked to show some new data for it, so I don't think the company is interested in doing studies for that.
Katrin: I recently read that Germany stated that there are not many studies, and also that there are no long-term studies. What can you tell me about that?
Taina Mattila: There are many studies being done, some positive, some negative.
For most kinds of sleep disorders, the efficacy is inconclusive. It is unclear whether the effects are clinically relevant. If, for example, a patient falls asleep 10 minutes faster than normal, does that have an effect on the patient's wellbeing or functioning? What is the threshold for how much faster you have to fall asleep in order to improve your quality of life? What I've seen so far is that for jet lag there is the strongest evidence of efficacy.
Katrin: Did you hear that taking melatonin could also be beneficial for shift workers, and have you heard of other benefits?
Taina Mattila: On shift work, there is not enough evidence for efficacy, based on what I have seen in the literature.
Leon Bongers: There is EFSA, the European Food Safety Authority, which is responsible for food products and supplements. They have decided on the indication of reduction of sleep onset time [the time you are still awake after going to bed]. It seems that it is indeed reduced by melatonin. But so far the MEB has not accepted the reduction of sleep onset time as an indication for a medicinal product, and therefore it cannot be registered for this indication, even though there was a lot of discussion about it in the MEB. Still, it is the most important reason to use it as a supplement.
Interview with a local radio station
Bringing synthetic biology, iGEM and our project to the public eye is a big part of human practices. On August 1st we met with a local radio station, the "Hochschulradio", in Aachen and gave an interview. We had the opportunity to present our project and talk about the importance of melatonin to a wide audience. Furthermore, we talked about iGEM and what it means to us.
Aachen Engineering Award
Informing public figures about iGEM
On September 7, 2018, we met Emmanuelle Marie Charpentier at the Aachen Engineering Award. She impressed and motivated us very much with her work and her character. It was the first time that this prize was awarded not only to a woman, but also to a scientist with a background in biology and chemistry. This gave us additional confidence in the future role of the biosciences and in equality in science. Besides that, we briefly presented our project and iGEM to Klaus Radermacher, Head of Medical Technology at RWTH Aachen University, and to the new Rector, Ulrich Rüdiger.
After a long period of research, we finally met most of the German iGEM teams in June in Marburg.
We not only got to know many inspiring people, but also had the opportunity to discuss our projects with them.
In particular, the sharing of experiences and failures would help us later on in our journey.
Furthermore, various scientists and organizations joined the meetup and shared their experiences and advice with us. We learned how to handle setbacks and how they can improve a scientist's work.
A big thanks to the iGEM team Marburg for organizing and hosting this great event!
Only one month after the meetup in Marburg we went to the European meetup
in Munich, where we had a great time. We had the chance to meet the German teams again, as well as even
more awesome young researchers, in particular the iGEM team of Utrecht, with whom we started a collaboration afterwards.
In addition, we met a bunch of scientists and entrepreneurs who shared their wisdom with us. We listened to talks about current topics in the field of synthetic biology and learned new skills at workshops.
We thank iGEM LMU and TU Munich for the organization of this thrilling weekend!
meeting prospective researchers
"Jugend Forscht" encourages and inspires scientifically talented young people under the age of 21.
As in the iGEM competition, young students develop a project and, throughout the
competition, meet up with entrepreneurs, scientists and other qualified people to improve their idea.
It is Germany's largest competition for promoting young scientists.
In May, a group of participants visited our laboratories and discussed their ideas with us. Additionally, we presented our project and Marco, one of our advisors, gave them a tour through our laboratories.
Seeing young people who have this level of interest in sciences and new approaches to current issues was an inspiring and reassuring experience. We wish them success in the competition!
Teaching the basics of microbiology
To meet the challenges of the future, biological applications are going to become more and more relevant. Thus, it is essential to make this field of work interesting for young people. With a school project, we tried to make the field of synthetic biology more appealing to them.
Since we had been very surprised by the major role of melatonin in the human body and by how little the public knows about it, we took this opportunity to inform our audience.
At the beginning of September, we visited 10th-grade students at the Herzogenrath High School to show them the fundamentals of lab work. We carried out the lessons on three different days.
At the beginning of the first day, we presented the schedule for the upcoming one and a half weeks. After a short introduction about ourselves, we talked about safety in a laboratory, sterile working techniques and different media types for the cultivation of microorganisms. We split the class into groups of four students. First, the students weighed out the different components to prepare YPD medium. We autoclaved the medium and explained the physical principles of autoclaving. In the end, the pupils poured the YPD medium into Petri dishes.
On the second day, the students inoculated the Petri dishes. Every group had four Petri dishes: one plate that was exposed to the air, another that was used to test the effect of ampicillin on the growth of microorganisms, and two more on which they could test the contamination of different surfaces. Furthermore, we explained how antibiotics work. The plates were incubated at 30°C.
Seven days later we evaluated the plates and talked about the different organisms that had grown on the Petri dishes. To get a closer look, we used a light microscope to examine their diverse morphologies. After the students had finished their drawings, we presented our project. The students were very interested in the topic, and we continued discussing synthetic biology and its different fields of research.
German Academic Exchange Service
On 20th September a polish student group visited the RWTH with the help of the DAAD (German Academic Exchange Service). We had the possibility to present our project along with other scientists from the Abbt Schwaneberg Institute. We talked about what iGEM is in general, the challenges and opportunities an iGEM project gives and what our project this year is particularly about. The students were very interested and had many different questions.
March for Science
movement for evidence-based politics and free research
On April 14th, we went to Cologne to attend the "March for Science".
With self-made signs and banners our goal was to draw attention to the importance of science in our society.
Fake-news, alternative-facts and populism raise fear and antipathy against research. To deal with those problems, speeches on science were given. Speakers included science-journalist Ranga Yogeshwar and criminal-biologist Mark Benecke among others.
After this inspiring event, we used the possibility to get to know the iGEM-teams from Duesseldorf and Bielefeld and exchanged ideas and visions about our projects.
Integrated Human Practices
Finding the idea
The first steps towards a certain project idea
In February 2018 we started brainstorming to find a project that could potentially transform people’s lives.
Ranging from a car fueled by hydrogen produced by cyanobacteria, over the decrease of the detection-time of slowly growing pathogenic bacteria we had plenty of ideas. We saw the greatest potential in the simplification of the diagnosis of a melatonin underproduction since the current measurement takes up to six weeks.
One of our team members, Biel Badia Roigé, had already experienced this issue. The stunning findings in the past decades revealed a strong correlation with numerous diseases, so our biosensor could speed up the diagnosis of multiple diseases. The idea of a faster, cheaper and more versatile method of melatonin-measurement also resonated with our PI’s Dr. Wiechert, Dr. Schwaneberg and Dr. Bank. Our first milestone was achieved.
Getting insights from a medical doctor
Prof. Dr. Groezinger is a senior physician in the psychiatric polyclinic.
Some of his research fields are sleep physiology and sleep medicine as well as sleep and affective disorders.
He also is a member of the European Sleep Research Society.
We visited him to learn about the use of melatonin measurement as a diagnostic marker in different diseases. We were shocked to hear that the University Hospital Aachen - being one of the largest clinics in Europe - does not perform any melatonin measurements. Samples for melatonin measurement have to be shipped to a medical laboratory 73 km away, to the Stein Labor in Moenchengladbach.
He explained to us how a melatonin sample is taken: traditionally, saliva samples are used, since there is a linear correlation of melatonin concentration in serum and saliva. The melatonin concentration found in saliva is 30% the concentration found in serum. Patients have to collect the samples themselves at home. At 2 a.m. they have to interrupt their sleep to give a saliva sample into the tubes they were provided with.
Taking just one sample is very inaccurate to deliver a substantiated diagnosis, but the financial restrictions are a great barrier for medical professionals for analyzing multiple samples. As we got to know later from a leading scientist in the field of melatonin research, Dr. Dario Acuña, for a reliable diagnosis, six samples, given with a two hours break in between each sample, are necessary. From Prof. Dr. Groezingers point of view, the high cost of an adequate number of melatonin measurements is the reason for the rare usage of this diagnostic method. Therefore the internal laboratory of the University Hospital Aachen does not gather a sufficient amount of samples to make melatonin measurements cost-efficient - the most widely used assay for melatonin measurement is the Enzyme-Linked Immunosorbent Assay (ELISA), and it is mostly available in 96-well strip plates.
For further infromation on different laboratory methods for melatonin measurement read our article about Dr. Dario Acuña.
Juelich Research Center
presenting and discussing ideas at the institute for bio and geo sciences
After weeks of research on the detection of melatonin in the human body, we came up with three possible mechanisms to use in our biosensor.
- using the MT1 membrane receptor with its G-protein coupled cascade
- using beta-arrestine binding to the activated MT1 receptor
- using the transcription factor RZR
On May 7th, we went to Juelich Research Center to discuss those ideas with experts,
to see which approach is most suitable in our project and to work out what difficulties we would have to face.
After a warm welcome of Michael Osthege, a former iGEM-Team member of RWTH Aachen in 2014 and 2015, and some basic chat about how to organize the team, we went to see Dr. S. Binder, founder of SenseUp, who gave insights into his StartUp. Afterwards, we presented our ideas to Prof. M. Pohl (biocatalysis and biosensors), Jun.-Prof. D. Kohlheyer (Microscale Bioengineering), Dr. Drepper from Institute for Molecular Enzyme Technology IMET (Bacterial Photobiotechnology) and Dr. J. Marienhagen (Synthetic Cell-factories).
With their advice, we figured out that using the cascade would be too time-consuming and unspecific. The fewer components, the better because intermediate steps work as a black-box for us. We would not be able to see if other substances interfered with our receptor in between. Furthermore, if we did not have a signal at all, we would have serious difficulties to localize the steps, where problems could occur. Because of this, it would not be specific enough and evidence that the detected signal came from melatonin would not be given. In addition, we learned that using living cells for quantitative measurement could be tricky. We decided to focus on beta-arrestin binding and on RZR and to abolish the idea of a G-protein coupled cascade.
Using luciferase under the control of the Estrogen Response Element as reporter
During our first months of research we noticed a research group in the Aalto University (Helsinki),
who developed a yeast-cell based assay for the detection of different analytes, i. e. Estrogen, using different nuclear receptors.
We decided, that the system would be suitable for our project, as we also planned to use the specific nuclear receptor RZR (Retinoid-related orphan receptor-beta) for the detection of melatonin.
After contacting the research group, they directly send us their modified strain (BMA64-1A), so that we could use the integrated firefly luciferase gene as a reporter under the control of the Estrogen Response Element (ERE).
Status quo of melatonin measurements
While working on our biological approach using living cells to detect melatonin in saliva samples,
the next logical step was to gather the opinion of the people that are ultimately going to perform the measurements -
Dr. med. Josef van Helden leads the department of endocrinological measurements in the Stein Labor (German for laboratory) in Moenchengladbach and was kind enough to have an interview with us.
Since we were concerned about whether tubes, in which saliva samples are given, need a special coating, we got Dr. Josef van Helden’s opinion in this. In his lab, salivettes are primarily used, considering they are easy to use for the patient (a fiber roll is placed inside the mouth to soak it with saliva, and then it's placed back inside the tube). But the tubes itself does not have a special coating, because melatonin does not react with polyethylene.
For a melatonin measurement serum, saliva or urine can be used. Since saliva delivers reliable measurement results and is most convenient.
Concerning the transport conditions, patients have to make sure, that their saliva sample is transported under 10°C. If saliva samples are saved at a temperature under -20°C, they can be stored for a long period of time.
The most commonly used method for melatonin measurements is Enzyme-Linked Immunosorbent Assay (ELISA). High-Performance Liquid Chromatography (HPCL) and Gas chromatography-mass spectrometry (GC-MS) are two further methods that are used for melatonin measurements, but since they are costly they are not the preferred choice. We got further insides about the advantages and disadvantages of the sensitivity of these methods from Dr. Dario Acuña. ELISA is mostly available in 96-well strip plates. If fewer samples are analyzed than a kit offers, it is inherently linked to a higher cost per sample. Thus, laboratories wait till 96 samples are gathered. In case this takes a long time, the patient has a long waiting time till the results of their measurement are available. Especially in the case of melatonin, this is a relevant point.
We also were interested in whether these laboratories use living cells for their measurements. Some measurements are performed with living cells, so our biological approach could indeed be used in medical laboratories.
Regarding Dr. Josef van Helden, the best solution for a melatonin measurement would be a device that could be placed in the doctor's office. The point of care diagnostic is a growing market, but there is no device yet, that allows to instantaneous measurements of melatonin.
Prof. Dr. Dario Acuña
Talking with the expert in melatonin research
Prof. Dr. Dario Acuña works at the University of Granada in Spain and is the leading scientist in the field of melatonin research.
He was the first researcher who investigated melatonin and its effects in the 1980s. During our research,
we read many of his scientific papers. Therefore, we wanted to interview him.
Due to the interview, we decided to use saliva samples for melatonin measurement. Moreover, it marked the start of our hardware project.
First of all, we discussed current melatonin measurement methods. The most common methods are ELISA, HPLC and UPLC combined with mass spectrometry. Depending on the sample fluid (blood, saliva or urine) each method has its benefits and drawbacks. Whereas ELISA works perfectly for melatonin metabolites in urine, it is not as specific in blood or saliva. There again, HPCL is often used for high levels of melatonin in blood. These high levels, however, only occur rarely. Dr. Acuña stated that mass spectrometry is the most accurate technique but is yet very expensive compared to ELISA and HPLC. Mass spectrometry is not a standard detection method and therefore only few laboratories can use this sensitive technique.
Accordingly, we asked about the best fluid to measure the melatonin level. He explained that saliva would be the best one, as it contains about 30% of the melatonin concentration in blood. Using saliva instead of blood is better for the patients as it is non-invasive. Urine only contains metabolites of melatonin and is not as accurate.
Furthermore, we learned about the process of measuring the melatonin levels. Usually, Dr. Acuna takes six samples of saliva of one patient and uses five assays of ELISA. It takes about one to two days from taking the samples to having the results. The price for one sample measured with mass spectrometry is around 90-100€. With 5-6 samples per patient, a melatonin measurement costs 600-800€ for only one patient.
Additionally, we asked Dr. Acuña about the correlation between melatonin and different diseases like Alzheimer’s and Parkinson’s.
He stated that for example, patients with Alzheimer’s disease have low levels of melatonin.
Melatonin can be used in those cases as a therapeutically drug. Dr. Acuña told us that there are different influences of melatonin on the course of the disease.
In the early stages of the loss of cognitive abilities, a dose of around 50-60 mg melatonin can help to stop the progression of it.
Besides, Dr. Acuña sees a desperate need for a more specific method. Except for mass spectrometry, the current techniques are standard scientific methods but have a lack of sensitivity. He believes that a device which is placed directly in the doctor’s office would redefine melatonin measurement.
Finally, we asked him about the future of melatonin measurement. He stated that society is gaining more awareness of the importance of melatonin. It is entering the drug market and is used more often for patients. Therefore, he demands a better measurement method.
Our talk with Prof. Dr. Acuña had a deep impact on our project. After the interview, we decided to measure the melatonin level in saliva as it is more accurate than urine and non-invasive compared to blood. Furthermore, we decided to engineer a point-of-care-diagnostic to speed up the measurement process. This was the very beginning of our hardware project.
Prof. Dr. Wiechert
Introduction to the SPR method for a cell-free biosensor
After talking to Dr. Acuna, we realized that a cell-free solution would be even better for a point-of-care-diagnostic device.
Therefore, we met with Prof. Dr. Wiechert, who is an expert on hardware solution.
He introduced us to the analysis technique SPR (Surface Plasmon Resonance). During our research, we concluded that LSPR (Localized Surface Plasmon Resonance) is even more suitable for our hardware device. To learn more about this technique, please look at the hardware page.
Prof. Dr. Wiechert explained that only a few companies use LSPR and that their products are expensive (about 200,000€ to 300,000€).
Furthermore, we discussed other approaches like the Molecular Beacon Method. But he advised us not to use it due to its complexity.
Besides, he introduced us to Prof. Bott who helped us in the next step.
Meeting with Prof. Dr. Bott in Juelich
Choosing RZR as a receptor for our hardware device
After getting the SPR idea from Professor Wiechert we visited Professor Michael Bott in the Juelich Reasearch Center, who leads the systemic microbiology there. He told us more about this technology and we discussed with him different melatonin receptors to use. We learned that working with the MT1 receptor should be avoided because it is a membranous protein. So for anchoring it, we would need to create an artificial membrane, which would be a tricky task. Professor Bott suggested using preferably our other melatonin receptor, the transcription factor RZR.
Mr. Scholz and Dr. Merget
In August we started looking for partners to build our gold nanostructure.
We had a lot of luck to talk with experts in this field from the Institute for Semiconductors (IHT)
and the Institute of Integrated Photonics (IPH): Dr. Florian Merget of the IPH and Stefan Scholz of the IHT.
We met multiple times to talk about our concept.
At first, our hardware idea was to build an optic fiber based LSPR System. In this setup, the gold nano structure is placed on top of a small optic fiber and the light is guided from the lamp to the nano structure by the glass fiber. Some light is reflected back by Plasmon Resonance, guided through the fiber and then analyzed in the spectrometer. The advantage is that the optical handling and alignment is really easy. No moving parts and lenses that need to be in focus are needed. That is why we at first preffered this method.
After presenting this idea, we got to talk about how commercial chips like those inside a computer, smartphone or car are produced. Dr. Merget and Mr. Scholz had a lot of insight to this. They explained that in principle it would be possible to structure Au nanoparticles on top of a small 0.14 mm fiber with experimental electron-beam processing (e-beam). But this would prohibit using normal commercial litography systems like UV-litography which are state of the art right now.
All commercial litography systems are optimized to work with thin wafers.
The space inside these machines prohibits placing long fibers in them.
The vast share of the cost of producing nano structures lies in the price of the machines.
If standard semiconductor machines are used like UV-litography,
then the price per chip in mass production can be cut to a couple of dollars.
During the discussion, we changed our setup from placing the Au nano structure on top of the fiber to placing it on standard glass wafers. This greatly reduces the cost of the wafer and markets the wafer as a disposable part, that you change once you want to measure a different molecule.
In another meeting, Mr. Scholz also gave us a tour of their clean room and machines. It is really impressive what kind of effort goes into making those small nano structures. Just the machines for clean non-ionized water are as big as two student dorms.
Hardware Optimization: From Reflection to Transmission
We met with Dr. Mourran, group leader for Thermoplasmonics of Nanoparticles at the DWI Leibniz Institute for Interactive Materials.
His group uses Plasmon Resonance to liquify hydrogels.
He is an expert in the application of Plasmon Resonance and the different setups available.
With his input we changed our approach from reflection to transmission.
The reason we approached him, was to talk about a possible partnership to manufacture the Au nano particles. He was quite intrigued, but could not help us with the manufacturing, as they did not have the necessary machinery. But he could lend us his expertise. Firstly, he taught us a lot about the actual physics, what Plasmon Resonance is and how they use it. With his applied knowledge in using Plasmons for thermal purposes, we learned how all the different parameters influence Plasmon Resonance.
Secondly, during intense discussion we further evolved our setup. Up until that point, we wanted to analyze the reflected light with the spectrometer. This light is scattered by Plasmon Resonance in a 90° angle from the wafer. It is a clear signal, but it is very weak. Furthermore, this means that the incident light beam, that excites the Plasmons, and the reflected light beam are running inverse parallel. Because of this, a beam splitter is needed. As our spectrometer is not extremely sensitive, Dr. Mourran advised that we should consider a transmission setup. We followed his advice as this also simplified the setup. The beam splitter is not needed in this case, and this results in one component less that needs to be perfectly adjusted.
Searching for Fano Resonance through Simulation
Dr. Mouran recommended us to talk to another expert in this field,
who could give us some quantifiable data at which wavelength the Plasmon Resonance occurs.
He put us in contact with Dr. Chigrin, group leader for Theory of complex materials at the DWI Leibniz Institute for
After a few minutes into the presentation of our proposed nanogold structure layout, Dr. Chigrin immediately started talking about how we could develop such a sensor without running simulations first. We were a bit confused as plasmon resonance occurs in nanoparticles always, the resonance wavelength changes depending on size and shape. Then we realized that we were talking about two different phenomena.
Dr. Chigrin thought that we wanted to use the phenomena of Fano resonance in our sensor, because we had a periodic hexagon grid of discs with 160 nm diameter and 400 nm distance from each other. We had never heard about this and were curious. Fano resonance is an interference phenomenon. It occurs if the plasmon resonance wavelength meets the Bragg condition. This could greatly enhance the peak of our signal (figure 1).
The zone in which the Bragg condition is met, is quite small and dependent on a lot of factors. A college of Dr. Chigrin, Sebastian Meyer, did a simulation search for the distances where the bragg condition is met. You can read more about Fano resonance and the simulation results, that Sebastian Meyer did, in our wiki.
Suggestions for further improvements
Taking all the feedback into account, our project does not have to end just yet.
We could engineer some improvements to our technique to make it market-ready.
Before we can sell our product to doctor’s offices, we must make some adjustments to the device itself as well as to the production process. Presumably, we will provide only a few practices with our hardware at first to get feedback from users before launching our product. Afterwards, we will be able to write an user-friendly software for the most pleasant user-experience. It should be as easy to handle as possible to ensure the frequently use of Melasense.
Furthermore, our technique is applicable to more hormones, for example estrogen and cholesterol. We could engineer a multi-hormone measurement method as a point-of-care-diagnostic to not only redefine melatonin measurement, but also the measurement of a bunch of similar hormones.
In the future, the measurement of those hormones could be integrated in the regular health check to raise awareness of the importance of hormone balance.
As our final goal, we want to engineer a bracelet with an implemented chip which could measure the melatonin level of patients overnight multiple times to quantify the results and to take point-of-care-diagnostic to the next level. | <urn:uuid:49688604-29ea-4698-893f-e06c4b349ae4> | CC-MAIN-2022-33 | https://2018.igem.org/Team:Aachen/Human_Practices | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570921.9/warc/CC-MAIN-20220809094531-20220809124531-00296.warc.gz | en | 0.952311 | 7,930 | 2.65625 | 3 |
The “Turkish Contingent” in the Crimean War
– and the career of Captain George Pasley, R.A.
One sometimes sees medal groups to British officers (but rarely to NCOs and lower ranks) who served during the Crimean War in what is variously referred to as the Turkish Contingent, the Anglo-Turkish Contingent, the British-Turkish Contingent, the Ottoman Contingent or the Anglo-Ottoman Contingent.
During the course of Britain’s war against Russia, 1854-56, the Royal Navy dominated the various naval campaigns which were waged against Russia around the globe, with French, Ottoman and Sardinian naval forces very much playing a subordinate role. However, the land campaign in the Crimea was a different matter. Here, French and Ottoman forces provided the largest contingents fighting “before Sebastopol” and elsewhere with the smaller British army playing a more subordinate role.
It was widely felt amongst the military and political authorities in the UK that Britain needed to field larger land forces than she actually had available – both to be able to exert more power in the tactical management of the present campaign and a possible later move into southern Russia, and with an eye to the future, to secure a greater say for Britain in any post-war settlement.
So was born, in December 1854, the idea of raising an “Anglo-Turkish Contingent” as a supposedly simple and effective way to increase Britain’s land forces in the Crimea – just as Britain had raised Swiss mercenaries and a German Contingent for service there. This force – which one officer said was simply one of the pawns upon the board of a great campaign – would be raised from a combination of veteran Turkish Regular forces supplied by the Sultan, augmented by extensive conscription and recruitment from within the Ottoman Empire – Croats, Montenegrins, Albanians, Serbs etc. Most of the funding would come from Britain but some would also be provided by the Sultan – with whom the idea was not very popular, since it removed from his control and use some of his existing regiments and placed Ottoman personnel under foreign command. But under the circumstances, he could hardly refuse and the establishment of the Contingent was finalised in January 1855. Some elements of French opinion were less than enthusiastic: The Turkish Contingent is nothing else, according to [the influential newspaper] Le Nord, than a body of Spahis [Sepoys], by whose aid we intend make ourselves masters of Turkey.
One British officer who served in the Contingent, Luther Vaughan of the 5th Punjab Infantry, commented:
The idea of taking a large body of Turkish troops into our pay, and officering them with Englishmen, was excellent. It had begun to be recognized in England that the English and French commanders at Sebastopol had made a mistake, in that, instead of making the most of the fine fighting qualities of the Turkish soldiers (which they had well displayed in the Danube campaign), they had condemned those of them who were with the army of the Crimea to serve as beasts of burden to the rest of the army… the sturdy Turkish infantry, than which I venture to think there is not a better in Europe, had not been thought worthy to fight in their own quarrel alongside of French and English battalions.
The Contingent would be largely commanded by British officers, most noticeably those at higher levels but any “above the rank of Sergeant”. In May 1855, it was placed under Major General Robert Hussey Vivian (1802-1887), a respected Indian army officer who had seen extensive service in India and had been Adjutant General of the Madras Army. The Chief of the Staff was an officer of some mark — Major-General J. Michel, who had served with credit at the head of his regiment (the 6th Warwickshire) in our early South African wars. The duties of Chief of the Staff were not understood in those days as they are now and my recollection of General Michel is that he was rather in the position of a simple second in command to General Vivian. [Vaughan]
The Contingent infantry was divided into two Divisions – Major-General (afterwards Sir Arthur) Cuninghame, commanded the first Infantry division, while the second was commanded by General Neil, of the Madras Army, who went on to distinguish himself in the early days of the Indian Mutiny and was killed in Havelock’s attempt to relieve Lucknow in 1857. No more competent officer than … Colonel Edward Wetherall, could have been found for head of the Quarter-Master-General’s department; and indeed all the higher officers of the Contingent had been carefully selected. Of the English officers generally it may be said, that they threw themselves heartily into their work.
The majority of the officers had been chosen by March 1855. Most were drawn from volunteers in the East India Company’s army as it was widely (and wrongly) believed that, since they commanded Sepoys, they would be more effective in the management of “foreign” troops: such officers [on leave] in the United Kingdom of the Company’s service as are willing to act in command of the Turkish Contingent; the applications have been very numerous—something about 300. The number required for the present – about 120 – have been selected from the most intelligent [!!] A number of these, along with General Vivian, were presented to the Queen at a Levee on 14th March 1855 just before their departure for the East.
No fewer than eighty-eight British NCOs, all in the rank of Sergt. Major, were employed, largely as drill and musketry instructors, as were some civilian workers such as clerks, storekeepers, wheelwrights, smiths, farriers and medical staff.
Comparatively few officers were drawn from specifically British units. Of course, since the vast majority of the selected officers did not speak the language of their men, their military origin was not really very important and, as it turned out, command via interpreter was neither easy or efficient – and became next to impossible if the interpreters deserted!
It was evident that the men placed under the command of General Vivian were old soldiers and that, if our Englishmen could but understand one word they said, the most perfect friendship and cordiality would exist between them. On parade, however, it was obvious from the first that considerable obstacles must be got over. The Interpreter could not give the British word of command its equivalent signification in Turkish and when the order “left shoulder forward” was to be performed, the result did not answer expectation …. All these inconveniences may, however, be overcome, but it will require great care and no little prudence to obtain a satisfactory result. Much, no doubt, may be expected from regular pay, food and clothing, attentions to which the regular Turkish soldiers are by no means accustomed. The siege of Silistria [Danubian Provinces] has shown how well the Ottoman will fight when led by British officers. These soldiers will fight when they are brought face to face with the enemy. But the real difficulty lies not there, but in bringing Turks to obedience of daily orders issued by men whom they have not been accustomed to reverence.
The exact number of troops which eventually formed the Contingent (with artillery and cavalry – including the Osmanli artillery and the already-established “Osmanli Irregular Cavalry” or “Beatson’s Horse” and the “Polish Cossacks”) is difficult to establish, but 20,000 – 25,000 men is often suggested and 20,000 was quoted at the time of formation; one report suggests that 30,000 men, including its artillery and cavalry arms, were actually deployed on active service late in 1855.
The whole project was difficult from the start. Many of the Regular Turkish troops ordered into the Contingent (15,000 men) were by no means happy with their transfer, despite more regular pay and rations, and coming under the control of foreign officers and they frequently proved to be reluctant and recalcitrant. Discipline and commitment were initially major problems, not at all helped by the fact that many of the conscripts and reservists drawn from various parts of the Ottoman Empire (about 5,000) were at unhappy to be in service in the first place and placed under foreign command. But as one British officer remarked:
The [Turkish troops] had not the smart appearance of which Western nations think so much, but their physique was excellent, being for the most part that of healthy agricultural peasants. We found them simple, docile, and patient under hardship in a high degree. Our excellent and plentiful rations delighted them, and they appreciated the regular receipt of their pay without any of the petty pilferings from which they had suffered at the hands of their Turkish paymasters. With these advantages it is not wonderful if they soon fell into habits of respect and obedience to their English officers. The English commandants, on their part, were content, in the best-commanded regiments, to waive a too minute interference with the drill and discipline of their regiments, which, as said above, they left as far as possible in the hands of the ” Bimbashees,” or Turkish commandants, and the inferior Turkish officers.
Having set up the Contingent at the behest of the British, the Ottoman authorities and the British high command seem to have had no clear idea of what to do with it and one gains the impression that their use was proposed from place to place in some sort of effort to find them a role. It was at first suggested that they should be deployed en masse to the Danubian provinces (Moldavia and Wallachia) where the Turks were fighting a major campaign against the Russians, or at Eupatoria, north of Sebastopol, as part of the Ottoman force which was eventually in garrison there, or even sent to the Turkish eastern front at Kars, but nothing came of these understandable suggestions. Then there were proposals to send part of the Contingent to the Gallipoli peninsula on garrison duty and some elements of the Contingent, largely the Osmanli irregular cavalry, were indeed sent to Cannakale – and then back.
It was even suggested by The Times, reflecting the well-known antagonism between the British and the Indian service, that “the truth is, as I have good grounds for believing, that there are persons in high places at Headquarters who do all in their power to deprive the Contingent of opportunities of distinction because it is commanded by an Indian officer; also, perhaps in a less degree, because numerous Indian officers hold appointments in it – some of them on the Staff.
For most of the time, the Contingent lingered more or less unemployed in its main base near Büyükdere, on the European side of the Bosphorus.
At one side of a pleasant town overlooking the Black Sea, at no great distance from the mouth of the Bosphorus, the camp of the Turkish Contingent, under General Vivian’s command, has been pitched. Rows of white and glistening tents extend in sharp and dazzling lines in the midst of a green landscape. Stray patches of barley grow scantily upon a somewhat arid and parched ground. A Turkish village, with its little minaret darting out of a grove of trees, nestles in a quiet nook and pretty woods afford shelter to the horses of officers and sutlers, after they have braved the noontide heat and the fierce rays of a perpendicular sun. The camp of the Turkish Contingent lies six miles distant from Büyükdere and about fifteen miles distant from Constantinople.... General Vivian’s quarters there are beautifully situated in a palace overlooking Beicos Bay. His presence is fenced around by the numerous protections usual amongst Eastern nations. There were double sentries everywhere, much clattering of flintlocks as I went in and curiosity insatiable apparently, since it seems not to have been met by our eighteen months’ occupation. [Illus. Lond. News 7.7.55]
Not until September 1855 was the Contingent transferred to Varna on the Black Sea and from there at last went on active service, with a role found for it to play. Its stated destination was the important advanced base at Eupatoria, north of Sebastopol, where a large Turkish force under Omar Pasha had been in garrison for some time, part of which was now to be relieved and replaced by the Contingent. However, its destination was quickly altered and it was dispatched finally, as late as October 1855, to the Kertchine Peninsula.
In May, a major Anglo-French-Turkish naval and military expedition had set out eastwards to seize the peninsula and straits of Kertch and its nearby towns, and especially to control the ports of Kertch and Yenikale. An Anglo-French fleet, which sailed on 22nd May, carried 3,800 British troops with artillery, 7,000 French infantry and artillery and 5,000 Ottoman troops. The occupation would be a preliminary to major naval operations in the Sea of Azoff, which ultimately turned out to be a staggering success. This powerful force, whose British element included the 42nd Highlanders, the 71st Highland Light Infantry, the 79th Highlanders, the 93rd Highlanders, some of the 8th Hussars, with artillery and engineer units, very quickly occupied the peninsula and the main ports (24th May) with no Russian resistance – the sizeable Russian forces which had garrisoned the region immediately retired, destroying stores, weapons and fortifications as they left.
There followed a disgraceful period of looting and pillaging all along the peninsula, but being particularly noted in what had been the beautiful and ancient city of Kertch. To be fair, even the inhabitants (or those who had not fled) said that it was nearby “Tartars” from outside the city who flocked into it when the Russian garrison left, who did most of the damage, but there is equally no doubt that French, Ottoman and British personnel were seriously involved in what was often not simply theft but wanton vandalism and attacks on the inhabitants, male and female.
After the successful occupation of the ports and the beginning of naval operations in the Sea of Azoff, it was understandably assumed that the British regiments on duty there were better employed back in the Crimea and the idea arose of replacing the main infantry garrison in the area and in the towns with the Turkish Contingent – a unified force which could be deployed along the whole of the peninsula and in the ports. It was duly agreed that the Turkish Contingent would be deployed to the Kertch peninsula and remain in garrison there, its headquarters initially at Yenikale but soon transferred to Kertch.
[Kertch], quite outside the real theatre of the war, had recently fallen, after a nominal resistance, before a mixed English and French force sent from Sebastopol. It was most unlikely that, whilst so fully occupied at Sebastopol, the Russians would make any great effort to retake the place, which at the moment had no military value. It really appeared as if f the English and French commanders could not make up their minds to trust soldiers of whose fighting qualities such proof had been given the year before on the Danube, and had sent the Contingent where it would have the least possible chance of being useful.
At any rate, it remained in that area – patrolling, doing reconnaissance work (e.g. towards Arabat), constructing defences, road blocks and roadways. There was little to do of an active service kind, apart from a few skirmishes with more adventurous Russian cavalry forces at a distance, in which the few British Hussars were involved: whether the Russians ever seriously thought of attacking us at Kertch may be doubted; but taking advantage of our deficiency in Cavalry, they constantly menaced us with attack and on several occasions caused us to stand to our arms for some hours at a time expecting it.
The Contingent remained in this role until the conclusion of the war in in the Spring of 1856: In the winter of 1855 General Vivian was summoned to Sebastopol to confer with the allied Generals and our hopes of more active service rose to fever height; but to the best of my recollection the arrangements then discussed had reference to a possible campaign in the country to the north of Sebastopol, which never came off, but in which, if the war had gone on, the Contingent was to have played an important part.
After the fall of Sebastopol, further operations into southern Russia – north from Eupatoria or along the Dnieper after the fall of Fort Kinburn – were contemplated. But in the end, the Russians chose to come to terms. Following the peace treaty at Paris, in March 1856, the Contingent was quickly sent back to Constantinople to be disbanded and dispersed. Vaughan remembered that the peace brought much disappointment to us of the Contingent, for it finally destroyed the hope of serving in the thick of the war which had led us to take service with the Osmanli troops.
Somewhat strangely, the influential and powerful British Ambassador at Constantinople, Lord Stratford de Redcliffe, actually suggested that the Contingent (remaining under British officers) should be retained as a permanent part of the Ottoman Army after the war, but the Sultan understandably refused this suggestion.
Unless they served away from the Contingent at some time, none of the officers received the British Crimea Medal – they were only awarded the Turkish Crimea Medal; as late as August 1859, it was reported that Permission has been granted by Her Majesty to the officers and men of the Turkish Contingent wear the Crimean medal conferred the Sultan.
Quite a number of officers of the Contingent also received Ottoman awards like the Medjidieh (of which over 1,000 were conferred on British forces in general); the Illustrated London News on 12th September 1857 reported of the Medjidieh that: one hundred officers, besides the medical officers, of the Turkish Contingent, are also recommended by the Secretary of War for decorations, namely: one of the first class, five of the second, ten of the third, sixty-nine of the fourth [this report does not mention any 5th Class awards to the Contingent, though 10 awards of this class were given to officers serving on its medical staff ].
Captain George Pasley R. A.
One of the officers chosen to work with the Turkish Contingent was Captain George Malcolm Pasley, Royal Artillery. Born in 1832, he was the son of a famous and highly-regarded Royal Engineer, Charles William Pasley (later General Sir, KCB, FRS, 1780-861). In 1839, at the age of only seven and a half, George Pasley fired the first submarine charge to be detonated by electricity, for the demolition of the remains of the Royal George which had foundered off Spithead in 1782 and whose substantial wreck formed a dangerous obstacle. When the charges were set, the officer who was tasked with firing them, Lieut. Symonds, in a very generous act turned to young George and asked if he’d like to do it. Since no seven year old boy known to science would turn down the chance to blow up a battleship, George stepped forward and duly threw the switch, creating a satisfyingly loud and effective explosion before a large assembled audience of dignitaries and onlookers.
Turning to a military career in his own right, George Pasley might have gone to Addiscombe College to train for the East India Company’s Service – his father was deeply involved with its administration and teaching there – but instead in 1847 he became a Gentleman Cadet in the Royal Artillery and gained a commission in 1849. His first posting was to the Cape Colony and here he became involved in the Eighth Frontier War and the campaign against Sandili. Pasley received the 1853 medal for South Africa, though he is not found on the roll – not uncommon with that award. His medal exists (and is rare as an award to an Artillery officer for that campaign, of which less than a dozen were issued), perfectly correctly named, and his presence in the campaign is easily attested – various dispatches and accounts mention his presence, at one stage in command of a party of Sappers and Miners.
It looks very much as if Pasley struck up a good professional relationship with Lt. Col. John Michel (1804-86), who served in the frontier war in command of the 6th (Warwickshire) Regt. and also as a brigadier “serving as commander of independent columns”. Pasley is referred to in a number of Michel’s dispatches and at one time was clearly serving as his A.D.C.
This contact seems to be what led to Pasley being selected as A.D.C. to Col. Michel when he was as appointed deputy to General Vivian in the Turkish Contingent in 1855. Pasley served through the campaign with the Contingent at Kertch – indeed, he left some fine watercolours of the region and in due course he received the Turkish Crimea medal and was awarded the Order of the Medjidieh in the 4th Class.
Pasley next served with 6/14 RA during the Indian Mutiny, in the Central India campaign with the Saugor Field Force under Genl. Whitlock. His former associate, General Michel, also served as a column commander in the Central India Field Force but there is no direct evidence that the two officers worked together in India as they had in South Africa and the Crimea. His heavy battery, with 4 x 18 pdrs, 2 x 8″ guns drawn by elephants and 2 howitzers, had arrived in Madras in November 1857 and saw action in the skirmish at Nygoan, the capture of the Penghali Pass and in the more well-known action in the storming of the heights of Punwaree. He was “Mentioned” by General Whitlock for Punwari [Calcutta Gazette 9.2.59].
Pasley’s career then took a sudden plunge into tragedy. In 1860, a medical board in India declared that Pasley was no longer fit for service and he was sent home on leave and soon onto Half Pay. In fact, it turned out that Pasley had “lost his mind” – in the parlance of the time – falling into periods of what was called “religious monomania” and severe depression. His “illness” was widely stated to be caused by severe sunstroke experienced in India, but more dubious diseases may have been at work.
He ended up living with briefly his retired father, General Sir Charles Pasley, in Richmond, where he proved to be a trial to his aged father, who in fact died shortly afterwards. Sir Charles wrote to a family member that George was quite out of his mind and was determined to become a priest, constantly pestering local clergymen. In the end, he was sent into the expensive private mental asylum at Ticehurst in Sussex (which is still operating as an institution) which at that time was something of a “five star hotel” for members of the wealthy upper classes who suffered mental problems and offered humane treatment and professional attention, plenty of facilities and extensive grounds for exercise and visits.
Extensive case notes survive for Pasley in the records of Ticehurst and they show initially the sad decline of an active and intelligent mind into withdrawal and depression – though the religious impulses seem to have gone. Gradually, however, Pasley began to get better, took up watercolour painting and languages and wrote to his friends and family.
The saddest fact of all is that though by September 1863 Pasley was stated by his doctors to have regained his mental faculties and was actually ready for release, he began to fail physically. His last letter expresses the hope that he would soon be out and about and able to visit his friends in person but instead, he contracted severe bronchitis and died at Ticehurst on 27th September 1863. He was buried on 1st October 1863 in All Souls, Kensal Green, where his father and several members of the family lie. | <urn:uuid:9634cf0e-fcfa-4d74-9923-f427dbc3cb22> | CC-MAIN-2022-33 | https://www.dcmmedals.co.uk/the-turkish-contingent-in-the-crimean-war-the-career-of-george-pasley-r-a/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571993.68/warc/CC-MAIN-20220814022847-20220814052847-00697.warc.gz | en | 0.984749 | 5,127 | 2.609375 | 3 |
Feelings of anxiety and worry are normal during significant transitions. It's expected that children and teenagers feel nervous before the first day of school. Some children, however, experience more intense fear and subsequently tantrum, refuse, and lash out prior to the start of school. If this is occurring and lasting into the first few weeks of the school year, it may be time to seek expert help.
Below are some helpful reminders for your anxious kiddo leading up to the first day/week of school:
1. Avoidance is never the answer. Although the fear may be crippling, avoiding one's fear and anxiety often leads to more severe and long-term problems. If your child is refusing, do what you can to get them to attend one class. If that seems impossible, can they make it to the counselor's office? It's important to identify small and realistic goals each step of the way.
2. Do not provide too much reassurance to the worry. Reassurance to worry is like gasoline on a fire: it provides short-term relief but can rob your child of confidence in their own decision making. One question or concern warrants assurance; being asked the same question repeatedly is reassurance seeking. Instead, help your child identify their concern and attempt to generate solutions to their problem. If that seems impossible, the concern is likely unrealistic, and this process can help them acknowledge it.
3. Your child's behavior is predictable. If they were an anxious child last year, and the year before, and the year before that, and they have not had specialized treatment, please do not expect that they have outgrown their distress. It's wishful thinking that causes chaos when they predictably melt down during the days leading up to school. It's easier to avoid talking to your child about their worry; they are enjoying summer and you don't want to ruin it. Still, help them prepare for the first day by acknowledging their fears and putting them into words. Talk about preparing for predictable feelings of anxiety and ways to manage them.
4. In the spirit of preparation, see if there are opportunities for your child to practice being at school during the summer. This could be during new student orientation, sports try-outs, or even teacher in-services. Try to spend time at the unfamiliar school and allow your child's brain to recognize there is no danger. It may be helpful to walk the route to each class ahead of time. Anxiety is fueled by uncertainty, so in this case it's helpful to reduce it as much as possible.
5. Fitting in is terrifying. It will be helpful if your child has a friend, acquaintance, or familiar face when they arrive at school. Does the school have a program to help new students? Is there a neighbor kid they've made friends with? Some type of ally can help manage the initial worry of walking into a new place.
6. Manage your own anxiety. Are you actually terrified for your child? Does the uncertainty make you anxious, or do you feel responsible for your child's emotional experience? Your anxiety is contagious and can compound your child's worries. If you recognize it's your own stuff driving your interventions, take a deep breath and stop. You are not actually responsible for your child's emotional experience. You can walk side by side with them, but you can't own their stuff.
Did you know that 1 in 100 adults likely have OCD? And up to 1 in 200 children? That's half a million children in the US alone. OCD can be a debilitating disorder, but there is treatment that can help. Unfortunately, it can take up to 14–17 years from the first onset of symptoms for people to get access to effective treatment, due to obstacles such as stigma and a lack of awareness about mental health, and OCD in particular.
OCD Awareness Week is an international effort to raise awareness and understanding about obsessive compulsive disorder and related disorders, with the goal of helping more people to get timely access to appropriate and effective treatment. Launched in 2009 by the IOCDF, OCD Awareness Week is now celebrated by a number of organizations across the US and around the world, with events such as OCD screening days, lectures, conferences, fundraisers, online Q&As, and more.
OCD Awareness Week takes place during the second full week of October each year, and this year it’s October 9–15, 2016.
#OCDweek YouTube Challenge
Each year, the IOCDF hosts a creative contest as part of OCD Awareness Week, inviting members of the OCD Community to help spread awareness and understanding through storytelling or art. In 2014, the IOCDF began hosting a video contest with the same theme. It has been a great success, and so they are doing it again for 2016! Check out last year’s finalists here for inspiration.
Click here to learn more.
Join the IOCDF's #OCDchat Series
This daily chat series is back for #OCDweek featuring different topics and guest experts every day. The chats will take place Monday–Friday at 1pm ET. A full chat schedule, list of guest experts, and more information on how to participate in the #OCDchat series will be available soon.
Promote OCD Awareness Week October 9–15, 2016, on Social Media
Facebook, Twitter, Instagram, LinkedIn, and other social media networks are a great way to spread awareness about OCD and related disorders. By telling your story to your friends and family, you can help dispel myths about mental health disorders, eliminate stigma, and most importantly, raise awareness about OCD symptoms and available treatments. During OCD Awareness Week, there are a number of ways you can get involved online:
For OCD Awareness Week, I donate my status in support of anyone who has ever battled this disorder. May they find treatment, comfort, and hope. Together we can end the stigma around mental illness. Please copy and paste this as your status to promote the International OCD Foundation’s OCD Awareness Week efforts. Learn more at www.iocdf.org. #OCDweek
After adjusting to the potent smell of fruity air freshener and the classical music, I spot the sign for the elevators. I've made it to the hotel. Fortunately, I don't have to push the button for "up" or even wait: there is a door open. Standing in front of the elevator door is a mother with her child. The child is frozen as she gazes into the elevator and watches her mother walk in first, uttering reassurances of safety to the young girl.
I recognize the fear immediately.
I hold the door for the young girl and reassure her we are not in any hurry. "Take your time," I say. And, "Try to move one foot in front of the other."
“Feel better?” I ask. “Yes,” she states with a smile.
Her mother gives me a look of gratitude and states, “I guess we are in the right place for this kind of stuff.”
The mother, the child, two other strangers, and I share the elevator to the floor designated for registration for the International OCD Foundation (IOCDF) Conference. The IOCDF conference is an annual conference focused on the treatment and research of OCD and the support of individuals who struggle with it. This conference is unlike any other in that it welcomes world-renowned OCD researchers, expert clinicians, clinicians in training, sufferers of OCD, and their loved ones. Together, we all share in support and learning. It's remarkable.
This year’s conference was in Chicago, Illinois. It hosted 1,600 attendees and provided four days of learning. Several members of NW Anxiety Institute, LLC attended as learners and supporters, and we even shared an exhibit. We joined daily discussions and connected with numerous families who shared with us their struggle of managing OCD in their lives. We met young adults who were excited to share their recovery and offer support. We admired others who attended as newly diagnosed individuals.
The conference provided many “nuggets” of education and inspiration, but the keynote speaker, David Adam, stole the show. Adam, the author of The Man Who Couldn’t Stop, shared his experience of living with OCD. Adam is a journalist in the UK who has written scientific pieces for The Guardian. He eloquently yet realistically described the torture OCD instills in its sufferers while connecting deeply with the audience.
Overall, the conference highlighted the need for well-trained therapists and reminded us that, not long ago in our history, OCD was grossly misunderstood and poorly treated. Thanks to the tireless research and collaboration of scientists and therapists over the years, we now know OCD is effectively treated with exposure and response prevention (ERP) and medication. And now the IOCDF provides a vast community to support one another.
We encourage anyone affected by OCD to attend this annual national conference. If you are interested in learning more about IOCDF or attending next year's conference, click here.
“How was your session?”
“Fine, I guess.”
“What did you do?”
“I don’t know, talked I guess.”
Is your child or teenager attending therapy? Can you tell if it’s helpful? Frequently, parents feel uncertain about what occurs within the therapy office. They are likely as concerned as they are curious. That concern can become a source of frustration, particularly when it feels as though they are paying for someone to “chat” with their child. When progress isn’t obvious and the child continues to suffer, concern may graduate to worry.
The process of therapy is dynamic, individualized, and unique. Therapists are trained in techniques to elicit emotion, identify ambivalence, challenge distorted thinking, and, most importantly, provide empathy for one’s experience. Therapy is not advice-giving, but it can be problem-solving. It’s not a panacea for your child’s problems, and it can take significant time. Progress at times can be quantified, but not always. Similar to life, therapy is uncertain, and this can be incredibly challenging for parents.
The aforementioned concern and curiosity can lead parents to inquire about their child’s therapy session. Naturally, this occurs within two minutes of the car ride home. Parents ask, “How was it? What did you learn?” Unfortunately, these questions are often met with, “It was good. We chatted.” Neither answer provides more insight or helps the parent identify whether therapy is helpful.
There may be a huge desire for a child to state, “Mom, Dad, in therapy we discussed problems I’m having with expressing myself. It seems my therapist believes these are related to areas of vulnerability likely caused by episodes of shame and guilt during my childhood and adolescence. Furthermore, we explored my support network and the patterns I follow when making friends. We identified the difficulties I have with attention and how I often isolate when I feel stupid.”
Although, as parents, we would so appreciate a response such as this, we know it is unlikely and unrealistic. In fact, expecting a child to express in a cohesive and articulate manner what they discussed in therapy is unattainable.
Of course, some of this depends on the reason your child is in therapy. With many of the clients I work with, it’s pretty clear what we did or discussed during the therapy hour. When working with teenage clients to overcome OCD, panic, or social phobia, we engage in specific exposure activities to challenge their fears. This is concrete and objective, and parents often see rather quick results. Sprinkled among these sessions, however, is “chatting” to build our relationship and develop insight.
Maybe the best response to your child climbing in the car after therapy is, “I’m glad you did that today. Good job.” And if you’re curious whether therapy is helpful, ask, “Do you want to come back next week?” If the answer is yes, they likely find value in it.
Talking with the Therapist
Therapists are not off the hook, however, in providing rationale for what they do.
I would ask the therapist either following the intake appointment or a few sessions in, what the treatment plan is?
I would inquire into what diagnosis the therapist believes best describes your child’s symptoms. “Depression and anxiety” are not clinical diagnoses and provide little understanding into how your child is suffering. Ask the therapist to be more specific.
Inquire how the therapist intends to track progress? For some symptoms, it is helpful to use specific, validated questionnaires that help with rating symptoms and allow one to track progress. This helps your child and you see weekly change (in one direction or another). Other ways can be specific behavioral changes such as improve sleep, school attendance, decrease isolation, etc. Sometimes, it is rather difficult to measure insight, the development of self-esteem, and ability to communicate. This is fine, but your therapist should be able to explain the rationale behind this.
What are the goals for therapy? Again, these can be very specific, but often the goal is to establish a strong relationship, provide support, and offer a safe-place for your child to express herself/himself. There aren’t “incorrect” goals for therapy, but there should be thoughtfulness on what your child is working on.
Lastly, request that the therapist provide a parent-session to educate you on your child’s symptoms, get recommendations on how to be helpful, or at minimum provide yourself with reassurance that someone understands what is going on.
Spring is here. The days are getting longer, flowers are blooming, and nature is waking from months of hibernation. The air is still crisp, though we have enjoyed a few unusually hot days in the Pacific Northwest thus far, and mornings are cool.
With spring break come and gone, parents are planning their child’s summer adventures. For many, the American tradition of summer camp is on the books. Unfortunately, for many others, camp is out of the question. These parents are dealing with one of the millions of American children who are plagued with anxiety, intense shyness, excessive worry, obsessive thoughts, panic attacks, and homesickness. For these families, convincing their child to spend a few nights away from home is almost impossible.
The benefits of attending a summer camp have been shared by Americans for generations. We have likely experienced this, or know someone who has returned to the same camp each year, and look forward to seeing their camp friends, participating in camp activities and “unplugging” from daily stress. We know this to be true anecdotally, and the mere amount of camps available is evidence to their growing popularity and enjoyment. The question is, why are they so great, and will the aforementioned anxious kids benefit too?
Researchers are interested in this topic as well and have conducted decades of studies showing evidence that summer camps are effective for building relationships, increasing self-esteem, and achieving mastery in outdoor activities. Camps are shown to help individuals who feel “different” feel included and bring together children with common illnesses or traumatic experiences.
When coping isn't enough:
Fight Fear Summer Camp, our camp for youth with OCD and/or other anxiety disorders, strives to build the same cohesiveness for children who may not have otherwise attended camp. The gold standard treatment for Anxiety Disorders is a type of Cognitive Behavioral Therapy (CBT): Exposure Response Prevention, commonly known as exposure therapy.
The premise of exposure therapy is to help individuals experience the feared stimuli (i.e., social event, being away from home, performing in public, or eating with peers) without avoiding. This teaches the brain that no actual or real harm will come and that the body’s alarm system can turn off.
Attending camp will be exposure enough for many children - an opportunity to feel anxious around other children while supported by licensed therapists. They eventually feel safe and calm, all the while retraining the brain. Children naturally cope. We hope to help build tolerance to those yucky feelings.
Other examples of exposure tasks while at the camp:
Why Attend Fight Fear Summer Camp?
Fight Fear Summer Camp will utilize highly effective cognitive-behavioral therapy (CBT) techniques to teach campers self-soothing skills, confidence building techniques, thought challenging tools and provide social opportunities.
Examples of these include:
Less is more:
Camp is about unplugging and being present with oneself. Attending camp is a vacation for the mind. A place where sleep, meals, and activities are scheduled. Campers can relax from the uncertainty of daily worry, breathe fresh air, and learn to tolerate their own thoughts (and themselves). These basic changes to a camper’s day can reduce overall stress and anxiety considerably. When there are no tv’s, iPads, or phones, campers engage with one another, have fun and laugh. Camp is about fun. And when we laugh and smile, we activate mirror neurons (part of the brain designed to identify what we see and copy) in those around us. This is contagious.
Few things build confidence like challenging oneself. The intensity and degree of the challenge is less important than the act of engaging in a novel situation. Fight Fear Summer Camp, led by therapists, provides daily opportunities for campers to challenge themselves and develop a sense of mastery in an activity. This could be socially asserting oneself to navigating a high ropes course, from playing a sport for the first time to asking a peer to sit with them.
Insight is derived from awareness and feedback. Insight is often the precursor to change. At Fight Fear Summer Camp, we provide ongoing feedback to our campers, regular coaching, group therapy, and individual attention to help campers reach specific goals. Staff are trained to help campers recognize changes in mood, physiology, and negative thinking patterns.
We all experience anxiety and distress. For some of us, it’s too intense and can take over our lives. We created Fight Fear Summer Camp for children who deserve the camp experience but, due to anxiety, may have not thought about attending. We also created this camp for experienced campers who want to be around other teens and peers who share similar fears. There are few better therapeutic interventions than feeling like you fit in!
Anxiety is a normative experience that we share with each other and it becomes problematic when it impacts our day-to-day function or we develop anticipatory worry of its reemergence. Anxiety is the brain’s interpretation of perceived threat in the absence of danger. The physiological changes we experience (e.g., increased heart rate, sweating, racing thoughts, numbness in extremities) when running from a bear are never thought of as an anxious response. They may be initiated by fear but are bloody necessary!
This need changes, however, when the same damaging symptoms arise before a public presentation, or networking opportunity. In these circumstances we are not actually in danger, but our brains get stuck in a loop between our physical symptoms and cognitive appraisals. This emotional reasoning, “If I feel bad it must be because there is reason to be” is commonly experienced by individuals who struggle with anxiety disorders. For example, individuals with social anxiety use their body’s physiology as cues for their social success or failure (e.g., “Sweating, blushing, and stomach knots are ‘proof’ I’m screwing this up!”).
Effective therapies (i.e., CBT, Exposure Response Prevention, Mindfulness, & Acceptance and Commitment Therapy) help individuals shift their relationship with their anxiety by challenging distorted thinking and breaking the anxiety brain-body loop through behavioral techniques. Methods to help individuals cope can sometimes be problematic. When utilizing coping skills, individuals continue to perceive their anxiety symptoms as dangerous and run the risk of temporary relief.
Although immediate symptom reduction can be seductive, it produces continued intolerance for distress - the major contributor to anxiety disorders. Treatment should focus on increasing a person’s tolerance for distressing feeling and separating feelings of anxiety from themselves. The feelings of panic will never be pleasurable, but tips to make them manageable and ultimately less significant do exist.
5 Tips for Shifting Your Relationship with Anxiety
Written by Kevin Ashworth, MA, LPC. Kevin is a licensed therapist and co-founder of NW Anxiety Institute in Portland, Oregon. He specializes in CBT and ERP treatments of anxiety disorders in children and adults.
OCD loves your smartphone.
If an intrusive thought is the seasoned cedar logs on a beautiful camp fire then a compulsion is a plumbed line of gasoline. This combination makes for a raging fire both dependent on each other to burn. No more wood, no more fire. Lots of wood and no additional fuel, fire eventually burns out. This is common analogy used when describing OCD, and part of one’s treatment is to reduce compulsions, essentially choking the gasoline line, and suffocating the fire. OCD is cunning, tricky, and apparently hip. Similar to many baby-boomers, OCD took a little time getting use to the mobile handheld device, understanding its potential, and being efficient in its operation. It’s not only cunning but relentless. Time is always on its side. It has become proficient in the use of search engines, and now the ability to take photos and video. OCD can now force its host to search anything, at any moment, about any concern - turning technology into more of a hinderance than a help.
Has asbestos been safely removed from this current building?
Could this sensation in my leg be a blood clot?
Could I really be a pedophile?
Did I turn off the stove, and unplug the lamps?
Could I have I hit someone while driving?
Video while walking around my car.
The trick OCD plays is convincing its host that the use of the device will ease the distress. Don’t want to worry? Take a quick pic and refer back later. Need evidence you didn’t post that rude comment? Take a quick screenshot? This trap is equivalent to hooking up another gasoline line to the fire. People get stuck reviewing their pictures and videos, and become unable to delete them filling up their devices and limiting their ability to take pictures of loved ones (and their lunch!) instead. Additionally, others fear that their use of their own device will become dangerous and actually have their phones turned off for fear they may accidentally post something inappropriate, or impulsively call and yell at someone. These devices are intended to make our lives easier, not worse.
If you feel comfortable sharing, what are some ways you've used technology to feed compulsions? How has it affected the way you use your devices?
But It's Mine. I Like It. Why Would You Try To Take It?
They come in different colors, but mine is white. It’s shiny. Not the kind of shine that catches your eye; more like it’s elegant, even sexy. I take it everywhere I go. I even take it with me from room to room just in case it needs me. Yes, it has practical utility. It’s a device for communicating, for taking pictures, checking email, seeing in the dark, playing music, or even making sure there is nothing stuck in my teeth. But it’s more than that. It’s comforting. It’s hard to explain but it feels good to know it’s near. It alerts me with a gentle buzz when it needs me. I find myself gently touching my back pocket just to make sure it’s there. Sometimes people touch it without permission. This bothers me a lot. It’s mine. Do not touch. My loved ones worry I care about it too much. “Put it down” they say, “It’s not going anywhere.” They don’t get it.
We are connected to our belongings. More than we’d like to admit. We all have a similar relationship to a possession, a memento, souvenir, or sentimental item as was describe above. And, it’s easy to understand this relationship when something has a high monetary value. The item described above is an iPhone 6 plus. It cost me around $600. You’d consider me mad if I were to throw it away, and likewise I’d share the same sentiment if you were to ask me to discard it.
But what if the item seemingly had no monetary value? What if it appeared to be junk?
You’d further consider me mad if I chose to keep it, correct? The problem with having too many items has nothing to do with the value of each item or the utility of these belongings. The saying, “One man’s trash is another man’s treasure,” refers to judgement of taste versus a literal interpretation; however, the latter can also be true.
Instead of infusing this judgement, when working with individuals who struggle with Hoarding Disorder, the focus should be on how their belongings (not trash or junk) impacts their life. Can they find all of their valued possessions? Can they enjoy them? Are they displayed? Can they use their homes and spaces for their intended purposes? If you're eating in the garage because the kitchen is cramped, maybe you’d like some help in organizing?
Hoarding Disorder is a serious psychological condition that causes suffering and has huge emotional and financial cost. Effective treatments are available that don't necessarily require giving up your beloved possessions, rather changing your relationship with them.
NW Anxiety Institute is extremely thrilled to be opening an Intensive Outpatient Program (IOP) for children and adolescents with Obsessive-Compulsive Disorder (OCD) and other anxiety disorders.
NW Anxiety Institute’s program is the first of its kind in the Pacific NW and will offer daily intensive therapy for kids struggling with debilitating anxiety because of their OCD. At NW Anxiety, we specialize in Exposure Response Prevention. Exposure Response Prevention (ERP) is well established as the treatment of choice for OCD, Panic Disorder, Social Anxiety, and other anxiety disorders, and many individuals improve greatly with traditional weekly therapy which is why it will be incorporated into the IOP.
The Intensive Outpatient Program is designed for children and adolescents whose anxiety is debilitating and significantly impairing their quality of life. Many of the adolescent patients at NWAI are unable to attend school regularly due to their symptoms and require intensive treatment.
The “intensive” in IOP refers to the frequency of therapies in a relatively short time frame. Patients will continue to systematically challenge their fears in a collaborative and supportive way, however, much faster than can be done in weekly therapy. Furthermore, the frequency allows us to maximize on therapy success and gains before a client can return to their obsessions or compulsions in the six days between appointments.
When a patient is enrolled in our IOP treatment becomes their focus. They are not distracted by friends, school, and family dynamics, and can put forth their energy to overcome their fears and make considerable behavioral change. Successful exposure therapy requires four principles:
These principles are necessary for an individual to challenge an irrational thought, learn and maintain new behaviors, and evoke emotional processing. Essentially, the more (frequency) a patient can engage in a difficult (intensity) exposure task, while maintaining that task until their anxiety diminishes (duration) the more successful the exercise will be. Ah yes, they also have to minimize any delay while initiating the task (latency).
The ideal patient for our IOP is a child/adolescent between the age of 10-18 who is suffering greatly with OCD or another anxiety disorder. This individual believes they would benefit from a three-week structured program to aggressively target their symptoms, and regain their independence.
Interested in registering your child? | <urn:uuid:6342f41c-c3a5-40dc-8cb7-cbe98e1522de> | CC-MAIN-2022-33 | https://www.nwanxiety.com/blog | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571502.25/warc/CC-MAIN-20220811194507-20220811224507-00098.warc.gz | en | 0.959307 | 5,871 | 2.84375 | 3 |
The Temple of Jupiter Optimus Maximus, also known as the Temple of Jupiter Capitolinus (Latin: Aedes Iovis Optimi Maximi Capitolini; Italian: Tempio di Giove Ottimo Massimo; lit. 'Temple of Jupiter, the Best and Greatest') was the most important temple in Ancient Rome, located on the Capitoline Hill. It was surrounded by the Area Capitolina, a precinct where numerous shrines, altars, statues and victory trophies were displayed.
The first building was the oldest large temple in Rome, and, like many temples in central Italy, shared features with Etruscan architecture. It was traditionally dedicated in 509 BC, and in 83 BC was destroyed by fire, and a replacement in Greek style completed in 69 BC (there were to be two more fires and new buildings). For the first temple sources report Etruscan specialists being brought in for various aspects of the building, including making and painting the extensive terracotta elements of the Temple of Zeus or upper parts, such as antefixes. But for the second building they were summoned from Greece, and the building was presumably essentially Greek in style, though like other Roman temples it retained many elements of Etruscan form. The two further buildings were evidently of contemporary Roman style, although of exceptional size.
The first version is the largest Etruscan-style temple recorded, and much larger than other Roman temples for centuries after. However, its size remains heavily disputed by specialists; based on an ancient visitor it has been claimed to have been almost 60 m × 60 m (200 ft × 200 ft), not far short of the largest Greek temples. Whatever its size, its influence on other early Roman temples was significant and long-lasting. Reconstructions usually show very wide eaves, and a wide colonnade stretching down the sides, though not round the back wall as it would have done in a typical Greek temple. A crude image on a coin of 78 BC shows only four columns, and a very busy roofline.
With two further fires, the third temple only lasted five years, to 80 AD, but the fourth survived until the fall of the empire. Remains of the last temple survived to be pillaged for spolia in the Middle Ages and Renaissance, but now only elements of the foundations and podium or base survive; as the subsequent temples apparently reused these, they may partly date to the first building. Much about the various buildings remains uncertain.
Much of what is known of the first Temple of Jupiter is from later Roman tradition. Lucius Tarquinius Priscus vowed this temple while battling with the Sabines and, according to Dionysius of Halicarnassus, began the terracing necessary to support the foundations of the temple. Much of the Cappellaccio tufa which forms the foundation of the Temple was probably mined directly from the site when it was excavated and levelled for the structure. Modern coring on the Capitoline has confirmed the extensive work needed just to create a level building site. According to Dionysius of Halicarnassus and Livy, the foundations and most of the superstructure of the temple were completed by Lucius Tarquinius Superbus, the last King of Rome.
Livy also records that before the temple's construction shrines to other gods occupied the site. When the augurs carried out the rites seeking permission to remove them, only Terminus and Juventas were believed to have refused. Their shrines were therefore incorporated into the new structure. Because he was the god of boundaries, Terminus's refusal to be moved was interpreted as a favorable omen for the future of the Roman state. A second portent was the appearance of the head of a man to workmen digging the foundations of the temple. This was said by the augurs (including augurs brought especially from Etruria) to mean that Rome was to be the head of a great empire.
The original temple may have measured almost 60 m × 60 m (200 ft × 200 ft), though this estimate is hotly disputed by some specialists. It was certainly considered the most important religious temple of the whole state of Rome. Each deity of the Triad had a separate cella, with Juno Regina on the left, Minerva on the right, and Jupiter Optimus Maximus in the middle. The first temple was decorated with many terra cotta sculptures. The most famous of these was of Jupiter driving a quadriga, a chariot drawn by four horses, which was on top of the roof as an acroterion. This sculpture, as well as the cult statue of Jupiter in the main cella, was said to have been the work of Etruscan artisan Vulca of Veii. An image of Summanus, a thunder god, was among the pedimental statues. The cult statue of Jupiter showed the god standing and wielding a thunderbolt, dressed in a tunica palmata (a tunic decorated with images of palm leaves), and the toga picta, dyed purple and bearing designs in gold thread. This costume became the standard dress for victorious generals celebrating a triumph.
The original temple decoration was discovered in 2014.The findings allowed the archaeologists to reconstruct for the first time the real appearance of the temple in the earliest phase. The wooden elements of the roof and lintels were lined with terracotta revetment plaques and other elements of exceptional size and richly decorated with painted reliefs, following the so-called Second Phase model (referring to the decorative systems of Etruscan and Latin temples), that had its first expression precisely with the Temple of Jupiter Optimus Maximus. The temple, which immediately rose to fame, established a new model for sacred architecture that was adopted in the terracotta decorations of many temples in Italy up to the 2nd century BC. The original elements were partially replaced with other elements in different style in the early 4th century BC and anew at the end of the 3rd – early 2nd century BC. The removed material was dumped into the layers forming the square in front of the temple, the so-called Area Capitolina, in the middle years the 2nd century BC.
Repairs and improvements were undertaken over the course of the temple's lifetime, including the re-stuccoing of the columns and walls in 179 BC, the addition of mosaic flooring in the cella after the Third Punic War, and the gilding of the coffered ceiling inside the cella in 142 BC. Over the years the temple accrued countless statues and trophies dedicated by victorious generals, and in 179 some of these attached to the columns were removed to lessen the clutter.
The plan and exact dimensions of the temple have been heavily debated. Five different plans of the temple have been published following recent excavations on the Capitoline Hill that revealed portions of the archaic foundations. According to Dionysius of Halicarnassus, the same plan and foundations were used for later rebuildings of the temple, but there is disagreement over what the dimensions he mentions referred to (the building itself or the podium).
In 437 BC Aulus Cornelius Cossus unhorsed the Veientes' King Lars Tolumnius and struck him down. After taking the linen cuirass off Tolumnius' body, he decapitated the corpse and put the head on a lance and paraded it in front of the enemy, who retreated in horror. Cossus donated the captured armour, shield and sword to the Temple of Jupiter Feretrius on the Capitoline Hill, where as late as the reign of Emperor Augustus it could be seen.
The first temple burned in 83 BC, during the civil wars under the dictatorship of Sulla. Also lost in this fire were the Sibylline Books, which were said to have been written by classical sibyls, and stored in the temple (to be guarded and consulted by the quindecimviri (council of fifteen) on matters of state only on emergencies).
Speculative plan of the first temple
During Lucius Cornelius Sulla's sack of Athens in 86 BC, while looting the city, Sulla seized some of the gigantic incomplete columns from the Temple of Zeus and transported them back to Rome, where they were re-used in the Temple of Jupiter. Sulla hoped to live until the temple was rebuilt, but Quintus Lutatius Catulus Capitolinus had the honor of dedicating the new structure in 69 BC. The new temple was built to the same plan on the same foundations, but with more expensive materials for the superstructure. Literary sources indicate that the temple was not entirely completed until the late 60s BC. Around 65 AD the three new cult statues were completed. The chryselephantine statue of Jupiter was sculpted by Apollonius of Athens; its appearance is generally known from replicas created for other temples of Jupiter in the Roman colonies. It featured Jupiter seated with a thunderbolt and scepter in either hand, and possibly an image of the goddess Roma in one hand as well.
Brutus and the other assassins locked themselves inside it after murdering Caesar. The new temple of Quintus Lutatius Catulus was renovated and repaired by Augustus. The second building burnt down during the course of fighting on the hill on 19 December of 69 AD, when an army loyal to Vespasian battled to enter the city in the Year of the Four Emperors.
The new emperor, Vespasian, rapidly rebuilt the temple on the same foundations but with a lavish superstructure. It was taller than the previous structures, with a Corinthian order and statuary including a quadriga atop the gable and bigae driven by figures of Victory on either side at the base of the roof. The third temple of Jupiter was dedicated in 75 AD. The third temple burned during the reign of Titus in 80 AD.
Domitian immediately began rebuilding the temple, again on the same foundations, but with the most lavish superstructure yet. According to Plutarch, Domitian used at least twelve thousand talents of gold for the gilding of the bronze roof tiles alone. Elaborate sculpture adorned the pediment. A Renaissance drawing of a damaged relief in the Louvre Museum shows a four-horse chariot (quadriga) beside a two-horse chariot (biga) to the right of the latter at the highest point of the pediment, the two statues serving as the central acroterion, and statues of the god Mars and goddess Venus surmounting the corners of the cornice, serving as acroteria. It was completed in A.D. 82. In the centre of the pediment the god Jupiter was flanked by Juno and Minerva, seated on thrones. Below was an eagle with wings spread out. A biga driven by the sun god and a biga driven by the moon were depicted either side of the three gods. After the emperor Theodosius I eliminated the public funding for upkeep of pagan temples in 392, it was spoliated several times through the Middle Ages. During the 16th century, it was subsumed into a large private residence, the Palazzo Caffarelli-Clementino, which became part of the current-day Capitoline Hill.
Decline and abandonmentEdit
The temple completed by Domitian is thought to have lasted more or less intact for over three hundred years, until all pagan temples were closed by emperor Theodosius I in 392 during the Persecution of pagans in the late Roman Empire. In the 4th century, Ammianus Marcellinus referred to the temple as "the Capitolium, with which revered Rome elevates herself to eternity, the whole world beholds nothing more magnificent." During the 5th century the temple was damaged by Stilicho (who according to Zosimus removed the gold that adorned the doors). Procopius states that the Vandals plundered the temple during the sack of Rome in 455, stripping away half of the gilded bronze tiles. Despite this, in the early 6th century Cassiodorus described the temple as one of the wonders of the world. In 571, Narses removed many of the statues and ornaments. The ruins were still well preserved in 1447 when the 15th-century humanist Poggio Bracciolini visited Rome. The remaining ruins were destroyed in the 16th century, when Giovanni Pietro Caffarelli built a palace (Palazzo Caffarelli) on the site.
Today, portions of the temple podium and foundations can be seen behind the Palazzo dei Conservatori, in an exhibition area built in the Caffarelli Garden, and within the Musei Capitolini. A part of the eastern corner is also visible in the via del Tempio di Giove.
The Area Capitolina was the precinct on the southern part of the Capitoline that surrounded the Temple of Jupiter, enclosing it with irregular retaining walls following the hillside contours. The precinct was enlarged in 388 BC, to about 3,000m2. The Clivus Capitolinus ended at the main entrance in the center of the southeast side, and the Porta Pandana seems to have been a secondary entrance; these gates were closed at night. The sacred geese of Juno, said to have sounded the alarm during the Gallic siege of Rome, were kept in the Area, which was guarded during the Imperial period by dogs kept by a temple attendant. Domitian hid in the dog handler's living quarters when the forces of Vitellius overtook the Capitoline.
Underground chambers called favissae held damaged building materials, old votive offerings, and dedicated objects that were not suitable for display. It was religiously prohibited to disturb these. The precinct held numerous shrines, altars, statues, and victory trophies. Some plebeian and tribal assemblies met there. In late antiquity, it was a market for luxury goods, and continued as such into the medieval period: in a letter from 468, Sidonius Apollinaris describes a shopper negotiating over the price of gems, silk, and fine fabrics.
|Capitoline Hill plan|
Inter duos lucos
- Ab urbe condita, 2.8
- Stamper, 12–13; Galluccio, 237–291
- Christofani; Boethius, 47
- Boethius, 47–48
- Stamper, 33 and all Chapters 1 and 2. Stamper is a leading protagonist of a smaller size, rejecting the larger size proposed by the late Einar Gjerstad.
- Denarius of 78 BC
- Dionysius of Halicarnassus, Roman Antiquities 3.69
- Richardson, 1992; p. 222
- Ammerman 2000, pp. 82–3
- Dionysius of Halicarnassus, Roman Antiquities 4.61; Livy History 1.55–56.1
- Livy Ab urbe condita 1.55
- Ab urbe condita, 2.8
- Tacitus, quoted in Aicher 2004, p. 51
- Livy, Ab urbe condita, 2.22
- Mura Sommella 2000, p. 25 fig. 26;Stamper 2005, pp. 28 fig. 16;Albertoni & Damiani 2008, pp. 11 fig. 2c;Cifani 2008, pp. 104 fig. 85;Mura Sommella 2009, pp. 367–8 figs. 17–19;Kaderka, Tucci 2021, pp. 151 fig. 4 harvnb error: no target: CITEREFKaderka,_Tucci2021 (help)
- Pliny the Elder, Encyclopedia 35.157
- Cicero, On Divination 1.16
- Galluccio 2016, 237–250, fig. 9
- Galluccio 2016, 250 – 256, figs. 10–13
- Ridley 2005
- Mura Sommella 2000, pp. 25 fig. 26;Stamper 2005, pp. 28 fig. 16;Albertoni & Damiani 2008, pp. 11 fig. 2c;Cifani 2008, pp. 104 fig. 85;Mura Sommella 2009, pp. 367–8 figs. 17–19.
- Dionysius of Halicarnassus, Roman Antiquities 4.61.4
- Pliny NH 7.138; Tacitus Hist. 3.72.3.
- Flower 2008, p. 85
- Coarelli, 2014; p. 34
- Richardson, 1992; p. 223
- Tacitus Hist. 3.71–72
- Darwall-Smith 1996, pp. 41–47
- Plutarch. Life of Pulicola. 15.3–4.
- "Coins: the Temple through Time". Omeka. Retrieved 24 January 2019.
- Findley, Dr. Andrew (13 August 2016). "Temple of Jupiter Optimus Maximus, Rome". Smarthistory. Retrieved 24 January 2019.
- "Palazzo Caffarelli-Clementino". Musei Capitolini. Retrieved 24 January 2019.
- Samuel Ball Platner & Thomas Ashby (1929). "A Topographical Dictionary of Ancient Rome". Oxford University Press. p. 297-302.
- Ammianus Marcellinus, The Roman History XXII.16.12
- Cassiodorus, Variae epistolae VII.6
- Claridge 1998, pp. 237–238; Albertoni & Damiani 2008
- Coarelli, 2014; p. 32
- Giovanna Giusti Galardi: The Statues of the Loggia Della Signoria in Florence: Masterpieces Restored, Florence 2002. ISBN 8809026209
- Livy 25.3.14; Velleius Paterculus 2.3.2; Aulus Gellius 2.102; Lawrence Richardson, A New Topographical Dictionary of Ancient Rome (Johns Hopkins University Press, 1992), p. 31.
- Livy 6.4.12; Richardson, A New Topographical Dictionary, p. 31.
- Adam Ziolkowski, "Civic Rituals and Political Spaces in Republican and Imperial Rome," in The Cambridge Companion to Ancient Rome (Cambridge University Press, 2013), p. 398.
- Cicero, Rosc. Am. 56; Gellius 6.1.6; Richardson, A New Topographical Dictionary, p. 31.
- Tacitus, Histories 3.75; Richardson, A New Topographical Dictionary, p. 31.
- Richardson, A New Topographical Dictionary, p. 32.
- Ziolkowski, "Civic Rituals and Political Spaces," p. 398.
- Sidonius Apollinaris, Epistulae 1.7.8; Claire Holleran, Shopping in Ancient Rome: The Retail Trade in the Late Republic and the Principate (Oxford University Press, 2012), 251.
- Aicher, Peter J. (2004), Rome Alive: A Source Guide to the Ancient City, Wauconda, IL: Bolchazy-Carducci, ISBN 0865164738.
- Albertoni, M.; Damiani, I. (2008), Il tempio di Giove e le origini del colle Capitolino, Milan: Electa.
- Ammerman, Albert (2000), "Coring Ancient Rome", Archaeology: 78–83.
- Axel Boëthius, Roger Ling, Tom Rasmussen, Etruscan and Early Roman Architecture, Yale University Press Pelican history of art, 1978, Yale University Press, ISBN 9780300052909, google books
- Cristofani, Mauro, et al. "Etruscan", Grove Art Online,Oxford Art Online. Oxford University Press, accessed April 9, 2016, subscription required
- Cifani, Gabriele (2008), Architettura romana arcaica: Edilizia e società tra Monarchia e Repubblica, Rome: "L'Erma" di Bretschneider.
- Darwall-Smith, R. H. (1996), Emperors and Architecture: A Study of Flavian Rome, Brussels: Latomus.
- Claridge, Amanda (1998), Rome, Oxford Archaeological Guides, Oxford Oxfordshire: Oxford University Press, ISBN 0-19-288003-9.
- Coarelli, Filippo (2014), Rome and Environs: An Archaeological Guide, Berkeley & Los Angeles: University of California Press, ISBN 978-0-520-28209-4.
- Flower, Harriet I. (2008), "Remembering and Forgetting Temple Destruction: The Destruction of the Temple of Jupiter Optimus Maximus in 83 BC", in G. Gardner and K. L. Osterloh (ed.), Antiquity in Antiquity, Tubingen: Mohr Siebeck, pp. 74–92, ISBN 978-3-16-149411-6.
- Galluccio, Francesco (2016), "Il mito torna realtà. Le decorazioni fittili del Tempio di Giove Capitolino dalla fondazione all'età medio repubblicana", Campidoglio Mito, Memoria, Archeologia (Exhibit Catalog, Rome 1 March-19 June 2016), Eds. Claudio Parisi Presicce – Alberto Danti: 237–291.
- Kaderka Karolina, Tucci Pier Luigi (2021), "The Capitoline Temple of Jupiter. The Best, the Greatest, but not Colossal", Römische Mitteilungen (Mitteilungen des Deutschen Archäologischen Instituts, Römische Abteilung), 127: 146–187. https://publications.dainst.org/journals/rm/article/view/3668/7359
- Mura Sommella, A. (2000), ""La grande Roma dei tarquini": Alterne vicende di una felice intuizione", Bullettino della Commissione Archeologica Comunale di Roma, 101: 7–26.
- Mura Sommella, A. (2009), "Il tempio di Giove Capitolino. Una nuova proposta di lettura", Annali della Fondazione per Il Museo Claudio Faina, 16: 333–372.
- Richardson, Lawrence (1992). A New Topographical Dictionary of Ancient Rome. The Johns Hopkins University Press. ISBN 0-8018-4300-6.
- Ridley, R.T. (2005), "Unbridgeable Gaps: the Capitoline temple at Rome", Bullettino della Commissione Archeologica Comunale di Roma, 106: 83–104.
- Stamper, John (2005), The architecture of Roman temples: the republic to the middle empire, New York: Cambridge University Press. | <urn:uuid:a5466177-c4f5-48dd-9801-f4e0ace9ab6c> | CC-MAIN-2022-33 | https://en.m.wikipedia.org/wiki/Temple_of_Capitoline_Jupiter | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571234.82/warc/CC-MAIN-20220811042804-20220811072804-00097.warc.gz | en | 0.908689 | 5,040 | 3.6875 | 4 |
Fall of Reach
This article is about the battle. For the novel or comic book adaptation, see Halo: The Fall of Reach and Halo: Fall of Reach.
The Fall of Reach, also known as the Battle of Reach and Reach Campaign, was one of the largest and bloodiest engagements in the Human-Covenant War. It was fought on and around the human colony world of Reach in the Epsilon Eridani system, between the United Nations Space Command and the Covenant from July 25 to August 30, 2552.
Initial skirmishes began on July 23, 2552, when a small Covenant task force led by the Sh'wada-pattern supercarrier Long Night of Solace covertly established a foothold on Reach. The Covenant began to solidify their hold and began searching for Forerunner relics as the UNSC counter-attacked, attempting to drive the Covenant off the surface before reinforcements arrived. Ultimately, the UNSC was unsuccessful, and substantial Covenant reinforcements under Thel 'Vadamee arrived in the system. After hard fighting, the Covenant breached the orbital defense grid and overran the UNSC's planetside forces. After overwhelming the defenders, the Covenant glassed the planet. Isolated skirmishes between UNSC remnants and the Covenant forces continued until their recovery by a small rescue party led by John-117.
Though the battle was a clear defeat for the UNSC, Noble Team's success in preventing the Cortana fragment from falling into enemy hands subsequently led the Pillar of Autumn to Halo Installation 04, where its crew engaged the Covenant in a decisive battle. The experience gleaned on Installation 04, together with the political schism that later split the Covenant during its attack on Earth, would prove invaluable to the Covenant's ultimate defeat.
At some point before July 19, 2552, the Office of Naval Intelligence ran a series of calculations that predicted the future movement of the Covenant force sweeping through the galaxy. Based on these calculations, ONI determined that there was an 87% probability that the Epsilon Eridani system would be discovered by the Covenant within five months. Admiral Margaret Parangosky, the head of ONI, debated with Vice Admiral Michael Stanforth about using Reach as the stage for the planned Operation: RED FLAG; the pair reasoned that the large-scale Forerunner artifacts on the planet would draw Covenant interest. The rest of the UNSC, however, was given no warning of the predicted attack.
ONI's prediction came true less than a week after Parangosky's warning. Following his victory against the humans in the Beta Eridani system, Supreme Commander Rho 'Barutamee of the Fleet of Valiant Prudence secured a Luminary that revealed Reach's location. 'Barutamee cared little for the humans in the Epsilon Eridani system and was unaware of the planet's significance to the UNSC.
By July 23, the Fleet of Valiant Prudence, consisting of Ceudar-pattern heavy corvettes, CCS-class battlecruisers, Varric-pattern heavy cruisers, and the Sh'wada-pattern supercarrier Long Night of Solace, had arrived at Reach, guided by the Luminary retrieved at Beta Eridani. The Luminary marked the presence of the Forerunner Rate of Builders across the system. After arriving at Reach, the fleet clandestinely deployed a large ground force to the Viery Territory, cloaked from UNSC sensors by sophisticated stealth pylons deployed around it. The Covenant also sent a small Sangheili Zealot strike team of the Devoted Sentries, led by a Field Marshal, to recover a Forerunner artifact from an excavation site near the Visegrád communications outpost.
The strike force disabled the communications array at Visegrád, cutting off communications from the northern parts of Alföld to the larger communications hubs further south, leaving a large portion of Reach isolated from the rest of the UNSC. This operation, however, went unnoticed by the UNSC and it was surmised by the Office of Naval Intelligence that the local insurrectionist organization, an ultra-radical cell of the People's Occupation, was responsible for the loss of contact with the facility.
The day after the Visegrád Relay was disabled, Spartan NOBLE Team was sent to investigate the disturbance after the Army fireteams dispatched earlier had gone missing. The group had been conducting anti-insurrection operations in the region, and was thus on hand to assist. Upon landing in Visegrád and scouting the surrounding countryside, NOBLE Team quickly discovered that insurrectionists were not responsible for the attack; a small Covenant advance force had found Reach and landed at Visegrád with the intent of cutting off Reach's communications with other UNSC colonies. After wiping out the advance force, NOBLE Team proceeded to the relay station, where they were ambushed by a group of Sangheili Zealots sent to hunt down vital data on a Forerunner artifact. WINTER CONTINGENCY was then declared on Reach.
Following the declaration of WINTER CONTINGENCY, several ONI field agents were given supplemental orders to secure, or if necessary terminate, UNSC astronavigation personnel to ensure their knowledge would not fall into the hands of the Covenant.
Using cloaking technology and stealth tactics, the Covenant had established a fortified landing zone on Reach, ringing the perimeter of their staging area in the Viery Territory with anti-aircraft guns. Within this area they covertly inserted a large body of forces, including several corvettes.
Two days after the skirmish at Visegrád, the Covenant attacked SWORD Base in an attempt to overrun the ONI facility and access the Forerunner structures there. However, these forces were repulsed after NOBLE Team reinforced the base and brought its M71 Anti-Aircraft Guns and communications back online. The Covenant corvette attacking it was forced to retreat and was destroyed by an orbital defense platform's Magnetic Accelerator Cannon.
Over two weeks after the defense of SWORD Base, the UNSC sent reconnaissance teams into Covenant-controlled territory in preparation for a counterattack and to investigate the enemy positions and forces they would be facing. With a Covenant staging ground established on Reach, the UNSC Army, Marine Corps, and Navy launched a joint offensive in an attempt to defeat the Covenant forces before reinforcements could arrive. The assault was spearheaded by the Army and NOBLE Team, who attacked Covenant anti-air weapons to clear the way for bombing runs by GA-TL1 Longswords and Paris-class heavy frigates. The attack, though initially successful, was thwarted when a cloaked Sh'wada-pattern supercarrier, Long Night of Solace, appeared over the area, attacking the UNSC frigates and reinforcing the Covenant troops. Meanwhile, Covenant forces made another strike on SWORD Base, this time with much greater success. Around this time, sixty percent of the UNSC fleet was recalled to Reach from existing deployments; these reinforcements were scheduled to arrive by August 15.
As the battle began on the ground, Rho 'Barutamee's forces engaged UNSC naval units across the system, fighting light battlegroups composed mainly of Paris-class heavy frigates and a Marathon-class heavy cruiser. Following the declaration of WINTER CONTINGENCY on Reach, the personnel of Teller Station, a research laboratory working at Site 17 in the system's Oort cloud, prepared to evacuate after being ordered to condense their last dig into barely a fortnight. These activities awakened the senescent Forerunner systems there, alerting the Fleet of Valiant Prudence to the site. Rho deployed a "significant portion of his fleet", consisting of SDV-class corvettes and CCS-class battlecruisers, only to be engaged by more frigates and heavy cruisers dispatched by Admiral Michael Stanforth. The Covenant forces overran the defenses there, but the humans successfully extracted Site 17's artifact. The Covenant attempted to board the human flagship to retrieve it, though Stanforth intended to use the artifact to lure the Covenant into a trap near Reach itself. The battle group fought free of its pursuers, and Stanforth vectored in reinforcements to stop the pursuing Covenant squadrons.
The battle begins
The arrival of the supercarrier led the UNSC to concoct an audacious plan known as Operation: UPPER CUT, in which NOBLE Team would use FSS-1000 Sabre starfighters to seize the corvette Ardent Prayer and use it to deliver an improvised slipspace drive "bomb" to the supercarrier and destroy it. The team was first deployed to fend off an attack on the Sabre research site at Farkas Lake before launching in the fighters to the refit station Anchor 9 to prepare for the operation. Spartan-B312, Jorge-052, and a team of Army pilots, with the aid of the frigate Savannah, boarded the Ardent Prayer and took control of it.
By this time, Supreme Commander 'Barutamee had realized both the threat posed by the human presence on the surface and the importance of the planet he was attacking. To make matters worse, the Ministry of Resolution had dispatched a much larger fleet to the planet, and his artifact-hunting teams were still bogged down on the surface. If the commander could not produce something of value soon, he might be forced to retreat or face the wrath of the Hierarchs for his ineptitude. Stanforth, meanwhile, had managed to stretch the Covenant fleet thin by probing its defenses from all sides. The UNSC fleet at large initiated a massive attack known as Operation: LEFT JAB to distract the supercarrier, allowing NOBLE Team's operation to go unnoticed once their target corvette's communications were jammed. With the Covenant fleet stretched thin, UNSC task forces were able to pounce on cut-off Covenant scout fleets.
Noble Five then sacrificed himself to destroy Long Night of Solace. However, just as the supercarrier was destroyed, the Ministry of Resolution's massive fleet of warships exited slipspace in orbit over the planet. The supercarrier's wreckage fell near New Alexandria, killing 'Barutamee in the process. Shipmaster Kantar 'Utaralee assumed command of what remained of the now-splintered Fleet of Valiant Prudence, leading it once again to recover the Site 17 artifact, now relocated to a station near Turul.
Despite the mission's success, the destruction of the supercarrier was an inadvertent setback for Operation: RED FLAG, which called for a "Class-Five" Covenant vessel such as a CAS-class assault carrier or a CSO-class supercarrier to be boarded by the Spartan-IIs. Colonel Holland informed Admiral Stanforth of the supercarrier's loss around August 27, several days after the fact. The Admiral relayed his frustration at the Unified Special Warfare Command's "trigger happiness," as the operation would now need to find a new target vessel.
Over the next week, Covenant ground forces landed on the planet and Covenant ships began glassing targeted areas. A major siege broke out in the city of New Alexandria between August 18 and 23, with large numbers of civilian evacuation transports shot down by Covenant air and ground forces. Members of NOBLE Team assisted UNSC forces in clearing evacuation zones, damaging a Covenant corvette in the process. Though the UNSCDF succeeded in clearing the city of Covenant troops, Covenant naval assets began to glass the city, and Catherine-B320 of NOBLE Team was killed by sniper fire during the retreat. Elsewhere, the evacuation stations in Manassas and Esztergom were attacked and eventually stopped responding.
However, most of the planet's population, both civilian and military (particularly those assigned to Operation: RED FLAG), was kept unaware of the invasion, as the Army repeatedly insisted it had the ground battle under control. HIGHCOM withheld major counteroffensives on the far side of the planet in the hope that a Class-Five Covenant vessel, which was needed for the operation, would be drawn in and make itself vulnerable to capture. Millions perished as a result of this relative inaction.
Due to his tendency to jump headfirst into the action, Captain Jacob Keyes was left in the dark about the Covenant attack, to ensure that he would not engage before a target could make itself known. At some point after August 27, the Captain was informed of the ongoing battle, though under strict orders to keep the UNSC Pillar of Autumn's crew in the dark. Keyes considered the blackout orders pointless, since anyone could look out a viewport and see the battle raging on the surface, and he wished to tell the crew so they would understand that Reach would likely no longer exist by the time their mission was complete. As the battle wore on into August 30, the ship was stationed over Lábatlan, with R. Abiad noting an explosion over Eposz and William Lovell detecting strange electromagnetic readings coming from Turul. ONI used the downed Visegrád relay as an excuse to keep the Pillar of Autumn parked on the side of the planet not yet hit by the Covenant.
As the situation on the ground deteriorated, NOBLE Team and a small force of Orbital Drop Shock Troopers were ordered to destroy the Covenant-occupied SWORD Base in a "torch-and-burn" operation known as Operation: WHITE GLOVE. Proceeding into the base, the team encountered Dr. Catherine Halsey, who enlisted their help in defending her laboratory, located amid the ruins of a Forerunner complex, from advancing Covenant forces. Halsey ordered Noble Six, Carter, and Emile to transport a fragment of the AI Cortana to the UNSC Pillar of Autumn, while Jun-A266 escorted her to CASTLE Base elsewhere on Reach.
Final space battle
As August 30 dawned, the Pillar of Autumn was making its way out-system to begin RED FLAG when a massive combined Covenant fleet, including Supreme Commander Thel 'Vadamee and his Fleet of Particular Justice, was detected exiting slipspace on the edge of the system. The fleet was first detected in slipspace by the remote scanning outpost Fermion, whose crew initially mistook the tightly knit formation for a large planetoid that had somehow entered slipspace. The station's commander, Chief Petty Officer McRobb, sent an emergency message to FLEETCOM and ordered the station's self-destruct to prevent its data from falling into enemy hands. This fleet had been drawn to the system by a tracking device planted on the UNSC Iroquois during the earlier Battle of Sigma Octanus IV with the intent of using the ship to locate human colony worlds. When the Iroquois returned to Reach, the human presence, and its extent, was revealed to this new Covenant faction.
A few minutes later, 315 Covenant ships exited slipspace at the edge of the Epsilon Eridani system. Admiral Roland Freemont issued UNSC Alpha Priority Transmission 04592Z-83, ordering all ships in the Epsilon Eridani system to rendezvous at Rally Point Zulu near Reach in preparation for the coming assault. The Covenant moved in on the orbital defenses before the UNSC fleet had fully consolidated. Fifty-three late-arriving UNSC ships, including the UNSC Pillar of Autumn, ran a gauntlet of screening Covenant warships as they attempted to link up with the main fleet. At this time, only about a hundred UNSC ships were readily available to defend Reach. Spartan John-117 recognized the battle as a potential opportunity to board a Covenant warship and ordered a Pelican dropship prepared to blast its way into a hull if need be. The Autumn fended off the encroaching Seraphs with its point-defense guns before managing to destroy a Covenant carrier. Meanwhile, a hundred ships were already gathered at Rally Point Zulu: destroyers, frigates, three cruisers, two carriers, and three refit stations.
The main Covenant force moved in on the orbital defenses. The initial salvo of plasma torpedoes was mostly absorbed by the sacrifice of three refit and repair stations, allowing the defenders to return fire. The orbital defense platforms and four nuclear mines combined to take down a full third of the Covenant fleet, while the vaporized Titanium-A armor of the refit stations also served to block incoming plasma torpedoes. The UNSC force began to scatter, with fifty more ships falling to the trailing plasma torpedoes, while the MAC platforms secured a further sixteen Covenant kills. The Covenant maneuvered around the titanium dust cloud for clear shots and moved in for the kill, losing a further eighteen ships to the MAC guns, followed by yet another six. These final six managed to loose a salvo of plasma before their destruction, destroying five orbital guns. A previously unknown warship type, armed with a powerful energy projector, then revealed itself and destroyed the UNSC Minotaur and four other ships before withdrawing temporarily.
The rest of the Covenant fleet withdrew to regroup after deploying hundreds of dropships to the surface. The opening attack had shattered the UNSC defenders, destroying almost a hundred ships and leaving only twenty. Despite the Covenant's surprise retreat, Captain Keyes ordered scans of the planet, realizing that the enemy had instead sent dropships to Reach's poles to begin the invasion. The Super MACs and the remaining fleet split into two groups, one for each pole, and fired upon the smallcraft, each MAC shot destroying dozens of transports. Hundreds of dropships nevertheless got through and disgorged thousands of troops. Fleet Command Headquarters was quickly overrun and destroyed by the renewed ground attack, while the Covenant fleet began another assault.
Some ships made pinpoint slipspace jumps that placed them within the UNSC formation; this left them vulnerable for a short time but allowed them to strike the ODPs directly. Almost immediately after the Spartans departed aboard their Pelicans, a Covenant frigate jumped in behind the Pillar of Autumn, prompting the cruiser to turn and face it. Two more frigates exited slipspace to flank the first, which was then taken out by a Super MAC round. The Pillar of Autumn engaged the starboard vessel, while the other was brought down by a second Super MAC. The two frigates nonetheless managed to loose a salvo of plasma bolts, taking down two more platforms and reducing the total number of MAC platforms to thirteen. The remaining Archer missiles were used to take out a number of boarding craft heading to stop John-117 on Gamma Station.
Meanwhile, the Kewu-pattern battleship returned and destroyed the UNSC Herodotus and UNSC Musashi from beyond ODP range. The UNSC Pillar of Autumn engaged the warship, firing a salvo of missiles followed by MAC rounds timed to impact simultaneously, knocking down the vessel's shielding. The Covenant battleship fired back, hitting the Autumn and causing severe damage. The Autumn then launched a Shiva nuclear device set to detonate on impact and began to back away; the warhead exploded inside the warship's now-recharged shielding, and the contained blast reflected off the shields and disintegrated the vessel. The Pillar of Autumn withdrew to Beta Gabriel to slingshot back into the system while more Covenant ships exited slipspace. By this point, the orbital defense generators were being overrun.
Following this engagement, the Pillar of Autumn executed a Class-L flash-dock and landed at the Aszód ship breaking yards, the last remaining off-world extraction point, to await the arrival of NOBLE Team.
Attack on the orbital defense generators
On the final day of the battle, thousands of Covenant ground troops landed on Reach in an attempt to destroy the orbital defense generators, and were intercepted by Marine forces, who held off the first few waves while sustaining heavy casualties. While three Spartans led by John-117 deployed to Gamma Station, the remaining supersoldiers were designated Red Team and sent to Orbital Defense Generator Facility A-331 to assist in its defense. Their Pelican transport, Bravo 001, was escorted by four GA-TL1 Longsword strikecraft, three of which eventually peeled off to hold off pursuing Seraphs. Bravo 001 was advised by "Golden Arrow" to head for Mount Törött, though the now badly damaged dropship was unlikely to be able to make its way back to the shipyard where the Autumn was docked. Bravo 001 had to explain that it was carrying Spartans, which quickly changed Golden Arrow's tone. The Pelican was eventually shot down, crash-landing in the Military Wilderness Training Preserve in the Highland Mountains. The Spartans were forced to jump from the doomed craft, resulting in the deaths of four of them, including Malcolm-059, and wounding several others. Due to their familiarity with the terrain, the surviving Spartans were able to put up a meaningful resistance against the Covenant.
Near their crash site, the surviving Spartans discovered the shell-shocked remnants of Charlie Company's Gamma 1. Charlie Company had been assigned by Vice Admiral Danforth Whitcomb to recover prototype NOVA bombs from a base in the region, and was forced to help defend the orbital defense facilities on the way to its objective, with Gamma 1 at Generator A-331. Unfortunately, as the Covenant neared the orbital defense generators, someone at HIGHCOM panicked and ordered Longswords to bomb everything within 500 meters, destroying the Covenant ground forces in that location but also causing catastrophic friendly fire: Charlie Company was reduced to four men, and its leaders, Lieutenants Jake Chapman and Buckman, were cut off from their subordinates.
The Spartans, after being briefed on the situation by Charlie Company, responded to a distress call from Whitcomb, who requested immediate evacuation. Frederic-104, commander of the Spartans, split the remains of Red Team into four groups: Team Alpha (Frederic-104, Kelly-087, and Joshua-029), tasked with eliminating an encampment of 10,000 Covenant and their hovering cruiser without doing anything that could damage the orbital defense generators, since the EMP from a nuclear weapon would render them inoperative and accomplish the Covenant's objective for them; Team Beta, tasked with defending the orbital defense generators; Team Gamma (Li-008, Anton-044, and Grace-093), ordered to retrieve Whitcomb; and Team Delta (the Charlie Company Marines and six wounded Spartans, including William-043, Isaac-039, and Vinh-030), ordered to secure the Spartans' fallback point at CASTLE Base.
Team Alpha hijacked three Banshees and approached the Covenant encampment (they were ignored by the Grunt Zawaz, who assumed they were Elites on a secret mission) and used Fury Tactical Nuclear Weapons within the shields of the Covenant ship, destroying the encampment and negating the EMP effect that would have disabled the orbital defense generators. Joshua was killed in the process by mass light weapons salvos from the 10,000 Covenant ground forces encamped around the Cruiser. The remaining Spartans of Team Alpha then fell back to CASTLE Base, blasting their way through the remaining Covenant in the area with two commandeered Wraiths. Team Gamma accomplished its mission and fell back to Camp Independence with Whitcomb, where they survived the partial glassing of the planet.
Team Beta-Red were left to defend Generator A-331 alone, while Gamma 5 were holed up at Generator A-412. A-412 was attacked by two divisions of Covenant, who eventually managed to destroy the facility leaving only fourteen survivors. The remnants of Gamma 5 proceeded to mount up in two Warthogs and a radio van to rendezvous with Gamma 1. Recon 43 provided scouting for Gamma 1, initially spotting three Ghosts and twenty-four infantry. They were warned not to engage, but did so anyway upon realising that the small Covenant force was merely the advance of a massive armour convoy.
With a gargantuan ground force now closing on Facility A-331, Beta-Romeo Actual ordered her team and the remnants of Charlie Company to leave once the Covenant was distracted. The team set up automated turrets, while thirty-two Wraith tanks and six-hundred infantry began to close in. Gamma 5 observed the action through the lens of a drone as the fight started. The Spartans of Beta-Red went "hand to hand" with the enemy lines, and Team Delta and Charlie Company departed the site.
As Team Delta retreated alongside the remnants of Charlie Company, they contacted an operator with the callsign "Iron Fist", who informed the group that they were en route to extract Beta-Red, even as three Covenant cruisers dropped out of the sky and prepared to glass the area. Team Beta was unable to stop the Covenant, who attacked in swarms of thousands. The orbital defense generators were compromised, and the Covenant, after eliminating the powerless, immobile ODPs in geosynchronous orbit around the planet, began the glassing of Reach. A member of Beta-Red was able to get off one last communication to Fred-104 that the generators were overrun. Eventually, the destroyer UNSC Majestic was brought in by Alpha 20 and authorised to fire on the Covenant cruisers at Generator A-331, likely resulting in the deaths of Team Beta-Red. The blast of the orbital strike was felt by Charlie Company, who had narrowly managed to escape it. Team Delta fell back to CASTLE Base, but in the process lost the remaining Charlie Company Marines and every Spartan save for Vinh, Isaac, and William. When the remnants of Team Alpha and Team Delta arrived at CASTLE Base, they found Catherine Halsey there.
The planet, having been steadily weakened during the invasion throughout the prior weeks, had fallen with relative ease to the might of the Covenant invaders.
Mission to Gamma Station
During the space battle, the AI Doppler, controller of Reach Station Gamma, was unable to destroy the vital information on board the UNSC Circumference, an ONI Prowler involved in Operation: HYPODERMIC. As the Covenant deployed troops to the station, Doppler reported this violation of the Cole Protocol to the Pillar of Autumn before self-destructing to prevent any further breach. In response, Captain Jacob Keyes sent John-117, James-005, and Linda-058 to the station. They accomplished their task and destroyed the NAV database on board Reach Station Gamma, but at the cost of losing James and Linda. While there, they rescued several Marines aboard the station: Staff Sergeant Avery Junior Johnson and Privates Wallace Jenkins, Bisenti, and O'Brien. They were then evacuated by Pelican back to the Pillar of Autumn. Linda-058 was clinically dead, but there was still a chance of saving her, so she was placed in a cryo chamber. John-117 asked Lieutenant Hall to scan for James (who had been blown into space), but they were unable to find him.
Battle at Asźod
With the battle for Reach now nearing its conclusion, the Pillar of Autumn set down at the Asźod ship breaking yards for repairs as well as to receive the fragment of Cortana that Halsey had tasked NOBLE Team with delivering to Keyes. Covenant forces soon descended on the shipyards and began landing troops. Although the docked Autumn managed to hold off the Covenant attackers, most of the facility and the surrounding area was occupied.
The remnants of NOBLE Team, consisting of Carter-A259, Emile-A239, and Spartan-B312, arrived on the outskirts of the facility. Carter sent Noble Six and Emile to deliver the package to Keyes. He then sacrificed himself to destroy a Scarab. Six and Emile fought their way to the shipyards, where they linked up with some surviving UNSC ground troops. Keyes then contacted the Spartans and ordered them to clear Landing Platform Delta so that he could pick up the package. Upon arriving at the platform, Emile took charge of a mass driver and used it to provide cover fire for the Autumn. Six, meanwhile, cleared the platform of Covenant troops. With the platform cleared, Captain Keyes arrived in a Pelican and personally took possession of the package from Noble Six. Just then a CCS-class battlecruiser appeared and started heading toward the Autumn. Keyes ordered Emile to take out the cruiser before it destroyed the Autumn. Emile, however, was attacked by a pair of Sangheili Zealots. The Spartan killed the Zealots but was also killed, leaving the mass driver cannon unmanned. With the cruiser closing in on the Autumn, Noble Six volunteered to stay behind to take charge of the gun.
Keyes returned to the Autumn with the package and prepared the ship for take off. Meanwhile, Noble Six took control of the mass driver and destroyed several Phantoms and Banshees before the Covenant cruiser came within range. As the cruiser prepared to destroy the Autumn with its energy projector, Six fired directly into the ship's exposed glassing port. The resulting explosion crippled the cruiser.
With the skies cleared of hostiles, the Autumn took off and escaped Reach. Once in space, it made a slipspace jump away from the planet. A dozen Covenant ships pursued, while the remaining vessels stayed behind to finish glassing Reach. Back at the shipyards, the stranded Noble Six continued fighting against the Covenant invaders. Eventually, the lone Spartan was overwhelmed and killed by a swarm of Sangheili warriors.
The Fall of Reach was devastating for both sides: millions died, and hundreds of starships were destroyed or damaged beyond repair. Many forces fled during the battle, ending up in various parts of space.
While Reach was the Covenant armada's main target, the other colonies in Epsilon Eridani would also come under attack. Tribute was besieged by Covenant forces for several months, ending only when the Great Schism caused the Jiralhanae and Sangheili to turn on each other. The small colony on Beta Gabriel was destroyed when Valorous Salvation deserted the Covenant fleet at Reach and fled there; ODSTs arrived shortly thereafter and killed all the Covenant deserters without incurring losses.
As for Reach itself, the planet would be glassed heavily, to the point that the burning could be seen from orbit. After this burning ceased, several missions took place on the planet, including a pioneer mission in 2553 and Operation: WOLFE in 2559. However, the damage to the surface and atmosphere caused by the initial glassing would persist for nearly 30 years, with the planet only truly becoming green again in 2589.
The Pillar of Autumn would ultimately arrive at Installation 04, a Halo ring. There the Autumn confronted its pursuers, and the engagement devolved into a protracted battle on and around the ring. The survivors of that battle would eventually return to Reach to search for other survivors.
Officially, the Fall of Reach took place between July 25 and August 30, 2552. However, UNSC forces actually engaged Covenant forces as early as July 24, with Noble Team's deployment to the Visegrad area. UNSC fleet actions in the Reach Defense Coordination Zone also ceased several days after August 30, on September 5. Covenant glassing operations ended on September 27.
The Fall of Reach had some of the largest deployments of personnel and materiel in the Human-Covenant war. By August 30, there were 315 Covenant ships and 152 UNSC ships at Reach. Numerous Spartans of multiple generations, Marines, Army soldiers and Navy personnel also took part in the battle.
List of appearances
Neighbors on the North Coast: Cleveland's Connection to the Mentor Shoreline
Hitchcock's Holdup by Sam Tamburro
The Western Reserve has long been one of the United States' most productive and distinct regions1 and is the home of industrial centers such as Akron, Cleveland, Lorain, and Youngstown.
Mentor, twenty miles east of Cleveland on Lake Erie's shore, nearly became one of these centers. Similar to Cleveland and Lorain, Mentor's lakefront position made it a likely site for industrial expansion during the Industrial Revolution, but no substantial growth occurred. One explanation would be that the community did not desire to become a dirty "smokestack" center of industry. Although this explanation sounds reasonable today, it simply was not the case in the early twentieth century. There were several efforts to industrialize Mentor throughout its early history, but the most significant attempt occurred in 1900. During 1900 to 1903, a business trust led by Calvary Morris, a Cleveland businessman, attempted to develop the Mentor Marsh into an industrial complex and attracted various rail lines and industries to the area. The plan was eventually stalled by problems with land acquisition caused by Peter M. Hitchcock.
Mentor's economy was closely tied to Cleveland's since Cleveland was the marketplace for many of the agricultural goods produced in Lake County. There were several rail lines connecting Mentor and Cleveland. As early as 1851, "The Lake Shore Road," running from Cleveland to Painesville, passed directly through Mentor.2 Like many other railroad lines at that time, it was mainly a passenger line. As the end of the nineteenth century approached, the rail system in the United States increasingly became industrially oriented, and Ohio experienced this trend.
Individuals like John D. Rockefeller, Andrew Carnegie, and Samuel Mather helped to build Ohio into the major steel-producing center in the United States. The steel industry needed railroads to carry raw materials to industrial plants and carry finished steel products away. In Cleveland, businessmen such as Calvary Morris saw this growth in industry as a business opportunity to expand into the suburbs. Entrepreneurs considered the Mentor Marsh to be a prime site because of its proximity to Cleveland. Three main railroad lines serviced Mentor in 1900: the Cleveland, Painesville, and Eastern (CP&E); the Lake Shore and Michigan Southern (LS&MS); and the New York Central and St. Louis (NYC&STL), which was also known as the "Nickel Plate Line." All of these lines played significant roles in the attempted industrialization of the Mentor Marsh. Both the LS&MS and the NYC&STL could handle heavy freight and reach industrial centers within the Great Lakes area. Both of these lines ran through Mentor and, with minor additions to the roads, could be directed north to service the marsh. As this information regarding the proposed industrial development reached the business community, land speculation in Mentor began. Smaller businesses, such as the Hoffman Hinge Company which manufactured metallic hinges and car couplers for railroad cars, were dependent on steel for production and were expected to follow the steel industry to the suburbs.3 Wealthy men from Cleveland were buying land in Mentor at a quick rate, banking on the possible migration of Cleveland manufacturing companies to Mentor.4
The Mentor Marsh is roughly seven miles long and represents the flow of the Grand River thousands of years ago. Geologic evolution redirected the Grand River north to Lake Erie and cut off the section of the river that flowed through the marsh area; only Black Creek remained.5 With the Grand River water flow cut off from the remaining portion of the river bed, silt and vegetation gradually filled the area to form the Mentor Marsh.
In the 1870s, there had been a plan to dredge out the channel and build a large inland dock to hold grain shipped in at a cheaper rate than the railroads were charging to haul it.6 The enactment of the Granger Laws in 1874, which established maximum rates railroads could charge on freight, negated the economic need for the dock at that time. By 1900, the feasibility of converting the marsh into a shipping channel for a shipyard or into a steel complex was well known. Articles began to appear in the Painesville Telegraph and Painesville Republican suggesting that options on the marsh were being acquired. Although the allegations could not be substantiated at the time, they were proved correct months later.
In early December of 1900, Lake County Commissioner J. C. Campbell had begun securing land options on the east end of the marsh.7 Campbell had secured roughly 2,000 acres for the steel and coal trust of Cleveland.8 This trust organization was a collection of Cleveland companies' legal advisors headed by Calvary Morris and H. A. Garfield.9 Both Morris and Garfield were on the board of trustees of the Cleveland Trust Company, a newly formed lending institution in Cleveland.10 The Cleveland Trust Company was an investor in the American Steel and Wire Company, a Cleveland-based steel manufacturing company.11 The Cleveland Coal Trust represented the Pittsburgh Coal Company and Morris, who owned 8,000 acres of coal land in Jefferson County in southern Ohio.12 These two trusts merged and became known as the Cleveland Steel and Coal Trust. Their formation was prompted by Andrew Carnegie's plan to industrialize the harbor in Conneaut, Ohio.
In 1896, Carnegie had bought the Pittsburgh, Butler, and Lake Erie (PB&LE) Railroad that ran from Pittsburgh, Pennsylvania to Conneaut, Ohio.13 Conneaut provided an excellent docking facility on Lake Erie where Carnegie would be able to ship coal and import ore for his Pittsburgh steel works. His plan was to transport ore by rail from Conneaut to Youngstown and Pittsburgh and return with coal to ship to other mills along the Great Lakes.14 By 1899, Carnegie began to consider building a steel tube manufacturing facility at Conneaut.15 Land was cheap and available. A manufacturing plant at Conneaut would be able to bring in iron ore from Minnesota and Michigan almost entirely by water, the cheapest form of transportation. The PB&LE could transport ore to Pittsburgh and, upon return, carry coke for the Conneaut plant at virtually no cost. Carnegie realized that finished products could be shipped by water or rail, from the Conneaut location, which would allow him to undercut the production cost of competitors and control the steel market.
It was the possibility of Carnegie cornering and controlling the steel and coal markets that caused the Cleveland Steel Trust and the Cleveland Coal Trust to consider building a complex at Mentor. At this time, American Steel and Wire's coal supply was shipped from Mahoning and Columbiana Counties and ore was brought in from Michigan.16 The land needed to build a large holding dock to compete with Carnegie was not available in the "Flats" area of Cleveland.17 The trust decided to build in Mentor. The trust obviously picked the marsh area for its access to Lake Erie and for its affordability. The price paid for the marsh acres was roughly $4,000.18 They proposed to dredge the delta of Black Creek at the west end of the marsh to form a harbor with ample dock space. Steam and electric rail systems connected Mentor to Cleveland, so it was practically a suburb. With the location decided upon, the trust moved to secure rail lines and an inexpensive coal supply from southeastern Ohio.
The trust's first action was to establish a rail line from Mentor to Pittsburgh. Their strategy was the same as Carnegie's: ship ore to Youngstown and Pittsburgh and return with coal for their steel operations. In February of 1901, the Morris trust purchased the Ohio River and Lake Erie (OR&LE) Railroad.19 It also purchased the Wheeling and Cleveland (W&C) Railroad, which included 21,000 acres of new coal fields. More important, the Morris syndicate formed a business relationship with the Baltimore and Ohio (B&O) Railroad in southern Ohio. The B&O allowed the Morris syndicate access to cross its Cleveland terminal and valley line to ship into Mentor.20 With an investment of $7,000,000 in these two rail lines, the trust was thought to have taken the final steps in the process to industrialize the marsh. But by March of 1901, the plan was all but canceled.
The reasons given for canceling the industrial development of the marsh vary, but two primary factors emerge: the formation of U.S. Steel and the inability to secure Peter M. Hitchcock's land option in the Mentor Marsh.
On 2 March 1901, Carnegie sold his Carnegie Steel Company to the U.S. Steel Corporation, which consisted of the National Steel Company, the American Steel and Wire Company, and the American Tin Plate Company.21 Both American Steel and Wire and American Tin Plate Company were heavily involved in the Cleveland Steel and Coal Trust.
The effect of the formation of U.S. Steel on the trust's plans for industrial development of the Mentor Marsh is clear: it nullified them. Carnegie was now in alliance with the Cleveland Steel and Coal Trust. There was no longer a need to match his attempts in Conneaut with the development of the marsh, but the idea to industrialize the Mentor Marsh was not scrapped entirely in February of 1901.
The need for a large ore and coal dock on Lake Erie was still apparent. Mentor was a better site than Conneaut because of its proximity to Cleveland. Mentor is roughly 35 miles closer to Cleveland than Conneaut. U.S. Steel owned nine steel and wire plants in the Cleveland area, and the need for a large ore supply was evident. But the trust ran into difficulties with land acquisition in Mentor.
J.C. Campbell, the Lake County Commissioner mentioned earlier, now acted as custodian of U.S. Steel's interests in Mentor. He was attempting to obtain marsh land for the project well into May of 1901, four months after U.S. Steel's formation.22 This would lead one to believe that U.S. Steel still viewed the development of the marsh as viable. On 17 May 1901, Campbell was able to purchase land on the east side of the marsh from the Painesville Sportsman's Club.23 Campbell was also successful in buying acreage of marsh land from Alex Snell and the Brooks family. The only section that still needed to be purchased was located at the west end of the marsh, where Black Creek flows into Lake Erie. This area would be vital for industrial development of the marsh because it was the only section of the channel that connected with Lake Erie; it would provide the inland bay for the project. This valuable tract of land was owned by Peter M. Hitchcock, and he had no intention of selling it.
Hitchcock was a member of one of the most prominent families in Lake County. The Hitchcocks had long-established careers in law: Hitchcock's father was a Lake County judge, and his grandfather was Chief Justice of the Ohio State Supreme Court.24 After law school, Peter M. Hitchcock became a member of the Cleveland law firm of Powers, Brown, and Company. He also began to invest in heavy industry such as the Mahoning Valley Iron Company and the Ontario Rolling Mill in Hamilton, Ontario. Additionally, he founded the Moon Run coal mines and the Reynoldsville coal field, both in western Pennsylvania.25 In 1898, these coal fields became collectively known as the Pittsburgh Coal Company. This company was part of the coal trust that entered into partnership with the steel interests in January of 1901 to block Carnegie's attempts in Conneaut and to build an industrial complex at the Mentor Marsh. When the merger between the Cleveland Steel and Coal Trust and U.S. Steel was finalized, Hitchcock refused to sell his option to U.S. Steel; the industrialization plan was thwarted.
The reasons for Hitchcock's decision not to sell his land are unknown. It is obvious that when Carnegie became involved in the Mentor Marsh project, Hitchcock lost interest in the deal. It cannot be determined whether business or personal differences existed between Hitchcock and Carnegie. But, with his knowledge of the law and his personal financial fortune, Hitchcock was ready to withstand any attempt by U.S. Steel to acquire his land. U.S. Steel tried for three years to convince Hitchcock to sell his 52 acres. In May of 1903, U.S. Steel finally abandoned its attempt to purchase Hitchcock's land.26
On 24 May 1903, Hitchcock entered into negotiations to sell his land to the Cleveland, Ravenna, and Southern (CR&S) Railway.27 Although he never sold his land, the possibility of CR&S owning the inlet access drew a strong response from U.S. Steel. The Cleveland, Youngstown, and Pittsburgh (CY&P) Railroad, which was run by U.S. Steel, sold the eastern portion of the marsh to the B&O. The B&O owned a gravel and sand shipping dock in Fairport Harbor, which is located near the east end of the marsh. Even though they had no plans to expand their shipping docks, the B&O realized the importance of keeping this property out of the possession of a competitor. U.S. Steel took a similar strategy in the west end of the marsh. The CY&P retained ownership of a large tract of land in the western section of the marsh, and they no longer made an active attempt to develop the marsh. With the Hitchcock land dispute in the Mentor Marsh still unsettled, U.S. Steel decided to build their industrial complex in Lorain, Ohio.
U.S. Steel's choice of the Lorain site was a sound business decision. Lorain had an existing steel manufacturing base with established rail lines. There would be no land disputes in Lorain. Furthermore, Lorain's American Shipbuilding Company produced some of the largest ore boats on the Great Lakes and was an important customer of U.S. Steel.28 In 1903, U.S. Steel, in conjunction with the LS&MS, began construction of a rail line that connected Lorain with Youngstown, Ohio. This rail line connected the mill and ore docks of Lorain with the furnaces in the Mahoning and Shenango Valleys. The LS&MS also established a rail belt line around Cleveland. This new belt line began at Wickliffe, on the east side of Cleveland; ran southwest around Cleveland; and extended northwest to its terminus in Lorain.29 This new belt line interconnected the U.S. Steel mills in the surrounding Cleveland area with the port of Lorain. U.S. Steel also established a new tube mill at Lorain to take advantage of the low cost of production and transportation that existed at the port facility. U.S. Steel had firmly established their new industrial port on Lake Erie at Lorain. The year 1903 marked the end of the attempts to develop the Mentor Marsh.
The beginning of the twentieth century was the starting point for an enormous industrialization period in the United States. Northeast Ohio possessed natural resources and an advanced transportation system that made it a leading area for development. Mentor was nearly swept up in this industrialization. For a brief period, Mentor was considered to be the next site for "smokestack" expansion. Major railroad systems serviced Mentor, and its lakefront location would have made it an industrial port for U.S. Steel. But delays caused by the inability to acquire all the land needed for the project prompted U.S. Steel to find another location for its ore port and tube mill. Clearly, Peter M. Hitchcock's refusal to sell his land to U.S. Steel is what caused the reevaluation of the industrialization plan in 1901. Hitchcock was not acting out of environmental or community concerns. In fact, he was initially involved in the marsh development project, and there was widespread support for the project in Mentor. The natural beauty of the marsh wetlands would come to be appreciated as a resource only later in the century. The long-term result of Hitchcock's decision is clear: he prevented Mentor from becoming the Lorain of the east side of Cleveland.
- Primary Sources
- Unpublished Materials
- The Mentor Marsh Vertical File, Lake County Historical Society, Mentor, Ohio.
- The Mentor Harbor Yacht Club Vertical File, Lake County Historical Society, Mentor, Ohio.
- The Peter M. Hitchcock Papers, Lake County Historical Society, Mentor, Ohio.
- Lake County Property Atlas, 1899, Lake County Historical Society, Mentor, Ohio.
- Lake County Property Deeds, Lake County Recorders Office, Painesville, Ohio.
- Cleveland Leader, 1901-1903
- Cleveland Plain Dealer, 1901-1903
- Cleveland Press, 1901-1903
- Cleveland World, 1901
- Painesville Republican, 1900-1903
- Painesville Telegraph, 1900-1903
- Secondary Sources
- Ahstrom, Janice M. et al. Here is Lake County, Ohio. Cleveland: Howard Allen Publishers, 1964.
- Branthoover, William. Fairport Harbor, Ohio. Fairport Harbor, Ohio: Lake Photo Engraving, 1976.
- Cotter, Arundel. The Authentic History of the United States Steel Corporation. New York: Moody Book Company, 1916.
- Hatcher, Harlan. The Western Reserve: The Story of New Connecticut in Ohio. Kent, Ohio: Kent State University Press, 1966.
- Johnson, Tom L. My Story. Edited by Elizabeth Hauser. New York: B.W. Huebsch, 1913.
- Lupold, Harry F. The Latchstring is Out: A Pioneer History of Lake County, Ohio. Mentor, Ohio: Lakeland Community College Press, 1974.
- Nash, Gary B., and Julie Roy Jeffrey. The American People: Creating a Nation and a Society. New York: Harper Collins Publishers, 1994.
- Rose, William Ganson. Cleveland: The Making of a City. Kent, Ohio: Kent State University Press, 1990.
- Swain, Sandra A. By The Buckeye. Mentor, Ohio: Center Street Publications, 1984.
- Wall, Joseph. Andrew Carnegie. New York: Oxford University Press, 1970.
- 1 Harlan Hatcher, The Western Reserve (Kent, Ohio: Kent State University Press, 1966), 1.
- 2 Ibid., 81.
- 3 Ibid.
- 4 Ibid.
- 5 Mentor Marsh Vertical File, Lake County Historical Society, Mentor, Ohio.
- 6 Mentor Harbor Yacht Club Vertical File, Lake County Historical Society, Mentor, Ohio.
- 7 "Campbell Is Mum," Painesville Republican, 7 December 1900, p. 3.
- 8 "Steel and Coal Trusts Will Fight Carnegie," Cleveland Press, 12 January 1901, sec. A, p. 1.
- 9 Ibid.
- 10 Rose, Cleveland: The Making of a City, 555.
- 11 Arundel Cotter, The Authentic History of The United States Steel Corporation (New York: Moody Book Company, 1916), 20.
- 12 "Clevelanders In Big Coal Deal," Cleveland Plain Dealer, 27 January 1901, p. 2.
- 13 Joseph Wall, Andrew Carnegie (New York: Oxford University Press, 1970), 614.
- 14 Ibid., 614.
- 15 Ibid., 774.
- 16 Philip D. Jordan, Ohio Comes of Age, 1873-1900 (Columbus: The Ohio Historical Society Press, 1946), 223.
- 17 Ibid., 223.
- 18 "Steel and Coal Trusts Will Fight Carnegie," Cleveland Press, 12 January 1901, sec. A, p. 1.
- 19 "Back of a Projected Railroad," Cleveland Leader 12 February 1901, p. 1.
- 20 Ibid.
- 21 Wall, Andrew Carnegie, 792.
- 22 "Marsh Option Sale," Painesville Telegraph 5 May 1901.
- 23 Lake County Property Deeds, Vol. 52, p. 263. Lake County Recorders Office, Painesville, Ohio.
- 24 Peter M. Hitchcock's Career Vita, Peter M. Hitchcock Papers, Lake County Historical Society, Box 1.
- 25 Ibid.
- 26 "Lake Shore Railroad Belt," Cleveland Leader 27 May 1903, p. 1.
- 27 "B&O The Purchasers," Cleveland Leader 10 July 1903, p. 5.
- 28 Rose, Cleveland, 690.
- 29 "The Steel Trust Will Build a Railroad From Cleveland To Lorain" Cleveland Leader 4 June 1903, p. 1 | <urn:uuid:3e86a056-e4de-48a9-8814-4aece425ee12> | CC-MAIN-2022-33 | http://www.clevelandmemory.org/mentor/hitchcock.html | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573744.90/warc/CC-MAIN-20220819161440-20220819191440-00497.warc.gz | en | 0.958695 | 4,263 | 3.28125 | 3 |
Another positive side of the US education system is that . 1 Less Pressure. Updated: 07/04/2022 Table of Contents The student with the highest grade will be given the class average or median. Home Education Trends and Topics Assessment Fair Grades, Dropping Grades, Grading Versus Knowledge. 8. 1. First, this system enables poor and middle class households to live more comfortably while still paying taxes, a process investing them in citizenship. It Can Boost The Education Of Disadvantaged Students.
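To make the class-average ("fair share") idea above concrete, here is a minimal illustrative sketch in Python; the scores and the function name are hypothetical and are not taken from any study mentioned in this section.

    # Illustrative only: hypothetical scores for a ten-student class.
    scores = [52, 61, 68, 70, 74, 77, 81, 85, 90, 96]

    def fair_share_grade(scores, use_median=False):
        # Return the single grade every student receives under class-average
        # ("fair share") grading: the mean by default, the median on request.
        ordered = sorted(scores)
        if use_median:
            mid = len(ordered) // 2
            if len(ordered) % 2:  # odd class size
                return ordered[mid]
            return (ordered[mid - 1] + ordered[mid]) / 2
        return sum(scores) / len(scores)

    print(fair_share_grade(scores))                   # mean: 75.4
    print(fair_share_grade(scores, use_median=True))  # median: 75.5

The objection raised by critics falls straight out of the numbers: the student who scored 96 and the student who scored 52 are both recorded at 75.4.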
Like bringing in low-cost Chromebooks into the classroom on a regular basis. Distance learning has been difficult. 2. Students often see grades as the major obstacle to getting into college, earning their degree, or landing their dream job. old equipment that is barely running and doesnt pay that good but the owner is a very fair and nice guy to work for . This has two benefits. HGA Grading Email: firstname.lastname@example.org. Last month Coca-Cola disclosed payments of millions of dollars to 115 researchers and health professionals. Imagine the situation in which a few students are struggling to make sense of text and the teacher provides a matrix or similar graphic organizer to help them structure their thinking. Employment among African-American men in high-crime areas in 51 cities increased as much as 4 percent following the enactment of ban-the-box policies, according to a recent report from Daniel Shoag, a professor of public policy at Harvard's Kennedy School. Learning is a construct, and there is no perfect measure. The teacher is . In the beginning, the Internet was applied and created for the United States government to save essential data if there is a war and it could save relevant data around all the state instead of saving it in one location. But more classrooms around the country are looking a lot like Lubak's. It's an example of what's called project-based learning (PBL), hands-on instruction that's been around since the early 1900s. However, less quantifiable skills or values such as communication, creativity or teamwork may be more subjective. 3. Here are just three examples of grading-related practices that warrant our close scrutiny. The simplistic nature of the system makes it user-friendly for teachers, students, and parents. Dr. Thomas Guskey shares the current research on "fair" grading and what teachers should be doing instead. This new grading system is understandable as the world right now is in uncharted waters with the ongoing spread of COVID-19, and teachers are adjusting to the online learning platform. These professors care enough about their students to know that good attendance leads to success and they want students to succeed. Since letter grading tends to group students into bands, the difference between a 90 and 92 is not fussed over. Cultural factors, unfamiliarity with testing methods, test anxiety, and illness can wreak havoc with how well a student performs. If a student meets specific . Introduction. Importance and Advantages of Standardization from Consumers' Viewpoint The importance and advantages of standardization from a consumers' point of view are as follows: i.
Pros. For those unaware, standards-based grading is a popular evaluation system designed to simplify teaching, learning, and assessment. The idea is that students will learn more easily if teachers grade based upon very explicit and clear standards. Cons. Scores don't provide a true picture of a student's ability. It is a typical day at my New Hampshire high school, and I am observing a biology class. Also if you wanna ask me any questions about NYU or applying, this can be a casual AMA. 2.3 3. Pay. When I first started teaching as an adjunct instructor, I didn't think too much about the pros or cons of open-source or curated classroom materials.
Cons: cost and feeling of social isolation. HGA Grading Phone Number: 1-865-224-3783. Salary structures provide guidelines for pay decisions. Teachers are aware of what materials were taught in previous years and what will be taught in years to come. Because the impact of a plus/minus grading scale on student GPA was calculated retrospectively, impact on student motivation, effort, and performance could not be quantified. They're not seen as the best option for older cards with PSA often being preferred, but they do have a vintage service (BVS) for older cards. Pros. The pass/fail system arguably puts less pressure on students and allows them to relax and study without obsessing about achieving a high letter grade. This makes it easy for students to see where they stand in their academic performance. Some of the pros of grades include: Standardization and universally recognized: In virtually any corner of the globe, people will understand what an A, B, C, D, or F letter grade stands for. Through standardized testing, we can identify the areas of an educational system that need to evolve so we can put modern learning opportunities into the hands of our students. Pros and cons of standardization and adaption of market offers 1. Gives the students an obvious idea about their weaknesses and strengths. With a country, comprising 50 states spreading on an entire continent, you can imagine the overwhelming range of courses and majors that are at your disposal, and, with English being the universal language, you can practically choose any area of study in any university. 2.2 2.
This also permits students to become more exploratory in the courses they choose to take. . The Internet is a system that connects phones, gadgets that allow people around the globe to share information and communicate. This study has several limitations. Pros. Fairtrade Mark products reached 7.3 billion in 2015. Starting out, I was just grateful for the supporting content available from the publisher for the . Interview scoring sheets can require a lot of attention during interviews. On the qualitative side, there is an emphasis on providing good feedback because students' "peer score" is a combination of what others say about them and the quality of the feedback they provide. Below are some of the rationales for adopting this policy. Over time these myths surrounding the American dream have altered due to constantly evolving cultures . According to the research of international best practice insight and technology company CEB, 95% of managers aren't satisfied with how their companies go about performance reviews. Parents Can Be Demanding . There is a common theme in college classrooms today of teachers causing unjust academic harm for . You'll get top faculty as a freshman and in small classes! 4. Fair trade improves the lives of those living in developing countries by offering small scale producers fair trade relations and a guaranteed minimum price. Virtually everyone knows that earning an A is good while earning an F is associated with failure. Small producers spend 31% of the Fairtrade premium income on investments . At the end of the fall semester, I received this e-mail from a student in my MSC 1003 class who had recently earned a D grade: i am on academic probation. Wide variations in teacher grading practices may prevent students (and parents) from seeing a consistent and predictable pattern in grading performance. Con/Pro: Diversity. Robert Watrel, president of the Faculty Senate and associate . Professors who give grades based on attendance are trying to aid students by giving them an incentive to attend class. Answer (1 of 3): I'll add the following Pro: Often tiny class sizes. The Pros and Cons of Open-Sources Vs. Curated Content. Identify the types of grading rubrics with examples, examine the pros and cons of using rubrics, and discover how to create a rubric. When grading is on a typical 100-point scale, failing grades cover a disproportionate 3/5s of the scale while passing grades cover 2/5s. 4. Supporters point to a growing body of research that shows that hands-on instruction is . The customers need to inspect all the goods which are not standardized or graded. A grading contract is different in that it focuses on the content of the course and how that course (or each individual assignment) is evaluated and "graded.". 9.5 - Gem Mint. doesnt reach 2.0 by the end of next semester, im kicked out of baruch. Can a host offer isolation of a clients resources on a VPS, when they are using the same CPU(S) for all clients on a server? Interview scoring sheets limit eye contact. 3 If you decide that you do want to proceed with annual reviews as a way of benchmarking employee .
College instructors who favor extra credit tend to want to mitigate some of the potentially unfair subjectivity inherent in grading. control. Widening The Opportunity For Backward Classes. 2 Pros Of Positive Discrimination. Trading blocks have become increasingly influential for world trade. 2. It also makes me not dread meeting up because I feel as if it becomes fun when friends are involved. A CEO would find it outrageous to have an inventory . Taking some of the workload off of yourself. Already knowing your group members, their schedule, and how they work is honestly the best way to go about getting a good grade on a project. One of the main drawbacks to purchasing items online is the inability to physically inspect an item before making a purchase. Promoting The Education And Work From Communal Level. 124 writers online.
One of the biggest benefits is that teachers are often encouraged to think outside the box and to be innovative and proactive in their classrooms.
Financial responsibilities and emotional and physical exchanges are just some of the things that could ultimately both benefit and cost.
Fair and Consistent Grading Effective grading provides accurate information to students about their performance and also help them understand what they can improve on. H2 - Freshmen will be more supportive of +/- grades than upperclassmen. Managing in a forced ranking system reminds me a bit of the famous old line from Joe Louis before his fight with Billy Conn, who boasted he'd rely on his speed in the ring. 1. It based on partnership between consumers and producers. Curated Content. This makes students want to work less for a good grade. Sacrificing Class Time. This system also does a fair job of lowering the anxiety levels of both teachers and students but with this, they are eliminating important incentives. This is your ultimate guide to all of the bamboo flooring pros and cons. 9 - Mint. TLDR: Pros: New York City, the academic rigor, study abroad programs, and internship opportunities. It doesn't instil a sense of competition. More easily achievable But the policies can have unintended negative consequences as well. HGA Grading Address: Hybrid Grading Approach, 6518A Chapman Highway, Knoxville, Tennessee 37920. As well as the above, they also have a contact form on their website which can be used to get in touch with the company. Responded Louis: "He . Fair Trade Facts. 1. Students who have not been successful in the classroom tend to lean toward the idea of Fair Share Grading because there is a safety net that prevents them from failing ("Best"). It feels great when students finally understand a challenging concept or idea, or solve a problem they may have been struggling with. What are the pros and cons ''for a client'' to use Fair Share CPU versus a defined CPU limit? Is a VPS with Fair Share CPU basically a glorified shared hosting plan? Teachers are assigning work on a daily basis, with 45 minutes to hour-long assignments for .
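On the fair-share CPU question raised above, the difference between fair-share scheduling and a hard per-tenant cap can be sketched with a small simulation; the core counts and tenant demands below are made-up numbers, not measurements from any real VPS host.

    # Hypothetical demands, in CPU cores, from four tenants on an 8-core host.
    demands = {"a": 6.0, "b": 3.0, "c": 1.0, "d": 0.5}
    TOTAL_CORES = 8.0
    HARD_CAP = 2.0  # a fixed per-tenant limit, e.g. "2 vCPUs maximum"

    def hard_cap_allocation(demands, cap):
        # Each tenant gets at most its cap, even if the host is otherwise idle.
        return {t: min(d, cap) for t, d in demands.items()}

    def fair_share_allocation(demands, total):
        # Max-min fairness: light users keep what they ask for, and the spare
        # capacity is split among the tenants that want more.
        alloc = {t: 0.0 for t in demands}
        remaining = dict(demands)
        capacity = total
        while remaining and capacity > 1e-9:
            share = capacity / len(remaining)
            satisfied = {t: d for t, d in remaining.items() if d <= share}
            if not satisfied:
                for t in remaining:
                    alloc[t] += share
                return alloc
            for t, d in satisfied.items():
                alloc[t] += d
                capacity -= d
                del remaining[t]
        return alloc

    print(hard_cap_allocation(demands, HARD_CAP))      # tenant "a" capped at 2.0; 2.5 cores idle
    print(fair_share_allocation(demands, TOTAL_CORES)) # tenant "a" gets 3.5; nothing idle

Under the hard cap, tenant "a" is throttled even while cores sit idle; under fair share, idle capacity flows to whoever wants it, but a busy neighbour's ceiling depends on everyone else's demand, which is essentially what the "glorified shared hosting" comparison is getting at.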
Pros, cons of plus-minus. Thus, students would be expected to resist the change to a +/- grading system. Pros and Cons of Being a Teacher - Summary Table. Purchasing items online through live auctions has many advantages, but also much more risk compared to a local auction house. Taking detailed notes helps interviewers evaluate candidates' answers. Freeloading. Upperclassmen have more experience with the current grading system. Diversity. 1. The Pros And Cons Of Share Grading System.
Your access to top-faculty is much better than most top schools. Not an accurate representation of the performance and the knowledge gained. A plus-minus grading scale draws support from faculty and some dissent from students. Brian Stack Monday, February 12, 2018. Managing in a forced ranking system reminds me a bit of the famous old line from Joe Louis before his fight with Billy Conn, who boasted he'd rely on his speed in the ring. Location. As with most things in life, the reality is that education technologies come with their fair share of pros and cons. Takes the pressure off students. When one debates the pros and cons of marriage ultimately the decision is based upon the benefits and costs the marriage would have on the individual. 2. My inbox is also open for whatever you wanna ask too. Indicate how effective the project is in presenting different sides. The first con to fair share grading would simply be that it is an unfair advantage to those students who give their assignments all they have. Lack of eye contact might create an . 9.5 - Gem Mint. Schools systems like Fairfax County Public Schools and the Philadelphia School District have adopted similar approaches in recent years, arguing that they give all students a chance to succeed. i mathematically cant make 2.0 if i have a D on top of a F. please, im begging u. i need to retake music or i will end up in community college. It is important to inspect all photographs and ask as many questions as needed before bidding . Pros of a Traditional Grading Scale . Reviews from Andy Strange Grading employees about Andy Strange Grading culture, salaries, benefits, work-life balance, management, job security, and more. Another pro of standardized testing is that is gives a good picture for the student and their parent on . Supporters of progressive taxation point to several benefits. That approach can cause problems. Instead, students and faculty members were surveyed about their perception of plus/minus grading. We are all imperfect graders, so allowing extra credit is only fair. 10 - Pristine It's reasonably extensive, and you'll arguably have a better idea of the overall quality of the card compared to a PSA graded version. Grading Pattern description. This can create an aura of laziness because students will feel like they don't have to participate in class or even show up on time. Identify the types of grading rubrics with examples, examine the pros and cons of using rubrics, and discover how to create a rubric. Not all schools have behavioral issues, but some have more than their fair share. Disadvantages for Students. 1. Being able to share knowledge of and passion for certain topics is energizing and rewarding. 9 - Mint. Pros And Cons Of The American Dream Essay. What are the pros of letter grading system in K-12? Edward Gonsalves. This alone makes students want to try less. Pros.
This can lead to lower prices, increased export potential, higher growth, economies of scale and greater competition.
List of the Pros of Gifted Programs. 3. 2021 Student Grading for Controversial Topics presentation Date 11 AUG 2021 1= poor, 2= poor, 3=moderate, 4= good, 5=very good Indicate how interesting the presentation is. Class standing can also be expected to affect student responses to a change in grading system. One of the main drawbacks to purchasing items online is the inability to physically inspect an item before making a purchase.
2.4 4. These exchanges can influence the individual both positively and negatively. Standards-based instruction guides planning and instruction and helps teachers keep their focus on the learning target. With almost 9,000 downloads and counting, this show is the most popular episode on Every Classroom Matters in 2016 so far.
This is in contrast to the belief that many public school teachers are too traditional and rigid. ok lol bye now. Withholding assistance when students need it. A grading debate: Pros and cons of reassessments. This includes professional and balanced presentation and presenter demeanor. It Helps In Climbing The Socioeconomic Ladder. One of the main complaints from students is that of unequal participation.
10 - Pristine It's reasonably extensive, and you'll arguably have a better idea of the overall quality of the card compared to a PSA graded version. You will be grading papers, and you may even be required to attend various training seminars. With standardized tests, all the students, no matter which school, are graded the same way. Intelligence has begun to define individuals globally, but the goal of the minimum grading system is fairness and equality. Purchasing items online through live auctions has many advantages, but also much more risk compared to a local auction house. The project-based model: less lecturing, more doing. Pro 1 - Cost Control and Planning. 2.1 1. Not surprisingly, the workload of homeschoolingand kids home all dayis likely to leave you with less time for yourself. A French drain is a system for eliminating excess water from the soil. Those who have never tried e-learning are often surprised to find . The traditional grading scale in learning, as we know it, is facing an existential crisis. Universities and colleges have now moved most of their activities online and are currently engaged in distance learning. One con of fair share grading is that it softens competency requirements which contributes to grade inflation. The pass fail grading system allows us to be able to get credit for a class without necessarily putting in a lot of work. Supporters argue that grades hold students accountable for their work, and provide a simple frame of reference for their standing in class. Teachers usually receive decent benefits, including a steady paycheck, health insurance, and retirement . Fair trade is trade in which fair prices are paid to producers in developing countries. Less Time for Yourself. Extinguish Conflict. More than 1.65 million farmers and workers are involved in the 1,226 Fairtrade certified producer organizations. It Creates An Environment Of Laziness. Although gifted kids have taken labels like . Unequal Participation. It can offer intelligent children a boost in their self-esteem. It allows excess water to . Do not evaluate your own group. If I was at school, I would've learned much more due to a teacher paying more attention. It is important to inspect all photographs and ask as many questions as needed before bidding . When there is less stress in the classroom because of the pass-fail grading system, then there is an improved mood in the student body. . The classic French drain is simple just a trench filled with gravel, with sand on top. Additionally, grading fairly and consistently will help you both provide useful feedback to your students and will help minimize student grading complaints. The traditional grading scale is easy to interpret and understand. Was this review helpful? The traditional grading scale is universally recognized. It's been hard to manage, teaching yourself through assignmentsI try to do my work when everybody is asleep at night. Buying Facility Customers can buy standardized goods easily. Unfair Grading. if my G.P.A. Con #3. SDSU students could potentially see a change in grading if the Faculty Senate votes in favor of a plus-minus grading system next month. 6. It strips a student's grade down to their ability to meet the announced standards. They have advantages in enabling free trade between geographically close countries. Fair share grading is when all students in the class take an intended exam, but the class average score of the test is given to every student. 
But, taking notes can interrupt the natural flow (and eye contact) that most people expect in an interview setting. The required rigor of a charter school improves the overall quality of education. The Koles' method is more quantitatively and qualitatively thorough compared to the other methods. 1. It is not an exact scoring system. Responded Louis: "He .
3. It encourages kids to work together. Instead of focusing on an upcoming exam or paper, everyone can spend more time with the materials taught in the class. This goes hand in hand with freeloaders and unfair grading rubrics. Brown is EXTREMELY liber. With a no-zero grading policy, the glass is always half full. Some homeschooling parents say they don't have time to shower, let alone exercise or take care of their own needs. However, it can lead to compromise as countries pool economic sovereignty. 5am to 8pm work shift. Leaves possibility of subjectivity. 1. The first pro is that everyone succeeds ("Admin"). Wladimir Klitschko - Grading the Pros and Cons of This Era's Top Heavyweight: The all-time placement of Wladimir Klitschko is one of the more interesting conundrums in modern boxing history.His supporters feel he belongs right up there with the best of all-time, while others think he is a vulnerable champion who has taken full advantage of the worst heavyweight division in history. You can work night shifts and study in the afternoon or work a regular 9-to-5 job and begin your studies immediately after work. With e-learning, where you are free to choose your own time to study and set your own schedule, this problem is absolutely nonexistent. 2. Second, it enables the government to establish high upper tax brackets to generate revenue. Make class work easier. There are pros to Fair Share Grading though ("Best"). With the traditional grading system, many institutions and students can benefit in a variety of ways. They're not seen as the best option for older cards with PSA often being preferred, but they do have a vintage service (BVS) for older cards. November 15, 2011. The global bamboo flooring market is valued at $1,249 million and is expected to reach $1,549 by the end of 2026.
It is a biased origin that holds no key factorization in history. Tied up in both the pros and cons of annual reviews are plenty of opportunities for reviews to go wrong. Intelligence is something that gets equated to elitism in modern politics, so that attitude toward "being smart" filters down in the family environment until it eventually reaches the school. Updated: 07/04/2022 Table of Contents 26% of all farmers and workers are women, 48% in the larger plantations certified. In a pay-for-performance model, the compensation relies on the performance reviews received by employees. Pros of Grades. They are free to concentrate on the limited number of skills and concepts included in their grade-level standards. Seth Harris. Most students have a long list of cons when it comes to group work. Third, a progressive taxation system .
2. American Dream Essay Rough Draft A chance to re-establish oneself, an opportunity to earn one's fair share of wealth, a vision waiting to be created into a reality: The original American dream. Specifically, Coke spent $21.8 million to fund pro-industry research and $96.8 million on partnerships with health organizations, including $2.1 million directly paid to health experts. This discussion will be over both the pros and cons of fair share grading. For parents who are used to a quiet, kid-free environment during the day, this . Group Name/number: Group 4 Restrictions/Laws . They help to eliminate discretionary increases that are far higher than necessary. Plus, if management knows the minimum and maximum pay for each job, planning for future costs is a whole lot easier. . | <urn:uuid:d41896ab-fe11-41a5-9bec-8514f2dde964> | CC-MAIN-2022-33 | http://dunanscastle.tv/kenyatta/72763916f0692ad6ac9eff6e3cf8c0a3ae | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572161.46/warc/CC-MAIN-20220815054743-20220815084743-00494.warc.gz | en | 0.956992 | 4,893 | 3.046875 | 3 |